The Lycopene Cyclase CrtY from Pantoea ananatis (Formerly Erwinia uredovora) Catalyzes an FADred-dependent Non-redox Reaction*

The cyclization of lycopene generates provitamin A carotenoids such as β-carotene and paves the way toward the formation of cyclic xanthophylls, which play distinct roles in photosynthesis and serve as precursors for regulatory molecules in plants and animals. The biochemistry of lycopene cyclization has been enigmatic, as the previously proposed acid-base catalysis conflicted with the possibility of redox catalysis predicted by the presence of a dinucleotide binding site. We show that reduced FAD is the essential lycopene cyclase (CrtY) cofactor. Using flavin analogs, mass spectrometry, and mutagenesis, evidence was obtained on the basis of which a catalytic mechanism relying on cryptic (net) electron transfer can be refuted. The role of reduced FAD is proposed to reside in the stabilization of a transition state carrying a (partial) positive charge, or of a positively charged intermediate, via a charge transfer interaction, with acid-base catalysis serving as the underlying catalytic principle. Lycopene cyclase thus ranks among the novel class of non-redox flavoproteins, such as isopentenyl diphosphate:dimethylallyl diphosphate isomerase type 2 (IDI-2), that require the reduced form of the cofactor.

The cyclization of the linear C40 carotene lycopene represents a key reaction in carotenogenesis. The introduction of β- or ε-ionone end groups paves the way toward cyclic xanthophylls formed through subsequent ring oxygenation reactions. In plants, bicyclic xanthophylls such as lutein, violaxanthin, neoxanthin, and zeaxanthin are constituents of antenna complexes, such as LHCII (1), where they exert dual functions in light harvesting and in averting photodamage (2). Moreover, the process of non-photochemical quenching (3) requires the bicyclic β-carotene derivative zeaxanthin, among other components (4). Plant photosystem II contains two β-carotene molecules (5) that are thought to quench 1O2 (6) and act as electron donors in a side-path reaction protecting photosystem II (7). Bicyclic carotenoids are also the source for apocarotenoids, which arise through specific cleavage reactions mediated by carotenoid cleavage dioxygenases, leading to key regulatory molecules such as abscisic acid (8) and the strigolactones (Refs. 9-11 and citations therein). Given these functions, plants defective in cyclization are expected to be nonviable.

Cyclic carotenoids and their derivatives are also biosynthesized by many microorganisms, including all photosynthetic and some heterotrophic bacteria and fungi. In contrast, animals lack such biosynthetic capacity and need to acquire these pigments from the food chain because of their function as precursors of the vision pigment retinal and the vertebrate morphogen retinoic acid (12). Retinal is synthesized through cleavage of the central C15-C15′ double bond either directly from β-carotene, as shown for animals (13) and fungi (14, 15), or from longer monocyclic apocarotenals in cyanobacteria (16, 17). Retinoids can only be derived from carotenoids with at least one unsubstituted β-ionone ring. Thus, lycopene cyclases are the enzymes capable of providing provitamin A carotenoids. Consequently, they have been employed to help alleviate vitamin A deficiency diseases (18) by enhancing the provitamin A content of crop plant tissues through genetic engineering (19-21).
In some tissues, such as rice endosperm, lycopene cyclase was not required thanks to a sufficient lycopene β-cyclase activity in the wild-type tissue (22, 23).

The wide occurrence of lycopene cyclization across taxa is contrasted by structural diversity. There are four families of lycopene cyclases that are only in part related to each other (24). The CrtY type, which is the subject of this article, is found in many proteobacteria, whereas lycopene cyclases from cyanobacteria and plants belong to the CrtL family. CrtL and CrtY cyclases do not share much similarity; however, they contain conserved sequence patterns indicating evolutionary relatedness (25). Plants express two related versions of CrtL enzymes capable of producing either β- or ε-ionone end groups, thus defining an important branching point in carotenogenesis. The enzyme capsanthin/capsorubin synthase, known from Capsicum annuum (26, 27), represents another member of the CrtL family. It catalyzes a ring contraction to form the so-called κ-ring by utilizing a mechanism thought to be very similar to that of lycopene cyclases. Additional CrtL cyclases are represented by the capsanthin/capsorubin synthase homologs from tomato and potato that were shown to mediate neoxanthin synthesis (28, 29). However, the tentative neoxanthin synthase has been shown to represent a chromoplast-specific lycopene β-cyclase in tomato (30).

Much of the knowledge on the diverse lycopene cyclases has been gained in vivo, such as through color complementation in lycopene-producing Escherichia coli cells or the analysis of mutants. Information on the enzymology of these enzymes is scarce and still insufficient to draw conclusions on the mechanism employed. A landmark experiment was performed in vivo some decades ago with a Flavobacterium species that contains a CrtY-type cyclase (31). Formation of xanthophylls in deuterium oxide led to the incorporation of two 2H atoms into the product. This is compatible with lycopene cyclization proceeding as outlined in Fig. 1. This model is related to isomerization reactions and agrees with the absence of net mass changes between the lycopene substrate and the β-carotene product (both have the sum formula C40H56). However, the likely presence of a Rossmann fold in CrtY and CrtL-type cyclases (24) would be consistent with the presence of a dinucleotide cofactor such as FAD(H2) or NAD(P)(H) and raises the possibility of a mechanism involving redox chemistry. This, however, would be in contrast with the mentioned absence of changes in the redox status between substrate and product. A multitude of different cofactors have been tested in the past in relation to lycopene cyclase-catalyzed reactions (see "Discussion"). However, no clear-cut data on the nature and possible role of the cofactor involved have been obtained thus far.

In view of the basic and mechanistic relevance of this biochemical step, we undertook research mainly with the CrtY-type lycopene cyclase from Pantoea ananatis (formerly Erwinia uredovora; accession D90087). Biphasic liposomal assays were used to account for the high lipophilicity of the C40 hydrocarbons lycopene and the cyclic reaction products. Here, we focus on the mechanistic role of the flavin, which was identified as the cofactor of CrtY in the course of this study.

EXPERIMENTAL PROCEDURES

Chemicals Used-A compilation of flavin cofactors modified at various positions has been reported elsewhere (32).
It contains references to appropriate sources or synthetic methods and to methods for conversion of riboflavin analogs into the corresponding FMN and FAD derivatives, as well as their redox potentials. FAD synthetase from Corynebacterium ammoniagenes (33, 34) was kindly provided by Dr. M. Medina (Universidad de Zaragoza). Flavin cofactor analogs were purified using HPLC system 3, and their identity was verified by UV-visible spectroscopy and LC-MS. Neurosporene, 5,5-di-cis-lycopene, and γ-carotene were purchased from CaroteNature. Prolycopene was extracted and purified from fruits of the tangerine cultivar of tomato according to reported methods (35).

Cloning of Expression Constructs-Expression constructs were generated using the Gateway-based vector system described by Busso et al. (36). For this purpose, the entry vector pEntr/D-CrtY was produced by amplification of the CrtY gene from pBAD-TOPO-Thio-CrtY expressing the enzyme in fusion with thioredoxin. The amplification was performed with the primers CrtY GWF (5′-CACCGATGACGATGACAAGCTCGCCCTTATG-3′) and CrtY-R+T (5′-TCATCCTTTATCTCGTCTGTCAGGA-3′) using 500 nM concentrations of each primer, 150 µM dNTPs, and 1 unit of Phusion High-Fidelity DNA Polymerase (Finnzymes) in the buffer provided. The resulting product was purified and cloned into pENTR/D-TOPO (Invitrogen) according to the manufacturer's instructions. The expression plasmids pCrtY-HMGWA, pCrtY-HGGWA, pCrtY-HNGWA, and pCrtY-HXGWA encode the fusion proteins His6-MBP-EK-CrtY (termed mCrtY in the text; MBP is maltose-binding protein), His6-GST-EK-CrtY (GST is glutathione S-transferase), His6-NusA-EK-CrtY (Nus is N-utilizing substance A), and His6-TRX-EK-CrtY (TRX is thioredoxin), respectively (see Ref. 36; EK denotes an enterokinase cleavage site introduced through primer GWF). The plasmids were obtained by transferring the CrtY gene from pEntr/D-CrtY into the corresponding Gateway destination vectors using the Gateway LR Clonase Enzyme Mix (Invitrogen) according to the manufacturer's instructions. For color complementation, JM109 E. coli cells containing the plasmid pFarbeR enabling lycopene synthesis (15) were transformed with the plasmid pCrtY-HMGWA, pThio-OsLYCb, or pE196A. Bacteria were grown overnight at 28°C, harvested, and extracted with acetone. After partition against petroleum benzene:diethyl ether 2:1 (v/v) and washing with water, the organic epiphase was dried and subjected to HPLC (system 1) analysis.

Cell Disintegration, Solubilization, and Protein Purification-All procedures were carried out on ice. Cells from a 500-ml culture were harvested at an A600 of 2.0 and resuspended in 6 ml of buffer A (100 mM Tris-HCl, pH 7.0, 5 mM MgCl2, 300 mM NaCl, and 10% glycerol) by vortexing and disintegrated by two passages through a French press cell operated at 18,000 p.s.i. A centrifugation step at 17,000 × g for 15 min was used to remove large cell debris. The supernatant was solubilized with Tween 20 at a final concentration of 10× the critical micellar concentration (0.067%). The suspension was incubated for 30 min with occasional shaking before adding 2 ml of the IMAC resin suspension of Talon (Clontech) containing Co2+ as the metal ligand (purification method 1). The material was equilibrated with buffer A containing 0.067% Tween 20 before use. The His-tagged mCrtY was allowed to bind for 45 min under continuous shaking at 37 rpm, and the resin was recovered by centrifugation. Three washing steps with buffer A containing 0.02% Tween 20 and 4 mM imidazole were employed to remove unspecifically bound protein.
Elution of the bound protein was accomplished with buffer B, which is the same as buffer A but adjusted to pH 8.0 and containing 100 mM EDTA and 0.02% Tween 20. The preparation was dialyzed against buffer D (see below) before use. For cofactor analysis, the IMAC-purified lycopene cyclase (10 mg) was heat-denatured for 10 min, and the protein was removed by centrifugation. The supernatant was lyophilized, and the residue was redissolved in 100 µl of water, of which 2 µl was applied to LC-MS analysis. SDS-PAGE was carried out according to standard procedures using 10% polyacrylamide gels. Proteins were detected using Coomassie Brilliant Blue G250 (Sigma). Protein concentrations were determined using the Bradford method.

Apoprotein Preparation and Reconstitution-The mCrtY apoenzyme was prepared according to published procedures (37) with some modifications. In brief, 1 ml of ice-cold 3 M KBr in buffer D was added slowly to 1 ml of IMAC-purified protein (2 mg/ml; protein purification method 2). After dialysis overnight at 4°C against buffer D containing 2 M KBr and removal of the KBr by dialysis against buffer D, any precipitates that formed were removed by centrifugation at 21,000 × g for 15 min. The absence of cofactors was confirmed by UV-visible spectroscopy, fluorescence measurement, and the absence of enzymatic activity. For reconstitution (see Fig. 6A), 40 µl of 6 µM apoenzyme was supplemented under anaerobic conditions with different concentrations of flavins; buffer D was then added to a total of 58 µl, followed by the addition of 2 µl of freshly prepared Ti(III) citrate (see below) to reduce FAD. The mixtures were incubated at room temperature for 30 min before adding 16-µl aliquots from each mixture to the standard assay system (see below).

Enzymatic Assays-To prepare E. coli membranes containing lycopene, cell pellets from a 5-liter E. coli culture expressing pFarbeR and grown overnight at 28°C in LB medium were resuspended in 60 ml of buffer A by vortexing, and cells were disintegrated by two passages through a French press cell operated at 18,000 p.s.i. Centrifugation at 17,000 × g for 15 min removed large cell debris. The crude supernatant was ultracentrifuged at 140,000 × g for 4 h. The colorless membrane-free supernatant was dialyzed against buffer A to be used in assays, as indicated. The membrane fraction was resuspended in buffer A with a Dounce homogenizer followed by another ultracentrifugation at 140,000 × g. The pellet was resuspended in buffer A and stored at −20°C for subsequent assays.

To prepare protein-free liposomes containing carotene substrates, soybean lecithin (Sigma) was dissolved in CHCl3 at a concentration of 20 mg/ml. Carotene stock solutions were prepared separately in chloroform/methanol 2:1 (v/v). Carotene concentrations were estimated spectrophotometrically (Shimadzu UV-2501PC) using ε470 nm = 185,230 liters mol−1 cm−1 for lycopene, ε440 nm = 161,160 liters mol−1 cm−1 for neurosporene, ε460 nm = 166,470 liters mol−1 cm−1 for γ-carotene, and ε450 nm = 134,500 liters mol−1 cm−1 for β-carotene. From these stock solutions, aliquots corresponding to 160 µg were added to 1 ml of lecithin solution. After adding 800 µl of chloroform/methanol 2:1 (v/v) and drying, the residue was taken up in 2 ml of buffer D without detergent. Liposomes were formed by sonication for about 30 min on ice.
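The spectrophotometric estimate above follows the Beer-Lambert law, c = A/(ε·l). The short sketch below only illustrates that arithmetic: the extinction coefficients are the ones quoted in the text, whereas the absorbance reading, dilution factor, path length, and helper names are assumptions made for illustration.

```python
# Minimal sketch of the Beer-Lambert estimate used for the carotene stocks.
# Extinction coefficients are those quoted in the text; the absorbance value,
# the dilution factor and the 1-cm path length are assumed example values.

EPSILON = {                      # lambda_max -> epsilon (liters mol^-1 cm^-1)
    "lycopene": 185_230,         # 470 nm
    "neurosporene": 161_160,     # 440 nm
    "gamma-carotene": 166_470,   # 460 nm
    "beta-carotene": 134_500,    # 450 nm
}

MOLAR_MASS = {                   # g/mol; C40H56 = 536.9, neurosporene (C40H58) = 538.9
    "lycopene": 536.9,
    "neurosporene": 538.9,
    "gamma-carotene": 536.9,
    "beta-carotene": 536.9,
}

def carotene_conc_ug_per_ml(absorbance, carotene, path_cm=1.0):
    """Beer-Lambert: c = A / (epsilon * l), converted from mol/l to µg/ml."""
    molar = absorbance / (EPSILON[carotene] * path_cm)   # mol/l
    return molar * MOLAR_MASS[carotene] * 1000.0         # g/l -> µg/ml

def stock_volume_ul(target_ug, conc_ug_per_ml):
    """Volume of stock (µl) delivering a given aliquot, e.g. 160 µg per ml lecithin."""
    return target_ug / conc_ug_per_ml * 1000.0

if __name__ == "__main__":
    # Assumed example: a 1:200 dilution of a lycopene stock reads A470 = 0.65.
    diluted = carotene_conc_ug_per_ml(0.65, "lycopene")
    stock = diluted * 200                                # undo the assumed dilution
    print(f"stock ~{stock:.0f} µg/ml; {stock_volume_ul(160, stock):.0f} µl gives 160 µg")
```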
To assess incorporation of carotenes into the lipid bilayer, an aliquot was extracted with chloroform/methanol 2:1 (v/v) and measured photometrically, as described above.

The standard lycopene cyclase assay consisted of a liposome suspension volume that resulted in a final carotene concentration of 5 µM (typically ~30 µl) when diluted to a final volume of 200 µl with buffer D, which is at the pH optimum of 5.8 determined for the reaction. 5 µg of cyclase protein was added, and the assay was supplemented with 20 µl of hexane. FAD was added to a 100 µM final concentration. Reducing conditions were attained with freshly prepared Ti(III) citrate following the procedure given in Zehnder and Wuhrmann (38). For this purpose, 187 µl of a 10% Ti(III) chloride solution (Sigma) was added to 600 µl of an aqueous 0.42 M sodium citrate solution. The mixture was subsequently neutralized with a saturated sodium carbonate solution. 4 µl of the Ti(III) citrate solution was added to a standard incubation assay. All solutions were equilibrated with N2 before use, and the reactions were carried out in a glove box under an N2 atmosphere. Incubation time was 30 min unless otherwise indicated. Enzymatic reduction of FAD was carried out using the flavin:NADH reductase PrnF from Pseudomonas fluorescens (39), kindly provided by Prof. van Pée (Technical University Dresden). For this purpose, a reaction system consisting of 200 µM FAD, 2 mM NADH, and 10 units of pure PrnF in 400 µl of buffer D was placed in one chamber of a two-cell dialysis apparatus. The mCrtY-FADox holoenzyme and protein-free lycopene liposomes in 400 µl of buffer D were placed in the other chamber. For photoreduction, the assays not containing any chemical reductant were kept under ambient daylight conditions in the N2 atmosphere. To carry out deuteration experiments, all buffers and the lycopene-liposome suspension were prepared as described above in 2H2O. The protein, Ti(III) citrate, and FAD solutions, added in very small volumes (less than 2%), were in H2O.

Extraction and Analytical Methods-Assays were extracted twice with 1 volume of chloroform/methanol 2:1 (v/v). The lipophilic phases were combined and dried under reduced pressure. The residue was redissolved in 50 µl of chloroform, of which 20 µl was used for HPLC analysis. The HPLC device used (Waters Alliance 2695) was equipped with a photodiode array detector and was controlled by the EMPOWER software program. HPLC system 1 was used for the separation of carotene substrates and products, employing a 3-µm C30 reversed-phase column (YMC-Europe) with solvent system A (methanol/tert-butylmethyl ether (TBME)/water 5:1:1 (v/v/v)) and B (methanol/TBME 1:3 (v/v)). The gradient started at 30% A, followed by a linear gradient to 0% A within 10 min at a flow rate of 1 ml/min. An isocratic segment, run for 12 min at 0% A, completed the separation program. Individual peaks resolved were integrated electronically at their individual λmax with the aid of the "maxplot" function of the software. Detector response curves for β-carotene and lycopene standard solutions were used for quantification. HPLC system 2 was used for LC-MS applications to analyze carotenes. It consisted of a 3-µm C30 reversed-phase column (YMC-Europe) that was developed with a gradient consisting of A (methanol, 0.01% aqueous ammonium acetate, TBME 70:25:5 (v/v/v)) and B (methanol, 0.01% aqueous ammonium acetate, TBME 7:3:90 (v/v/v)).
The gradient was developed from 85% A to 0% A within 10 min, followed by an isocratic segment at 0% A for 5 min before re-equilibration. NADP(H), NAD(H), FAD, FMN, and flavin analogs were identified by LC-MS with HPLC system 3, consisting of a 3-µm C18 reversed-phase column (Hypersil Gold, Thermo-Fisher Scientific) and the solvent system A (50 mM aqueous ammonium acetate in 1% formic acid) and B (1.7 mM ammonium acetate in 70% methanol acidified with 1% formic acid). The gradient was run at a flow rate of 700 µl/min from 100% A to 50% A within 10 min, with the final conditions held isocratically for 5 min. A Surveyor HPLC system coupled to an LTQ mass spectrometer (Thermo-Fisher Scientific) was used with HPLC systems 2 and 3. Carotenes were ionized by atmospheric pressure chemical ionization using N2 as the reagent gas and analyzed in the positive ion mode. Further conditions were: capillary temperature, 150°C; vaporizer temperature, 350°C; source voltage, 6 kV; capillary voltage, 49 V; source current, 5 µA. Separated carotenes were identified by their spectra, by retention times in comparison with authentic references, and by their quasi-molecular (M+1) ions. Nucleotide cofactors were subjected to electrospray ionization using a spray voltage of 5.3 kV; the capillary voltage was maintained at 49 V, and the capillary temperature was held at 350°C. For identification we used the chromatographic comparison with authentic reference substances; in addition, single reaction monitoring was used to increase analytical confidence. For this purpose, the predominant MS2 daughter fragments were determined using reference substances and used to filter those peaks that displayed the correct molecular ion and the expected MS2 fragments. Daughter fragments were: FAD, M+1 786. For fluorescence spectra, samples with a concentration of ~3.6 mg of protein/ml were measured in a Cary Eclipse spectrofluorimeter (Varian; excitation slit width, 10 nm; emission, 5 nm). Potentiometric measurements were made with a Clark-type oxygen electrode using 2 ml of standard assay mixture (25°C). For electron microscopy, the liposomes used in standard assays with and without hexane addition were negatively stained with 1% neutral phosphotungstic acid on carbon-coated copper grids and analyzed in a Philips FEI CM 10 electron microscope at 80 kV.

RESULTS

mCrtY Apoprotein Production and Purification-Initial overexpression of CrtY in the form of N-terminal thioredoxin fusion constructs resulted in largely insoluble protein. The cloning system described by Busso et al. (36) was used to increase the proportion of soluble active protein. Among the four different vectors, pHGGWA, pHNGWA, pHXGWA, and pHMGWA, the latter gave the best results, producing ~50% of the protein in inclusion bodies (estimated by 13,000 × g centrifugation), whereas the remainder sedimented at 140,000 × g. This is indicative of membrane-bound mCrtY. Complementation in lycopene-producing E. coli cells using the vector pCrtY-HMGWA showed that the fusion protein was enzymatically active: it converted the pink colonies to yellow owing to β-carotene formation. This was confirmed by HPLC analysis (system 1, not shown). The membrane-bound fusion protein was solubilized and subjected to purification by metal-ion affinity chromatography (using protein purification method 1) and GPC. Tween 20 was found to be the best-suited detergent for suppressing aggregation during the purification procedures and maintaining enzymatic activity.
Fig. 2 shows the purity of the protein at the stages of purification. mCrtY obtained after the GPC step did not exhibit relevant absorbance at >300 nm, which is consistent with the absence of dinucleotide cofactors. Because the enzyme had proven enzymatically active upon complementation in lycopene-accumulating E. coli, this apoprotein was considered to be in a native conformation and suitable for experiments involving binding of cofactors. First experiments were conducted with purified membrane fractions isolated from lycopene-producing E. coli cells. This was based on the reasoning that in bacteria, CrtY (presumably binding a redox-active dinucleotide) was bound to the plasma membrane containing the lycopene substrate and, therefore, exposed to redox-active components, e.g. as part of the respiratory chain.

mCrtY Apoprotein; Behavior at E. coli Membranes-Membrane preparations from lycopene-producing E. coli cells were obtained by differential centrifugation. The assays, containing purified mCrtY, lycopene-containing membranes, and various cofactors, were stopped after 30 min by the addition of CHCl3/MeOH. No conversion of the membrane-bound lycopene into β-carotene took place in the absence of added cofactors. Among the cofactors used, NADH and NADPH were effective, whereas the oxidized forms were not (supplemental Fig. 1A). It is worth noting that the combination of NADH and FAD led to a tripling of the conversion rate obtained with NADH alone. ATP was not effective under these conditions but was included because it had been used in lycopene cyclization assays with chromoplast stroma (27). Significant further stimulation was achieved when the assays were "contaminated" by adding back supernatant from the 140,000 × g centrifugation of wild-type or lycopene-producing E. coli cells. Because this effect was abolished upon dialysis of the supernatant, small cofactors could be relevant (supplemental Fig. 1B). Tests indicated that, first, reduced cofactors were stimulatory; second, ATP was stimulatory when combined with other cofactors; and third, a mixture of all cofactors was best. Seemingly, a combination of diverse metabolic reactions led to a reduced membrane redox component needed for mCrtY activity. The stimulation upon removal of dioxygen pointed in the same direction. In E. coli, 15 primary dehydrogenases are known (40) that are capable of oxidizing a plethora of metabolites, all reducing quinones. Consistently, glycolysis and tricarboxylic acid cycle intermediates such as glucose, fructose 6-phosphate, and succinate were stimulatory in the presence of dialyzed supernatant, whereas 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate (not delivering reduction equivalents during glycolysis) were ineffective (supplemental Fig. 1C). In sum, all means leading to respiratory chain reduction strongly facilitated the cyclization of lycopene, initially interpreted in terms of an interaction of CrtY with elements of the membrane-bound redox machinery. This is very similar to the requirements reported for a carotene isomerization reaction catalyzed by the plant enzyme CrtISO (41). Alternatively, because the respiratory chain activities yielded anaerobic conditions very rapidly (as determined potentiometrically), a direct inhibitory effect of oxygen could not be excluded. To distinguish between these two possibilities, the redox-active cofactor bound to CrtY needed to be molecularly identified.
CrtY Is a Flavoprotein Binding FAD-Preliminary purification attempts led to the isolation of the mCrtY apoprotein, which required the addition of cofactors for activity. Purification of the holoprotein required further refinement of the procedure. Protein purification method 2 (see "Experimental Procedures") allowed recovery of a yellowish protein fraction. The UV-visible spectra (Fig. 3) of such preparations were consistent with the presence of an oxidized flavin. The fluorescence emission spectra of untreated mCrtY, displayed in Fig. 3, showed two bands. One band had a λmax of 520 nm, which is typical of the flavin chromophore. As expected, the addition of dithionite, which generates non-fluorescent reduced flavin, eliminated the 520-nm flavin emission band. The addition of ferricyanide to a similar sample led to the disappearance of the 433-nm band, which is typical of NAD(P)H, reflecting the oxidation of the reduced nicotinamide to the non-fluorescent oxidized species. To distinguish between FAD/FMN and NAD(H)/NADP(H), the protein was heat-denatured, and the supernatant was lyophilized, redissolved, and applied to LC-MS-MS using HPLC system 3. Single reaction monitoring was employed, optimized for the specific identification of these cofactors. The results depicted in Fig. 4 reveal the presence of FAD as the main component besides minute amounts of FMN, which were undetectable in the UV-visible trace. The analysis also indicated the presence of NAD at low levels, in an NAD/FAD molar ratio of approximately 1:9, as determined separately by HPLC/UV-visible spectroscopic detection. No signal was observed for NADP(H) (data not shown). It could not be determined at this point whether NAD and FMN stemmed from contaminating proteins present after IMAC purification. Because GPC separation, resulting in purification to near homogeneity (Fig. 2), also led to a loss of all bound cofactors, enzymological assays needed to be employed to identify the nature of the effective CrtY cofactor.

A Reduced Flavin Cofactor Is Required for Activity-To avoid the use of redox-active biological membranes, protein-free liposomes containing lycopene were prepared. Supplementation with the mCrtY holoprotein (protein purification method 2) did not induce activity. Based on the observation of a potential inhibitory role of oxygen (see above), we developed an assay system containing reduced FAD. This was achieved in an N2 atmosphere and in the presence of freshly prepared Ti(III) citrate (~10 mM) as a reductant. Under these conditions the reaction proceeded, albeit at a low rate. A strong stimulation was achieved by the addition of 10% hexane to the incubations. This resulted in considerable turbidity produced by structural changes of the PC liposomes, which fused to produce rod-like structures (Fig. 5A). Under these conditions the lycopene substrate is assumed to become more accessible, which proved to be a reproducible effect, as shown in Fig. 5B. Catalysis was also observed, although at lower rates, when FAD was reduced photochemically (42). The rate of β-carotene formation was further optimized by adding excess FAD, which probably leads to mCrtY saturation with reduced FAD. The reaction showed a pH optimum at pH 5.8 (data not shown). To assess the role of NAD(H), apo-mCrtY was prepared by treatment with KBr as detailed under "Experimental Procedures." Removal of all cofactors was verified by UV-visible and fluorescence spectroscopy as well as by the absence of enzymatic activity.
The reconstitution of the active holo-mCrtY with increasing amounts of FAD was carried out under N2 and in the presence of excess Ti(III) citrate (Fig. 6A) and showed a stoichiometry equal to 1. Reconstitution attempts with NAD(P)H did not restore activity. Reduced FMN, on the other hand, was effective, yielding mCrtY with ≈50% of the specific activity observed with reduced FAD (data not shown). Reduced FAD, once protein-bound, appears to be protected from reoxidation (Fig. 6B). When the holoprotein (purification method 2) was subjected at 37°C to anaerobic photoreduction in the presence of excess FAD but in the absence of Ti(III) citrate, the reaction proceeded as expected (compare with Fig. 5B). Under these conditions, reduced FAD is generated. Exposure to oxygen did not abolish catalysis, although the observed rate of conversion was temporarily reduced. The latter effect is attributed to the lowered incubation temperature that occurred during the manipulation. At 37°C and in the presence of oxygen the reaction then continued at conversion rates comparable to those under N2. Because all previous experiments consistently showed activity only with reduced FAD (which, in its free form, is not stable in the presence of oxygen), this experiment indicates that reduced FAD bound to mCrtY does not react efficiently with O2. Furthermore, the flavin does not dissociate significantly from holo-mCrtY during the ≈30-min duration of the experiment. Formation of fully active mCrtY was also achieved by starting from the mCrtY-FADox holoenzyme and incubating anaerobically in the presence of 4 mM NADH. It is possible that mCrtY-bound FADox is reduced by NADH, similarly as shown for the type II IPP-DMAPP isomerase (43). On the other hand, the possibility of an FADox/FADred exchange in mCrtY was shown as follows: the mCrtY-FADox holoenzyme was placed in one chamber of a two-cell dialysis apparatus; the other chamber contained an FADred-generating system consisting of NADH, FAD, and the flavin reductase PrnF according to Unversucht et al. (44). In an N2 atmosphere, the generated FADred was able to activate mCrtY, although at a slow rate.

FIGURE 5. Lycopene cyclization took place in highly turbid assays in which lycopene (red) was converted into β-carotene (yellow). A standard assay is shown in the absence (Con) and presence of mCrtY after 1 h of incubation. B, shown is the time course of lycopene cyclization using 5 µg of holo-mCrtY isolated according to protein purification method 2 in anaerobic standard assays in the presence of different reductants. When photoreduction was used, the addition of FAD was needed to achieve activity in a concentration-dependent manner, raising the specific activity from 0.14 pmol µg mCrtY−1 min−1 (20 µM FAD) to 2.2 pmol µg mCrtY−1 min−1 (400 µM FAD). No addition of FAD was needed when Ti(III) citrate was used as the reductant to achieve an activity of 1.7 pmol µg mCrtY−1 min−1, but FAD addition (20 µM) was stimulatory, leading to an activity of 2.5 pmol µg mCrtY−1 min−1. The stimulation achieved by FAD addition indicates the presence of some apo-mCrtY. Filled symbols are for assays carried out in the absence of a reductant, both in the presence and absence of FAD, resulting in complete absence of enzymatic activity.

FIGURE 6. A, reconstitution of holo-mCrtY from the apo-form with FAD is shown. 40 µl of 6 µM apoenzyme was supplemented with different amounts of FAD to give the indicated molar ratios; buffer D was then added to arrive at 58 µl, followed by adding 2 µl of freshly prepared Ti(III) citrate as the reductant. After incubation at room temperature for 30 min, 16-µl aliquots from each incubation mixture were applied to the standard anaerobic assay system. The activity was assayed by using HPLC system 1. B, shown is the oxygen insensitivity of mCrtY-bound FADred. mCrtY (10 µg/200 µl) was irradiated for photoreduction in the presence of 400 µM FAD under standard anaerobic conditions. After 10 min the sample was removed from the anaerobic glove box, and FADred was oxidized by O2 (shaking with air). Reactions continued unaffected in the air and in the dark after equilibration of the samples at 37°C. The activity plateau is due to an unavoidable transient drop in the assay temperature during sample transfer from anaerobic to aerobic conditions.

Acid-Base Catalysis Remains the Catalytic Principle-Taken at face value, the requirement for reduced FAD is not easily reconciled with the acid-base reaction mechanism previously proposed (Ref. 31; see Fig. 1). Alternatives have been discussed (35, 45). To be reassured of the occurrence of acid-base catalysis under our experimental conditions, lycopene cyclization was studied under analogous conditions, however, in 2H2O buffer, using Ti(III) citrate as the reductant (Fig. 7). Clearly, the bicyclic β-carotene formed has two additional mass units, which is compatible with addition/abstraction of one hydrogen per ring, in accordance with a putative acid-base mechanism as the catalytic principle. The monocyclic intermediate γ-carotene with one additional mass unit was not observed. This finding, however, does not provide information on whether catalysis by reduced FAD involves cryptic redox cycles.

FIGURE 7 (legend, in part). The mass spectra recorded for lycopene (1) and β-carotene (2) are given.

The Role of Reduced Flavin in Lycopene Cyclization-Both FMNred and FADred served as cofactors for mCrtY; however, FMNred was less effective (see above). Therefore, in the present studies involving modified flavin cofactors, both forms were used depending on availability and purity. Both of the reduced deazaflavins (used as their FMN derivatives; see "Experimental Procedures" and chemical structures in Fig. 9) supported lycopene cyclization. Compared with reduced FMN (0.7 ± 0.3 pmol of β-carotene µg mCrtY−1 min−1), 5-deazaFMNred showed an even better specific activity (1.4 ± 0.2 pmol of β-carotene µg mCrtY−1 min−1), and 1-deazaFMNred was active at 0.4 ± 0.1 pmol of β-carotene µg mCrtY−1 min−1. Moreover, as shown in Fig. 8, the cyclase activity depends on the redox potential of the reduced flavin analogs (see Fig. 9 for chemical structures), the rate increasing with decreasing Em, altogether demonstrating a different role for FADred (see "Discussion"). The results obtained with reduced deazaflavins speak against a role of FADred itself as an acid-base catalyst. However, plant lycopene cyclases and the related capsanthin/capsorubin synthase carry a conserved FLEET motif necessary for activity (27). CrtY shows a 194LIEDT198 motif at an equivalent position. Therein, the glutamate was shown to be essential (B. Camara, personal communication). To assess its role, Glu196 was exchanged for Ala in the otherwise identical mCrtY fusion protein. The resultant E196A-mCrtY was purified and showed overall properties closely similar to those of wild-type mCrtY.
Specifically, it bound FAD, and the UV-visible spectra of the holoenzyme indicated that the microenvironment at the flavin site was essentially unchanged (supplemental Fig. 2A). Moreover, the CD spectra of the wild-type and mutated mCrtY apoproteins were practically identical (supplemental Fig. 2B), indicating the absence of substantial structural perturbations caused by the single amino acid exchange. In contrast to this, purified E196A-mCrtY was completely inactive in the presence of FADred. This suggests that Glu196, rather than reduced FAD, plays a role as an acid-base catalyst.

Cyclization of Carotenes Other Than All-trans Lycopene-Apart from lycopene, another candidate substrate for cyclization is the monocyclic γ-carotene (supplemental Fig. 3), which is expected to arise as a cyclization intermediate but was hardly detectable in our assays. When used as a substrate at equimolar concentrations, it was converted into β-carotene at high velocity (7.9 pmol of β-carotene µg mCrtY−1 min−1), yielding about 100% conversion in a standard assay of 20 min, during which the rate of lycopene conversion was still linear. This observation is not consistent with the idea of a "half-site" recognition of the symmetrical lycopene molecule, as at equimolar concentration, lycopene provides twice the concentration of cyclizing sites compared with γ-carotene. The desaturation intermediate neurosporene is another carotene in which the polyene configuration in one half of the molecule is identical to that met in lycopene. It is expected to form β-zeacarotene upon cyclization. Surprisingly, conversion took place only in trace amounts under standard conditions and at very prolonged incubation times. This may again indicate that CrtY accommodates the substrate as a whole and is capable of distinguishing the difference of one double bond. 15-15′-Apolycopenal was used to mimic the half-site of lycopene. It is in concordance with the absence of half-site substrate recognition that this substrate failed to be converted into retinal. Prolycopene (7,9,9′,7′-tetra-cis-lycopene), the lycopene isomer produced by the plant carotene desaturase system (45, 46), was not accepted as a substrate.

DISCUSSION

We have presented evidence that the lycopene cyclase CrtY is a flavoprotein. It employs reduced FAD as a cofactor and is also active with FMNred. Moreover, heterologously overexpressed mCrtY retains FAD when purified under mild conditions, suggesting that this flavin form is the likely cofactor. The nature of the cofactor that binds to the dinucleotide binding site in CrtY or to the related CrtL-type plant cyclases has long been enigmatic. Several cofactors have been used to account for the predicted requirement, such as NAD(P)H with CrtY (47, 48), and NADP+, NADPH, and ATP with both the C. annuum lycopene cyclase and the related capsanthin/capsorubin synthase (27). NAD(P)H was found to be essential to drive a cis-to-trans isomerization plus a cyclization reaction in Narcissus pseudonarcissus chromoplast homogenates, which was attributed to the isomerization partial reaction (35). The carotene isomerase CrtISO (49, 50) had not been identified at the time. An earlier report (51) showed FAD to be essential in one of the protein fractions obtained from spinach, whereas NADP had a stimulatory effect. In our assays, NAD(P)H was effective in the presence of membranes, strongly stimulated by additional FAD
(supplemental Fig. 1A) and/or by cytoplasmic proteins (supplemental Fig. 1, B and C). Under those conditions a multitude of cofactors and primary catabolites stimulated lycopene cyclization. This stimulation is, as we show, due to anaerobic and reducing conditions in the assays, attained by increased respiratory chain activity. In conclusion, much of the existing confusion about cofactors used by CrtY and plant cyclases is probably due to the complexity of the systems employed. In this context it is worth noting that chromoplast membranes, such as those from N. pseudonarcissus, possess an alternative redox chain that utilizes oxygen as a terminal electron acceptor, leading to anaerobic conditions at the expense of NAD(P)H (52). This suggests that the time has come to revisit the role of the NAD(P)H requirement in prolycopene cyclization.

Anaerobic conditions driving cyclization have been reported previously but were misinterpreted. The cyclization of prolycopene (7,9,9′,7′-tetra-cis-lycopene) was found to be possible only under anaerobic conditions (35, 45), which was interpreted in terms of a redirection of electrons toward the cis/trans isomerase/cyclase system to drive cryptic redox cycles instead of having an electron flux toward oxygen. Kushwaha et al. (51) noted that their soluble lycopene cyclase preparation from spinach was significantly more active in a nitrogen atmosphere. This was interpreted in terms of sulfhydryl protection from oxidation. As we show here, the function of anaerobic conditions in vitro is to establish the conditions needed to allow the formation of the mCrtY-FADred complex required for cyclization activity. Overexpressed CrtL-type lycopene cyclase from rice (OsLYCb) was shown to require the same reaction conditions for activity as CrtY, indicating that the underlying mechanisms are very similar (data not shown).

Any mechanistic consideration for CrtY must be confronted with the unquestionable recognition that (i) the flavin is necessary for catalysis, (ii) it must be in its reduced form, (iii) it does not catalyze a redox reaction, and (iv) the general mechanism relies on acid-base catalysis. The latter is in agreement with the fact that there is no change in the redox state between lycopene and β-carotene. In recent years a new mechanistic function has emerged for flavoenzymes that utilize reduced flavins but do not catalyze a net redox reaction (53). Isopentenyl diphosphate:dimethylallyl diphosphate isomerase type 2 (IDI-2), for instance, requires reduced FMN for catalysis (43). Apo-IDI-2 reconstituted with 5-deazaFMN resulted in an inactive enzyme, whereas 1-deazaFMN-IDI-2 was active. It has been proposed (55-57) that IDI-2 employs reduced FMN as an acid-base catalyst. The flavin positions N(1) and, more likely, N(5) were proposed to act as the acid/base functional groups (58). This contrasts with the present finding that mCrtY is catalytically active with both reduced 5- and 1-deazaflavins. From this, a role of the positions N(5) and N(1) and, thus, a role of the flavin as an acid/base catalyst appears very improbable. The substitution of N(5) with C(5)-H, as in 5-deazaflavins, introduces severe restraints in the capacity of the reduced form to carry out acid-base reactions. The same holds for 1-deazaflavins (59). These two deazaflavins in their reduced state can, thus, be used to assess the roles of these specific positions in acid-base catalysis.
The role of an acid-base catalyst is proposed to be played in CrtY by Glu196, alone or in conjunction with an additional, still unidentified group. Mutation of this conserved amino acid resulted in enzyme inactivation, and reduced FAD was not able to carry out the reaction, i.e. to act as an acid-base catalyst. In this context it is interesting to note that Laupitz et al. (60) have identified a patch of conserved amino acid residues (His147, Asn149, Gln152, and Glu153) in close proximity to the FMN binding site in the mechanistically related IDI-2. In CrtY an analogous group could represent the second acid/base required.

As a further mechanistic variant, cryptic net one-electron transfer, previously suggested for IDI-2 (43) and later revoked (56), is also unlikely in CrtY. This deduction is based on the present finding that reduced 5-deazaflavin is at least as good a catalyst as normal reduced flavin (Fig. 8). However, for thermodynamic reasons, 5-deazaflavins are not prone to form radical species. They are hindered in redox catalysis due to the kinetic stability of the C(5)-H2 function and are, thus, considered to be "half-dead" redox cofactors for several types of catalysis (61-63). The interpretation and contraposition of data from IDI-2 and CrtY thus present a dilemma: are the underlying mechanisms different despite the apparent similarities with respect to the requirement for reduced flavin and the absence of redox changes? To investigate, we took advantage of the fact that the introduction of substituents with specific properties into the isoalloxazine system affects its redox potential (32, 64), thus enabling the determination of linear free energy relationships (32). Because the introduction of modifications can have steric and chemical consequences, experience suggests modifications at a single position in the isoalloxazine ring system. The flavin position C(8) has proven to be the least critical (Fig. 9) and most sensitive, as it is placed para to the site of redox catalysis, N(5)-C(4a). The experiment shown in Fig. 8 is based on the long-established linear free energy relationship concept, according to which the rate of a reaction proceeding via a charged transient state will correlate with the electron-donating/accepting properties of the involved molecules. A linear correlation between the 1e− oxidation potential of the reduced flavin and the Em has previously been demonstrated for a series of flavins carrying different substituents at position C(8) (64). The results of Fig. 8 show that the rate of lycopene cyclization increases with increasing "electron-donating properties" (i.e. with decreasing Em) of the reduced flavin analogs. This is compatible with "transfer of negative charge" from the reduced flavin in the transition state. In view of this, we concur in a mechanistic interpretation with the basic concept that was formulated by Laupitz et al. (60) for the IDI-2 reaction, "… the cofactor might act as a dipole stabilizing a cationic intermediate or transition state of the reaction." However, it is necessary to extend this concept to read, "… the (anionic) reduced flavin cofactor might stabilize a cationic intermediate or transition state of the reaction." The mechanism of Arabidopsis lycopene cyclases has been speculated to be similar (65). We thus think that the present data can be interpreted in terms of Scheme 1. The reaction is initiated by formation of a π-complex between the reduced flavin in its anionized form and lycopene.
Two variants can be envisaged. In one, shown on top, an acidic group (~B1HS+) carrying a solvent-borne hydrogen (HS) interacts with the lycopene C(1)=C(2) double bond, whereas, concomitantly, a base initiates a nucleophilic attack on HC. In the ensuing transition state, the orbitals of the (partially) positively charged lycopene overlap with those of the negatively charged reduced flavin in a charge transfer complex. In the second variant (bottom structures), the reaction proceeds via a definite intermediate in which a positive charge is located either at the lycopene position C(1) or C(5), as in the original formulation by Britton et al. (31). The stereochemistry of orbital overlap (Scheme 1) is formulated in analogy to that proposed by Arigoni et al. (54) for ring formation in the biosynthesis of lutein. Note that cyclization goes along with the formal transfer of an H+ from base B1 to base B2. One of these could be Glu196. Although it is assumed that the species involved interact face to face via a π-complex, the further orientation of the molecules is arbitrary, and only the flavin orbitals that carry the largest negative charge density are shown. To further validate the model, attempts to crystallize the mCrtY-FAD-lycopene complex are currently under way.
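As an aside on how the linear free energy relationship of Fig. 8 can be quantified: the analysis amounts to regressing the (logarithm of the) cyclization rate on the midpoint potential Em of the C(8)-substituted flavin analogs. The sketch below illustrates such a fit; the (Em, rate) pairs are placeholder values chosen only to show the expected trend and are not the measured data of Fig. 8.

```python
# Illustration of a linear free energy correlation: log(rate) versus the flavin
# midpoint potential Em for a series of C(8)-substituted flavin analogs.
# The numbers below are placeholders for illustration, NOT the data of Fig. 8.
import numpy as np

em_mV = np.array([-290.0, -260.0, -230.0, -200.0, -150.0])  # assumed Em values (mV)
rate = np.array([2.4, 1.9, 1.5, 1.1, 0.6])                  # assumed rates (pmol µg^-1 min^-1)

slope, intercept = np.polyfit(em_mV, np.log10(rate), 1)
r = np.corrcoef(em_mV, np.log10(rate))[0, 1]

# A negative slope (higher rate at lower, i.e. more reducing, Em) is the trend
# predicted by the charge transfer interpretation discussed in the text.
print(f"slope = {slope:.4f} log10(rate) per mV, r = {r:.3f}")
```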
Determinants of access to antenatal care and birth outcomes in Kumasi, Ghana

Abstract
This study aimed to investigate factors that influence antenatal care utilization and their association with adverse pregnancy outcomes (defined as low birth weight, stillbirth, preterm delivery or small for gestational age) among pregnant women in Kumasi. A quantitative cross-sectional study was conducted of 643 women aged 19-48 years who presented for delivery at selected public hospitals and private traditional birth attendants from July-November 2011. Participants' information and factors influencing antenatal attendance were collected using a structured questionnaire and antenatal records. Associations between these factors and adverse pregnancy outcomes were assessed using chi-square tests and logistic regression. Nineteen percent of the women experienced an adverse pregnancy outcome. For 49% of the women, cost influenced their antenatal attendance. Cost was associated with an increased likelihood of a woman experiencing an adverse outcome (adjusted OR = 2.15; 95% CI = 1.16-3.99; p = 0.016). Also, women with >5 births had an increased likelihood of an adverse outcome compared with women with single deliveries (adjusted OR = 3.77; 95% CI = 1.50-9.53; p = 0.005). The prevalence of adverse outcomes was lower than previously reported (19% versus 44.6%). Cost and distance were associated with adverse outcomes after adjusting for confounders. Cost and distance could be minimized through a wider application of the Ghana National Health Insurance Scheme.

Introduction
There is wide recognition that one of the major factors contributing to the high rate of adverse birth outcomes is the low use of prenatal and maternal health services [1,2]. Antenatal care (ANC) remains one of the Safe Motherhood interventions that, if properly implemented, has the potential to significantly reduce maternal and perinatal mortalities [3]. The antenatal period presents opportunities for reaching pregnant women with interventions to maximize maternal and neonatal health [4,5]. Regular ANC visits provide health personnel with an opportunity to manage the pregnancy.
It is a period during which a variety of services, such as treatment of pregnancy-induced hypertension, tetanus immunization [6-8], prophylaxis and micronutrient supplementation, are provided [5,9]. These measures have been shown to be effective in improving pregnancy and neonatal outcomes [10]. A 44.6% prevalence of adverse pregnancy outcomes has been reported among pregnant women in Kumasi, Ghana [11]. This high prevalence could be a result of barriers associated with accessing ANC services. To address some of these barriers, the government of Ghana established the National Health Insurance Scheme (NHIS) in 2003 to replace the previous "cash-and-carry" system. The goal was to provide essential health services without out-of-pocket payment at the point of service. In this scheme, the 'core poor', defined as being unemployed with no visible source of income and no fixed residence, were exempt from paying insurance premiums. People who were not living in a household with someone who was employed and had a fixed residence were also exempt [12]. While the insurance scheme was intended to achieve universal coverage, only a small percentage of eligible women, especially pregnant women, were enrolled in the program. To address this inequality, pregnant women were exempted from paying the insurance premiums beginning in 2008 [13]. Under the free maternal care policy, maternal and prenatal care are covered [14].

While ANC in developed countries is characterized by a high number of antenatal visits and early attendance, the opposite holds in developing countries, with fewer, late or no antenatal visits [3]. A study in Kenya indicated that 52.5% of women in rural areas and 49.2% in urban settings attended ANC once prior to delivery and that the first ANC visit was after 28 weeks of pregnancy [15]. In Ghana, 85% attended at least one antenatal visit with a skilled provider before delivery. Seventy-three percent of pregnant women in urban areas and 55% in rural areas attended 4 or more antenatal visits [6,16]. Though it has been reported that up to 40% of pregnant women in developing countries receive no ANC [17], a study in Ghana reported that 14% of women did not attend ANC at all [6].

Different factors influence the healthcare-seeking behavior of pregnant women [18]. These factors could be organizational, such as the availability of services, or socio-demographic [9,19]. Socio-demographic characteristics, such as education, occupation and number of children, were related to the use of ANC services in Vietnam [20,21]. In Punjab, Pakistan, family finances and the woman's level of education were important determinants of ANC use [22]. In Nigeria, perceived quality of care was one of the factors responsible for the low utilization rate of ANC services in tertiary institutions in the Southwest part of the country [3]. The reasons why some women in sub-Saharan countries, including Ghana, do not seek or get adequate ANC are not obvious. In order to improve the planning and provision of ANC services, it is important to understand perceived or apparent barriers to ANC services. This will enable the formulation and implementation of interventions that will sustain ANC utilization [3,9]. The objective of this study was to investigate the factors that influence the utilization of ANC services among pregnant women in Kumasi and determine if these factors are associated with adverse pregnancy outcomes.
Study setting
A quantitative cross-sectional study was conducted to investigate factors that influence participation in ANC services and their association with adverse pregnancy outcomes in Kumasi.

Participants
Eligible participants were pregnant women, 19 years and older, who resided in Kumasi at the time of conception or moved to Kumasi within 1-2 months following conception and presented to the study hospitals or traditional birth attendants (TBAs) for delivery. Women with singleton, spontaneous, vaginal deliveries occurring without complications between July and November 2011 were eligible for enrollment in this study. Women with pregnancy-induced hypertension or pre-eclampsia were excluded because this condition would cause them to attend more than the required number of ANC visits. Potential participants who presented for delivery at the study health facilities were informed of the study by the attending midwives during their admission to the labor ward, while the TBAs informed their clients. Informed consent was obtained from all participants. Data from 643 of the 647 women were used for this study. Trained study personnel administered questionnaires to the participants 1-2 h following their delivery. Participants were questioned in a private area, no identifying information was recorded and confidentiality was assured. Questionnaires were reviewed for completeness. The Institutional Review Board of the University of Alabama at Birmingham, USA, and the Committee on Human Research, Publications and Ethics, School of Medical Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana, approved the study protocol.

Data collection
A 92-item structured questionnaire was used to ascertain information on: (1) socio-demographics, (2) obstetric and reproductive history, (3) occupation and lifestyle factors, (4) ANC services and treatment received, and (5) perception of quality of ANC services received and level of satisfaction. The socio-demographic section was adapted from the Malaria Monitoring and Evaluation Group [23]. It included questions about health insurance and duration of the insurance. Prior to the commencement of the study, the entire questionnaire was reviewed by six senior midwives for content validity and cultural sensitivity. To improve its reliability, the validated instrument was pre-tested on five pregnant women attending ANC and six new mothers. Following pre-test modifications, twelve new mothers who met the study eligibility requirements pilot tested the questionnaire. The questionnaire was modified accordingly before use.

Primary exposure of interest
ANC attendance was assessed using data abstracted from the maternal antenatal booklet and responses to the following questions:
1. How many times did you attend antenatal clinic?
2. Did you know you had to attend at least 8 times?
3. Did you know you had to attend a total of 13 times?
Barriers to ANC attendance were assessed by asking women whether they did not attend the expected number of antenatal clinic visits because of any of the following reasons (see the coding sketch below): (a) I did not know I had to attend that many times; (b) I could not afford it; (c) lack of insurance; (d) no time to attend; (e) I have had other children without any problems; (f) I was not sick; (g) hospital too far from where I live; (h) I do not like the attitude of the hospital staff; (i) fear of knowing my HIV status; (j) cultural beliefs; and (k) lack of confidence in the services provided.
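For the analysis described under "Data analysis" below, each of the reasons (a)-(k) is naturally treated as a binary indicator and cross-tabulated against the outcome. The following sketch shows that coding and a chi-square test; it uses Python/pandas purely for illustration (the study itself used SAS), and the variable names and example records are assumptions rather than study data.

```python
# Sketch of coding the barrier items (a)-(k) as 0/1 indicators and testing one
# of them against any adverse outcome with a chi-square test.
# Python/pandas for illustration only (the study used SAS 9.2); the column
# names and the five example records are assumed, not study data.
import pandas as pd
from scipy.stats import chi2_contingency

raw = pd.DataFrame({
    "barriers": [["cost", "distance"], [], ["not_sick"], ["cost"], []],
    "adverse_outcome": [1, 0, 0, 1, 0],
})

BARRIER_ITEMS = [
    "unaware_of_requirement", "cost", "no_insurance", "no_time",
    "prior_uneventful_births", "not_sick", "distance", "staff_attitude",
    "fear_hiv_status", "cultural_beliefs", "no_confidence_in_services",
]

for item in BARRIER_ITEMS:                      # one indicator column per barrier
    raw[item] = raw["barriers"].apply(lambda answers: int(item in answers))

# 2x2 table of one barrier (cost) against adverse outcome, then a chi-square test.
table = pd.crosstab(raw["cost"], raw["adverse_outcome"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```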
Primary outcome of interest
Any adverse outcome was defined as one or more of the following: low birth weight (birth weight <2500 g), preterm delivery (<37 weeks of gestation), small for gestational age (sex-specific birth weight at or below the 10th percentile for the weight-for-gestational age of an international reference population) [8], or stillbirth. Stillbirth was defined as death of an infant more than 12 h prior to or within 12 h of delivery. Information on low birth weight, small for gestational age and stillbirth was ascertained from the maternity record at delivery and before discharge to the "Lying-In Ward." Determination of preterm delivery was based on the response to the question on duration of pregnancy.

Data analysis
The data were individually entered into a Microsoft Access 2010 database and imported to SAS. Descriptive statistics of the study participants were computed as frequency distributions (character variables) and means and standard deviations (numeric variables). Associations between participant characteristics and pregnancy outcomes were assessed using chi-square or Fisher's exact tests. ANC attendance was categorized as <7 or 8-13 times (Ghana's standard). Associations between barriers, ANC attendance and adverse pregnancy outcomes were examined using the chi-square test. Two multivariable models were used to assess the association between the identified barriers and adverse pregnancy outcomes. In the first multivariable model, all the variables in the bivariate model were included irrespective of their level of significance. In the second multivariable model, all variables with a p-value ≤ 0.20 from the first multivariable model, or that were biologically plausible, were included while adjusting for age, marital status and level of education. The change-in-estimate criterion was used to select potential confounders. A variable was considered a confounder if the change in estimate between the crude and adjusted model was at least 10 percent [24]. Crude and adjusted odds ratios (ORs) with 95% confidence intervals (CI) and p-values were calculated using logistic regression. All tests were two-sided and p-values ≤ 0.05 were considered statistically significant. SAS 9.2 (SAS Institute, Cary, NC, USA) was used for analyses.

Participants' characteristics
The participation rate was 99.7%. Three participants were recruited through the TBAs, while 73.7% (474/643) and 25.8% (166/643) were recruited from Komfo Anokye Teaching Hospital (KATH) and Manhyia, respectively. Participant characteristics are presented in Table 1. Mean (±standard deviation [SD]) age was 28 (±5.7) years and ranged from 19 to 48 years. Ten percent of the unemployed were housewives and 9.6% were students. Most of the self-employed were traders (58%) and hairdressers or seamstresses (30.3%). Thirty-eight percent of women with >5 children experienced an adverse outcome compared with women with 2-5 children (22.4%).

Obstetric history of participants
One hundred and twenty-two participants (19.0%) experienced an adverse outcome. The proportion of the various adverse outcomes is shown in Table 1. About 15% (33/226) of primiparas experienced an adverse event compared with 33.3% (139/417) of the multiparas. Sixty-one percent were multiparas (2-5 children) and 3.7% were considered grande multiparas (>5 children). Women 36 years or older with a primary level of education or no formal education were more likely to experience an adverse outcome. They were also likely to be poor, with a monthly income of less than GH¢500.00, or to have had insurance for only 3 months prior to delivery.
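The crude versus adjusted odds-ratio modeling and the 10% change-in-estimate confounder check described under "Data analysis" can be illustrated with a short sketch. The original analyses were run in SAS 9.2; the Python version below is only a minimal rendering of the same logic under assumptions, and the file name and variable names (adverse_outcome, cost_barrier, age, education, marital_status) are hypothetical placeholders rather than the study's actual dataset.

```python
# Minimal sketch (not the study's code): crude vs. adjusted odds ratios for one barrier
# and the 10% change-in-estimate confounder check. All column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def odds_ratio(model, term):
    """Return the odds ratio and its 95% confidence interval for one model term."""
    or_ = np.exp(model.params[term])
    lo, hi = np.exp(model.conf_int().loc[term])
    return or_, lo, hi

df = pd.read_csv("anc_survey.csv")  # hypothetical analysis file

# Crude model: barrier only.
crude = smf.logit("adverse_outcome ~ cost_barrier", data=df).fit(disp=False)

# Adjusted model: barrier plus the a priori covariates named in the text.
adjusted = smf.logit(
    "adverse_outcome ~ cost_barrier + age + C(education) + C(marital_status)",
    data=df,
).fit(disp=False)

or_crude, *_ = odds_ratio(crude, "cost_barrier")
or_adj, ci_lo, ci_hi = odds_ratio(adjusted, "cost_barrier")

# Change-in-estimate criterion: flag confounding if adjustment shifts the OR by >= 10%.
change = abs(or_adj - or_crude) / or_crude
print(f"crude OR = {or_crude:.2f}; adjusted OR = {or_adj:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
print("confounding by the >=10% change-in-estimate rule:", change >= 0.10)
```

In the study's terms, covariates whose inclusion shifts the barrier's odds ratio by at least 10% would be retained as confounders, and two-sided p-values ≤ 0.05 would be read as statistically significant.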
Determinants of ANC attendance
Women who attended >13 ANC visits were excluded, since 8-13 ANC visits are required. Data for 574 participants were used for this analysis. Approximately 1.1% (7/643) of the women did not attend ANC. Ten percent (66/643) attended 1-3 visits, 45.9% attended 4-7 times and 42.8% attended 8-13 ANC visits. A summary of the reasons for inadequate ANC visits, the number of ANC visits attended and pregnancy outcomes is presented in Table 2. Cost, lack of insurance, being unaware of the pregnancy, and not being sick were reasons significantly associated with ANC attendance. Only cost was statistically associated with pregnancy outcomes. Distance and cultural beliefs were marginally associated with pregnancy outcomes. In a cross-tabulation of identified barriers with age and level of education, women who said cost was a factor were more likely to be younger (19-25 years) (p = 0.003) and to have a primary school or no formal education (p = 0.008). For 62.5% of women 19-25 years, fear of knowing their HIV status (p = 0.038) was another reason for inadequate ANC attendance.

Adverse pregnancy outcomes by barriers
The association between adverse pregnancy outcomes and barriers to ANC attendance is shown in Table 3. Cost was associated with an increased likelihood of a woman experiencing an adverse outcome (OR = 1.92, 95% CI = 1.11-3.33; p = 0.020) (crude model). In Model 1, the association between cost and adverse outcome remained significant (adjusted OR = 2.15; 95% CI = 1.16-3.99; p = 0.016). Having 2 or more children was significantly associated with a woman experiencing an adverse outcome, and the strength of the association increased with increasing parity. Women with >5 prior deliveries were more likely to experience an adverse outcome compared with women with a single delivery (OR = 3.33; 95% CI = 1.35-8.17) (Model 1). In Model 2, women with >5 deliveries were nearly 4 times more likely to experience an adverse outcome compared with women with one delivery (adjusted OR = 3.77, 95% CI = 1.50-9.53). The associations of distance to hospital and cultural beliefs with adverse outcomes were not statistically significant in the crude model. However, women who did not attend the required number of antenatal visits due to distance or cultural beliefs were twice as likely to experience an adverse outcome compared with women whose attendance was not influenced by these factors (OR = 2.02, 95% CI = 0.96-4.25; OR = 2.59, 95% CI = 0.95-7.08). After adjusting for age, level of education and marital status, cost remained statistically significant and distance was of borderline significance. Women whose ANC attendance was influenced by cost or distance were about twice as likely to experience an adverse outcome compared with women whose attendance was not influenced by these factors (adjusted OR = 1.86, 95% CI = 1.04-3.32, p = 0.035; adjusted OR = 2.24, 95% CI = 1.00-5.03, p = 0.051) (Model 2).

Discussion
Identifying non-geographic and modifiable barriers to ANC is important for policy formulation. Results from this study suggest that cost, parity and distance influence ANC attendance and are also associated with adverse pregnancy outcomes. These factors could be contributing to adverse outcomes by limiting the number of ANC visits attended and consequently the services obtained. In a prior study in Ghana, the cost incurred while accessing ANC services was partly due to consultation fees and drugs [6]. The introduction of the NHIS in 2003 mandated that insured pregnant women get free antenatal services [14].
It has been reported that women insured by the present insurance scheme were more likely to use prenatal care and less likely to experience birth complications, while the uninsured were more likely to delay seeking ANC and to develop obstetric complications [14]. This study did not investigate the costs associated with ANC attendance. However, cost may be related to travel and unofficial fees [25]. Cost could also include feeding expenses for the pregnant woman, more so if she was accompanied by a family member. Buying drugs and supplies that were not provided or not covered by the NHIS could also contribute to cost. Cost was also cited as an obstacle to enrolling women in the NHIS [14,26]. To avoid the long wait times in public facilities, some of these women may have ended up in private or maternity home facilities. The fees charged could be high and may determine how many times a woman attends ANC. Cost as a determinant is reinforced by the fact that 49.2% of these women had a primary level or no formal education and had low incomes. The level of education of the pregnant woman [7,8], and that of her husband, has been shown to be a barrier to accessing ANC even in developed countries [27]. A higher level of education would increase the woman's knowledge and awareness of antenatal services and of the consequences of not using them. This knowledge could influence her healthcare decision-making. Lack of knowledge of obstetric complications was associated with underutilization of antenatal services in Indonesia [7]. Similar studies involving Planned Parenthood and other healthcare services in Metro Cebu, Philippines and in Haiti observed that maternal education was the most consistent and important determinant of ANC use [28-30]. Educational level was a strong determinant of enrollment in the NHIS, and those with less education were less likely to enroll [14]. Designing health education programs that take into consideration those with no formal or basic education would likely increase ANC utilization and reduce adverse birth outcomes. Educating women on the dangers of inadequate ANC utilization may be the best way to encourage ANC use [28]. There are many radio and television stations in Kumasi that broadcast health programs. Including antenatal health education programs and increasing their frequency of broadcast could increase the uptake of ANC services. Cell phone ownership in Ghana is high and most phones are fitted with radio or TV receivers. This approach may be more convenient for some of these women (traders, seamstresses and hairdressers), who spend a significant amount of time in the market every day and may be less aware of the dangers of inadequate ANC attendance. Exposure to mass media was seen to increase the odds of women seeking ANC in India [19], while less exposure to mass media was associated with underutilization of ANC services in Indonesia [7]. When a woman goes for antenatal care, the next ANC visit date is usually indicated in the maternal antenatal booklet. This is helpful, but can only be meaningful if the woman initiates ANC early; this may explain why many of the women indicated attending ANC as requested. The distance traveled by some of the women to the hospital or health center for ANC could be substantial.
While this study did not investigate participants' distance to the point of ANC service, distances longer than 3-5 km are deterrents to seeking ANC [9]. Even when distance was cited as a barrier to ANC use in Kenya, 18% of women still did not visit the nearest ANC facility [25]. The women in this study, as in the Kenyan study, could also be considering the quality of care that is offered at their preferred point of service. Some of these women would prefer KATH (a referral hospital), where complications, if any, could be identified easily and early. The absence of comfortable transportation and the pregnant woman's physical inability to walk or travel long distances could be reasons why distance was considered a factor by the women in this study. One study observed that eliminating travel distance to ANC increased demand for sufficient care [6]. Not all private health providers accept the government insurance. A policy that facilitates acceptance of this insurance by all providers would offer pregnant women the choice of using either a private or a public facility, taking distance into consideration. This choice could minimize the barriers of cost and distance, thereby increasing access to ANC, which may lower the prevalence of adverse outcomes. The associations with adverse outcomes may be confounded by both cost and distance: women who do not attend the required number of ANC visits are more likely to be poor and malnourished, and they may also be living far from the maternity center or hospital and may not be able to arrive in time for obstetrical intervention to save the pregnancy. High parity was associated with adverse pregnancy outcomes. This finding is supported by studies in rural north India and Indonesia, which found an association between high parity and reduced ANC use. Women who have experienced a previous pregnancy without complications may feel little need to seek care. Also, practical issues of attending a health facility when caring for children may influence ANC attendance [7]. In India, it was found that women with many children were less likely to use ANC services [19]. Despite the barriers, this study observed a low prevalence (19%) of adverse pregnancy outcomes compared with a previous report of 44.6% [11]. This lower prevalence could be due to the introduction of the NHIS in 2003 and the changes made to the antenatal protocol in 2005 that provided for prophylactic treatment for malaria and intestinal helminths (infections that have been consistently linked with adverse pregnancy outcomes). The drop in the prevalence of adverse outcomes between the Yatich et al. study and this study could also reflect the fact that the data for Yatich et al. were collected in 2006 while the data for the current study were collected in 2011; the changes in both insurance and preventive treatment were not in full effect in 2006 but were in effect in 2011 [11]. The rate of ANC attendance in this study is very high. Also, the proportion of women who did not attend ANC is lower (1.1%) compared with what was reported by Overbosch et al. (14%) [6]. The adverse pregnancy outcomes still observed in this population despite the high ANC attendance cannot be explained by ANC attendance alone. Environmental, nutritional/metabolic, disease or genetic conditions could be playing a part in maintaining the prevalence of adverse outcomes observed in this population. The content, quality and effectiveness of ANC services should be investigated.
This study was done in two hospitals that provide broad coverage of antenatal services not only to the people of Kumasi but also to the entire Ashanti Region and the surrounding Brong-Ahafo, Central and Western regions. There is a dearth of information on the psychosocial and socioeconomic factors that influence the uptake of antenatal services and their impact on pregnancy outcomes in Kumasi, Ghana. Studies on risk factors for adverse pregnancy outcomes have mostly focused on family wealth and infectious diseases. To date, no study has assessed a wide array of psychosocial factors that influence ANC utilization and their association with birth outcomes. One study examined the association of family wealth with access to ANC, while another investigated the association of family wealth with antepartum and intrapartum stillbirth [6,31]. This study investigated factors influencing ANC utilization and their association not only with stillbirth but also with preterm delivery, low birth weight and small for gestational age. Though working with a similar sub-population (women with uncomplicated pregnancies), the study by Yatich et al. and this study examined different risk factors for adverse pregnancy outcomes: the study by Yatich et al. examined the impact of parasitic infections on stillbirths, while this study investigated the barriers associated with access to ANC services and their impact on pregnancy outcomes. The findings could be representative of women with uncomplicated pregnancies in Kumasi, since these two facilities serve people from all walks of life. This study corroborates other studies and reinforces the need for concerted action in addressing the persistent issues of cost and distance and the role of health education in accessing ANC. However, this study does not establish causality and is limited to cross-sectional interpretation. Excluding women who did not meet the eligibility criteria might have affected the observed prevalence of adverse outcomes, which may not reflect the true prevalence in the entire population. Reasons for not attending ANC visits are not usually recorded in the maternal antenatal booklet. There is also the problem of recall bias, since an unfavorable outcome could influence a participant's response; this bias could be limited to women with stillbirths. Recall bias could also lead to misclassification of preterm delivery, since the duration of pregnancy was self-reported. Though the findings of this study suggest a low prevalence of adverse outcomes compared with that of a prior study, the results should be interpreted with some caution considering the above limitations.

Conclusion
Cost, distance and high parity were identified as factors contributing to inadequate utilization of ANC services. These factors were also associated with adverse pregnancy outcomes, although the associations are limited to cross-sectional interpretation. Minimizing cost and distance through a wider application of the NHIS and increasing awareness through antenatal health education could increase the use of antenatal services and further lower the prevalence of adverse pregnancy outcomes.

Conflict of interest
None declared.
Standards for the diagnosis and management of complex regional pain syndrome: Results of a European Pain Federation task force

Abstract
Background: Complex regional pain syndrome is a painful and disabling post-traumatic primary pain disorder. Acute and chronic complex regional pain syndrome (CRPS) are major clinical challenges. In Europe, progress is hampered by significant heterogeneity in clinical practice. We sought to establish standards for the diagnosis and management of CRPS.
Methods: The European Pain Federation established a pan-European task force of experts in CRPS who followed a four-stage consensus challenge process to produce mandatory quality standards worded as grammatically imperative (must-do) statements.
Results: We developed 17 standards in 8 areas of care. There are 2 standards in diagnosis, 1 in multidisciplinary care, 1 in assessment, 3 for care pathways, 1 in information and education, 4 in pain management, 3 in physical rehabilitation and 2 on distress management. The standards are presented and summarized, and their generation and consequences are discussed. Also presented are domains of practice for which no agreement on a standard could be reached. Areas of research needed to improve the validity and uptake of these standards are discussed.
Conclusion: The European Pain Federation task force presents 17 standards for the diagnosis and management of CRPS for use in Europe. These are considered achievable for most countries and aspirational for a minority of countries depending on their healthcare resources and structures.
Significance: This position statement summarizes expert opinion on acceptable standards for CRPS care in Europe.

The clinical presentations of CRPS vary enormously between patients (Figure 1). For example, the affected limb may appear hot and red, or cold and blue; these symptoms and signs can also fluctuate in any single patient over time. Patients often also report disordered spatial awareness, and bodily and limb agency distortions (Lewis, Kersten, McCabe, McPherson, & Blake, 2007). The aetiology of CRPS is likely multifactorial. It is thought to be pathological, not psychopathological, in origin (Beerthuizen et al., 2012; Beerthuizen, van 't Spijker, Huygen, Klein, & de Wit, 2009). Most patients will improve over time (de Mos et al., 2009; Zyluk, 1998), although appropriate management very likely hastens recovery (Gillespie, Cowell, Cheung, & Brown, 2016). However, full recovery is less common, and many patients will be left with varying degrees of persistent pain and functional impairment (Bean, Johnson, Heiss-Dunlop, & Kydd, 2016). For some people, CRPS may become a long-lasting, highly disabling and distressing chronic pain condition. The costs of CRPS are significant at a personal, familial and societal level (Kemler & Furnee, 2002; van Velzen et al., 2014). In 2016, the European Pain Federation convened a CRPS Task Force to support the development of best care for these patients throughout Europe. The Task Force members were CRPS experts with geographical and professional representation within Europe, together with a patient representative. As its first objective, the Task Force was asked to develop standards that could guide minimally acceptable levels of CRPS care applicable across a diversity of healthcare structures and economies within Europe.
Some European countries have developed their own guidelines for CRPS care (Birklein, Humm et al., 2018; Ceruso et al., 2014; Goebel et al., 2018; Perez et al., 2014); however, adoption by additional countries is often impeded by differences in healthcare economics and structures. The application of standards can go some way towards establishing a first common position. We recognize that terms such as "standards," "guidelines," "policy" and "procedure" are often used interchangeably, and currently there is no internationally agreed definition for the term "standards" as applied to health care. For our purposes, we considered the UK Faculty of Pain Medicine interpretation of standards, as applied to pain: "Standards must be followed. Standards aim to represent current best practice in pain management as published in relevant literature and/or agreed by a body of experts" (Grady et al., 2015, p. 8). Notably, standards can change over time (Figure 2). Standards can act as a benchmark, but can also be utilized as a tool for healthcare professionals, commissioners and policymakers in the identification and appropriate allocation of resources.

| METHODS
Our development process followed that outlined by the UK National Institute for Health and Care Excellence (NICE, see Supporting Information Appendix S1). A patient-member (IT) provided service user perspectives. The CRPS standards were derived through discussion and a process of consolidation and challenge which had four stages. First, we took account of the evidence from recently published systematic reviews (Duong, Bravo, Todd, & Finlayson, 2018; O'Connell, Wand, McAuley, Marston, & Moseley, 2013). Second, a draft document outlining the domains of practice and the likely areas of difference was produced and discussed in e-mail and telephone discussions from November 2016 to May 2017. Third, we convened a one-day face-to-face meeting in June 2017. The focus of the meeting was to seek agreement among the members of the Task Force on the areas of practice. A "challenge" process was developed in which we drafted the standards as grammatically imperative (must-do) statements. This presentation of each standard of care as mandatory was useful because it forced members to think about exceptional cases or alternatives. Finally, a consolidated draft document of the standards was produced. Each member of the group had one more opportunity to veto any highly contentious area and suggest further changes. No veto was enacted.
The resulting standards were considered achievable for most countries and aspirational for a minority of countries depending on their healthcare resources and structures (Eccleston, Wells, & Morlion, 2018).

| RESULTS
We developed 17 standards, highlighted in italics, in 8 areas of care. There are 2 standards in diagnosis, 1 in multidisciplinarity, 1 in assessment, 3 in care pathways, 1 in information and education, 4 in pain management, 3 in physical rehabilitation and 2 in distress management.

FIGURE 1 Budapest Diagnostic Criteria for CRPS. Notes: (1) If the patient has a lower number of signs or symptoms, or no signs, but the signs and/or symptoms cannot be explained by another diagnosis, "CRPS-NOS" (not otherwise specified) can be diagnosed. This includes patients who had documented CRPS signs/symptoms in the past. (2) If A, B, C and D above are all ticked, please diagnose CRPS. If in doubt, or for confirmation, please refer to your local specialist. (3) Psychological findings, such as anxiety, depression or psychosis, do not preclude the diagnosis of CRPS. (4) Distinction between CRPS type 1 (no nerve injury) and CRPS type 2 (major nerve injury) is possible, but has little relevance for treatment. Explanation of terms: "Hyperalgesia" is when a normally painful sensation (e.g., from a pinprick) is more painful than normal; "allodynia" is when a normally not painful sensation (e.g., from touching the skin) is now painful; and "hyperaesthesia" is when the skin is more sensitive to a sensation than normal. A special feature in CRPS: in category 4, the decreased range of motion/weakness is not always due to pain. It is also not necessarily due to nerve damage or a joint or skin problem. Some patients' experience of an inability to move their limb may be due to as yet poorly understood disturbed motor coordination, which can be reversible. A helpful question to assess this feature is: "If I had a magic wand to take your pain away, could you then move your… (e.g., fingers)?" Many patients will answer "no" to that question. Unusual CRPS: around 5% of patients cannot recall a specific trauma or may report that their CRPS developed with an everyday activity such as walking or typing. In very few people, CRPS can have a bilateral onset. In some patients, CRPS can spread to involve other limbs. Around 15% of CRPS cases do not improve after 2 years. It is appropriate to make the diagnosis of CRPS in these unusual cases.

3.1 | The diagnosis of complex regional pain syndrome
Complex regional pain syndrome is diagnosed according to the "New IASP Criteria" (sensitivity: 0.99; specificity: 0.68 for the "clinical" criteria) (Harden & Bruehl, 2005; Harden et al., 2010), which are sometimes also referred to as the "Budapest criteria" (Figure 1). The use of these criteria requires some degree of prior belief that the condition is likely to be CRPS, that is, the patient has a regional affection of the distal extremity that does not correspond to a nerve innervation territory. As an exception, the rare subtype of CRPS II after nerve injury can sometimes correspond to the injured nerve's innervation territory. These criteria stipulate that CRPS is a diagnosis of exclusion, and alternative ("differential") diagnoses are provided in Box 1. Uncertainty about the diagnosis can be distressing to patients and may lead to inappropriate treatment.
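The decision logic summarized in Figure 1 can be expressed compactly. The sketch below is only an illustrative rendering of the published clinical thresholds (continuing disproportionate pain; at least one symptom in three or more of the four categories; at least one sign in two or more categories at evaluation; no better explaining diagnosis); the class and field names are hypothetical and this is not a validated diagnostic instrument.

```python
# Hedged sketch of the clinical ("Budapest") criteria decision logic from Figure 1.
from dataclasses import dataclass

CATEGORIES = ("sensory", "vasomotor", "sudomotor_oedema", "motor_trophic")

@dataclass
class CrpsChecklist:
    pain_disproportionate: bool   # A: continuing pain disproportionate to the inciting event
    symptom_categories: set       # B: categories with at least one reported symptom
    sign_categories: set          # C: categories with at least one sign at evaluation
    no_better_explanation: bool   # D: no other diagnosis better explains the picture

def meets_clinical_criteria(c: CrpsChecklist) -> bool:
    # Clinical criteria: symptoms in >=3 of 4 categories and signs in >=2 of 4 categories.
    return (
        c.pain_disproportionate
        and len(c.symptom_categories & set(CATEGORIES)) >= 3
        and len(c.sign_categories & set(CATEGORIES)) >= 2
        and c.no_better_explanation
    )

# Example: symptoms in three categories, signs in two, no alternative diagnosis.
patient = CrpsChecklist(
    pain_disproportionate=True,
    symptom_categories={"sensory", "vasomotor", "motor_trophic"},
    sign_categories={"vasomotor", "motor_trophic"},
    no_better_explanation=True,
)
print(meets_clinical_criteria(patient))  # True -> the diagnosis of CRPS can be made
```

Cases that fall short of these thresholds but cannot be explained by another diagnosis would, per note (1) of Figure 1, be labelled CRPS-NOS rather than rejected outright.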
European countries differ in their current standards regarding the timely diagnosis of CRPS; however, each country is better than the worst situation, in which patients are never diagnosed (Figure 2). Improvements in diagnostic standards are possible and desirable through information and training of healthcare professionals and patients. For example, in Switzerland an information leaflet about CRPS was sent to all practising medical doctors in the country, and there is consensus that awareness has improved (SUVA, 2013). While "perfect" diagnostic standards exist, it is important to establish realistic, country-specific next goals and then to identify which steps aimed at improving current standards will help to achieve these goals (see Figure 2); this process can later be repeated as appropriate. The use of a diagnostic checklist is helpful, as shown in Figure 1. The European Pain Federation task force members recognize the challenges regarding the future development of the CRPS Budapest criteria (Table 2); these challenges include, among others, the diagnostic approach to a small number of patients diagnosed according to the Budapest criteria who over time lose some of their CRPS signs, such as swelling, but have unchanged pain. These patients are currently labelled as "CRPS-not otherwise specified" (CRPS-NOS, Table 2), which has sometimes led to challenges with the reimbursement of therapies, or in the context of insurance- and medico-legal proceedings, and a better solution may be required.
Standard 1: "Budapest" diagnostic criteria for CRPS must be used, as they provide acceptable sensitivity and specificity.
Standard 2: Diagnosing CRPS does not require diagnostic tests, except to exclude other diagnoses.
It is worth noting that different opinions existed within the Task Force regarding the usefulness of three-phase bone scintigraphy or magnetic resonance imaging for the diagnosis of CRPS, with some members considering these techniques useful, and the majority not. There was agreement that existing tests do not reflect pathognomonic parameters.

| The management and referral of patients with CRPS
Standard 3: The management of mild (mild pain and mild disability) CRPS may not require a multi-professional team; however, the degree of severity and complexity of CRPS must dictate the need for appropriately matched multi-professional care (for details, see section care structure and Figure 3).
Standard 4: Patients diagnosed with CRPS must be appropriately assessed; this assessment must establish any triggering cause of their CRPS, their pain intensity and the interference their pain causes on their function, their activities of daily living, participation in other activities, quality of life, sleep and mood.
Most patients have short-lasting CRPS which may improve within a few months, even without treatment (Zyluk, 1998), so that these patients are best treated in non-specialized care, provided by healthcare professionals who have had standard training within their discipline (e.g., physiotherapist and general practitioner; see Figure 3); early treatment is highly likely to shorten the time of suffering for many patients (Gillespie et al., 2016).
Standard 5: Referral to specialized care must be initiated for those patients who do not have clearly reducing pain and improving function within 2 months of commencing treatment for their CRPS, despite good patient engagement in rehabilitation.
There is consensus that the best exact time may vary somewhat between patients, but that 2 months is a reasonable guide.
Standard 6: Referral to super-specialized care must be initiated for the small number of patients with complications such as CRPS spread, fixed dystonia, myoclonus, skin ulcerations or infections or malignant oedema in the affected limb, and those with extreme psychological distress.
Referral to super-specialized care may also be appropriate for patients who are not improving in specialized services: (a) for additional expertise in treating this rare patient group and (b) for consideration of interventions not available in specialized care (Figure 3). There was no consensus about the best names for these three types of services, although most Task Force members considered the current wording in Standard 6 to be acceptable. There is agreement that other wordings may be substituted as is nationally or locally appropriate. Treating healthcare professionals should be aware of appropriate specialized care services and of any services with specific expertise and interest in the management of CRPS nationally ("super-specialized" care facilities) (Figure 3).
Standard 7: Specialized care facilities must provide advanced treatments for CRPS including multidisciplinary psychologically informed rehabilitative pain management programmes (PMP). If they do not provide these treatments, then they must refer for these treatments, if needed, to other specialized care facilities, or to super-specialized care facilities (Figure 3).

BOX 1 Possible differential diagnoses
1. Local pathology: distortion, fracture, pseudoarthrosis, arthrosis, inflammation (cellulitis, myositis, vasculitis, arthritis, osteomyelitis and fasciitis), compartment syndrome and immobilization-induced symptoms. Persistent defects after limb injury: osteoarthritis developing after joint fractures; myofascial pain due to changed (protective) movement patterns.
2. Affection of arteries, veins or lymphatics, for example traumatic vasospasm, vasculitis, arterial insufficiency, thrombosis, Raynaud's syndrome, thromboangiitis obliterans (Buerger's syndrome), lymphedema and secondary erythromelalgia.
3. Connective tissue disorder.
4. Central lesion, for example spinal tumour.
5. Peripheral nervous system lesion (nerve compression, cervico-brachial or lumbo-sacral plexus affection, acute sensory polyneuropathy, (poly-)neuritis, autoimmune (e.g., post-traumatic vasculitis) and infectious (e.g., borreliosis)).
6. Malignancy (Pancoast tumour/paraneoplastic syndrome/occult malignancy).
7. Factitious disorder.
Particular awareness about differential diagnosis is advised in spontaneously developing CRPS (no trauma, about 5% of cases), when the involvement is a proximal part of the limb, such as the shoulder, or when there is primary involvement of more than one limb.

We propose that specialized care facilities (Figure 3) that wish to establish quality indicators for their regional CRPS pathway should in the first instance establish an internal registry of CRPS cases seen. Since the incidence of CRPS in Europe (20-26/100,000) is known, such a registry may help professionals to estimate whether those patients in their region who are in need of their service do in fact reach them. The registry, once established, can also serve as a basis for additional quality improvement efforts.
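As a back-of-the-envelope illustration of how such a registry could be compared against the published incidence figure, the short sketch below multiplies the quoted European incidence range (20-26 per 100,000 per year) by a catchment population; the population figure is invented for the example and is not taken from the text.

```python
# Minimal sketch: expected yearly incident CRPS cases for a catchment area,
# using the European incidence range quoted in the text (20-26 per 100,000 per year).
def expected_annual_cases(population, incidence_per_100k=(20, 26)):
    low, high = incidence_per_100k
    return population * low / 100_000, population * high / 100_000

lo_cases, hi_cases = expected_annual_cases(1_500_000)  # hypothetical region of 1.5 million people
print(f"Expected new CRPS cases per year: {lo_cases:.0f}-{hi_cases:.0f}")
# A registry recording far fewer referrals than this range would suggest that
# patients in need of the service are not reaching it.
```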
Each Chapter of the European Pain Federation should institute an appropriate treatment guideline for CRPS that is valid for the circumstances in that country, even if this is adapted from existing guidelines in other countries. Production of lay audience-appropriate versions should be considered.

| Prevention
Early, appropriate rehabilitation treatment post-trauma may prevent the development of CRPS; however, more data are needed to fully understand its impact (Gillespie et al., 2016). A high pain score one week after trauma may indicate a "fracture at risk" (Moseley et al., 2014) and thus identify patients who benefit most from preventative early rehabilitation. There is conflicting evidence about the value of using vitamin C after distal radius fracture to prevent the development of CRPS. There is also very preliminary evidence about the value of steroids to prevent a prolonged course of CRPS after very early CRPS has been diagnosed. More studies are needed before recommendations can be given. The Task Force decided that there is insufficient evidence for or against any methods of prevention to allow for a standard to be written.

| Patient information and education
Standard 8: Patients, and where appropriate their relatives and carers, must receive adequate information soon after diagnosis on (a) CRPS, (b) its causation (including the limits of current scientific knowledge), (c) its natural course, (d) signs and symptoms, including body perception abnormalities, (e) typical outcomes and (f) treatment options. Provision of information is by all therapeutic disciplines and must be repeated as appropriate.
Emphasis should be put on the goals of treatment and on the patient's active involvement in the treatment plan. The typically benign prognosis should be emphasized. Information is available from various sources (e.g., ARUK, 2016; Birklein, Humm, et al., 2018; Ceruso et al., 2014; Crpsvereniging, 2018; Goebel et al., 2018; Perez et al., 2014).

FIGURE 3 Services and competencies. PMP = multidisciplinary pain management programme integrating psychological care and functional rehabilitation; & additionally "Hand Therapists" in some European countries; *note, some pain clinics and rehabilitation facilities do not provide group-based PMP, whereas others additionally provide "super-specialized" services; **neuromodulation is listed to highlight the care structure within which it is delivered; some centres will not provide this service.

| Pain management: medication and procedures
Standard 9: Patients must have access to pharmacological treatments that are believed to be effective in CRPS. Appropriate pain medication treatments are considered broadly similar to those for neuropathic pain, although high-quality studies in CRPS are not available (Duong et al., 2018). All patients with CRPS must receive a pain treatment plan consistent with any geographically relevant guidelines.
Treatment with bisphosphonates and/or steroids has also been considered. However, the Task Force members did not reach agreement about the evidence for or against their efficacy and safety.
Standard 10: Efforts to achieve pain control must be accompanied by a tailored rehabilitation plan.
Standard 11: Medications aimed at pain relief may not be effective in CRPS, while causing important adverse effects; therefore, stopping rules should be established and a medication reduction plan must be in place if on balance continuation is not warranted.
Standard 12: CRPS assessment (see above) must be repeated as appropriate, because both the natural development of the disease and its treatment may change the clinical picture over time.
Some patients who have not responded to other treatments may be considered for invasive neuromodulation and should be referred for assessment.

| Physical and vocational rehabilitation
In partnership with the patient, appropriate, generally gentle, graded exercises in the presence of pain should be advised upon by a trained healthcare professional; this is essential to give the best chance of a good outcome and to minimize distress. Immobilization of the CRPS limb should be avoided wherever possible (Gillespie et al., 2016; Oerlemans, Oostendorp, de Boo, & Goris, 1999).
Standard 13: Patients' limb function, overall function and activity participation, including in the home and at work or school, must be assessed early and repeatedly as appropriate. Patients should have access to vocational rehabilitation (as relevant).
Standard 14: Patients with CRPS must have access to rehabilitation treatment, delivered by physiotherapists and/or occupational therapists, as early as possible in their treatment pathway. This may shorten the early disease course and preserve limb function. In some European countries, these treatments are guided by medical doctors, including rehabilitation specialists, general practitioners or others.
Standard 15: Physiotherapists and occupational therapists must have access to training in basic methods of pain rehabilitation and CRPS rehabilitation.

| Identifying and treating distress
Standard 16: Patients must be screened for distress including depression, anxiety, post-traumatic stress, pain-related fear and avoidance. This must be repeated where appropriate (Bean, Johnson, Heiss-Dunlop, Lee, & Kydd, 2015).
Standard 17: Where required, patients must have access to evidence-based psychological treatment.

| Long-term care
Some patients will continue to experience impediments to their quality of life even after appropriate treatment has been completed. These impediments either are due to ongoing consequences of CRPS even though the condition has improved (about 40% of all patients), or are caused by unresolved CRPS (about 15%-20%; de Mos et al., 2009). Particularly the latter group may benefit from the offer of a long-term management plan, mainly aiming to maximize support for self-management. Long-term management is ideally initiated through specialized or super-specialized services and may include referral back to these services if CRPS-specific symptoms change (Figure 3); an example is described here (RUHNHSFT, 2016).

TABLE 1 European Pain Federation standards for the diagnosis and management of complex regional pain syndrome

Diagnosis
Standard 1: "Budapest" diagnostic criteria for CRPS must be used, as they provide acceptable sensitivity and specificity.
Standard 2: Diagnosing CRPS does not require diagnostic tests, except to exclude other diagnoses.

Management and Referral
Standard 3: The management of mild (mild pain and mild disability) CRPS may not require a multi-professional team; however, the degree of severity and complexity of CRPS must dictate the need for appropriately matched multi-professional care (for details, see section care structure and Figure 3).
Standard 4: Patients diagnosed with CRPS must be appropriately assessed; this assessment must establish any triggering cause of their CRPS, their pain intensity and the interference their pain causes on their function, their activities of daily living, participation in other activities, quality of life, sleep and mood.
Standard 5: Referral to specialized care must be initiated for those patients who do not have clearly reducing pain and improving function within 2 months of commencing treatment for their CRPS despite good patient engagement in rehabilitation.
Standard 6: Referral to super-specialized care must be initiated for the small number of patients with complications such as CRPS spread, fixed dystonia, myoclonus, skin ulcerations or infections or malignant oedema in the affected limb, and those with extreme psychological distress.
Standard 7: Specialized care facilities must provide advanced treatments for CRPS including multidisciplinary psychologically informed rehabilitative pain management programmes (PMP). If they do not provide these treatments, then they must refer for these treatments, if needed, to other specialized care facilities, or to super-specialized care facilities (Figure 3).

Prevention
None. No Standards were considered as having sufficient support to recommend as mandatory.

Information and Education
Standard 8: Patients and where appropriate their relatives and carers must receive adequate information soon after diagnosis on (a) CRPS, (b) its causation (including the limits of current scientific knowledge), (c) its natural course, (d) signs and symptoms, including body perception abnormalities, (e) typical outcomes and (f) treatment options. Provision of information is by all therapeutic disciplines and must be repeated as appropriate.

Pain Management
Standard 9: Patients must have access to pharmacological treatments that are believed to be effective in CRPS. Appropriate pain medication treatments are considered broadly similar with those for neuropathic pains, although high-quality studies in CRPS are not available (Duong et al., 2018). All patients with CRPS must receive a pain treatment plan consistent with any geographically relevant guidelines.
Standard 10: Efforts to achieve pain control must be accompanied by a tailored rehabilitation plan.
Standard 11: Medications aiming at pain relief may not be effective in CRPS, while causing important side effects; therefore, stopping rules should be established and a medication reduction plan must be in place if on balance continuation is not warranted.
Standard 12: CRPS assessment (see above) must be repeated as appropriate, because both the natural development of the disease and of the treatment may change the clinical picture over time.

Physical and Vocational Rehabilitation
Standard 13: Patient's limb function, overall function and activity participation, including in the home and at work or school, must be assessed early and repeatedly as appropriate. Patients should have access to vocational rehabilitation (as relevant).
Standard 14: Patients with CRPS must have access to rehabilitation treatment, delivered by physiotherapists and/or occupational therapists, as early as possible in their treatment pathway.
Standard 15: Physiotherapists and occupational therapists must have access to training in basic methods of pain rehabilitation and CRPS rehabilitation.
Identifying and Treating Distress
Standard 16: Patients must be screened for distress including depression, anxiety, post-traumatic stress, pain-related fear and avoidance. This must be repeated where appropriate.
Standard 17: Where required, patients must have access to evidence-based psychological treatment.

Long-term Care
None. No standards were considered as having sufficient support to recommend as mandatory.

| DISCUSSION
We here present 17 standards for the diagnosis and management of CRPS for consideration of adoption in Europe. They are summarized in Table 1. These standards can be considered best practice in CRPS as supported by expert and patient agreement. We followed a method that focused on evidence review but which prioritized the production of a series of mandatory statements of optimal clinical practice that could be followed in the majority of the 37 countries who are members of the European Pain Federation. We deliberately avoided statements of optional, desirable or aspirational practice, focusing on what was considered achievable by most. There are a number of limitations to our approach that should be taken into account. First, we did not canvass all clinicians working in this field across all of the 37 countries, or managers, politicians or other non-healthcare stakeholders. We focused instead on an expert group supported by a patient representative. It is possible that different experts would have produced different standards. This was a deliberate decision on our part, as we needed to set a first list of expert-driven standards from which to build. Second, we did not produce a series of evidence syntheses (e.g., meta-analytic review of efficacy or review of assessment tools). We judged that such an effort would be resource-heavy and unlikely to yield any clarity due to the well-documented absence of primary research into this orphan disease. Instead, we relied on the extant literature, which is well known to the group. Third, our decision to craft standards as mandatory meant that the heterogeneity of different views and nuanced opinions was not reported; only the result is presented. The standards and their production have clinical, research and policy implications.
1. Clinical implications: the next step is to share the standards with Federation members, which has a number of challenges. First, language translation of the standards is necessary. Second, we need to survey clinicians for current practice as it relates to the standards to establish a baseline of common clinical practice.
2. Research implications: there is no standard that could not benefit from further study, and there are two areas where we were unable to set standards of care. The group considered that a priority for research was to better understand the heterogeneity of presentation within the current broad category of CRPS. For example, there is a need to differentiate between an early and a late presentation, to look at sex and age differences, and to look at CRPS in the context of comorbidities. There are also challenges to the Budapest criteria which, as summarized in Table 2, need urgent attention.
3. Policy implications: these standards are the first step in a process. Standards are essentially a tool to improve practice, but practice only improves if they are used. We next need to understand the barriers to their implementation, whether they are resource, educational, legislative or organizational.
We propose that a CRPS pain champion be appointed by each of the 37 national pain chapters, who can guide development and be a point of contact for this work. Finally, we recognize that these standards are open to change and should be reviewed regularly. In particular, we need to take account of national standards, practice reviews, guidance and guidelines, either from individual pain societies or from those in rehabilitation, neurology or other therapy areas. We need also to be mindful of non-European work that could influence these standards, including any new and emerging evidence. We have therefore agreed with the European Pain Federation to review these standards five years from their date of publication.

TABLE 2 Challenges for the future development of the CRPS Budapest criteria that arise from the 17 Standards

Challenge 1: How should we deal with "CRPS-like conditions" fulfilling only some diagnostic criteria (i.e., never having fulfilled the Budapest diagnosis from the start)?
Challenge 2: How shall we term those cases of CRPS which initially clearly conformed with the Budapest criteria, but which now have too few signs to conform with the Budapest criteria, yet ongoing pain? This includes cases where that pain is as strong as initially, and (more often) other cases where the pain has improved but is stable and still problematic to the patient's quality of life. We recognize that these cases are rare, since sensory and motor signs providing the basis for the Budapest diagnosis are almost always present. Where the diagnosis of CRPS was correctly made and documented in the past, might these cases be termed, for example, "partially recovered" or "sequelae"?
Challenge 3: How can we better clarify the specificity of the Budapest diagnosis outside neuropathic pain settings?
Glycosylation: A “Last Word” in the Protein-Mediated Biomineralization Process

Post-translational modifications are one way that biomineral-associated cells control the function and fate of proteins. Of the ten different types of post-translational modifications, one of the most interesting and complex is glycosylation, or the covalent attachment of carbohydrates to the amino acid sidechains Asn, Ser, and Thr of proteins. In this review the author surveys some of the known biomineral-associated glycoproteins and summarizes recent in vitro recombinant protein experiments which test the impact of glycosylation on biomineralization protein functions, such as nucleation, crystal growth, and matrix assembly. These in vitro studies show that glycosylation does not alter the inherent function of the polypeptide chain; rather, it either accentuates or attenuates functionality. In essence, glycosylation gives the cell the “last word” as to what degree a biomineralization protein will participate in the biomineralization process.

Introduction
Over the last forty years there has been a concerted effort to understand how organisms craft biomineralized skeletal structures for survival [1-3]. This effort has focused along two lines. First, how do mineral crystals or amorphous minerals form under biological conditions? Recent evidence points to a mineral precursor nucleation process involving nanoparticle synthesis followed by particle assembly into larger mineral mesoscale structures [4-6]. Second, what agents are biosynthetically created by these same organisms to manage the mineral formation process? With regard to the latter, it has been well documented that the genomes of biomineralizing organisms code for families of proteins that are mineral-specific and unique with regard to primary sequence construction and structure [7-10]. The appearance of these proteins in the extracellular matrix during mineral formation is a clear attempt by cells to regulate the nucleation and assembly stages that lead to the final mineral product of the skeletal elements that are necessary for organism survival. Thus, to understand how biominerals form into larger, useful structures, we must understand the role or function that these proteins play in nucleation and particle assembly. In the majority of eukaryotic organisms, the overall complexity of the biomineral proteomes is augmented by a process known as post-translational modification [11-13]. In essence, once a nascent protein polypeptide chain is produced on the ribosomal complexes, in some cases the cells express enzymes that perform further covalent modifications of certain amino acid sidechains on the protein, thereby altering the functionality of these sidechains. These covalent modifications occur in compartments that are separate from the cell cytoplasm (e.g., Golgi apparatus, rough endoplasmic reticulum (rER), intracellular vesicles) [11-13]. A summary of common post-translational modifications (Table 1) [12] indicates that certain amino acid sidechains are targeted by cells for covalent modification; these modifications are performed by intracellular enzymes. Perhaps the most complex post-translational modification process is glycosylation, or the addition of one or more carbohydrate monomers (known as monosaccharides) to specific amino acid sidechains on a protein, thus converting the polypeptide into a glycoprotein [11-16].
There are three classifications of glycoproteins, depending on which amino acids serve as attachment points for carbohydrates [11-16]: (1) O-linked, where the oligosaccharide attachment occurs on Ser and/or Thr residues and is performed in the Golgi apparatus; (2) N-linked, where the oligosaccharide attachment occurs on Asn and is performed within the endoplasmic reticulum (ER); and (3) hybrid, in which a glycoprotein has both O-linked (Ser, Thr) glycans and N-linked (Asn) glycans. Several features contribute to the overall complexity of glycosylation [11-16]: (a) the number of carbohydrate groups added to a single amino acid sidechain site can vary; (b) the number and type of amino acid sites for attachment on a given protein can vary; (c) at a given attachment point on a protein, the carbohydrate groups can be constructed as linear or branching chains; and (d) the hydroxyl-rich carbohydrate groups themselves can be modified by the addition of chemical groups, such as carboxylate, sulfate, N-acetyl amino, and hydroxyl. Thus, unlike other post-translational modifications, glycosylation represents a unique opportunity for the cell to combine two very different macromolecular building blocks (amino acids, carbohydrates) into one macromolecule, which in turn may have a significant impact on the function and distribution of this protein class within a biomineralizing system. For the purposes of this review, the focus will be on glycosylation and the impact that this post-translational modification has on known biomineralization processes. The review will begin by identifying notable, well-studied biomineralization glycoproteins [17-25] and briefly touch upon their roles in their respective mineralization processes. Then, a discussion of recent in vitro studies [26-30] will follow, which investigated the effects of glycosylation on biomineralization protein mineralization functions (e.g., nucleation, crystal growth, particle assembly and protein-protein interactions). Finally, suggestions as to the direction of future studies of glycosylated biomineralization proteins will be offered.

Table 2 provides a summary of specific mineral matrix proteins that have been identified as glycoproteins and for which the complete amino acid sequence has been reported [17-25]. Admittedly, this table is sparse, and at the time of this writing very few biomineral-associated glycoproteins have complete protein sequence data or oligosaccharide composition/sequence data available. Note that some studies have identified glycoproteins in the extracellular matrices of different organisms [31-35], but to date these proteins have been neither sequenced nor rigorously characterized. The majority of the identified biomineralization glycoproteins are found in association with calcium-based biominerals [17-25,31-35]; however, it should be acknowledged that glycoproteins may eventually be identified in other, non-calcium-based biominerals, such as magnetite (Fe3O4) [36] or silicates (SiO4) [37]. To provide some examples of the roles that glycoproteins play in biomineralization, we will briefly describe the proteins in Table 2. Note that in only a few cases are the oligosaccharide chain attachment and composition known at this time [26,27].

Enamelin
This is a glycoprotein found in the tooth enamel of vertebrates [17,18].
This protein plays a role in hydroxyapatite (HAP) formation from the amorphous precursor, amorphous calcium phosphate (ACP) [17]. The protein carries two groups of oligosaccharide chains consisting of fucose, galactose, mannose, N-acetylglucosamine, and N-acetylneuraminic acid [18]. Enamelin combines with another enamel matrix protein, amelogenin, to form a protein-protein complex that stabilizes ACP and modulates HAP crystal growth during tooth formation [18].

EDIL3, MFGE8
These two proteins are found in avian eggshells [19]. Although the polypeptide chains have been sequenced, they have not been fully characterized with regard to their oligosaccharide content or sequence. It is known that they bind to amorphous calcium carbonate (ACC)-containing matrix vesicles and guide these vesicles to the mineralization front, where calcite crystals form from the ACC particles [19].

Proteoglycans
These are a family of complex macromolecules that are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein through a tetrasaccharide linker [20,21]. Proteoglycans act as polysaccharides rather than proteins, as 95% of their weight is composed of glycosaminoglycans. The glycosaminoglycan chains consist of alternating hexosamine and hexuronic acid or galactose units [20,21]. There are also glycopeptide linkage regions, containing N- and/or O-linked oligosaccharides, that connect the polysaccharide chains to the core proteins. Although found in the extracellular matrix of many tissues, proteoglycans (PGs) comprise a significant portion of the HAP-containing extracellular matrices of bone and tooth dentine and are believed to be involved in ion and water sequestration in these matrices [20,21].

SIBLING Family
There is a family of proteins found in bone and tooth dentine that are known as the SIBLING proteins (small integrin-binding ligand, N-linked glycoproteins) [22,23]. These proteins all have Arg-Gly-Asp (RGD) cell-binding domains, are anionic, and are glycosylated [22,23]. At present, there is scant information regarding the N-linked oligosaccharide chain composition, sequence, or attachment location. SIBLINGs are found in multiple HAP-containing tissues in addition to bone and dentine and are multifunctional, participating in cell signaling, hydroxyapatite binding, and mineral formation. The SIBLING proteins are osteopontin (bone sialoprotein 1), dentin matrix protein 1 (DMP1), bone sialoprotein (BSP2), matrix extracellular phosphoglycoprotein (MEPE) and the products of the dspp gene, dentin sialoprotein (DSP) and dentin phosphoprotein (DPP) [22,23].

SpSM30A-F
In the developing embryo of the sea urchin Strongylocentrotus purpuratus, the first skeletal element that emerges is the spicule [9], a stirrup-like structure that initially forms from ACC and transforms into mesocrystal calcite [27,29,30]. The matrix of the spicule is formed via many spicule matrix proteins (denoted as SpSM) [9], of which a subset of six isoforms, known as SpSM30A-F, are known to be glycosylated [24,27]. These proteins are known to stabilize ACC, inhabit the intracrystalline regions of mesocrystal calcite [4,5] and most likely contribute to the fracture resistance of the spicule itself [1,27]. The SpSM30 proteins are known to interact with the major spicule matrix protein, SpSM50, and these interactions are important for the assembly of the spicule matrix [27,29,30].
AP24 In the formation of the aragonitic nacre layer in the shells of mollusks, there exist families of proteins that inhabit the interior regions of aragonite crystals [6-8] and are termed intracrystalline proteins [25]. These proteins modify the material properties of the aragonite crystal and convey fracture resistance and ductility to these crystals, thus strengthening the shell itself [1,6,25,26]. In the Pacific red abalone, Haliotis rufescens, a family of intracrystalline proteins (the AP series) has been identified [25,26], with one member of this family, AP24, identified as a glycoprotein [25,26]. Subsequent studies confirmed that AP24 acts as a blocker of calcite formation, which then allows the metastable aragonite to form in the presence of extracellular Mg(II) [25,26]. The Impact of Glycosylation on Protein Function Does the attachment of oligosaccharides affect the molecular behavior of a polypeptide chain? To answer this question, one could envision a comparative study wherein the function of an unglycosylated variant of a given protein is contrasted against that of a glycosylated variant, with each possessing the identical primary sequence. Here, the only variable would be the presence (or absence) of oligosaccharide chains. Recently, this type of study was executed on two proteins, AP24 (aragonite nacre layer, Pacific red abalone H. rufescens) [25] and SpSM30B/C (calcitic spicule matrix, purple sea urchin S. purpuratus) [24]. Both proteins have been the subject of in vitro glycosylation studies in insect cells, where it was discovered that AP24 and SpSM30B/C belong to the hybrid classification, i.e., they carry both N- and O-linked linear and branching oligosaccharide chains [26,27]. Interestingly, the glycosylated variants of AP24 and SpSM30B/C both contain anionic monosialylated, bisialylated, monosulfated, and bisulfated monosaccharides [26,27]. Given that both proteins inhabit a Ca(II)-rich environment in vivo, the anionic monosaccharides could serve as putative sites for Ca(II)-protein or mineral-protein interactions. To a certain extent, both proteins are similar in function: they are involved in the formation of the organic matrix, forming hydrogel particles that assemble mineral nanoparticles [26,27]. In addition, both protein hydrogels become occluded within calcium carbonates and modify the material and surface properties of the minerals they inhabit [26,27]. In the following section, we review these studies and their comparative use of two recombinant variants: (1) a non-glycosylated variant expressed in E. coli bacteria, and (2) a glycosylated variant expressed in baculovirus-infected Sf9 insect cells [26,27]. By using these two variants within parallel mineralization and biophysical studies, it was possible to measure the contributions of the oligosaccharide chains to the function of each protein [26,27]. The Nacre Glycoprotein AP24 In this in vitro study, the protein was expressed in bacterial and insect cells as a single polypeptide. In Sf9 cells, the recombinant form of AP24 (denoted as rAP24G) is expressed with variations in glycosylation that create microheterogeneity in protein molecular masses [26]. The overall molecular mass of the oligosaccharide component was found to range from 650 Da to 6.5 kDa. It was observed that both rAP24G and the non-glycosylated variant (denoted as rAP24NG) aggregate to form protein hydrogels, with rAP24NG exhibiting a higher aggregation propensity compared to rAP24G [26].
With regard to functionality, both rAP24G and rAP24NG exhibit similar behavior within in vitro calcium carbonate mineralization assays and Ca(II) potentiometric titrations that measure prenucleation cluster appearance and ACC formation/transformation [26]. An interesting difference was noted in these studies: rAP24G modifies crystal growth directions and is a stronger nucleation inhibitor, whereas rAP24NG exhibits higher mineral phase stabilization and nanoparticle containment [26]. Hence, oligosaccharides may modulate certain functions of the nacre glycoprotein AP24 but have little effect on other intrinsic functionalities. The Spicule Matrix Glycoprotein SpSM30B/C Similarly, the spicule matrix protein SpSM30B/C is expressed in insect and bacterial cells as a single polypeptide. The recombinant glycosylated form (rSpSM30B/C-G) also contains variations in glycosylation that create microheterogeneity in rSpSM30B/C molecular masses [27]. The overall molecular mass of the oligosaccharide component was found to range from 1.2 kDa to 3.6 kDa to 7.5 kDa. In terms of aggregation propensities and hydrogel formation, the bacterially expressed non-glycosylated variant (rSpSM30B/C-NG) has a lower aggregation propensity compared to the glycosylated rSpSM30B/C-G variant. Both variants promote faceted growth and create surface texturing of calcite crystals in vitro, with rSpSM30B/C-G promoting these effects with higher intensity (Figure 1) [27].
Figure 1. SEM images of in vitro calcium carbonate mineralization assay samples, following the protocol described in [27]. (A) Negative control, no protein added; (B) + rSpSM30B/C, non-glycosylated, 1.5 µM; (C) + rSpSM30B/C-G, glycosylated, 1.5 µM. Note faceted nanotexturing produced by both proteins, which is more pronounced in the presence of the glycosylated variant in (C). White arrow in (C) denotes protein hydrogel deposit that forms within the mineralization assay. Scale bars = 2 µm. How Does Glycosylation Impact Function? From these two studies we note a trend where glycosylation does not change the intrinsic function of the polypeptide chain; rather, the attachment of anionic oligosaccharide moieties either (1) attenuates specific functions or has no effect (AP24) or (2) accentuates protein functionality (SpSM30B/C). Other studies with multiple glycoproteins will hopefully confirm this trend or provide evidence of other effects that oligosaccharides impose upon polypeptides. The Impact of Glycosylation on Protein-Protein Interaction (Matrix Formation) In addition to modulating the mineral formation process, a key role of biomineralization proteins is the assembly and organization of multiple proteins to form an organic matrix within which the nucleation and crystal growth processes take place [1,2,7]. We pose the question: how does glycosylation affect the protein-protein interactions that dominate the matrix formation process? To address this question, investigations were conducted on molluscan (AP7, AP24, H. rufescens) [6,25,28] and sea urchin (SpSM50, SpSM30B/C, S. purpuratus) [6,24,29,30] recombinant two-protein systems. In both organisms it is known that each pair of proteins co-exists in vivo within the extracellular matrix [6,24]. AP7-AP24 Complex It is known that AP7 forms a complex with AP24 in the nacre layer [25]. Using sensitive quartz crystal microbalance with dissipation (QCM-D) measurements, this complex formation was confirmed, and it was found that both the glycosylated and non-glycosylated variants of recombinant AP24 bound to recombinant AP7, but with different quantities and binding kinetics (Figure 2). Interestingly, non-glycosylated recombinant AP24 underwent a conformational change when binding to AP7, but the glycosylated variant did not [28].
Moreover, the binding of AP7 with the non-glycosylated and glycosylated variants of AP24 was found to be Ca(II)-dependent and Ca(II)-independent, respectively (Figure 2) [28]. Thus, AP7 and AP24 protein complexes form as a direct result of polypeptide-polypeptide chain recognition and not polypeptide-oligosaccharide recognition. However, the presence of anionic oligosaccharides on AP24 appears to modulate the intensity of AP7-AP24 protein-protein interactions and potentially stabilizes the AP24 conformation upon binding to AP7. As shown in Figure 3, both proteins have numerous surface-accessible regions or domains where interactions might take place. Figure 2. rAP7 is adsorbed onto the poly-L-Lys coated QCM-D chip; unbound rAP7 is then washed off, and the rAP24G (glycosylated) or rAP24NG (non-glycosylated) variants are introduced into the flowcell. Plots show the third harmonic frequency (F3, blue) and dissipation (D3, red) observed under each scenario. Deflections in frequency and dissipation result from rAP24 protein adsorbing onto the immobilized rAP7 layer on the chip, with the amplitudes of the deflections proportional to the amount of protein bound. The time-dependent introduction of proteins is noted on the plots by arrows. These experiments were repeated and found to be reproducible. For more information on the QCM-D method and experimental protocol, please refer to [28].
Figure 3. INTFOLD-predicted three-dimensional structures of H. rufescens AP7 and AP24 proteins, in ribbon representation. The protocol for structure prediction is provided in [27]. AP24 is represented without glycan groups. Note that each protein has surface-accessible domains or regions which could serve as sites for protein-protein interaction. SpSM50-SpSM30B/C Complex SpSM50 is the major matrix protein of the sea urchin spicule in S. purpuratus embryos, with other SpSM proteins, such as the six SpSM30A-F isoforms, comprising smaller amounts of the matrix [9,23]. With SpSM50 in large abundance, there is the possibility that other SpSM proteins interact with SpSM50 to form the matrix and control mineralization. This was tested in a recent in vitro study, where recombinant forms of SpSM50 and the glycosylated and non-glycosylated variants of SpSM30B/C were investigated for their ability to form protein-protein complexes [29,30]. The results were quite dramatic: the formation of a SpSM50-SpSM30B/C complex requires glycosylation and, in contrast to the AP7-AP24 study described above, these interactions were found to be Ca(II)-independent for both variants [29,30]. The glycosylation requirement clearly indicates that the SpSM50 polypeptide sequence recognizes and binds to the glycan moieties on the surface of SpSM30B/C. As shown in Figure 4, the SpSM50 sequence contains a conserved C-type lectin domain, which is known to bind to carbohydrates, and presumably it is this domain that would interact with the glycan groups of SpSM30B/C [9,23,29,30].
Figure 4. Note that the SpSM50 protein possesses a surface-accessible C-type lectin carbohydrate-binding domain, which presumably acts as a site for interaction with SpSM30B/C glycan groups. The protocol for structure prediction is provided in references 27 and 29. Summary and Future Directions From the foregoing, we can observe that glycosylation provides an additional degree of control over extracellular protein function by either accentuating or attenuating the intrinsic functionality of the polypeptide sequence. In a sense, the cell can have the "last word" as to the degree of participation within the biomineralization process. In some cases (e.g., AP24), the oligosaccharides stabilize the conformation of the glycoprotein, which is a known trait of N-linked oligosaccharides [12-16]. The author proposes that glycosylation can serve several purposes vis-à-vis the biomineralization process: (1) "tweak" or "tune" protein mineralization function to suit the situation or need; (2) act as a site for molecular recognition and binding with other matrix proteins; (3) conformationally stabilize a protein, thereby enhancing functionality; (4) create additional anionic sites for ionic (e.g., Ca(II)), mineral, or water interactions; (5) invoke cell activation or deactivation via binding to outer membrane receptor proteins.
Clearly, there may be other benefits that arise from glycosylation, and thus this process represents a powerful method that cells can exploit to create skeletal elements under ambient or extreme conditions [1,2]. The author believes that the biomineralization field is still in its infancy with regard to understanding the role that glycoproteins and their associated oligosaccharides play in the skeletal formation process. To make progress in this area, the author proposes several key issues that need to be addressed, which are elaborated upon in Sections 5.1-5.4, below. A More Aggressive Approach to Glycoprotein Isolation and Identification Simply put, the genomics of biomineralization have advanced quite rapidly [8,9], but the proteomics and the identification of post-translational modifications to these proteins lag to a certain extent, especially when compared to the advances in glycobiology within the fields of immunology and other medical branches [13,14]. It is the author's opinion that this is not due to limitations in methodologies, technology, or skill; rather, it is due to at least two factors: (1) the unwillingness of laboratories to pursue these intensive and costly projects and (2) insufficient grant funding to permit these projects to move forward. It is hoped that this situation will change for the better over time. Improvements in Glycoprotein Purification and Structure Determination For a variety of reasons, glycoproteins can be difficult to purify to homogeneity for structural determinations [13-16,38-40]. Further, oligosaccharide chains and the protein region(s) to which they are attached are typically conformationally labile, making structural determination by X-ray crystallography or NMR highly problematic, which, in turn, makes it nearly impossible to establish protein structure-function relationships [14-16]. Currently, molecular modeling (e.g., energy minimization, molecular dynamics) is the only route to obtaining structural protein-oligosaccharide information, albeit in qualitative form [41]. Thus, there is a need for new methodologies to obtain glycoproteins in a highly purified form and to decipher the three-dimensional structure of the oligosaccharide-polypeptide chain complex. Improvements in Glycoprotein Localization One can identify the location of proteins in situ within the extracellular matrix using monoclonal or polyclonal antibody recognition of protein epitopes [39]. In the case of glycoproteins, this becomes a more complicated issue, since the antibodies raised to epitopes on glycoproteins might be specific to only the polypeptide chain, to certain oligosaccharide chains, or to both [39]. Given that glycoproteins often exhibit variations in glycosylation [26,27], the in situ identification of glycoproteins using antibodies may not be so straightforward. In such cases, it may be prudent to synthesize select protein sequence regions and/or glycan chains and use these for antibody generation. Furthermore, improved methods of in situ glycoprotein detection will propel the field forward and allow interpretations of biomineral formation in the presence of matrix-specific glycoproteins. Improvements in Understanding the Role of Variations in Glycosylation The fact that in some in vitro systems there is variation in the degree of oligosaccharide chain completion and site attachment [26,27] creates a diverse pool of glycoproteins coded for by a single gene. Is this a flaw of the cellular system, or is it deliberate and purposeful?
If deliberate, how
Statistical Inference on the Cure Time In population-based cancer survival analysis, the net survival is an important quantity for governments to assess health care programs. For decades, it has been observed that the net survival reaches a plateau after long-term follow-up; this is the so-called "statistical cure". Several methods have been proposed to address statistical cure. In addition, the cure time can be used to set the duration of a health care program for a specific patient population, and it can also help a clinician explain the prognosis to patients; the cure time is therefore an important health care index. However, the previously proposed methods assume the cure time to be infinite, which makes inference on the cure time inconvenient. In this dissertation, we define a more general concept of statistical cure via conditional survival. Based on the newly defined statistical cure, the cure time is well defined. We develop cure time model methodologies and show a variety of properties through simulation. In the data analysis, cure times are estimated for 22 major cancers in Taiwan; we further use colorectal cancer data as an example to conduct statistical inference via the cure time model with the covariates sex, age group, and stage. This dissertation provides a methodology to obtain cure time estimates, which can contribute to public health policy making. Under (1.1), the idea of relative survival can be used to estimate S_D(t) non-parametrically by Ŝ_D(t) = Ŝ_T(t)/S_O(t), where Ŝ_T(t) is an estimate of S_T(t) (e.g., the Kaplan-Meier estimator), and S_O(t) can be obtained from the national death certificate database. There are different kinds of relative survival estimates, depending on the method used to calculate S_O(t) (Ederer et al., 1961; Hakulinen, 1982). Statistical cure and cure rate model In recent decades, more and more complex diseases have been said to be curable (Castillo et al., 2013). One can also observe the cure phenomenon in a diseased population after long-term follow-up, that is, "S_D(t) reaches a plateau π after long-term follow-up," which can be formulated as lim_{t→∞} S_D(t) = π. (1.3) Relation (1.2) then implies that the excess hazard h_D(t) decreases to 0 as t goes to infinity. In this situation, patients will no longer die from the disease of interest, which is called "population cure" or "statistical cure" (Dubecz et al., 2012). The constant π in (1.3) represents the proportion of patients that will no longer die from the disease of interest, which is called the cure rate. Notice that the concept of "cure" can be interpreted at the individual level and at the population level. At the individual level, cure can be thought of as "medical cure," meaning that an individual becomes asymptomatic after receiving medical treatment. At the population level, "population cure" or "statistical cure" occurs when the excess hazard decreases to zero (Lambert et al., 2007). In order to characterize the information of cure, we often use the cure rate model (or cure fraction model) to estimate the cure rate π. Cure rate models have been well developed in recent decades and can be broadly classified into mixture cure rate models (De Angelis et al., 1999) and non-mixture cure rate models (Andersson et al., 2011). The mixture cure rate model considers a mixture distribution of cured and uncured patients. Let the cure status be R ~ Bernoulli(π): if a patient will be cured, then R = 1; otherwise, R = 0.
Then, under the conditions in Lemma 1, it can be shown that S_T(t) = S_O(t)[π + (1 − π)S_u(t)], (1.4) where S_u(t) = P(D > t | R = 0) is the survival of uncured patients and π = P(R = 1) is the cure rate; the conditions of Lemma 1 include (a) O ⊥ (D, R). Another method, the non-mixture cure rate model, derives S_T(t) from a different perspective. Let N be the number of metastatic-competent cancer cells for each patient after treatment, and let F_0(t) be the cdf of the event time associated with a metastatic-competent cancer cell. The non-mixture cure rate model assumes that N ~ Poisson(λ); it is then straightforward that patients without any metastatic-competent cancer cell are considered cured, i.e., π = P(N = 0) = e^{-λ} is the cure rate. Under the conditions in Lemma 2, it can be shown that S_T(t) is of the form S_T(t) = S_O(t) π^{F_0(t)}. (1.5) Lemma 2. The following conditions imply (1.5): (a) D | N = min(D_1, D_2, ..., D_N), where D_i denotes the event time from the i-th metastatic-competent cancer cell. Note that (1.5) can also be represented in the form of the mixture cure rate model, where (π^{F_0(t)} − π)/(1 − π) is a proper survival function and can be used to model S_u(t). Under appropriate modelling of S_u(t) for (1.4), or of F_0(t) for (1.5), one can estimate π via an MLE inference procedure. Recently, the flexible parametric cure rate model (Andersson et al., 2011), which uses restricted cubic spline functions to model F_0(t), has been considered a suitable method for describing cure in a variety of cancers. Compared with (1.2), the above models are equivalent to modelling the net survival S_D(t) as π + (1 − π)S_u(t) in the mixture cure rate model, and as π^{F_0(t)} in the non-mixture cure rate model. Note that in both types of cure rate models S_D(t) is an improper survival function, since (1.3) says that S_D(t) attains π only as t goes to infinity. Cure time Equation (1.3) indicates that the cure rate is attained as t tends to infinity. However, it is observed that the net survival may attain the cure rate after a specific time point τ within the follow-up time, instead of at infinity. This specific time point τ is called the "cure time". The government may want to know the cure time so that health policies can be conducted more efficiently. In Taiwan, a cancer patient is assigned a catastrophic illness certificate, and it should be re-evaluated after the cancer's cure time. Moreover, the burden of disease can be assessed more accurately if the government has a better estimate of the cure time (Blakely et al., 2010, 2012). An example is that the years lived with disability (YLD) measure needs a time point at which to exclude those patients who have lived long enough that the disability is negligible. In pharmaceuticals, it is important to know the time at which patients become comparable to the general population after receiving a new treatment. Clinicians may also be interested in the cure time in order to give precise health care suggestions to patients. There are naïve ways to determine the cure time τ. Some non-parametric methods have been suggested for practical use; one of them suggested that the estimated cure time occurs after 95% or 99% of the deaths have elapsed (Woods et al., 2009; Smoll et al., 2012). This strategy is easy to implement but may fail if the true time point occurs beyond the follow-up time, or if the cure assumption (1.3) is inappropriate. It was also suggested to use the conditional relative survival to find the cure time, i.e.,
to find the smallest τ such that the conditional relative survival exceeds 95% (Janssen-Heijnen et al., 2007; Dal Maso et al., 2014), where RS(t|k) = RS(t)/RS(k) is the conditional relative survival. However, the choice of 95% is subjective, and it is possible to see a non-negligible decreasing trend in the conditional relative survival even after it exceeds 95%. Baade, Youlden, and Chambers (2011) proposed using the conditional relative survival to determine, visually, the time τ after which RS(t|τ) is nearly constant. Blakely et al. (2012) suggested a convenient approach of visually identifying the time point of non-declination in the model-based or non-parametric net survival curve. These methods still face the problem that the determination of τ is subjective. Moreover, no statistical inference procedure for τ exists in the above-mentioned methods. The research aim of this dissertation is to give a new perspective on statistical cure, in which the information on the cure time is included. We also propose a parametric method to model τ, which enables researchers to make statistical inference about the cure time. In application, the proposed methodology can be used not only in population-based cancer survival analysis but also in clinical research, in which a diseased cohort receiving a certain medical treatment is compared with the general population. Chapter 2 A New Perspective of Statistical Cure with Cure Time In this chapter, we propose another concept to define "statistical cure," from which the cure time can be directly characterized. We begin with a comparison of the general population and the diseased population. One can treat the general population as a pool of normal persons with negligible risk of death from the disease of interest. Therefore, normal persons are expected to have a better survival experience than the diseased population, i.e., S_O(t) ≥ S_T(t) for all t > 0. Taking the colorectal cancer population in Taiwan and the corresponding disease-free survival as an example, Figure 2.1(a) shows that the disease-free survival S_O(t) is uniformly higher than the observed survival of colorectal cancer patients. We can also see that S_T(t) decreases rapidly in the beginning, but its decreasing trend becomes similar to that of S_O(t) when t > 5. This implies that patients who have survived 5 years may have a survival experience similar to that of the general population. This motivates us to find the cure time by comparing the conditional survival functions of the diseased population and the general population. For any time point k, the conditional survival of the diseased population given survival to k is defined as S_T(t|k) = S_T(t)/S_T(k), t ≥ k, which can be interpreted as the survival probability of a person who has lived up to k from the beginning of follow-up (e.g., diagnosis of the disease). Note that S_T(t) can be expressed as S_T(t|0). The cure time τ is defined as the minimum time point satisfying statistical cure in the sense of Definition 1: S_T(t|τ) = S_O(t|τ) for all t ≥ τ. (2.1) Definition 1 means that patients who have survived to τ cannot be distinguished from the general population in the sense of conditional survival. Definition 1 also provides a connection between the cure time and the cure rate, as summarized below. Theorem 1. Assume condition (1.1). Then (2.1) is equivalent to S_D(t) = S_D(τ) = π for all t ≥ τ, (2.3) where π = S_D(τ) is the cure rate and τ is the cure time. In (2.3), τ indicates the time at which S_D(t) attains π, while in (1.3) this time is forced to be infinity. Thus (2.3) conveys not only the cure rate π but also the cure time τ.
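Definition 1 and (2.3) suggest a simple empirical check once survival curves have been estimated on a common time grid: find the smallest k at which the conditional survival of the diseased population stays within a tolerance of that of the general population for all later times. The sketch below does this on toy curves with an artificial plateau at t = 5; the curves, tolerance, and grid are all invented for illustration and do not reproduce the estimation procedure proposed in this dissertation.

```python
import numpy as np

def empirical_cure_time(times, S_T, S_O, tol=0.01):
    """Smallest k in `times` with S_T(t|k) within `tol` of S_O(t|k) for all t >= k.

    times : increasing 1-D grid; S_T, S_O : survival values on that grid.
    Returns the grid value, or None if no such k exists within follow-up.
    """
    times, S_T, S_O = map(np.asarray, (times, S_T, S_O))
    for i, k in enumerate(times):
        cond_T = S_T[i:] / S_T[i]      # S_T(t | k)
        cond_O = S_O[i:] / S_O[i]      # S_O(t | k)
        if np.all(np.abs(cond_T - cond_O) <= tol):
            return k
    return None

# Toy curves: exponential S_O; net survival S_D flattens to 0.7 exactly at t = 5.
t = np.linspace(0.0, 20.0, 201)
S_O = np.exp(-0.02 * t)
S_D = np.where(t < 5.0, 0.7 + 0.3 * (1.0 - t / 5.0) ** 2, 0.7)
S_T = S_O * S_D
print(empirical_cure_time(t, S_T, S_O, tol=1e-3))   # ~4.8, close to the true flattening time 5
```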
We further note the following result. Theorem 2. Assume condition (1.1). Then either statement (a) or (b) below is equivalent to (2.1): (a) S_T(t) = S_O(t) S_D(min(t, τ)) for all t > 0; (b) h_D(t) = h_D(t) I(t ≤ τ) for all t > 0, where I(·) denotes the indicator function. The sample is of the form {Z, δ, X}, where Z = min(T, C) is the last observed time, C is the censoring time, X ∈ R^p is the covariate, and δ is the censoring status. Taiwan has a well-developed health care system, and taking the high-quality death certificate information into account should help to obtain more efficient statistical inference. In this study, we propose a more general perspective for applying the cause-of-death information and define a more general version of the censoring status, namely δ = 0 if Z = C (censored), δ = 1 if Z = O (death from causes other than the disease of interest), δ = 2 if Z = D (death from the disease of interest), and δ = 3 if Z = T but the cause of death is unknown, where D is identified based on the cause-of-death information. Note that O indicates the time to death from all causes except the disease of interest; therefore, censoring should not include any other cause of death, such as a car accident. Since the covariate X is involved in the estimation, (1.1) should be modified into a relaxed assumption conditional on X. It is also reasonable to assume that C is independent of the last observed time, or of (O, D). The assumptions used in estimation are expressed as conditions (C1) and (C2). In previous population-based methodologies, it was suggested to use T and C to define δ and to ignore the death certificate information completely, since the accuracy of death certificates is often problematic (Howlader et al., 2010; Huang et al., 2014). However, it is reasonable for researchers to decide whether to use the cause-of-death information completely or partially, according to the quality of the database from their country's health care system. In this data structure, δ = 3 means that we know that the last observed time is T but do not know whether T = D or T = O; δ = 3 often occurs in the case of an uncertain cause of death in the death certificate database (Naghavi et al., 2010). Moreover, researchers can choose not to use the information on O and D, and instead set δ = 3 for an individual if his or her cause-of-death information is doubtful, or if the quality of the cause-of-death information is not reliable. Therefore, we provide a flexible way of data usage that lets researchers make use of the data more thoroughly. Model specification According to Theorem 2(b), the excess hazard h_D and the cure time are affected by X. Therefore, we propose the cure time model (CTM) h_D(t|X) = h_D(t|X^(1)) I(t ≤ τ_{X^(2)}), where X^(1) and X^(2) are subsets of X. Since the cure time τ must not be negative, τ can be modelled using any link function with positive range, such as τ_{X^(2)} = exp(β^T X^(2)), where β is the parameter vector corresponding to X^(2). h_D(t|X^(1)) is assumed to be the excess hazard function of a parametric distribution. For example, the excess hazard can be modelled as a Weibull hazard function, with the exponential link function used for both the shape and scale parameters. In population-based survival analysis, we obtain the information on h_O(t) through government vital statistics. Estimation We use maximum likelihood estimation to obtain the estimate of (α, β)^T, where α is the parameter of the parametric distribution used to model D. If we model D with a Weibull distribution, then α = (α_1, α_2)^T, where α_1 models the shape parameter in the form exp(α_1^T X^(1)) and α_2 models the scale parameter in the form exp(α_2^T X^(1)).
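To make the specification concrete, here is a small sketch of the two ingredients just defined: the covariate-linked cure time τ_{X^(2)} = exp(β^T X^(2)) and a Weibull excess hazard with exponential links on the shape and scale, switched off after the cure time as in the CTM. The parameter values and covariates are invented for illustration; this is not the dissertation's implementation.

```python
import numpy as np

# Illustrative pieces of the cure time model (CTM); all numbers are made up.

def cure_time(beta, x2):
    """tau_x = exp(beta^T x^(2)); positive by construction."""
    return np.exp(np.dot(beta, x2))

def weibull_excess_hazard(t, alpha1, alpha2, x1):
    """Excess hazard h_D(t | x^(1)) with log-linked Weibull shape and scale."""
    shape = np.exp(np.dot(alpha1, x1))
    scale = np.exp(np.dot(alpha2, x1))
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def ctm_excess_hazard(t, alpha1, alpha2, beta, x1, x2):
    """CTM: the excess hazard is switched off after the cure time tau_x."""
    tau = cure_time(beta, x2)
    return np.where(t <= tau, weibull_excess_hazard(t, alpha1, alpha2, x1), 0.0)

# Example: intercept-only covariates x = (1,), illustrative parameter values.
x = np.array([1.0])
alpha1, alpha2, beta = np.array([0.3]), np.array([2.0]), np.array([2.3])
t = np.array([1.0, 5.0, 10.0, 15.0])
print(cure_time(beta, x))                                 # tau ~ exp(2.3) ~ 9.97
print(ctm_excess_hazard(t, alpha1, alpha2, beta, x, x))   # zero after tau
```

Under this form the net survival S_D(t|x) is flat for t > τ_x, which is exactly the plateau behavior in (2.3).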
Since δ has four levels, we derive the likelihood contribution of each level through the corresponding pdf. The case of δ = 0. For a censored case with given covariate X = x, the observation is (z, δ = 0, x), and the pdf of (Z, δ = 0) given x is f(z, δ = 0 | x) = U_D(z; τ | x) S_O(z) f_C(z), (3.3) where U_D(z; τ | x) is analogous to (2.5) but involves the covariate x. In (3.3), the censoring time C contributes to the pdf via f_C(z); we did not observe the exact times from O and D, but only that O > z and D > z, so they contribute via S_O(z) and U_D(z; τ | x), respectively. Since we assume that C, O, and D are independent of one another, the pdf (3.3) is simply the product of U_D(z; τ | x), S_O(z), and f_C(z). The case of δ = 1. For a case whose last observed time is O, the observation is (z, δ = 1, x), and the pdf of (Z, δ = 1) given x is f(z, δ = 1 | x) = f_O(z) S_C(z) U_D(z; τ | x). (3.5) In (3.5), O contributes to the pdf via f_O(z); we did not observe the exact times from C and D, but we know that C > z and D > z, so they contribute via S_C(z) and U_D(z; τ | x). The case of δ = 2. For a case whose last observed time is D, the observation is (z, δ = 2, x), and the pdf of (Z, δ = 2) given x is f(z, δ = 2 | x) = f_D(z | x^(1)) S_O(z) S_C(z). (3.6) Note that, since (z, δ = 2, x) records an event time from the disease of interest, it is an "uncured" case, and the last observed time z is therefore smaller than the cure time: z < τ_{x^(2)}. (3.7) Condition (3.7) should be treated as a natural constraint for any observation with δ = 2 during the estimation of β, as will be demonstrated later. In (3.6), D contributes to the pdf via f_D(z | x^(1)); we did not observe the exact times from O and C, but only that O > z and C > z, so they contribute via S_O(z) and S_C(z), respectively. The case of δ = 3. For a case whose last observed time is T, the observation is (z, δ = 3, x). We did not observe the exact time from C, but only that C > z, so C contributes to the pdf via S_C(z); the remaining factor, which can be obtained through the national death certificate database, must also be included in this pdf. By incorporating the pdfs from the corresponding levels of δ, we obtain the likelihood function L(α, β), where z_i and x_i are the i-th last observed time and covariate, respectively, and δ_i is defined as in (3.1). Assume (C1) and (C2). The penalized objective function is based on l(α, β) = ln L(α, β), where κ ≥ 0 is a smoothing parameter included to obtain more stable estimation. We suggest relating κ to the sample size n, for example κ = 1/n or κ = 1/(√n log n); one can also set κ = 0 to remove the penalty effect. Eliminating the parts independent of (α, β), we obtain the objective function (3.9). Note that a censored case (δ = 0) and a case dying from any cause other than the disease of interest (δ = 1) contribute equally to the objective function l(α, β). Implementation We use the gradient descent method to estimate α given a fixed β, as described in Chapter 3.4.1, and the gradient projection method to estimate β given a fixed α in a modified objective function, as described in Chapter 3.4.2. The two methods are iterated until convergence. When β is given, τ_{x_i^(2)} is a constant, so the observations can be partitioned accordingly and the objective function (3.9) can be expressed in a simplified form; we then use the gradient descent method to optimize l(α, β) given β. The constraint (3.7) is naturally a linear constraint in β that should be considered in the optimization of l(α, β). Let l_α(β) denote l(α, β) for the given α, without the information from those observations with δ_i = 2. (3.10) The optimization of l_α(β) is equivalent to a constrained optimization problem in which indicator terms of the form I(β^T x_i^(2) − ln z_i ≥ 0) appear and lead to non-differentiability in β.
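The factorizations just described can be turned into per-observation log-likelihood terms. The sketch below follows the stated products for δ = 0, 1, 2 and uses U_D(z; τ|x) = S_D(min(z, τ)|x^(1)) as an assumed stand-in for the unseen expression (2.5); the δ = 3 contribution and the penalty term are omitted, and the background survival S_O, censoring distribution, and all parameter values are toy choices rather than anything from the dissertation.

```python
import numpy as np

# Illustrative per-observation log-likelihood terms for delta = 0, 1, 2.
# U_D(z; tau | x) is taken as S_D(min(z, tau) | x^(1)), an assumption standing in for (2.5).

def S_D(t, x1, alpha1, alpha2):                 # Weibull net survival for D given x^(1)
    shape, scale = np.exp(alpha1 @ x1), np.exp(alpha2 @ x1)
    return np.exp(-((t / scale) ** shape))

def f_D(t, x1, alpha1, alpha2):                 # corresponding density
    shape, scale = np.exp(alpha1 @ x1), np.exp(alpha2 @ x1)
    h = (shape / scale) * (t / scale) ** (shape - 1.0)
    return h * S_D(t, x1, alpha1, alpha2)

def U_D(z, tau, x1, alpha1, alpha2):            # improper survival, flat after tau
    return S_D(min(z, tau), x1, alpha1, alpha2)

def loglik_obs(z, delta, x1, x2, alpha1, alpha2, beta, S_O, f_O, S_C, f_C):
    tau = np.exp(beta @ x2)
    if delta == 0:   # censored: f_C(z) * S_O(z) * U_D(z; tau | x)          -- cf. (3.3)
        return np.log(f_C(z)) + np.log(S_O(z)) + np.log(U_D(z, tau, x1, alpha1, alpha2))
    if delta == 1:   # death from other causes: f_O(z) * S_C(z) * U_D       -- cf. (3.5)
        return np.log(f_O(z)) + np.log(S_C(z)) + np.log(U_D(z, tau, x1, alpha1, alpha2))
    if delta == 2:   # death from disease: f_D(z|x) * S_O(z) * S_C(z), z < tau -- (3.6)-(3.7)
        if z >= tau:
            return -np.inf                       # violates the natural constraint (3.7)
        return np.log(f_D(z, x1, alpha1, alpha2)) + np.log(S_O(z)) + np.log(S_C(z))
    raise NotImplementedError("delta = 3 (unknown cause) is omitted in this sketch")

# Toy background and censoring distributions (exponential), illustrative only.
S_O = lambda t: np.exp(-0.02 * t)
f_O = lambda t: 0.02 * np.exp(-0.02 * t)
S_C = lambda t: np.exp(-0.05 * t)
f_C = lambda t: 0.05 * np.exp(-0.05 * t)
x = np.array([1.0])
print(loglik_obs(3.0, 2, x, x, np.array([0.3]), np.array([2.0]), np.array([2.3]),
                 S_O, f_O, S_C, f_C))
```

The hard −∞ returned when z ≥ τ for a δ = 2 observation is the non-smooth constraint (3.7); handling that indicator is exactly what the estimation procedure must address.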
To deal with this problem, Ma and Huang (2007) suggested using the sigmoid function R(u; σ_n) = 1/(1 + exp(−u/σ_n)) to approximate I(u ≥ 0), where the tuning parameter σ_n is a sequence of positive numbers satisfying lim_{n→∞} σ_n = 0. Note that lim_{σ_n→0} R(u; σ_n) = I(u ≥ 0). For a fixed α, the gradient projection method (Luenberger & Ye, 2008) is used to solve the resulting optimization problem; that is, we use the gradient projection method to obtain the estimate of β given α. Standard error The parametric bootstrap method is used to generate the null distribution and to estimate the standard error of (α̂, β̂)^T. The parametric bootstrap algorithm for the CTM uses government vital statistics (in Taiwan, the life table of the general population) together with the estimation algorithm stated in Section 3.4.3, and the standard error is obtained from the bootstrap replicates. Remark 2 (σ_n selection). It is convenient to choose a suitable σ_n before optimizing l_{α,σ_n}(β). Let σ_n = n^{−1/w}, where w ∈ R^+ can take several candidate values. A small w makes R(u; σ_n) a better approximation to I(u ≥ 0) as u → 0 but may be more unstable in differentiation; thus there is a trade-off in selecting an appropriate σ_n. One can use cross-validation to select σ_n. However, for convenience one can simply choose one of the candidates subjectively, since different σ_n's give almost the same estimation results. Here we use σ_n = n^{−1/2} in the following simulation and data analysis chapters. Remark 3. If β_0 is the only parameter to be estimated in the cure time, i.e., τ = β_0, then we suggest obtaining the estimate and standard error of τ by grid search directly. Specifically, we suggest optimizing l_α(β) by grid search instead of optimizing l_{α,σ_n}(β). Note that in this case the estimation does not involve the sigmoid approximation. Chapter 4 Simulation Studies In this chapter, we conduct simulations to evaluate the proposed method under three simulation studies. In (S1), four datasets with different distributions are used to validate the methodology. Since the covariates may affect S_D(t) and τ, respectively, it is natural for a practitioner to use the same covariate X to describe the behavior of D and the cure time. Therefore, we use the same covariate X to model all parameters (i.e., X^(1) = X^(2) = X) in our simulation studies. For each setting, we generate 200 datasets, each with sample size n = 500. The covariate is X = (X_0, X_1, X_2)^T, where X_0 is set to 1 for the intercept, and (X_1, X_2)^T is generated from the normal distribution with mean vector 0 and a specified covariance matrix. Conditional on X, D is generated from the Weibull distribution with shape parameter exp(α_1^T X), where α_1 = (α_10, α_11, α_12)^T, and scale parameter exp(α_2^T X), where α_2 = (α_20, α_21, α_22)^T. The cure time parameter τ is modelled as τ = exp(β^T X), where β = (β_0, β_1, β_2)^T. The life table of the general population in Taiwan is used to generate O. C is generated from a Weibull distribution to achieve different censoring rates. For each setting, we calculate the mean and standard deviation (SD) of the estimates, and obtain the standard error (SE) and the square root of the mean squared error (SMSE) from 200 bootstrapped samples. For convenience we subjectively choose σ_n = n^{−1/2}, since different σ_n's give similar results. Simulation results under (S1) In (S1), we evaluate the behavior of the proposed method under different combinations of (q_C, q_O, q_D), starting from a dataset essentially without censoring.
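The (S1)-style data-generating design just outlined can be sketched directly; switching the excess hazard off after τ is equivalent to treating Weibull disease times that exceed the subject's cure time as "cured" (never dying of the disease). All parameter values below are invented, the Taiwan life table is replaced by a simple exponential background for O, and the δ = 3 mechanism is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctm_data(n=500, alpha1=(0.2, 0.1, -0.1), alpha2=(2.0, 0.3, -0.2),
                      beta=(2.0, 0.2, -0.2), censor_scale=25.0):
    """Generate one (S1)-style dataset; parameter values here are invented.

    O is drawn from a simple exponential background as a stand-in for the
    Taiwan life table used in the dissertation.
    """
    alpha1, alpha2, beta = map(np.asarray, (alpha1, alpha2, beta))
    X = np.column_stack([np.ones(n), rng.multivariate_normal([0, 0], np.eye(2), n)])
    shape, scale = np.exp(X @ alpha1), np.exp(X @ alpha2)
    tau = np.exp(X @ beta)                       # covariate-specific cure time
    D = scale * rng.weibull(shape)               # latent Weibull disease time
    D = np.where(D > tau, np.inf, D)             # excess hazard switched off after tau
    O = rng.exponential(50.0, size=n)            # other-cause death time (toy background)
    C = censor_scale * rng.weibull(1.5, size=n)  # censoring time
    Z = np.minimum.reduce([D, O, C])
    delta = np.select([Z == C, Z == O, Z == D], [0, 1, 2])
    return X, Z, delta

X, Z, delta = simulate_ctm_data()
print(np.bincount(delta) / len(delta))  # proportions: delta = 0 (q_C), 1 (q_O), 2 (q_D)
```

Varying censor_scale and the Weibull parameters changes q_C, q_O, and q_D, mirroring the different (S1) settings.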
Note that in the real world it is rare for a dataset to contain almost no censored samples, and it is also rare to know exactly whether Z = D or Z = O for each patient, owing to the difficulty of identifying the underlying cause of death for all patients. Therefore the cause-of-death information in death certificates may contain garbage codes, which motivates the use of δ = 3. (S1)-2 is the same as (S1)-1 except that q_C increases. (S1)-3 is the same as (S1)-1 except that α_20 = 3.912, such that q_D becomes smaller than in (S1)-1. Unlike the settings with q_T = 0, in (S1)-4 we set q_T to be higher. The simulation results of (S1) under different settings for δ are reported below. (S1)-1 Since (S1)-1 is the ideal situation, we obtain correct estimates, and the bootstrapped SE is similar to the SD for all parameters. The main difference between (S1)-1 and (S1)-2 lies in q_C and q_O: in (S1)-1, q_C = 1% and q_O = 41%; in (S1)-2, q_C = 34% and q_O = 11%. The SD and SE in (S1)-2 are all slightly larger than those of the corresponding estimates in (S1)-1. It makes sense that a higher SD arises in data containing more censored cases. The main difference between (S1)-1 and (S1)-3 lies in q_O and q_D: in (S1)-1, q_O = 41% and q_D = 58%; in (S1)-3, q_O = 69% and q_D = 28%. The smaller q_D in (S1)-3 means that less constraint information contributes to the estimation process, so less efficient estimates are obtained. Therefore, the SD and SE in (S1)-3 are all larger than those of the corresponding estimates in (S1)-1. (S1)-4 aims to mimic the situation in which most of the exact statuses for (O, D) are not available, where one can only observe Z = T for most of the uncensored subjects. The SD and SE in (S1)-4 are slightly larger than those of the corresponding estimates in (S1)-1. Note that the information used in estimation is quite different between (S1)-1 and (S1)-4; the estimation works even in a dataset where (O, D) is unclear, with similar SD between (S1)-1 and (S1)-4. Also note that the similar estimation between (S1)-1 and (S1)-4 may result from the fact that (S1)-1 naturally contains a higher percentage of D. A comparison with a lower percentage of D is further demonstrated in study (S2). Simulation results under (S2) In this simulation, we consider a situation in which the status of O is partly mislabelled as D and vice versa, which implies poor cause-of-death quality. To avoid using wrong information on (O, D), we arbitrarily set a portion of (O, D) to T. We also arbitrarily set all (O, D) to T to examine the robustness of our method. Obviously, (S2)-2 shows worse estimation results than (S2)-1 because of the mislabelling. Moreover, mislabelling has a large effect even when the mislabelling rate is small: in (S2)-2, a mislabelling rate of 5% causes much more bias and much higher SD for all estimates than in (S2)-1. Both (S2)-3 and (S2)-4 give more accurate estimates than (S2)-2 even though mislabelled information exists. (S2)-4 performs better than (S2)-3 from the perspective of SMSE, since (S2)-3 can still be affected by mislabelling, while (S2)-4 is not. This implies that in real applications one is advised to set the status of an unclear cause of death to δ = 3 (Z = T) to avoid poor estimation. Simulation results under (S3) In this simulation study, we show the robustness of the cure time estimation when S_D(t) is misspecified.
In order to see how bias affects the cure time estimate, we estimate τ using both the correct distribution (Weibull) and an incorrect distribution (log-normal) to model the distribution of D. Note that in this simulation the shape parameter of the Weibull distribution corresponds to an increasing hazard. However, the log-normal distribution is limited in its ability to model an increasing hazard, so it is expected that misspecifying the distribution of D will lead to poor estimation. D is generated from a Weibull distribution such that q_O > q_D, which means that this simulated patient population is more likely to die from O (general causes) than from D (the disease); that is, we simulate a patient population with mild disease. In (S3)-1 we use the almost uncensored data; in (S3)-2 we use the same data as (S3)-1, but (O, D) is converted to T (denoted by (O, D) → T); in (S3)-3 we use the censored data; and in (S3)-4 we use the same data as (S3)-3, but (O, D) is all converted to T. Simulation results are reported in Table 4.3. In order to conduct a reliable estimation of the cure time, we suggest the following steps, where we use colorectal cancer as an example to illustrate the analysis procedure. The results are shown in Table 5.1. We further observe that the differences between the CTM-estimated cure time and the last observed time are all less than 1 year for kidney and other urinary organ, liver, oesophagus, and ovary cancers, which implies that we may not obtain stable cure time estimates until the corresponding follow-up times are long enough. One can still calculate these cure times, but we do not recommend using these results in application, since all we know about the statistical cure of these cancer sites is that the cure time is larger than the last observed time. In the model-based net survival and relative survival plots (Figures 5.3e and 5.3f), horizontal and vertical dashed lines represent the locations of the CTM-estimated cure time and cure rate, respectively. In Table 5.1, the labels 1-9 denote the 1st to 9th major cancers in the Taiwan Cancer Registry Annual Report 2016, and * indicates no statistical cure; the estimated cure times, which are mostly close to the last observed time, are still shown in the table. Taiwan colorectal cancer data analysis In population-based studies, although it is enough to apply the method described in Section 5.1 to obtain a cure time estimate, some drawbacks should be noted. We use the process stated in Figure 5.5. No statistical cure is observed in any stratum containing age group 80+ or stage IV; therefore, we exclude all strata containing age group 80+ or stage IV. One can imagine that there is no statistical cure in older or late-stage patient populations. The covariates sex, age group, and stage are used to build up the CTM. A log-normal distribution is used to model D, with parameters µ = α_1^T X and σ = α_2^T X, where α_1 = (α_10, α_11, . . . , α_16) and α_2 = (α_20, α_21, . . . , α_26). The standard error (SE) and two-sided p-value are obtained from 500 bootstrap replicates. The issue of how to use the cause-of-death information is important. In previous population-based methodologies it was suggested to use T and to ignore the death certificate information (O, D) completely (Howlader et al., 2010; Huang et al., 2014). In this study we derive a likelihood (Theorem 3) that allows the use of partial or full death certificate information, which helps to obtain more efficient estimates. In Taiwan, the death certificate system is of high quality, and ignoring this information does not make sense.
The conventional approaches that ignore all death certificate information are just a special case of our likelihood-function derivation. Using this concept, researchers may improve the conventional cure rate model with partial or full use of the cause-of-death information. Appendix C Proof of Theorem 1 Proof. By (1.1), the left-hand side of (2.1) can be expressed as S_O(t)S_D(t)/[S_O(τ)S_D(τ)], and the right-hand side of (2.1) is S_O(t)/S_O(τ). After some simplification, we obtain S_D(t) = S_D(τ) for all t ≥ τ, which is (2.3). Conversely, dividing the left side of (D.2) by S(τ) and the right side of (D.2) by S_O(τ)S_D(τ), we get (2.1). Then we show that Theorem 2(a) implies (2.1). Under (1.1), the survival function of T is S_T(t) = S_O(t)S_D(t), and (2.1) follows. Finally, we show that Theorem 2(b) implies Theorem 2(a). Under (1.1), Theorem 2(b) can be re-expressed as the excess hazard vanishing after τ, and thus Theorem 2(a) follows. In the model-based net survival and relative survival plots (Figures E.21e and E.21f), horizontal and vertical dashed lines represent the locations of the CTM-estimated cure time and cure rate, respectively.
Gravity and strings This is a broad-brush review of how string theory addresses several important questions of gravitational physics. The problem of non-renormalizability is first reviewed, followed by the introduction of string theory as an ultraviolet-finite theory of gravity. String theory's successes also include predicting both gauge theory and fermions. The difficulty of extra dimensions becomes a possible virtue, when one notes that these lead to mechanisms to explain fermion generations, as well as a means to break the large gauge symmetries of string theory. Finally, a long-standing problem of string theory, that of fixing the size and shape of the extra dimensions, has recently been addressed and may shed light on the origin of the cosmological constant, the ultimate fate of our universe, as well as the question of why gravity is so weak. PUZZLES OF GRAVITY In this lecture I plan to convey the basic ideas of string theory, particularly as they relate to some puzzles of gravitational physics. At my home institution, string theory is a two to three quarter class, just to teach the foundation, so the best that can be done here is to give a very impressionistic view of some of the features of the theory, introducing some of the central ideas. (For the same reason, I will only give a brief guide to the literature at the end, rather than inclusive references.) Nonetheless, I will also endeavor to bring the reader up to speed on some of the newest, and most bizarre, ideas of string theory, particularly pertaining to cosmology and the fate of the Universe. We'll begin with the question of the day. Why is gravity so weak? SLAC is famous for electron-positron scattering, and we know that a centrally important process here is Bhabha scattering, in which the final state is also an electron and positron, e^+ e^- → e^+ e^-. The leading contribution to this process is from one-photon exchange, fig. 1. We know that the amplitude for this contribution is proportional to the square of the electron's charge, since there is an e for each vertex where the photon attaches to the electron or positron: A_EM(e^+ e^- → e^+ e^-) ∝ e^2 = α. (1) Here, and for the rest of the paper, we will use units such that ħ = c = 1. However, when SLAC scatters electrons and positrons, there is a subleading contribution to Bhabha scattering arising from the process where the photon is replaced by a graviton exchanged between the two particles, as in fig. 2. This amplitude can be computed from the lagrangian for gravity, S = ∫ d^4x √(−g) [ R/(16πG_N) + L(ψ_e, g) ]. (2) Here g is the spacetime metric, R the curvature scalar, G_N Newton's constant, and L the familiar Dirac lagrangian for the electron field ψ_e, in a general metric. To get gravitational scattering from this, we expand the metric to exhibit fluctuations about flat space, g_μν = η_μν + √G_N h_μν, (3) and the leading terms in a power series expansion of the action in h take the form S ≈ ∫ d^4x [ h^μν K h_μν + √G_N h^μν T_μν + ... ], (4) where K is a generalized d'Alembertian for tensors, we've suppressed terms with higher powers of h and/or more derivatives than two, and T_μν is the stress tensor for the electron field. The leading interaction term in (4) gives us one of the vertices in fig. 2, and we see that it contains a factor of √G_N. Dimensional analysis of (2) tells us that G_N has mass dimension minus two, and, up to convention-dependent normalization, this defines the Planck mass scale, G_N = M_P^(−2). (5) (In a theory with n extra dimensions, we instead have G_N = M_P^(−(2+n)).)
Given this fact, we see that whereas for the electromagnetic contribution to Bhabha scattering we had a dimensionless factor of α, the amplitude for gravity contains in its place a factor proportional to G_N. This factor must also be dimensionless, which means it must include a factor involving the characteristic energy scale E of the scattering process: A_grav(e^+ e^- → e^+ e^-) ∝ G_N E^2 = (E/M_P)^2. (6) It's easy to see that the Planck mass is M_P = G_N^(−1/2) ≈ 1.2 × 10^19 GeV, (7) and so the gravitational amplitude is tiny even at TeV energies. So today's question appears to morph into the question, why is the SLAC beam energy so low? Figure 3: A gravitational loop diagram contributing to Bhabha scattering. The problem of predictivity Once we are able to build a sufficiently high-energy accelerator, gravitational scattering will become important. And, as in QED, Bhabha scattering will receive quantum corrections given by loop diagrams such as the one shown in fig. 3. Here we encounter quantum gravity's nasty surprise. The one-loop amplitude shown has four interactions, hence two powers of G_N, and thus by dimensional analysis must have two more powers of energy. Indeed, when the diagram is computed using the Feynman rules, we find that what enters is the loop energy E′, and the correction to (6) behaves like A_1-loop ∼ G_N^2 E^2 ∫ dE′ E′. (8) This diverges badly at high loop energies/short distances. Worse still, at higher-loop order, we have more powers of G_N, thus more powers of loop energy, and worse and worse divergences. One could attempt to follow the usual program of renormalization, and absorb these divergences into the coupling constants of the theory. But since there are an infinite number of divergences, present for any gravitational process, one needs an infinite number of coupling constants to renormalize the theory. Practically, this means the theory is non-predictive: in order to predict the outcome of various high-energy gravitational scattering experiments, we'd have to know the value of this infinite number of coupling constants, which would require infinitely many experiments to begin with. Technically, we say the theory is non-renormalizable. The same problem occurs whenever we have a theory with a coupling constant of negative mass dimension. Physicists have encountered this problem previously, with the four-fermion weak interactions. These interactions are described by terms in the lagrangian of the form L_W ∼ G_F J_W J_W, (9) where J_W is a weak current, bilinear in fermions, and G_F is Fermi's constant. From dimensional analysis, we find that G_F also has mass dimension minus two, and thus the four-fermi theory is as non-renormalizable and non-predictive as gravity. But this is a problem we've seen resolved; we know that (9) has the underlying structure L_W ∼ (g^2/M_W^2) J_W J_W, (10) where g is a dimensionless coupling constant, and M_W is the mass of a heavy weak vector boson. The lagrangian (10) is the low energy limit of an expression arising from exchange of the weak boson between two fermion lines, analogous to figs. 1 and 2; 1/M_W^2 arises as the low-energy limit of the propagator ∼ 1/(p^2 + M_W^2). The underlying theory of spontaneously broken SU(2) × U(1) gauge symmetry is renormalizable, and thus predictive.
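Before turning to why this loss of predictivity matters for gravity, it is worth making the size of the suppression in (6) concrete. The short script below compares the dimensionless electromagnetic strength α ≈ 1/137 with the gravitational factor (E/M_P)^2 at a few energies, using M_P ≈ 1.22 × 10^19 GeV; the chosen energies are illustrative examples, not values quoted in the text.

```python
# Compare the electromagnetic coupling alpha with the dimensionless
# gravitational factor (E / M_P)^2 from eq. (6), in natural units.

ALPHA = 1.0 / 137.036          # fine-structure constant
M_PLANCK_GEV = 1.22e19         # Planck mass, ~ G_N^(-1/2)

for label, e_gev in [("SLAC-era e+e- (~50 GeV)", 50.0),
                     ("TeV-scale collision", 1.0e3),
                     ("Planckian (~1e19 GeV)", 1.0e19)]:
    grav = (e_gev / M_PLANCK_GEV) ** 2
    print(f"{label:25s} (E/M_P)^2 = {grav:.2e}   ratio to alpha = {grav / ALPHA:.2e}")
```

At TeV energies the gravitational factor is smaller than α by roughly thirty orders of magnitude, which is the quantitative sense in which gravity is "so weak"; only as E approaches M_P do the two become comparable.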
Why do we care about predictivity in gravity?

There are several reasons to be concerned about the breakdown of predictivity in quantum gravity. The first is simply one of principle: ultimately one can imagine building an accelerator that can scatter electrons at E ≥ M_P, and then both single-graviton exchange as in fig. 2 and the higher-loop corrections, like fig. 3, become important. We should have a theory that describes this physics. Part of the story is black hole formation, but we should say more. This becomes even more important when we recognize that with large or warped extra dimensions, one might encounter the fundamental Planck scale, and thus strong gravitational scattering, at the TeV scale. A second motivation is that a complete theory of physics should encompass cosmology, and early in the history of the universe, typical particle energies approached or perhaps exceeded M_P. Thus, to fully understand the initial conditions for our universe, we need a more complete theory. Thirdly, there is exceptionally strong evidence that astrophysical black holes exist. If we want to answer the question of what happens to an observer who falls into a black hole and reaches the high density/strong curvature region, we need to know more about the quantum mechanics of gravity. Moreover, black holes evaporate, and small enough black holes would do so quite rapidly; to fully understand this process requires a predictive theory of quantum gravity. Yet another possible motivation is that one place where we have attempted to combine basic quantum-mechanical notions and gravity has, so far, led to glaringly wrong predictions. Quantum mechanics predicts a vacuum energy, which would contribute to the cosmological constant. Any attempt to estimate the value, however, is off by tens of orders of magnitude from the value indicated by recent astrophysical observations, which is the same order of magnitude as the matter density in the universe. We hope that a more complete understanding of gravity will help us with this problem. Finally, while the Standard Model is a useful theoretical guide to current experiment, it contains many parameters and certainly doesn't appear to be the final picture of physics. Shorter-distance physics should provide boundary conditions that determine the parameters of the Standard Model, much as electroweak gauge theory determines the low-energy parameters of the four-fermion interaction. Ultimately, specification of these boundary conditions will force us to understand gravity.

Gravity from strings

We've just outlined some of the most prominent reasons to seek a predictive theory of quantum-mechanical gravity. Despite the importance of this problem, it is not fully solved. Following the cue of the electroweak theory, the first place we might look is for a more fundamental field theory that gives an underlying renormalizable description of gravitational phenomena. No such theory has been found. But, through various historical accidents, a completely different approach to the problem has emerged. This is a much more surprising modification of the theory at the classical level. The basic picture is that the graviton exchange diagram of fig. 2 is derived from shorter-distance physics in which the basic objects are not particles, but rather strings. We think of such a string as an infinitesimally thin filament of energy. The incoming electron of the diagram, sufficiently magnified, is a small piece of string, and the graviton exchange corresponds to exchange of a loop of string, as shown in fig. 4.
The starting point for a mathematical description of such a theory is that amplitudes, such as the one shown, are given by extending Feynman's sum-over-histories to a sum over string worldsheets interpolating between the string configurations in the initial and final states,

A = Σ_{worldsheets} e^{iS} .   (11)

Here the quantity playing the role of the action S is essentially the area of the worldsheet swept out by the strings as they move through spacetime. This prescription is simple but radical - how do we know it reproduces gravity? With more time we could compute the simplest amplitude of the form (11), that for two-to-two scattering of the lowest excited state, call it "T," of oscillation of a string. The result is

A(s, t, u) ∝ Γ(−1 − α′s/4) Γ(−1 − α′t/4) Γ(−1 − α′u/4) / [ Γ(2 + α′s/4) Γ(2 + α′t/4) Γ(2 + α′u/4) ] .   (12)

The ingredients of this amplitude, known as the "Virasoro-Shapiro" amplitude, are as follows. Γ is the well-known generalized factorial. The quantities s, t, and u are the Mandelstam invariants, written in terms of the four incoming momenta (p₁, p₂, p₃, p₄) as

s = −(p₁ + p₂)² ,  t = −(p₁ + p₃)² ,  u = −(p₁ + p₄)² ,   (13)

and α′ is a fundamental constant of the theory, with mass dimension minus two. We can write

α′ = 1/M_S² ,   (14)

where M_S is known as the string mass scale. It's a straightforward exercise, using properties of the gamma function, to show that the amplitude (12) has resonances, poles to be precise, at momenta such that

α′ m² = −4, 0, 4, 8, … .   (15)

Ignore the negative pole; it is eliminated in the supersymmetric version of the theory. The pole at zero arises from a resonance corresponding to a massless state, and the higher poles to higher values of m². To infer the spin of the massless state we can take another limit of the amplitude (12), namely s → ∞ with t fixed; this is the Regge limit. In this limit, it is also a straightforward exercise to show

A ∝ f(t) s^{2 + α′t/2} ,   (16)

where f is a function just of t. Basic resonance theory tells us that the exponent of s corresponds to the spin J of the intermediate state. For a pole at t = 0, we thus find J = 2. The physical interpretation of this is that the lowest excited state of vibration of the string has a quadrupole-like waveform, characteristic of spin two, as can be confirmed from further analysis. This state thus behaves just like a massless spin-two particle. Now we can use a general result that goes back to Feynman: any theory of an interacting spin-two massless particle must describe gravity. So string theory must reproduce gravitational physics. This miraculous result has a couple of catches (which, we'll see, turn out to be bonuses). First, the theory is only really sensible in D = 26 spacetime dimensions, and otherwise has mathematical inconsistencies. Second, it is really only the supersymmetric extension of what we've discussed that gives a well-defined theory; otherwise the first resonance in (15) is present and signals a tachyonic instability. For the supersymmetric theory, the special dimension is instead D = 10.
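A quick numerical check of the pole structure in (12) and (15) can be done with standard gamma-function routines. The sketch below is my own illustration, with α′ set to 1 and the overall normalization ignored; for four external tachyons the invariants satisfy s + t + u = −16/α′.

```python
# Numerical sketch: the Virasoro-Shapiro-type amplitude of eq. (12) blows up as s
# approaches a resonance, e.g. the massless pole at alpha'*s = 0. Normalization ignored.
from scipy.special import gamma

ALPHA_PRIME = 1.0  # string scale set to 1 for illustration

def vs_amplitude(s, t):
    """Closed-string four-point amplitude, up to an overall constant."""
    u = -16.0 / ALPHA_PRIME - s - t          # kinematic constraint for four tachyons
    def factor(x):
        return gamma(-1.0 - ALPHA_PRIME * x / 4.0) / gamma(2.0 + ALPHA_PRIME * x / 4.0)
    return factor(s) * factor(t) * factor(u)

# Approach the massless pole at s = 0 with t held fixed:
for s in [0.5, 0.1, 0.01]:
    print(f"s = {s:5.2f}   |A| ~ {abs(vs_amplitude(s, t=-1.0)):.2e}")
```

The amplitude grows without bound as s → 0, reflecting the massless (graviton) resonance; the same happens near α′s = 4, 8, and so on.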
With these caveats, we've made a major advance: the problem of non-renormalizability has apparently been cured! For example, the string version of the one-loop diagram of fig. 3 is shown in fig. 5. When computed, it's found that this diagram has no high-energy divergence; it is ultraviolet finite. The reason for this is that, if we compute the diagram of fig. 3 in a position-space representation, the singularity comes from coincident interaction points. However, in the string diagram of fig. 5, there are no special interaction points to give us a divergent coincidence. At a deeper level, the ultraviolet divergence can be understood to be removed by certain duality symmetries, which relate potential ultraviolet divergences instead to infrared behavior. This continues to higher loops, and a wonderful thing has occurred: string theory gives us a theory of gravity that is ultraviolet finite order-by-order in perturbation theory.

The power of string theory

String theory has offered us a solution to one major problem - but that is only the beginning. We know that non-gravitational forces are described by gauge theories; do these occur in string theory? Amplitudes with open strings are computed just like those for closed strings. For example, two-to-two scattering is computed by summing worldsheet diagrams like those in fig. 6. Because the strings have endpoints, the worldsheet now has boundaries. Once again the action is essentially just the area, and if we again denote the lowest state of the string as "T," we find the amplitude

A(s, t) ∝ Γ(−1 − α′s) Γ(−1 − α′t) / Γ(−2 − α′s − α′t) ,   (17)

the Veneziano amplitude. Again the mass and spins of resonances in this amplitude follow from its poles and its Regge behavior, and in particular, aside from the tachyon that is eliminated in the supersymmetric version of the theory, the lowest excited state is zero mass and has spin one. Moreover, taking into account the charges on the two ends of the string, we find states that behave just like non-abelian gauge bosons of the SU(N) group.

At this point one might expect, as in field theory, that there are many possible theories with many different gauge groups. Here is where part of the power of string theory enters - string theory is a very tight mathematical structure, and it turns out that all but a handful of theories are mathematically inconsistent, suffering from quantum anomalies. The basic consistent string theories that we find are all supersymmetric and make sense only in ten spacetime dimensions. One consists of both open and closed strings; in a theory of interacting open strings, two ends of a string can always join to give a closed string. The only consistent gauge group for this theory proves to be SO(32). The rest are closed string theories. Two of these have no non-abelian gauge structure; they are called the type IIA and IIB theories, and differ by the chirality of the fermions that get introduced when incorporating supersymmetry. Finally, there is yet another way to get non-abelian gauge groups, which is too complex to present here, but which yields the final two string theories, the heterotic string theories with gauge groups E8 × E8 and SO(32). That's it: just five theories.

Another thing that the theory predicts is the existence of Dp-branes. These are extended p-dimensional objects. So p = 1 is a new kind of string, p = 2 gives a membrane, and so on. The "D" stands for Dirichlet. It turns out that open strings satisfy Dirichlet boundary conditions at a D-brane, which physically means that open string ends get stuck on D-branes. The endpoints can move along the brane, but not transversely to it. This actually gives another mechanism to get other gauge groups - for example SU(N) for strings moving along a stack of N branes - but the corresponding gauge theories exist on the branes and not in the full nine spatial dimensions. Naïvely this suggests that there might be more string constructions, but at the same time, it turns out that the existence of branes helps one to show that all five of the theories mentioned above are just different versions of the same underlying theory. This ultimately is seen to happen through the existence of powerful duality symmetries which relate the theories.

Let's take stock so far.
The simple assumption that matter consists of strings, not particles, has produced gravity, and moreover this gravitational theory appears not to suffer the usual inconsistencies upon quantization. Moreover, consistency of the theory requires supersymmetry, and hence fermionic matter, which is a first bonus of the theory. Finally, the theory naturally produces gauge symmetry. It's quite amazing that most of the ingredients of known physics come out of one simple assumption. But, at the same time, there are some apparent difficulties in describing the physical world. The theory only makes sense in ten spacetime dimensions. The gauge groups it produces are too big, and finally, while it predicts fermions, it is not clear how to get fermionic matter with the structure we see, for example generations. The remarkable thing is that, once we figure out how to solve one problem - that of too many dimensions - mechanisms that can solve the other problems naturally appear.

HIDDEN DIMENSIONS

The basic idea in reconciling string theory with the four-dimensional reality of experience is that the ten dimensions may be configured so that six of them are folded into a small compact manifold M, and only the four that we see are extended over large distances (see fig. 7). If the characteristic size, R_c, of the manifold is assumed to be small compared to 1/TeV, then there is no reason that we would have unearthed this interesting structure. (And, with the brane world idea, the extra dimensions could be even bigger.)

Hiding the extra dimensions in this manner immediately yields a second bonus: a natural mechanism to break the large gauge groups we've encountered to something more realistic. Specifically, the existence of non-trivial topology of the extra dimensions means that the gauge field configuration can include flux lines that are trapped in the topology, as illustrated in fig. 8. When present, these trapped flux lines, known as Wilson lines, break the gauge group. One pattern of breaking leads to SO(10), which is well known to be a good group for grand unification and can then break to the SU(3) × SU(2) × U(1) of the Standard Model. This is one possible mechanism to get the Standard Model; yet another way is from gauge symmetries on intersecting D-branes.

A third bonus also can emerge from the presence of the extra dimensions: an answer to Rabi's old question, "who ordered the muon?" To see how extra dimensions can solve this problem of the generations, we realize that in the point-particle limit, a string configuration is described by a wavefunction ψ(x, y) which is a function of the non-compact coordinates x and compact coordinates y. The wavefunction satisfies a generalized Dirac equation of the form

(D₄ + D₆) ψ(x, y) = 0   (19)

for the lowest oscillation state of the string. Here the D's are generalized Dirac operators, with the subscript indicating the dimension. Then ψ can be decomposed into normal modes ψ_n of the compact operator, with eigenvalues m_n:

ψ(x, y) = Σ_n ψ_n(x) χ_n(y) ,  D₆ χ_n = m_n χ_n .   (20)

Eq. (19) shows that the eigenvalue of the six-dimensional Dirac operator plays the role of the four-dimensional mass. So, if eq. (20) has multiple eigenstates with the same charge, that leads to a replication of the spectrum of low-energy fermions, and could thus produce the generations

(ν_e, e, u, d) , (ν_μ, μ, c, s) , (ν_τ, τ, t, b) .   (21)
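A toy version of the mode decomposition in (20) can be checked numerically. The sketch below is my own illustration and uses a single compact circle instead of a six-dimensional Calabi-Yau manifold: the eigenvalues of the internal derivative operator play the role of the four-dimensional masses, and each internal eigenmode gives one copy of the low-energy spectrum.

```python
# Minimal sketch (one compact dimension, not a Calabi-Yau): eigenvalues of -i d/dy on a
# circle of circumference L, discretized on N points. The spectrum ~ 2*pi*n/L plays the
# role of the internal eigenvalues m_n appearing as 4D masses in eq. (20).
import numpy as np

N, L = 200, 1.0
dy = L / N
# central-difference derivative with periodic boundary conditions
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dy)
D[0, -1], D[-1, 0] = -1 / (2 * dy), 1 / (2 * dy)

eigs = np.sort(np.abs(np.linalg.eigvals(-1j * D).real))
print(eigs[:6])          # approximately 0, 2*pi/L (twice), 4*pi/L (twice), ...
print(2 * np.pi / L)     # expected level spacing
```

The zero mode corresponds to a massless four-dimensional fermion; in a genuinely six-dimensional compactification, several independent zero modes with the same charge would appear as repeated generations.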
So far, the story is quite remarkable. We've assumed that matter is made of strings. As output, we've found a quantum theory of gravity that is ultraviolet finite, gauge theories and thus the possibility to describe the standard model, fermions, a mechanism to produce generations, and, it turns out, scalars that can play the role of the Higgs. Since it looks like all known physics can come out of string theory, it's been called a "theory of everything" (TOE), although I prefer the phrase "theory of all physics" (TOP).

Problems with moduli

Before becoming too elated over all the successes of string theory, there's a critical question to ask: what fixes the compact space M? Of course, this manifold must satisfy the equations of motion of string theory, which are, to leading approximation, the vacuum Einstein equations,

R_mn = 0 ,   (22)

where R_mn is the Ricci tensor. The relevant manifolds are called Calabi-Yau manifolds. Here we encounter a serious problem. First, there are many topologies of Calabi-Yau manifolds, which represent discrete choices for the configuration of the extra six dimensions. Moreover, there are many possible configurations of D-branes wrapping the compact space. But even worse, there are continuous families of Calabi-Yau manifolds, where the shape and size of the manifold varies continuously. Moreover, these parameters may vary as a function of the four-dimensional coordinate x. The simplest example is illustrated in fig. 9, where the overall size of the manifold varies from point to point. This variation is parametrized by a four-dimensional field R(x) giving the characteristic size as a function of position. Moreover, the fact that we have a solution of (22) for any constant R tells us that there is no potential for this field: in the four-dimensional effective theory, it is a massless field. One likewise finds other massless fields corresponding to various shape parameters of the manifold, for example the size of handles, etc. These massless fields are all called moduli fields, and they are a disaster. First, the lack of any prediction of the values of the moduli means that we lack predictivity: parameters in the four-dimensional lagrangian, such as fermion masses and coupling constants, will all vary with the moduli. Worse still, the modulus fields interact with the other fields of the theory with gravitational strength. Massless scalars with such interactions lead to fifth forces, time-dependent coupling constants, and/or extra light matter, none of which are seen experimentally. This represents a very serious problem for string theory, which has been present since the string revolution of 1984. There have recently been some ideas about how to solve this problem, and relate it to another critical problem, that of the cosmological constant. I'll summarize some of these ideas in the rest of the lecture.

The landscape of string vacua

We begin by reviewing one other ingredient of string theory: q-form fluxes. These are generalizations of electromagnetism, which has potential A_μ and antisymmetric field strength

F_{μν} = ∂_μ A_ν − ∂_ν A_μ .   (23)

Figure 10: From an initial "quantum geometry," as yet incompletely understood, our four dimensions and the compact dimensions should emerge. In the process, branes and fluxes can be "frozen" into the geometry.
Recall that the dynamics of electromagnetism is encoded in the Maxwell lagrangian,

L = −(1/4) F_{μν} F^{μν} .   (24)

This structure can be generalized: consider a fully antisymmetric rank q − 1 potential A_{μ1···μq−1}, and define an antisymmetric field strength

F_{μ1···μq} = ∂_{μ1} A_{μ2···μq} ± permutations of (μ1, · · · , μq) ,   (25)

with action

L = −(1/(2 q!)) F_{μ1···μq} F^{μ1···μq} .   (26)

It turns out that these q-form fields are present in string theory, and in fact, D-branes serve as sources for them much the same way an electron sources the electromagnetic field.

Now, let us consider the Universe's evolution for its first few instants; a cartoon of this is shown in fig. 10. At the earliest times, we expect the very notion of classical spacetime geometry to break down, and be replaced by something more exotic. As this evolves, we then might expect usual geometry to freeze out of this "quantum geometry." But, just as in any phase transition, remnants, such as defects, of the initial strongly fluctuating phase can be left behind. For example, when it freezes out, the compact manifold could have some p-branes wrapped around some of its cycles. Since we are interested in vacua that are approximately Poincaré invariant, we only consider the case where these branes are "spacefilling," that is, span the three spatial dimensions we see. Likewise, the freeze-out of geometry can leave behind fluxes that are trapped in the six-dimensional topology.

Such trapped branes and fluxes then lead to an energy that depends on the shapes and sizes of the extra dimensions. If we look just at the dependence on the overall scale R, a p-brane has energy that grows with the (p − 3)-volume it wraps in the extra dimensions (since three of its directions are extended over visible dimensions),

E_brane ∝ R^{p−3} .   (27)

For fluxes, integrals of the form

∫_{Σ_q} F ∝ N   (28)

over q-submanifolds Σ_q are fixed by quantization conditions (N an integer), and so the energy behaves as

E_flux ∝ N² R^{n−2q} ,   (29)

where R^n comes from the volume of the n compact dimensions. The energies (27), (29) then give an effective potential for R, in the theory used to summarize the physics seen by a four-dimensional observer. It turns out that a conversion factor is needed to express these energies in units used by a four-dimensional observer; when that's included, the effective potential is schematically

V(R) ∝ E(R)/R^{2n} ,   (30)

with n = 6 for the usual string case. The resulting four-dimensional effective theory takes the form

S = ∫ d⁴x √(−g) [ R₄/(16π G_N) − k (∂R/R)² − V(R) ] ,   (31)

where k is a constant. The presence of such effective potentials implies that wrapped branes and fluxes, along with other more exotic effects, can therefore fix the moduli. A rough sketch of an example of a potential is shown in fig. 11. This example has two minima. The value of the potential at a minimum corresponds to a four-dimensional vacuum energy, that is, a cosmological constant,

Λ = V(R_min) .   (32)

So the negative minimum gives a negative cosmological constant, and the resulting vacuum cosmology is anti-de Sitter space. Likewise, the minimum at positive potential gives a positive cosmological constant, which produces four-dimensional de Sitter space. The space of configurations of the extra dimensions is multi-dimensional, and in general there will be a complicated potential on this space, something like that sketched in fig. 12. This has been called the "landscape" of string vacua. Minima in this landscape correspond to (locally) stable four-dimensional vacua, with the potential at a minimum giving the cosmological constant.
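The qualitative shape sketched in figs. 11 and 14 is easy to reproduce with a toy potential. The example below is purely illustrative: the coefficients are invented and the functional form is not derived from (30); it merely shows how competing inverse powers of R can produce a metastable minimum with a positive value (a de Sitter-like vacuum energy), separated by a barrier from the runaway region where V → 0 as R → ∞.

```python
# Toy illustration only (not a string-derived potential): competing inverse powers of R
# give a positive local minimum, a barrier, and a potential that vanishes as R -> infinity.
import numpy as np
from scipy.optimize import minimize_scalar

def V(R, a=3.0, b=8.0, c=5.6):
    return a / R**4 - b / R**6 + c / R**8   # hypothetical competing contributions

R = np.linspace(0.8, 5.0, 400)
vals = V(R)
# crude scan for interior local minima
candidates = [R[i] for i in range(1, len(R) - 1) if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]
for r0 in candidates:
    res = minimize_scalar(V, bracket=(0.9 * r0, r0, 1.1 * r0))
    print(f"local minimum at R = {res.x:.3f}, V = {res.fun:+.4f}  (its sign sets the 4D Lambda)")
```

With these made-up coefficients the minimum sits at positive V, so the corresponding toy vacuum is de Sitter-like and only metastable: beyond the barrier the potential rolls down to zero at large R, which is the decompactification direction discussed below.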
One immediate problem in comparing this with observation is that, because the natural parameters entering the potential are set by string theory, typically the minima of the potential have values V(R_i) ∼ M_S⁴ ∼ M_P⁴, which is about 10^120 times the value Λ_obs that best fits recent astrophysical observations. However, the space parametrizing the vacua is many-dimensional, and the values of the minima of the potential are essentially randomly distributed. So, if there are enough such vacua, which seems to be the case, then this random distribution of vacuum energies will yield some vacua with cosmological constant comparable to or smaller than Λ_obs. If one thinks about possible initial conditions for the Universe, it is quite plausible that they evolve into a state where the landscape is populated so that different regions of the Universe are in different vacua in the landscape. Since Λ_obs is approximately the maximum allowed for galaxy formation, which would seem to be a necessary condition for life, we couldn't have evolved in a region of the Universe with a larger magnitude for the cosmological constant. Since the distribution of cosmological constants is presumably dominated by the largest allowed value, this could serve as an explanation for the observed value Λ_obs, arising from the anthropic principle.

This picture of string vacua is fairly new, and still being tested, but does seem quite plausible. One potential issue is whether it emerges from a complete string theory analysis in a truly systematic fashion. There are issues in carefully justifying the various approximations used in this analysis, and in the question of how to properly treat time-dependent solutions, such as cosmologies, in string theory, so work is still being done on this overall picture.

Assuming the landscape scenario survives, there are various other interesting aspects of it. First, the basic picture gives us possible candidates for the fields necessary for inflation, namely the moduli fields and the fields describing the motion of a D3-brane on the internal space (see fig. 13). Moreover, brane collisions may have interesting effects, as in the ekpyrotic proposal. Second, some points in the landscape may have large extra dimensions, or large warping, and could lead to scenarios of TeV-scale gravity, where the true Planck scale is near the TeV energy scale. In this case colliders like the LHC might produce the ultimate exotica: black holes. Finally, anthropic ideas applied to the landscape have removed one fine-tuning, that of the cosmological constant. This raises the question of the role of other fine-tunings and hierarchies in nature, and their relationship to the cosmological constant. For example, in the landscape it may be plausible to have the supersymmetry breaking scale much higher than the TeV scale, in which case superpartners may not be found anytime soon. It may in fact be that anthropic considerations fix the small relative size of the Higgs mass as compared to the Planck mass. If so, this ultimately answers the question we started with, "why is gravity so weak?" This is clearly a very interesting line of research, and debate continues on these and other important points.

Figure 13: Motion of a brane on the internal space, or moduli fields, such as the size of a handle, can give candidates for the scalar field needed to drive inflation.

A final fascinating point regards the final fate of our observable part of the Universe.
Since we observe a positive cosmological constant, we are apparently stuck at a positive minimum as shown in fig. 14. Now, it is possible to show on very general grounds that as the size of the extra dimensions goes to infinity, the potential always vanishes. This feature is similar to the topographical transition from the Rockies to the great plains; the infinite plain tending to infinite volume is a generic feature of the landscape. This means that our region of the Universe is at best metastable, and will ultimately decay. The generic decay is formation of a bubble of nine-dimensional space, which would then grow at the speed of light, consuming everything in its path. We might refer to this process as spontaneous decompactification of the extra dimensions. (Other more exotic, and equally deadly, decays of four dimensions may also be present.) Fortunately it will only happen on a timescale of exp(10^120) - with a number this big, you can pick your favorite units. But nonetheless, it's satisfying to have a possible understanding of the ultimate fate of our universe.
Change-Point Detection of Peak Tibial Acceleration in Overground Running Retraining

A method is presented for detecting changes in the axial peak tibial acceleration while adapting to self-discovered lower-impact running. Ten runners with high peak tibial acceleration were equipped with a wearable auditory biofeedback system. They ran on an athletic track without and with real-time auditory biofeedback at the instructed speed of 3.2 m·s−1. Because inter-subject variation may underline the importance of individualized retraining, a change-point analysis was used for each subject. The tuned change-point application detected major and subtle changes in the time series. No changes were found in the no-biofeedback condition. In the biofeedback condition, a first change in the axial peak tibial acceleration occurred on average after 309 running gait cycles (3′40″). The major change was a mean reduction of 2.45 g, which occurred after 699 running gait cycles (8′04″) in this group. The time needed to achieve the major reduction varied considerably between subjects. Because of the individualized approach to gait retraining and its relatively quick response due to a strong sensorimotor coupling, we want to highlight the potential of a stand-alone biofeedback system that provides real-time, continuous, and auditory feedback in response to the axial peak tibial acceleration for lower-impact running.

Introduction

The peak tibial acceleration of the axial component can be defined as the maximum positive value of the signal during stance. The axial peak tibial acceleration is considered a surrogate measure for impact loading and can be registered by an accelerometer [1–3]. Peak tibial acceleration has been used as input to biofeedback systems [4,5]. These biofeedback systems can provide acoustic signals scaled to the magnitude registered by a shin-mounted accelerometer. The peak tibial acceleration could be lowered during running on a treadmill with real-time auditory and/or visual biofeedback compared with running without the biofeedback [4,6–8]. Lowering the axial peak tibial acceleration in runners experiencing high-impact loading has been done with the goal of reducing the risk of running-related injuries [9–11]. These findings highlight the potential of an individualized approach of gait retraining using augmented feedback on peak tibial acceleration in real time.

A drawback of the studies on running retraining using biofeedback, next to the treadmill setup, is that a limited number of steps was analyzed for each recording period (e.g., 20 in [4]) (Table 1). As a result, the time course of changes in the targeted biomechanical signal is not yet understood. Therefore, a wearable biofeedback system that continuously collects tibial acceleration was recently developed; it registers the signal at a high sampling rate and immediately detects the magnitude and the time of the peaks of the axial component [12]. Under supervised use, a reduction of almost 30% in axial peak tibial acceleration was found when comparing the end of a 20-minute biofeedback run with the no-biofeedback condition [5]. Given that the inter-subject response to a reduction in axial peak tibial acceleration can vary [7], one might expect an individual evolution in magnitude and presumably also in the timing of the change next to the evolution of the whole group. Although the technical aspect of augmented feedback systems is developing rapidly (Table 1), little attention has been paid to when or how people interact with biofeedback on a running gait parameter [13,14], while this is imperative for understanding motor adaptations induced by the feedback parameter. For example, a first session of gait retraining may comprise a half-hour of running, whereas a major change in the desired performance may already be achieved after several minutes. Therefore, timing values are valuable for the design of gait retraining programs.

Table 1. Augmented biofeedback systems used for running retraining: equipment, running environment, and number of analyzed steps. Recovered entries include a setup with 1 × accelerometer and 1 × computer with speakers (treadmill, laboratory; 20 steps averaged per condition) and the present study with 2 × accelerometers, 1 × instrumented backpack, and 1 × headphone (overground, athletic facility; 1853 ± 88 (mean ± SD) steps in total).

The transition to a running technique involving less axial peak tibial acceleration is a process of motor learning, which may occur in stages [15]. Inspection of separable stages allows the design of experiments with higher specificity for certain aspects of that process [15]. Desired elements of a movement can be learned at different rates [15,16], meaning that motor skill improvement can vary between subjects. As such, the profile (location and duration) of evolution in axial peak tibial acceleration may also vary between runners initiating gait retraining.

The gait retraining studies providing unimodal (i.e., auditory or visual) biofeedback on the axial peak tibial acceleration have focused on the early adaptation phase of running with less peak tibial acceleration. The change(s) and the variability in peak tibial acceleration inherent to this locomotor task have been neglected within a session [4–6,8]. A reason to neglect the time course of the axial peak tibial acceleration may be that relevant changes in such a signal are usually not easily discernible by sight. The technique of change-point analysis may be of use to detect event(s) at which the underlying dynamics of a signal changes over time [17–23]. Several types of control statistics have been used for change-point discovery.
For example, control charting provides upper and lower bounds of an individual chart under the assumption that no change has occurred [24]. Change-point analysis may be more powerful to detect relatively small or sustained shifts from the average because it better characterizes the time at which a change began to occur, controls the overall error rate, and is easily applicable for time series segmentation [25]. We employed a change-point application with tuned parameters to evaluate the time course of the axial peak tibial acceleration in the early adaptation phase of biofeedback-driven gait retraining. As subjects were expected to respond differently to the biofeedback-driven approach of gait retraining, a typical analysis of group data might have masked individual changes. Therefore, a single-subject analysis was employed to identify when runners shifted their axial peak tibial acceleration in the early adaptation phase of gait retraining.

Subjects

Following an initial screening session, ten runners (five males and five females, body height: 1.70 ± 0.07 m, body mass: 67.7 ± 7.4 kg, age: 33 ± 9 years) with high axial peak tibial acceleration impacting the lower leg (at least 8 g at 3.2 m·s−1, mean ± SD: 11.1 ± 1.8 g) participated in our study. This sample size is in line with previous studies using biofeedback to stimulate running with less impact loading (i.e., lower-impact running) [4,8]. Requirements for participation were to run ≥ 15 km/week in non-minimalist footwear and to be injury free for ≥ 6 months preceding the experiment [26]. The subjects reported running 29 ± 12 km per week at 2.88 ± 0.31 m·s−1 (mean ± SD). The cohort consisted of nine rearfoot strikers and one forefoot striker (Appendix A), categorized using plantar pressure measurements characterized by high temporal and spatial resolution [27]. All subjects signed an informed consent approved by the ethical committee upon participation (Bimetra number 2015/0864).

Intervention

The subject was equipped with a stand-alone backpack system connected to two lightweight sensors. The backpack system was developed for real-time auditory feedback with respect to peak tibial acceleration in overground running environments. The main components are indicated in Figure 1. The sensor was powered by a battery that allowed long uninterrupted runs. To use the least amount of power possible, the sensor of interest was a low-power MEMS three-axis accelerometer (Sparkfun, Boulder, CO, USA). The accelerometer characteristics were as follows: mass: 20 milligrams, resolution: 70 mg, with digital output (SPI-compatible). The breakout board (dimensions: 21 × 13 mm) was fitted in a shrink socket [12]. The total mass was less than 3 grams, making it lighter than commercially available sensors in a plastic housing that have been used for the registration of tibial acceleration during running [2–4]. A very lightweight accelerometer is beneficial because it is less susceptible to unwanted secondary oscillations due to inertia. The accelerometer of interest had dynamically user-selectable full scales of ±6 g/±12 g/±24 g and was capable of measuring accelerations with output data rates from 0.5 Hz to 1 kHz [28]. Provot and colleagues [2] recommended a sampling rate of at least 400 Hz for tests involving the measurement of tibial acceleration during running activities. A sampling rate of 1000 Hz was selected because lower rates might have caused the actual value of the peak to be missed.
The tibial acceleration was continuously measured. The axial component was chosen for analysis because tibial acceleration has typically been analyzed unidirectionally [3,6–9,29,30] and because it has been associated with a history of tibial stress fracture in distance runners [30]. If the signal range exceeds the capture range of the sensor, the measured signal is clipped at the extremities [1]. The highest value of axial peak tibial acceleration registered in a previous study with the system while running on a sports floor was 12.4 g at the same running speed as in the present experiment. Therefore, we expected the accelerometer to have a sufficient range (±24 g) to prevent clipping while running overground at 3.2 m·s−1. Post hoc inspection of the values of axial peak tibial acceleration revealed that the selected measurement range was more than enough for the goal of our study (Figure 4).

The tibial skin was prestretched bilaterally at ~8 cm superior to each medial malleolus to minimize skin oscillation [9,12]. An illustration of such prestretch through the use of zinc oxide tape (Strappal, Smith and Nephew, UK) is shown in Figure 2. Each accelerometer was placed on the tight skin of the prestretched area. The axial axis of an accelerometer was visually aligned with the longitudinal axis of each shin while the subject was standing [12,29]. The distal aspect of both lower legs was locally wrapped in a non-elastic adhesive bandage (Strappal) [12]. The manner of attachment with visual alignment and taping of the sensor to the skin has been applied in research on tibial acceleration in running [9,12]. The simple mounting technique has resulted in repeatable mean values of the tibial shock between running sessions [12], even without highly accurate standardization. The total mass of the stripped backpack with the electronic components strapped to the inside shell was equal to 1.6 kg. The same backpack has been used in previous studies intertwining locomotion and music (e.g., [13,29,30]). Subjects wore their habitual running footwear to reflect the usual running habits and to increase the ecological validity of the study. A passive noise-canceling headphone was worn.
The running session was performed on an athletic track at an indoor training facility (Figure 3) (Video S1, Supplementary Materials). The session consisted of a no-biofeedback condition and a biofeedback condition, representing the control and experimental conditions, respectively. Accelerometer data were acquired with real-time detection of the magnitude and the timing of axial peak tibial acceleration [5]. The no-biofeedback condition was a warm-up run of 4.5 min at the instructed speed of 3.2 ± 0.2 m·s−1. In the case of bilateral elevation of axial peak tibial acceleration, the leg with the highest value was addressed in the retraining [8]. Thereafter, auditory biofeedback on axial peak tibial acceleration was continuously provided in real time. Biofeedback helps to develop the connection between the extrinsic feedback and the internal sensory cues associated with the desired motor performance during the first phase of motor retraining (i.e., the early adaptation phase) [10]. A patch was designed in Max MSP software (v7, Cycling'74, San Francisco, CA, USA) to provide the auditory biofeedback [13]. The concurrent auditory feedback consisted of a music track that was continuously synchronized to the step frequency of the runner. A music database consisting of 77 tracks with a clear beat in a tempo range of endurance running was preselected. D-Jogger technology was employed to continuously align the beats per minute of the music to the steps per minute of the runner [31]. When the step frequency changed by > 4% in steps per minute for 8 s, another song whose beats per minute better matched the steps per minute automatically started playing. The biofeedback consisted of pink noise that was superimposed onto the music. Importantly, the noise's intensity was perceivable and depended on the magnitude of axial peak tibial acceleration [5,13]. The past five values of axial peak tibial acceleration were averaged through a 5-point moving average [9]. Thus, the wearable system detected the peak tibial acceleration and compared the selected gait parameter over a window of several strides with respect to a relative threshold value. The noise was added whenever that value exceeded a predetermined threshold of approximately 50% of the baseline value in the no-biofeedback condition. The chosen target was similar to previous gait retraining studies [6–9]. Six levels of noise loudness were empirically created for good discretization [13] (noise loudness, % of baseline axial peak tibial acceleration): 100%, >113%; 80%, 96–113%; 60%, 80–95%; 40%, 65–79%; 20%, 48–64%; 0%, <48%. The noise loudness was calculated as a percentage of the root mean square of the amplitude level of the music. Only synchronized music was provided when the momentary axial peak tibial acceleration of the runner was below the threshold target. The baseline value of axial peak tibial acceleration was the mean axial peak tibial acceleration of 90 s in the no-biofeedback condition.
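As a concrete illustration of the feedback logic described above, the sketch below maps a 5-point moving average of axial peak tibial acceleration to one of the six noise-loudness levels. This is a minimal reconstruction in Python written for this text, not the authors' Max MSP patch; the function name and the handling of values falling exactly on a band boundary are assumptions.

```python
import numpy as np

def noise_loudness(recent_peaks_g, baseline_g):
    """Noise loudness (fraction of the music's RMS amplitude) for the 5-point moving
    average of axial peak tibial acceleration, using the six reported bands."""
    moving_avg = np.mean(recent_peaks_g[-5:])            # last five detected peaks
    pct = 100.0 * moving_avg / baseline_g                # percent of the no-biofeedback baseline
    if pct > 113: return 1.00
    if pct >= 96: return 0.80
    if pct >= 80: return 0.60
    if pct >= 65: return 0.40
    if pct >= 48: return 0.20
    return 0.0                                           # below ~48% of baseline: music only

# Example: with an 11 g baseline, a runner averaging ~6 g (~55% of baseline) hears the 20% noise level.
print(noise_loudness([6.2, 5.9, 6.1, 6.0, 5.8], baseline_g=11.0))
```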
A self-discovery strategy was elicited. Each runner was instructed to find a way to run with less axial peak tibial acceleration by increasing the musical quality (i.e., lowering the noise loudness level), although no instructions were given on how to achieve this [4,9]. Subjects subsequently ran for 20 min in total, separated by a short technical break after 10 min to check the software. The instructions were repeated during the break. The running speed was monitored by a chronometer to provide verbal feedback on a lap-by-lap basis. Acceleration data of one subject were not recorded during the second half of his warm-up.

Real-time auditory biofeedback in response to the axial peak tibial acceleration was provided by the wearable interactive system to the runner with high axial peak tibial acceleration. The sensor processing involved real-time peak detection. The music processing comprised tempo synchronization of the music combined with peak-based noise added to the music playing.
Data Processing

All detected axial peak tibial accelerations (n = 18,529) were preprocessed using custom MATLAB scripts [12]. The data of the no-biofeedback condition (1.5 min, baseline) were concatenated with the data of the biofeedback condition (2 time periods of 10 min for the change-point analysis). The first 90 s were composed of the no-biofeedback condition. The values have been deposited in a public repository [32]. Because all subsequent values were part of the time series, we could determine the timing and the duration of a change. Change-Point Analyzer (v2.3, Taylor Enterprises, Libertyville, IL, USA) was employed for each subject to detect individual changes in the axial peak tibial acceleration over time. The analysis tool has been previously used in health sciences to determine if and when statistically significant changes in 1D time series occurred [33,34]. The procedure proposed by Taylor [24,35] for performing the change-point analysis uses a combination of cumulative sum charts and serial bootstrap sampling. Both the application of cumulative sum charts [33,34] and the application of bootstrapping [35] have been suggested for the problem of detecting a single change [25].
The procedure combines these two approaches, whereby Change-Point Analyzer allows multiple changes to be detected iteratively in a time series. In essence, this technique searches across the time frames looking for changes in the values that are so large that they cannot reasonably be explained by chance alone [36]. We refer to Appendix B for a more detailed description of the change-point analysis, which is based on a statistical mean-shift model. The changes are accompanied by associated confidence levels and confidence intervals for the times of the changes. The following configuration was applied in the Change-Point Analyzer: confidence level for the time interval of changes: 95%; number of bootstraps: 1000; randomization without replacement; mean square error-based time estimates; groups of 33 rows. As such, no assumptions were violated. Importantly, the confidence level for candidate changes and for the inclusion of changes was set at 95% and 99%, respectively. The no-biofeedback condition functioned as the control condition, so we assumed this time period to be steady-state. The default confidence levels were set at 50% and 90%, which would have led to a false identification of change points in the time period of no-biofeedback. Thus, these levels were upscaled to increase the likelihood of detecting valid change points that represented a valuable mean shift in the time series of interest. We examined the individual changes in the axial peak tibial acceleration, being an increase or a decrease, accompanied by its confidence interval and location, the lowest zone of axial peak tibial acceleration and its duration, and the (change in) standard deviation of the grouped signal. The signal variability reported throughout this paper is always a long-term variability on the grouped signal of 33 consecutive axial peak tibial accelerations. The estimated standard deviation of the grouped signal is based on the whole running session by concatenating both running conditions and by considering the change in the signal. The timing of occurrence and the magnitude of the detected first and major change points were averaged for our cohort of runners with high axial peak tibial acceleration. The occurrence of change is expressed in terms of running gait cycles (strides) or in units of time.
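The Change-Point Analyzer itself is a closed commercial tool. A minimal open sketch of the underlying idea it describes (a cumulative sum of deviations from the mean, with bootstrap resampling to judge whether the observed excursion could arise by chance) might look as follows. This is a single mean-shift detector written for illustration; the tool's iterative multi-change search, grouping into rows of 33, and confidence intervals are not reproduced.

```python
import numpy as np

def cusum_change_point(x, n_bootstrap=1000, confidence=0.99, seed=0):
    """Single mean-shift detection via a CUSUM chart plus a bootstrap significance test
    (simplified sketch of the procedure described by Taylor; not the commercial tool)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    cusum = np.cumsum(x - x.mean())
    observed_range = cusum.max() - cusum.min()           # size of the CUSUM excursion

    # Bootstrap: reorder the series; without a change, shuffling should not
    # systematically shrink the CUSUM range.
    count = 0
    for _ in range(n_bootstrap):
        shuffled = rng.permutation(x)
        boot = np.cumsum(shuffled - shuffled.mean())
        if boot.max() - boot.min() < observed_range:
            count += 1
    conf_level = count / n_bootstrap

    if conf_level < confidence:
        return None, conf_level
    change_idx = int(np.argmax(np.abs(cusum)))           # point of maximal |CUSUM| as change-time estimate
    return change_idx, conf_level

# Toy example: ~11 g baseline for 150 strides, then a 2.5 g drop (invented data, not the study's).
rng = np.random.default_rng(1)
apta = np.concatenate([rng.normal(11.0, 1.0, 150), rng.normal(8.5, 1.0, 350)])
print(cusum_change_point(apta))                          # expected: index near 150, confidence ~1.0
```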
Results

All subjects discovered a way to run with less axial peak tibial acceleration. No change point was detected in the no-biofeedback condition. At least one change point was detected for each subject in the biofeedback condition (Table 2), meaning that the runners swiftly reacted to the real-time auditory biofeedback. The first change in axial peak tibial acceleration, of −1.26 ± 2.59 g, was found after 309 ± 212 running gait cycles (3 min 40 s ± 2 min 24 s) of running with biofeedback. This first change did not correspond to the major change in eight out of ten runners. The major change in axial peak tibial acceleration was consistently a reduction, of 2.45 ± 1.99 g. The major change was found after 699 ± 388 running gait cycles (8 min 04 s ± 4 min 38 s).

Table 2. Detected change points in the runners with high axial peak tibial acceleration (APTA). Each row represents a subject. Subjects are sorted according to the number of detected change points, and then according to the timing of the first change in APTA. The individual location corresponds to the detected APTA in the biofeedback condition. The + and − signs indicate an increase and a decrease, respectively, in the APTA. a indicates the change in the APTA signal that corresponds to the major decrease in magnitude as identified by the Change-Point Analyzer. The estimated standard deviation of the grouped APTA is based on the whole running session.

As expected, the location of detected change points varied considerably between runners (Figure 4). For example, in subject 1 the real-time biofeedback resulted in a fast, substantial, and sustained reduction in axial peak tibial acceleration throughout the intervention. Following an initial reduction, eight subjects shifted further (a further decline or a slight increase) in axial peak tibial acceleration. After reaching a temporary minimum in axial peak tibial acceleration, its magnitude slightly increased for six subjects but remained below the baseline. The first change in axial peak tibial acceleration, which also initiated the zone of lowest axial peak tibial acceleration, was sustained by two subjects until the end of the biofeedback condition. Most subjects adapted further in the biofeedback condition. No significant change was detected for the standard deviation of the values averaged over 33 grouped axial peak tibial accelerations, indicating no discernible change in the long-term variability of a runner's axial peak tibial acceleration during the early adaptation process of lower-impact running.

Discussion

We present a simple method to detect changes in the time course of a biomechanical signal when runners engage in overground gait retraining. As such, we could provide strong empirical evidence of when runners changed their axial peak tibial acceleration in response to real-time auditory biofeedback on it. An interactive feedback device was used that modulated the runner's system dynamics in a self-discovery manner without giving specific instruction on running gait (i.e., "land softer" [6][7][8]). For that aim, we used a reinforcement learning paradigm for biofeedback control in which less axial peak tibial acceleration maximizes the positive reward (i.e., clear sound and synchronized music) and minimizes the negative reward (i.e., noise added to the synchronized music). Without explicitly cued instructions for an altered running technique, the chosen auditory biofeedback can influence the ongoing running style due to strong auditory-motor couplings in the human brain, thereby providing an avenue for a shift in musculoskeletal loading that may be beneficial in reducing running-related injuries. In the early adaptation phase of lower-impact running, runners with high axial peak tibial acceleration reacted differently in time and in magnitude to the auditory biofeedback that stimulated lower-impact running. The inter-subject variation in the time to the changes in axial peak tibial acceleration during the intervention highlights the relevance of the single-subject analysis. A first swift change demonstrates the ability of humans to react relatively fast to an auditory biofeedback stimulus on a modifiable outcome parameter of running gait. The major reduction in axial peak tibial acceleration was generally found after about 8 min of biofeedback, with no change in grouped signal variability. In general, such a short time frame might suffice to successfully explore a biofeedback-driven style of lower-impact running. Considerable variation in the time to the major reduction in axial peak tibial acceleration (4 to 1329 gait cycles) was, however, noticed among the high-impact runners.
Our data suggest a possible distinction between slow and fast gait adapters based on biomechanical, physiological, and motor control determinants. The inter-subject variance in the profile of change may be due to the individualized motor retraining approach, through auditory biofeedback on an outcome parameter, whereby numerous (combinations of) gait adaptations might result in a reduction of the axial peak tibial acceleration. The inter-subject variation in this group of high-impact runners is further illustrated in Figure 5. The empirical cumulative distribution function was created using the Kaplan-Meier estimator to approximate the distribution of the time to the detected changes. The group was able to temporarily reduce the axial peak tibial acceleration to a minimum zone of 68% compared with running without biofeedback. It is debatable whether an extreme target of −50% in axial peak tibial acceleration [6][7][8][9], which was generally too hard to achieve or to maintain, is required in the early adaptation phase of biofeedback-driven running retraining. Furthermore, not all high-impact runners could maintain their major reduction throughout the session. Full retention of the major reduction in axial peak tibial acceleration may depend on the mental and physical loads required to handle the auditory-motor coupling at the instructed running speed, the target of reduction in peak tibial acceleration, and/or the specific task dealing with implicit motor learning. A more realistic target for the targeted population seems to be −30% in axial peak tibial acceleration, which will also reinforce the reward of running with music only (i.e., no added noise). This agrees with the recent finding that runners experiencing high axial peak tibial acceleration were able to achieve and maintain a reduction of about 30% of its magnitude after completing a retraining program in the laboratory [11]. Individual long-term variability in axial peak tibial acceleration did not change when a state of lower-impact running was achieved, given the applied configuration and measurement techniques. The impending change(s) in the movement pattern induced a similar variability in the axial peak tibial acceleration. Hence, variability in the magnitude of the axial peak tibial acceleration is inherent to both high- and lower-impact running when engaging in biofeedback-driven gait retraining. If we assume the axial peak tibial acceleration to be an expression of motor coordination, then its consistent variation at the end of the biofeedback condition suggests a stable running pattern, even a new phase in the motor learning process.
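Figure 5 summarizes the times to the detected changes with Kaplan-Meier curves; the paper does not state how these were computed, so the sketch below is only a minimal illustration of the estimator (with Greenwood standard errors, from which a confidence band such as the one in Figure 5 can be drawn). The ten stride counts and the single censored runner are hypothetical and serve only to show the expected input format.

```python
import numpy as np

def kaplan_meier_cdf(times, observed):
    """Kaplan-Meier estimate of F(t) = 1 - S(t) for time-to-change data.
    `times`: strides until the change; `observed`: False for runners whose change
    was never detected (censored at the end of the session).
    Returns step-function times, F(t), and the Greenwood standard error of S(t)."""
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    at_risk = len(times)
    surv, greenwood_sum = 1.0, 0.0
    t_out, cdf_out, se_out = [0.0], [0.0], [0.0]
    for t, event in zip(times, observed):
        if event:
            surv *= (at_risk - 1) / at_risk
            greenwood_sum += 1.0 / (at_risk * (at_risk - 1))
            t_out.append(t)
            cdf_out.append(1.0 - surv)
            se_out.append(surv * np.sqrt(greenwood_sum))
        at_risk -= 1          # both events and censored runners leave the risk set
    return np.array(t_out), np.array(cdf_out), np.array(se_out)

# Hypothetical strides-to-first-change for ten runners, the last one censored:
strides = [120, 150, 200, 250, 309, 400, 480, 560, 700, 1200]
seen = [True] * 9 + [False]
t, F, se = kaplan_meier_cdf(strides, seen)
```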
Figure 5. The cumulative distribution function describing (a) the first change in axial peak tibial acceleration, (b) the first reduction, (c) the major change, and (d) the zone of the lowest axial peak tibial acceleration in the biofeedback condition. In each panel, the horizontal axis shows the number of gait cycles (strides) and the vertical axis shows the cumulative probability (F(stride)) between zero and one. The dashed lines indicate the Greenwood confidence interval.

Due to the lack of a retention test following the intervention, the adaptation phenomenon cannot be linked to a learning effect. We do not yet know whether this retraining results in a stable and lasting reduction in the axial peak tibial acceleration in the long term. Nonetheless, early adaptation is the first step towards feasible motor retraining outside the laboratory. Caution is required when interpreting our results. The biofeedback run was paused after 10 min while the verbal instructions were repeated. This intervention may have influenced the observed learning rate in the retraining session. Overall, we believe that, based on our findings, a change-point analysis can be employed to determine when runners start responding to real-time biofeedback that stimulates lower-impact running. Next to the simple method of change point detection in a biomechanical signal (i.e., axial peak tibial acceleration), our experimental work aids in understanding the human dynamic system and its adaptive control of movement over time. The understanding of the adaptation to running overground with a wearable auditory biofeedback system is one of the many steps in the evolution toward evidence-informed use of wearable technology in daily life. Similar to Moens and Lorenzoni and colleagues [13,31], none of the subjects complained about the stripped backpack. Nevertheless, the weight of the system could be trimmed and a higher level of comfort could simultaneously be achieved by opting for a backpack commonly used in trail running, which may be filled with a slim processing unit that permits wireless data transfer. Furthermore, smart textiles could enhance the standardization of the sensor's location and orientation. While the applied system has proven reliable both within sessions and between them using simple mounting principles [12], embedding a wireless accelerometer in a leg compression sleeve may further improve the reliability of the measurement of axial peak tibial acceleration between sessions. An improvement on the biofeedback side may be to replace an arbitrary level of change by a detected change. The offline analysis following data collection may be a stepping stone to the development of an online detection of change points. Online detection during gait retraining has not yet been explored, but it may permit steering the noise loudness levels according to the abilities of a subject instead of being bound to preconfigured levels. Future research could develop online change-point detection to better steer and individualize the level of biofeedback.
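Online change-point detection is explicitly left to future work above, so the following is only a rough, untested sketch of what such a component could look like: a standard two-sided CUSUM detector running over the incoming per-stride peaks, whose output flag a feedback controller could use to step the noise loudness. The class, the parameter values, and the commented usage are hypothetical and not part of the system described in this paper.

```python
class OnlineCusum:
    """Two-sided CUSUM detector for a stream of axial peak tibial accelerations (in g).
    `target` is the level to track (e.g., the baseline mean), `k` the allowance
    (roughly half the shift of interest), and `h` the decision threshold."""

    def __init__(self, target, k=0.5, h=4.0):
        self.target, self.k, self.h = target, k, h
        self.pos = 0.0   # accumulated upward drift
        self.neg = 0.0   # accumulated downward drift

    def update(self, apta):
        self.pos = max(0.0, self.pos + (apta - self.target) - self.k)
        self.neg = max(0.0, self.neg - (apta - self.target) - self.k)
        if self.pos > self.h:
            self.pos = self.neg = 0.0
            return "increase"    # impact drifting upward: e.g., add or raise the noise
        if self.neg > self.h:
            self.pos = self.neg = 0.0
            return "decrease"    # sustained reduction: e.g., soften or remove the noise
        return None

# Hypothetical use inside a feedback loop:
# detector = OnlineCusum(target=baseline_mean)  # baseline_mean from the no-biofeedback minutes
# for apta in incoming_peaks:
#     if detector.update(apta) == "decrease":
#         lower_noise_one_level()               # reward the runner with cleaner music
```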
In this paper, we provide an extension of previous works related to gait retraining using real-time biofeedback with respect to peak tibial acceleration. The main contributions of this paper compared to previous works are the evaluation of results in a different running environment and the implementation of change-point detection for a particular biomechanical signal. On the one hand, this study provides a motivational approach, through the use of synchronized music, to transition biofeedback-driven running retraining from the laboratory to the field. Efforts were made to enable continuous sensing of and feedback on peak tibial acceleration in order to go beyond the traditional laboratory setting. The wearable system drives lower-impact running by reducing the peak tibial acceleration of overground running compared with running without the device. On the other hand, the simple-to-use application enables a subject-specific evaluation of adaptive changes in peak tibial acceleration over time during biofeedback-driven gait retraining. Because of the swift reduction in axial peak tibial acceleration when initiating gait retraining, we want to highlight the potential of a stand-alone biofeedback system and its strong sensorimotor coupling.

Supplementary Materials: Supplementary materials are available at http://www.mdpi.com/1424-8220/20/6/1720/s1: a video fragment of a subject wearing the biofeedback system while running indoors on an athletic track, in which it can be observed how the test leaders supervised on a lap-by-lap basis.

Figure A1. A peak pressure footprint for each subject (numbered 1 to 10) in the no-biofeedback condition. The centre of pressure path is indicated as a dotted line.
Voicing the Challenges of ESP Teaching: Lessons from ESP in Non-English Departments

Along with the growing practice of teaching English for Specific Purposes (ESP) in non-English departments of tertiary education, it is essential to investigate the challenges faced by ESP teachers. Such an investigation can be a basis for proposing policies for the improvement of ESP practice. This study was driven by the fact that ESP classes in non-English departments are allocated limited credit hours, and the teachers are generally General English teachers with no experience and training in teaching ESP. Thus, this study attempted to investigate the fundamental challenges faced by ESP teachers in one state and four private higher education institutions. The data of this qualitative study were obtained through interviews with five ESP teachers. The interview questions were mainly concerned with knowledgeability and competence in teaching related to subject-specific contexts, the adequacy of ESP training, needs analysis, and classroom conditions. The findings reveal that the evident challenges encountered by ESP teachers were a lack of knowledge of students' field of study, a lack of ESP training, a lack of proper needs analysis, large classes, and learners' varied English competencies. The findings of this study suggest that policymakers (stakeholders) should pay more attention to the practice of ESP teaching, especially in non-English departments, by reforming policy in order to minimize the problems faced by ESP teachers and to improve the practice of ESP teaching.

INTRODUCTION

In the past few decades, the demand for teaching English for Specific Purposes, or ESP, in higher education has been increasing. Widely considered to be a better approach for non-English department students, ESP is generally designed to fulfill what learners currently need and what their future careers demand (Dudley-Evans & St. John, 1998). Teaching ESP is challenging because the teachers are generally English for general purposes teachers (Pei & Milner, 2016). Moreover, teaching ESP courses requires not only the teachers' English proficiency but also the mastery of knowledge in a specific field of study. Additionally, the adoption of an interdisciplinary approach in ESP classes presents a challenge for ESP teachers (Prudnikova, 2013). Thus, investigating the challenges of teaching ESP in higher education is vital because emergent issues can be used as a basis for proposing policies towards the improvement of ESP practices. Research on ESP has primarily focused on investigating needs analysis in ESP curriculum or materials development (Aldohon, 2014; Bialik & Fadel, 2015; Boroujeni & Fard, 2013; Gass, 2012; Gestanti et al., 2019; Hou, 2013; Kazar & Mede, 2015; Kellerman et al., 2010; Özyel et al., 2012; Poedjiastutie & Oliver, 2017; Saragih, 2014; Serafini et al., 2015; Setiawati, 2016; Trisyanti, 2009). However, little has been done to reveal the evident challenges encountered by ESP teachers.
In the Indonesian context, studies undertaken by Marwan (2017) and Poedjiastutie (2017) had acknowledged some challenges of ESP teaching, which are students' low learning motivation, the discrepancy between reality and expectations, teachers' workload, and low quality of resources (Marwan, 2017). While in Poedjiastutie's (2017) study, both teachers' and students' readiness in ESP teaching and learning emerged as a tough challenge. However, both studies present weak evidence due to their limited research contexts. While Marwan's (2017) qualitative study involved only a single research participant in a particular university, Poedjiastutie's (2017) research findings were generated from data gathered from numerous research participants in a single university. Thus, both studies cannot represent the general condition concerning challenges faced by ESP teachers in higher education within the Indonesian context. None of these studies display recent convincing evidence pertaining to the challenges of teaching ESP in various non-English departments of tertiary education. Due to the lack of prominent studies that explore the challenges of ESP teaching, the current study attempts to gain more evidence related to the challenges of teaching ESP in different departments of private and public universities in Yogyakarta Province. Thus, the research question of this study is: What are the actual challenges faced by ESP teachers in higher educational institutions? The findings of this study can be used to urge policymakers and stakeholders of ESP in the non-English departments of higher educational institutions to give more serious attention to ESP practice as well as to improve it. ESP Teachers' Required Competencies Teaching ESP should not be taken into account as being merely similar to teaching general English courses. It demands complex tasks that teachers must carry out. As emphasized by Luo and Garner (2017), a novel approach focusing on the use of language for communication should be employed by ESP teachers. Furthermore, ESP teaching needs learners' active involvement to construct a learning environment useful for their current or future work. The decision to apply ESP in language teaching cannot be separated from the central roles of the teachers. Some conditions ideally should be fulfilled when an institution decides to hire ESP teachers; one of them is by equipping them with preservice training. Bezukladnikov and Kruze (2012) urge the significance of having adequate education on ESP teaching as there are substantial problems relating to the development of curriculum, syllabus, and teaching materials. Correspondingly, Harmer (2001) asserts that ESP teachers need to have some training to enhance not only their language proficiency but also their content knowledge related to the subject matter. According to Bracaj (2014), ESP teacher training plays an essential role since being knowledgeable on the specified subject matter will contribute to the fulfillment of learners' needs. The importance of training for ESP teachers is also highlighted by some researchers (e.g. Alsharif & Shukri, 2018;Bracaj, 2014;Chen, 2013;Kusni, 2013;Liton, 2013;Tabatabaei, 2007;Xu et al., 2018;Zhang, 2017). Furthermore, Bracaj (2014) pinpoints some ways that contribute to ESP teachers' professionalism. Firstly, only specialized teachers (those who master content knowledge) who are ready to teach ESP can teach ESP. 
Secondly, ESP teachers should be well-educated or have the willingness to pursue higher education in language teaching. Thirdly, they should get general professional training as a teacher and as an educator to acquire pedagogical concepts and other aspects related to teaching and educating. Fourthly, there must be special training for either EFL or ESL teachers to understand learners' needs and what to offer to fulfill their learning needs. Similarly, Tabatabaei (2007) asserts that there are several ways to make ESP teachers professionally competent. They should make themselves specialized in a particular discipline, join training to enhance their knowledge and teaching skill, and conduct ESP research. Professional competence, which is affected by teachers' motivation in teaching, is also viewed as the outcome of teachers' understanding of their strengths and weaknesses (Suslu, 2006). Recognizing their strengths and weaknesses can become an indicator that teachers care about their professional development; therefore, they will figure out to make use of their strengths and overcome their weaknesses. Maleki (2008) identifies the required skills that teachers should possess, which contribute to their professionalism and effectiveness in teaching. They should have "(a) English language knowledge, (b) thorough command of course design, and (c) expert knowledge of the related field of science" (Maleki, 2008, p. 9). ESP teachers' professionalism, as uttered by Maleki (2008), is gained not only by knowing the English language but also by having the ability to develop the course and master the content knowledge. The Pivotal Role of Needs Analysis in ESP It is widely agreed and has never been debated that the role of needs analysis is crucial in ESP. Hyland (2002) urges that ESP curriculum development began with assessing learners' needs. Hence, the curriculum, including the learning goals of ESP, should be designed based on learners' specific needs. Basturkmen (2010) suggests that conducting needs analysis means identifying the language skills used in determining and selecting materials based on ESP. Besides, needs analysis can be used to evaluate learners and the learning process when the learning program is over. Similarly, Ellis and Johnson (1994) assert that through needs analysis, learners' needs in learning ESP can be described. Therefore, the role of needs analysis in the ESP course is undeniably pivotal. Ahmed (2014) asserts that to set the learning outcomes, ESP teachers rely on needs analysis. It means that learning outcomes will not be appropriately formulated if no needs analysis is conducted. As it is crucial and becomes a basis in developing the curriculum, learning outcomes, materials, and teaching activities, the absence of needs analysis can make teaching challenging. Classroom Conditions Classroom conditions can contribute to the success of the language teaching learning process. Two typical issues related to classroom conditions are class size and students' language ability. Harmer (2001) notes that large classes give more challenges than smaller classes, such as the lack of personal attention to students, limited interaction among students, and difficulty in making smooth and effective organization. Similarly, Brown (2007) argues that to meet an ideal condition, a language class should not consist of more than twelve students. 
Regarding students' ability in the target language, their mixed ability in a class makes it difficult for teachers to execute their well-planned lessons (Harmer, 2001). However, both large and small classes must always have students with various language abilities and proficiencies that make teaching challenging (Brown, 2007). Review of Relevant Studies Previous studies have brought findings that indicate various challenges faced by ESP teachers, and they are mostly related to the design of ESP courses and materials. Basturkmen (2010) asserted that designing an ESP course, which is usually only applied for a short period, is a demanding task for teachers, as they have to investigate learners' needs beforehand. A study conducted by Hoa and Mai (2016) in Vietnamese universities revealed complex problems about the practice of ESP. Three major issues related to the teachers, students, and the environment in which ESP was taught were brought to the surface. The key findings were large classes, students varied English proficiency and inadequate qualification of ESP teachers. Having many students is also found in a study by Poedjiastutie and Oliver (2017) in a private university in Indonesia. It is mentioned that putting a large number of students into one class is because, unlike state universities that are funded by the government, private universities have to finance their teaching and learning process. In other words, by having more students, the universities will make more money to keep the courses going. Hoa and Mai (2016) suggested several recommendations for some emerging issues. For universities that run ESP classes, the class size should be decreased to facilitate more effective learning. Concerning ESP students, more active participation during the learning process is suggested. Regarding ESP teachers, they should seek opportunities to attend training to increase their qualifications. Although Hoa and Mai's (2016) study had successfully highlighted pivotal issues in ESP generated from many research respondents, it would have been more comprehensive if the problem investigation was conducted more deeply through interviews. Alsharif and Shukri (2018) studied the pedagogical challenges encountered by ESP teachers in Saudi Arabian universities. Employing a mixed-method, the results of the study showcased some key issues regarding ESP teaching. The most crucial issues were the absence of training provided by employers, which resulted in the lack of readiness in teaching ESP, and also teachers' unfamiliarity with the content knowledge of students' related discipline. The findings of the study suggested that collaboration between an English teacher and a content teacher be established in order to minimize problems related to teachers' lack of linguistic knowledge on students' discipline. Collaboration with a content teacher to overcome pedagogical problems is also suggested in numerous ESP studies (Ahmed, 2014;Bojović, 2006;Luo & Garner, 2017;Zhang, 2017). Lack of training among ESP teachers is also found in some ESP studies (Ali, 2015;Kusni, 2013;Nguyen et al., 2019;Pham & Ta, 2016). Marwan's (2017) study showed a mismatch between the curriculum and learners' language competence. It was found that in his study, what was prescribed in the curriculum could hardly be realized as learners' language ability was relatively low. In other words, the design of the curriculum is often unrealistic; thus, achieving learning objectives will be overwhelming. 
Although Marwan's (2017) study presented some crucial issues that ESP teachers faced, the context of his study was limited as it was a case study conducted in a particular college and involved a single teacher. Thus, the finding cannot be used to represent the general reality of ESP teaching. Materials have also become a challenging aspect. A study by Medrea and Rus (2012), for instance, highlighted that materials, whether selected from commercial books or self-developed by teachers, are important to take into account when running an ESP course. As picking certain resources can be costly and the level of language does not always comply with learners' ability, developing materials is also challenging in that it requires the teacher to have sufficient knowledge of the learners' discipline. Medrea and Rus (2012) also found that the teachers lacked knowledge about the students' field of study, which made ESP teaching more demanding. This lack of knowledge of the students' discipline could be seen in teachers' limited vocabulary in the students' discipline. The lack of systematic needs analysis also contributes to the challenge of ESP teaching, as found in a study conducted by Poedjiastutie (2017). It is highlighted that needs analysis greatly affects the selection of teaching materials. In other words, an inappropriate or less systematic needs analysis will result in a less suitable selection of teaching materials. Unsystematic needs analysis, as found by Poedjiastutie (2017), somehow indicates that ESP courses might not have been well managed by stakeholders. A lack of systematic needs analysis before designing an ESP course was also found in a study undertaken by Kusni (2013). While Poedjiastutie's (2017) study emphasizes that systematic needs analysis highly determines the selection of teaching resources, Kusni's (2013) study asserts that course design heavily depends on it. The studies presented above clearly indicate that the practice of ESP teaching in higher education institutions is still far from ideal. There are complex problems in the practice of ESP, as mentioned in the previous sections, such as the course materials, the curriculum, teachers' lack of readiness to teach ESP, and large classes. However, the previous studies only investigated issues of ESP teaching in a single institution, which may not reflect the general realities of ESP practice in higher education institutions. Therefore, it is necessary to investigate in depth the actual challenges in the practice of ESP in various non-English departments of higher education in Indonesian settings so that policymakers can see the urgency of reforming the policy.

METHODS

This study employed a qualitative method to explore the challenges faced by ESP teachers at the tertiary level. The following subsections describe the participants of the study, the research instrument, and data collection and analysis.

Participants

The participants of this study were five ESP teachers in the non-English departments of higher education institutions in Yogyakarta Province, Indonesia. One taught in a public university, while the other four taught in private colleges. They taught in four different departments. Detailed information about the research participants can be seen in Table 1. The unequal number of participants from state and private universities is due to the limited number of state universities in the province teaching ESP in non-English departments.
Moreover, among several potential respondents from some state universities, only one agreed to participate in the study.

Research Instrument

Two instruments were utilized in this study: 1) the researchers as a human instrument who gathered data through interviews (Saldana, 2011), and 2) interview guides. The interview guide, based on prominent theories in ESP and EFL teaching, was used to obtain data from the participants. The questions in Table 2 serve as the primary questions in the interview guide, which were then developed into other related questions.

Data Collection and Analysis

As this study was qualitative, the data were collected through interviews to gain insight and understanding of some fundamental phenomena related to ESP practice. The interviews did not strictly follow the guiding questions (Richards, 2009) in order to collect a more in-depth understanding of ESP teachers' challenges in teaching. Instead, they flowed naturally by addressing impromptu questions that were still in the area of the research problem. To avoid misinterpretation, the researchers conducted the interviews in the participants' native language (Bahasa Indonesia), yet the essential excerpts presented in this article are translated into English. The interview for each participant lasted for about 30 minutes and was recorded using a mobile phone. A few background noises were found in the recordings, but they did not interfere with the essence of the interviews. After collection, the data were transcribed. The data were analyzed using Creswell's (2012) model. Each participant's transcript was read repeatedly in order to find phenomena that fit into specific themes. After themes were found, they were coded and grouped into themes and subthemes accordingly. The data were then interpreted. To ensure data validity, the interpreted data were confirmed with the participants (debriefing) as a way to avoid misinterpretation. The procedure of data analysis is shown in Figure 1. In addition to the data obtained through interviews, participants' demographic data (see Table 1) were also used to better understand the participants' professional history. Using participants' demographic data could prevent fragmentation of participants' information into detached codes that might lead to failure in revealing a thorough interpretation (Tao & Gao, 2018).

FINDINGS AND DISCUSSION

This study reveals several findings contributing to the challenges encountered by ESP teachers in some non-English departments of colleges. The identified themes are: teachers' perceived knowledge and competence (lack of knowledge of learners' discipline and lack of ESP training), lack of proper needs analysis, large classes, and learners' varied competence.

Teachers' Perceived Knowledge and Competence

The first emerging theme found during data collection is related to how ESP teachers perceive their knowledge and teaching competence concerning teaching ESP in their department.

Lack of knowledge of learners' discipline

Teachers' perceived knowledge and competence relate to their lack of knowledge of the learners' discipline and their limited competence in teaching ESP for the target discipline. The data are shown in the excerpts below. Excerpt 1: "I was afraid when the first time I was teaching because the field, nursing, is very unfamiliar to me". (P5, A1) Excerpt 2: "I had to learn about medical terms and procedures which I never knew before. Firstly, I was repressed, afraid to make mistakes".
(P1, A1) Their lack of knowledge of the learners' discipline indicates that their different study backgrounds result in a struggle to learn a new area of study. Although this knowledge mostly refers to the terms or vocabulary of the related field, it greatly affects teachers' anxiety and stress in teaching, as they mentioned being "repressed", "afraid", "overwhelmed", and finding it "difficult". Teachers' lack of knowledge of the learners' discipline is in line with a study conducted by Medrea and Rus (2012). In contrast, some researchers have asserted the importance of mastering specialty knowledge to be a professional ESP teacher (Bracaj, 2014; Maleki, 2008; Pradhan, 2013), in addition to teaching competency and English language proficiency. Inadequate knowledge of the subject matter can bring negative feelings for teachers and eventually affect the teaching-learning atmosphere, which will not be conducive if teachers feel insecure and unconfident during their teaching. This condition can be seen in the statements uttered by Participants 1, 3, and 5. Feeling insecure over one's own limited knowledge becomes a stumbling block in teachers' teaching. If this challenge continues to occur, learners will likely lose trust in their teachers. This situation should be prevented by ensuring that teachers are equipped with sufficient content knowledge in the related field. Moreover, an ample, realistic preparation period should be given to ESP teachers before their teaching commences so that they are well prepared. To deal with this problem, cooperation should be established between English teachers and specialist teachers (Ahmed, 2014; Bojović, 2006; Luo & Garner, 2017; Zhang, 2017) so that they complement each other and minimize the gap in teachers' content knowledge of the subject matter. In addition, being aware of having limited knowledge of the subject matter can turn out to be positive, as it will stimulate and motivate ESP teachers to learn new things outside their field.

Lack of training on ESP

Another aspect that contributes to teachers' perceived knowledge and competence is the lack of training, especially in ESP teaching. In contrast with earlier studies (Alsharif & Shukri, 2018; Inozemtseva & Troufanova, 2018; Richards & Farrell, 2005; Stojkovic, 2019), which highlight the importance of training, the participants claimed that there was no training, let alone ESP training, before they began to teach. Excerpt 5: "ESP training? No, there was not. When I first joined the institution, no training was given. On the contrary, I was the one who had to improve the system to be ideal". (P1, A2) Excerpt 6: "Training was organized twice by the university for all lecturers. It is about the socialization of the new curriculum. My department never organized it. No training for ESP". (P2, A2) Excerpt 7: "No training was given when I first joined the faculty member. As a result, I was in a kind of confusion since I had to learn a lot about Mechanical Engineering, especially its vocabulary, which carries specific meanings". (P3, A2) Excerpt 8: "I attended some workshops organized by my faculty, but it was not intended for language teachers, let alone ESP teachers. They were for all lecturers of various subjects. So, very general". (P4, A2) Excerpt 9: "No training was specifically given to ESP teachers. There was only briefing".
(P5, A21) Participant 1 admitted that the absence of training provided by her institution otherwise gives her an unexpected additional role as she has to be responsible for improving the existing system. The fact addressed by Participant 1 is entirely unanticipated. She did not expect that as a newly recruited teacher, she would be assigned to manage the program and change the system. It seemed overwhelming. Similarly, having no prior training in a new field results in "confusion", as lamented by Participant 3, indicating that she felt unsure of what to do. Meanwhile, concerning Participant 5, it seems that the absence of training was substituted with a briefing that she considered as being inadequate as she mentioned: "...only briefing". Slightly different from the other participants, Participants 2 and 4 claimed that they got training but not specifically intended to prepare them to teach ESP. The fact that Participant 4 teaches in a renowned public university might indicate that as a government-financed university, training has been allocated as an essential agenda for the sake of human resources. Likewise, training (although very general), as claimed by Participant 2, could be organized since she is teaching in a well-established private university. However, it should be noted that still, the general training does not adequately accommodate them to be well prepared in teaching ESP. The training does not equip them with knowledge and methodology in ESP teaching of particular disciplines. This reality contrasts with Richards and Farrell (2005), who urged that training should be aimed to fulfill not only what the institutions need, but also what teachers need. The claims conveyed by Participants 2 and 4 imply that ESP in non-English departments of either private or public university is given less attention concerning the teacher professional development. Inadequate professional development opportunities are apparent; thus, leading to teachers' low perceived knowledge and competence in ESP teaching. The fact that ESP teachers lack training in ESP is concurrent with previous studies in various ESP contexts (Ali, 2015;Hoa & Mai, 2016;Kusni, 2013;Muhrofi-Gunadi, 2016;Nguyen et al., 2019). Additionally, Participant 5 urged the need for training due to the importance of having "standard" teaching. Excerpt 10: "I think regular training is needed so that new teachers can have the same standard in teaching". (P5, A22). As uttered by Participant 5, her expectation for training indicates the need to have a standard in teaching, although it is not explained what this standard means. A standard could mean equal competence in teaching, a way of eliminating labels of "good" and "less good" teachers that can create a gap. By having a standard, it is expected that less experienced teachers can teach as competent as more experienced ones. This finding supports previous studies that acknowledge the importance of standards. It has been addressed by Forde et al. (2016), who emphasize that professional standards refer to tasks, skills, and knowledge required for practitioners. Similarly, Wahyuni and Rozi (2020) urged that a particular standard of competencies is essential for teachers so that the broad goals of national education can be achieved. Training for ESP teachers should be aimed not only to enhance teachers' knowledge and understanding of teaching methodology but also to equip them with adequate knowledge on the relevant subject matter. 
When an institution does not provide training, it often becomes teachers' responsibility to ensure their readiness to teach ESP, at least by being autodidacts. As suggested by Ali (2015), ESP teachers may integrate self-training and training programs to enhance their knowledge and competence.

Lack of Proper Needs Analysis

The next emerging challenge is the lack of proper needs analysis. All participants admitted that no proper needs analysis was conducted in their institutions. Some excerpts are exhibited as follows. Excerpt 11: "No systematic needs analysis was conducted as the institution just provided the curriculum and named the course". (P1, A3) Excerpt 12: "In the Geology Department, everything becomes the responsibility of each teacher. The university just provided the curriculum and learning outcomes". (P4, A3) Excerpt 13: "Needs analysis? My department just leaves everything to me about the English subject. I did a diagnostic test in the first meeting to identify students' competency". (P3, A3) The excerpts shown above indicate that the participants' institutions did not conduct needs analysis properly by involving the respective teachers. As a result, ESP teachers are often given the unrealistic burden of designing the syllabus and selecting teaching materials by themselves, without their institution's involvement. Without needs analysis, teachers develop the syllabus and teaching materials without previously obtained information that portrays what learners and the institution exactly need and want. The lack of needs analysis, as found in this study, signals that the ESP courses implemented in the institutions mentioned above are questionable in terms of their characteristics, as mentioned by Dudley-Evans and St. John (1998). It is stated that one of the fundamental characteristics of ESP is its purpose of fulfilling learners' needs (Dudley-Evans & St. John, 1998). The absence of needs analysis means that learners' actual needs are not adequately identified. Thus, whether the formulated learning goals can accommodate learners' needs is questionable. Interestingly, Participant 2 stated that the absence of systematic analysis through collaboration between the teacher and stakeholders is because the educational background of the program coordinator is not related to language education or teaching. Excerpt 14: "As far as I know, a needs analysis was not carried out before beginning the ESP program since the program coordinator's background education is from Chemical Engineering". (P2, A3) The educational background, which is not related to English teaching, answers the question of why needs analysis for ESP courses was not conducted adequately in the institution addressed by Participant 2. As the program coordinator's educational background is not language teaching or education science, he might lack knowledge of pedagogical theories and principles, including the importance of needs analysis. To fill this gap, she took the initiative to survey students' needs and wants. However, she gained very little information since the students, who were still in the first semester, could not clearly state what they needed in learning ESP. This evidence is in line with Anthony (2009, as cited in Bhatia et al., 2011), whose survey to determine students' learning needs could only obtain information on what students wanted to learn, not on what they needed. In short, it is not easy to gather direct information from students to identify not only their wants but also their needs, especially when the analysis is not done systematically.
A vague statement uttered by Participant 5 below regarding whether or not needs analysis was conducted implies that she does not see needs analysis as her responsibility. Excerpt 15: "I do not know about needs analysis, whether there was one or not. However, I am sure that the decision to use the existing coursebook is based on careful consideration". (P5, A3) Although a needs analysis might have been conducted in the context of Participant 5, she was not involved in it, which indicates that the needs analysis (if any) was not appropriately carried out. Unlike the other participants who had to prepare the teaching materials by themselves, Participant 5 was lucky enough that her institution already provided the coursebook. Despite the importance of needs analysis before an ESP program is designed, it is surprising that the five participants admitted that there was no proper needs analysis conducted by the stakeholders (the institution, the policymaker, and the curriculum developer) along with the respective teacher. The finding related to a lack of systematic needs analysis in ESP supports previous studies conducted by Kusni (2013) and Poedjiastutie (2017). In the absence of a systematic needs analysis, Participant 3 further explained that she had to design the syllabus by herself. Having no prior knowledge and training related to teaching English for Mechanical Engineering students, she used her common sense in designing the syllabus and materials. No negotiation was made on what should be put in the syllabus. This fact contrasts with Antic (2007, as cited in Ahmadvand et al., 2015), who urges that the development of an ESP syllabus must be based on learners' needs and interests. The lack of ESP needs analysis in non-English departments illustrates a fundamental issue that needs to be seriously considered. The decision to run the ESP program is made without thorough consideration, which should be based on sound pedagogical principles. This is because those who are in charge (i.e., the head of the department or the program coordinator) have no (or limited) pedagogical knowledge, especially in language teaching; their majors are not in language teaching.

Large Classes

The following challenge is related to the classroom condition, namely large class size. The excerpts are shown below: Excerpt 16: "Ideally, there should be a placement test. A language class should be smaller, with 20-30 students at most. However, we cannot find an ideal situation in any university due to many considerations. For example, the more classes, the more teachers are needed. Consequently, more teachers mean higher cost". (P4, A4) Excerpt 17: "The next challenge is the large classes. As many as 120 students are divided into two classes. There are around 60 students in one class, with 100 minutes of contact. Thus, it is not easy to communicate with them and to achieve the learning outcomes". (P5, A4) Excerpt 18: "The classroom is over capacity as there are 45 to 50 students. It is not ideal for them. It is too big". (P1, A4) Excerpt 19: "The most prominent problem in Mechanical Engineering is large classes. There are around 40 to 50 students, so individual work is difficult to be executed because it is time-consuming. Therefore, I usually assign group work unless during the test". (P3, A4) Almost all participants face the same problem of having a large number of students in a class.
This fact broadly supports Harmer (2001), who states that large classes expose challenges that are not found in smaller classes. As lamented by Participant 4, the financial issue that causes the policy to have a large number of students in a class might imply that money is a big issue that her institution has to deal with carefully. The issue concerning the institution's financial capability, which affects the class size, accords with Poedjiastutie and Oliver (2017) and has also been claimed as being common in Brown (2007). Budgeting has caused many universities to have larger classes since smaller classes will require more teachers. The more teachers are hired, the higher cost they will take. It might also be speculated that in a state university where Participant 4 works, it is not because of the financial reason that causes big classes, but due to mismanagement. Lack of principal knowledge on managing a foreign language class will result in treating that class similar to those of other subjects, in which a large number of students are all right. Inconvenience during the teaching learning process is sure to occur when the room is overcapacity. This dismaying fact portrays the insufficient attention for ESP courses organized by non-English departments in higher education. In contrast with Brown (2007), who contends that an ideal language class should not consist of more than a dozen students, ESP classes usually consist of many students (40 students or more), making it not ideal for the teaching of ESP. This condition will eventually affect the achievement of the learning outcomes. In addition, it limits teachers' options in deciding the type of classroom activities. A class with too many learners will not support individual work, as addressed by Participant 3, especially within a relatively short teaching duration (a hundred minutes per session). Therefore, teachers cannot freely decide or modify the task due to time considerations. The finding concerning large ESP classes in non-English departments of tertiary education corroborates earlier studies (Hoa & Mai, 2016;Hou, 2013;Kusni, 2013;Poedjiastutie & Oliver, 2017). It would be frustrating and ineffective to teach ESP with a large number of learners in one class. On the other hand, organizing smaller classes of ESP indeed requires more teachers, more space, and more money. Nevertheless, if this can be realized, there will be more benefits. The teaching learning process will be more effective because learners will gain more opportunities to practice using the language. Teachers can give more attention and assistance to learners, particularly those whose language proficiency is still low. They will also have more options to select and modify the desired activities. Eventually, the teaching learning process will be more effective, and the learning outcomes can be achieved. Learners' Varied English Competence Another challenge dealing with classroom conditions is the learners' varied abilities in English. Mixed ability classes have always been an issue contributing to the difficulty in teaching language (Brown, 2007). The participants address that fact as follows: Excerpt 20: "In the class, usually the students' abilities are mixed. Some students study English for the first time, yet a few students surpass the others, for example, their TOEFL score is >500, and they can speak in English. The gap is real". (P1, A5) Excerpt 21: "There are high achiever students, and there are 'slow' students". 
(P5, A5) Excerpt 22: "They are mixed students, from competent to beginner students in one class". (P2, A5) The more challenging fact is that, as voiced by most participants, learners have various English abilities, from the low level of learners until those with high English competency. Students' mixed abilities have brought a noticeable gap in the learning process, as complained by Participant 1. The fact that ESP classes consist of students with heterogeneous language capabilities concurs closely with some earlier studies (Gatehouse, 2001;Hoa & Mai, 2016;Tsou & Chen, 2014). Thus, selecting materials that suit students' varied abilities will not be easy because the teacher has to consider not only its degree of difficulty but also its potential to generate learners' interest to learn. This fact supports evidence from Medrea and Rus (2012), which emphasizes the difficulty of selecting suitable materials to fit learners' competence. Learners' mixed abilities, as pointed out by Participant 4, is due to the absence of placement tests to identify and group students based on their actual competence. One way to anticipate learners' varied competence is by conducting a pretest to group learners based on their competence. However, conducting a pretest for ESP learners is not without a challenge. It will be time-consuming and costly because it deals with administering a large number of learners. It also demands a thorough preparation: what kind of assessment tool will be used, how to measure the results, and how many levels are needed to group the learners? Thus, it often becomes discouraging unless the institution is very determined to do so. It might be one reason why a pretest for ESP classes is not conducted in non-English departments. The other reason is that it is hard to find a measuring instrument for ESP classes. So, there are usually no leveled classes for ESP. However, if the pretest seems impossible to organize, a pre-course can be given to equipping learners with the necessary language skills. They will encounter fewer problems during the ESP course, and the teacher can anticipate their mixed abilities. Recognizing learners' language proficiency before the ESP course is worthwhile for teachers, as they will be able to foreshadow the selected materials which are suitable for learners' proficiency. CONCLUSION The practice of ESP in non-English departments of either private or state universities suffers from issues that remain to be unsolved. This study has brought some 'chronic' matters to the surface that long have not been adequately handled and solved. Some realities related to ESP practice, i.e., lack of knowledge on learners' discipline, lack of teachers' training, lack of proper needs analysis, large ESP classes, and learners' varied language competence, indeed become a stumbling block for the advancement and growth of ESP practice in general, especially in higher education. The issues found and discussed in this study should become a wake-up call to policymakers and stakeholders to reform ESP practices in many higher education settings. The disparity between the teaching of ESP in non-English departments and the teaching of other subjects has a caveat to reform policy and regulation, which is directed towards giving equal attention for all taught subjects in higher education, regardless of their weighing credits. Concurrent with this, Yu and Liu (2018) suggest that stakeholders should initiate reform at a university level since they can provide funds and management support. 
The findings in this ESP study evoke realities that numerous issues call for attention from those who should step in to take action. However, as this study involved a limited number of participants and universities in a province, this may not represent the general condition of ESP practice in a broader context, be it nationally or globally. Thus, a broader context of the study is highly suggested to gain more convincing evidence.
The Integration of Geochemical Characteristics and Stable Isotope Analyses of δ2H and δ18O in the Paleogene Carbonate Rock Unit of the M-Field, Ciputat Sub-Basin, North West Java Basin, Indonesia

The Paleogene carbonate unit in the North West Java Basin has no outcrop and has never been shown in the regional stratigraphy, whether as a formation or as a member of an existing formation. This paper provides new insight into the diagenetic processes evidenced by the stable isotopes 2H and 18O in formation fluids, integrated with petrographic and geochemical data from rock and fluid samples. The major minerals of this carbonate unit are calcite, clay minerals, dolomite, quartz, plagioclase, and pyrite. ICP-OES analyses show that these carbonate rocks have Fe, Mg, and Al contents ranging over 450-7800 ppm, 497-10892 ppm, and 96-3900 ppm, respectively, while Si and Sr are relatively low, at about 0.1 ppb to 0.7 ppm and 60 ppm to 570 ppm, respectively. Formation water chemistry data show that the total charges of cations and anions are relatively balanced, ranging from 75.5 to 396.8 meq, with TDS from 4,904 mg/l to 22,351 mg/l and SG from 1.005 to 1.016, dominated by the elements Na, Ca, Mg, Cl, and HCO3. The δ2H and δ18O values of the water samples are between -26.2 and -37.2 (‰) and between -3.63 and 2.50 (‰), respectively. The correlation of the geochemical and isotope data of both rock and water indicates that the Paleogene carbonate system in the M-Field has been through at least one uplift and one sea-water rise (drowning) event, with a diagenetic process affected by meteoric water. These geological processes are shown by the calcite cementation, the presence of pyrite and quartz, the recrystallization of the carbonate grains and mylonitic dolomite, the high contents of Mg, Fe, and Al, and the abrupt change of the δ13C and δ18O values. Keywords: Paleogene carbonate, geochemistry, water chemistry, stable isotope, diagenesis.

INTRODUCTION

The Miocene-Pleistocene formation units in West Java have been extensively studied, while the pre-Miocene carbonate rocks are poorly understood.
This pre-Miocene carbonate sequence has never been found exposed anywhere across West Java. Wilson and Hall (2010) indicate that tectonic subsidence of the back-arc areas north of Java and Sumatra resulted in marine flooding, allowing carbonate development during the latest Oligocene to Early Miocene. This pre-Miocene carbonate might also be part of the Oligo-Miocene carbonates described by Satyana (2005) as being distributed regionally across Java, which are divided mainly into two trends: a northern trend (Cepu-Surabaya-Madura, North Central Java, and Ciputat-Jatibarang areas) and a southern trend (Gunung Kidul-Banyumas-Jampang and Bayah-Sukabumi-Padalarang areas). The northern trend developed mainly in the subsurface of the back-arc setting, far from the volcanic arc, while the southern trend developed in an intra-arc setting, at or close to the volcanic arc (Satyana, 2005; Metcalfe, 2017). Owing to the lack of geochemical data from oil and gas exploration, the Paleogene carbonate in the West Java Basin is not well defined. The variety, limited number, and limited quality of the geochemical and stable-isotope samples of rock and fluids, which could only be taken from the few oil and gas wells that have penetrated and cored the carbonate rocks down to depths of 2900-3000 mMD, are among the many challenges in this area. It is therefore important to obtain geochemical and isotope analyses of the Paleogene carbonate in the study area, to gain a better understanding and provide new insight into the development of the petroleum system.

GEOLOGICAL SETTING
The position of the Paleogene carbonate in the Ciputat Sub-Basin of the North West Java Basin varies when correlated with the other lithologies of volcanic and metamorphic rocks, which in this sub-basin are commonly identified as the basement. In some places the Paleogene carbonate is found higher than the basement, in others lower. This may be due to tectonic or diastrophic events in the sub-basin, which forms part of the North West Java Basin (Figure 1). The Ciputat Sub-Basin was also shaped by the North West Java regional tectonics as well as by the subduction system of southern Java. At least four tectonic stages related to the rifting process (Doust and Noble, 2017) drove the geology and stratigraphy of the North West Java Basin: the early syn-rift, the late syn-rift, the early post-rift, and the late post-rift. The Ciputat Sub-Basin is presumed to have started developing during the pre-rift tectonic event of the North West Java Basin, on top of the pre-Tertiary basement described in the previous regional study shown in Figure 2. The rifting and the subduction process south of Java, together with diastrophic events in the Northwest Java Basin, caused faults and fractures to grow across the Ciputat Sub-Basin. The major faults are oriented roughly north-south and appear as a normal fault at the eastern margin of the Ciputat Sub-Basin. The regional stratigraphy of the North West Java Basin shows that the oldest sedimentary rock formation above the basement is the Jatibarang Volcanic Formation, formed during the early syn-rift in the Paleocene-Eocene, while the Paleogene carbonate is neither present in nor considered a member of the Oligocene Jatibarang Volcanic formation or the basement; instead it has been treated as part of the Paleogene basement (Figure 2).
The Ciputat Sub-Basin is a back-arc basin that was filled mostly by coarse clastic sediments during the rifting processes from the Paleocene-Eocene to the Oligocene; from the Oligocene to the present, deep-marine volcaniclastics deposited by turbidite mechanisms, a product of the subduction process, became dominant, while limestone developed at the same time along the shelf edge.

Petrography
The petrological and mineralogical analyses presented here were obtained from previous studies and have already been described (Anonym, 2008; LAPI ITB, 2014). The petrographic samples were collected from one well (M-11) at depths of 2744 mMD, 2757 mMD, 2760 mMD, and 2771 mMD. They are used as complementary evidence of diagenetic processes in the form of matrix replacement and recrystallization of skeletal fragments (Figure 3). The specimens also show features that can be considered evidence of enrichment of material/minerals relative to the original depositional environment (Flugel, 2004), which is a shallow-marine facies. Neomorphosed mudstone appears as skeletal grains, mostly neomorphosed to calcite spar, with the matrix replaced by micrite and indeterminate clays (Figure 4). The texture is difficult to recognize; diagenesis occurs in the form of calcite, micrite, indeterminate clays, pyrite, siderite, and dolomite. Fractures also developed, though not intensely, and visible porosity is very poor (1%). The samples from 2760 mMD (Figure 5) and 2771 mMD (Figure 6) also show evidence of diagenesis in the presence of mylonitic dolomite and mylonitic limestone, which may be due to deep burial after deposition (Moore and Williams, 2013).

Stable Isotope
The Paleogene carbonate unit in the M-Field was analyzed using stable isotopes from formation-water samples and interpretation of major-element geochemical data from three wells (M-11, M-19, and M-20). The vertical positions of the target reservoir in these three wells were flattened at the top of the Paleogene carbonate and measured to the bottom of the completion, which does not yet reach the bottom of the carbonate unit. This configuration, shown in Figure 7, indicates the positions of the formation-water samples taken in each well that penetrated the Paleogene carbonate. Although well M-11 has the longest completion interval (310 m), the imaging logs show that fractures with good connectivity occur only within the top 50 m of the well's completion interval (Figure 8). All samples were collected at the wellhead of each well to ensure there was no mixing with samples from other wells. The cross plot of the stable isotope values (δ2H and δ13C) for each well (Figure 9) illustrates the evolution of geological events during the deposition and formation of the Paleogene carbonate rocks in the Ciputat Sub-Basin.

METHODOLOGY
This research uses geochemical analysis by the ICP-OES method on five core samples taken from two wells, supported by stable-isotope analysis of formation water from six samples (three each for 18O and 2H) collected from three wells. The results are used to characterize the chemistry of the formation water as well as the geochemical characteristics found within the Paleogene carbonate of the Ciputat Sub-Basin. From the geochemical analysis we infer the geological events in this sub-basin that would have caused diagenesis in the Paleogene carbonate unit.
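To make the water-chemistry workflow above concrete, the short Python sketch below illustrates two routine calculations of the kind reported in this study: converting major-ion concentrations (mg/l) into charge equivalents (meq/l) to check the cation-anion balance, and expressing an isotope ratio in the conventional δ notation relative to a reference standard. The ion list, example concentrations, and helper-function names are illustrative assumptions and are not taken from the M-Field data set.

```python
# Illustrative water-chemistry helpers (not the authors' actual workflow).

# Equivalent weights (g per equivalent) = molar mass / charge, standard values.
EQ_WEIGHT = {
    "Na": 22.99 / 1, "Ca": 40.08 / 2, "Mg": 24.31 / 2,
    "Cl": 35.45 / 1, "HCO3": 61.02 / 1, "SO4": 96.06 / 2,
}
CATIONS = {"Na", "Ca", "Mg"}

def to_meq_per_l(concentrations_mg_l):
    """Convert ion concentrations in mg/l to meq/l."""
    return {ion: c / EQ_WEIGHT[ion] for ion, c in concentrations_mg_l.items()}

def charge_balance_error(meq):
    """Relative cation-anion imbalance in percent."""
    cat = sum(v for ion, v in meq.items() if ion in CATIONS)
    an = sum(v for ion, v in meq.items() if ion not in CATIONS)
    return 100.0 * (cat - an) / (cat + an)

def delta_permil(ratio_sample, ratio_standard):
    """Delta value (per mil) of an isotope ratio relative to a standard (e.g. VSMOW)."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

if __name__ == "__main__":
    # Hypothetical brackish formation water, mg/l (illustrative numbers only).
    sample = {"Na": 2500.0, "Ca": 400.0, "Mg": 120.0,
              "Cl": 3800.0, "HCO3": 900.0, "SO4": 150.0}
    meq = to_meq_per_l(sample)
    print({k: round(v, 1) for k, v in meq.items()})
    print("charge balance error: %.1f %%" % charge_balance_error(meq))
    # The 2H/1H ratio of VSMOW is about 155.76e-6; the sample ratio is hypothetical.
    print("delta2H = %.1f permil" % delta_permil(151.0e-6, 155.76e-6))
```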
DISCUSSION
This study integrates geochemical and isotope data from rocks and formation water in order to explain the geological and diastrophic events that can be seen through evidence of diagenetic processes and changes of depositional setting in the Paleogene carbonate unit and its formation water in the M-Field, Ciputat Sub-Basin. The petrographic analysis reveals evidence of diagenetic processes and of the depositional settings in which diagenesis could occur. This evidence takes the form of matrix replacement (Figure 6) and recrystallization of skeletal fragments (Figure 5). It clearly records diagenesis by mineral dissolution and precipitation, which occurs when carbonate rocks are exposed to meteoric water: the less stable minerals, aragonite and high-Mg calcite, dissolve, and more stable minerals such as low-Mg calcite commonly reprecipitate. Further evidence of diagenesis in this Paleogene carbonate is related to high pressure, shown by the development of fractures that form fracture porosity and by the mylonitic limestone/dolostone that is quite obvious in thin section (Figure 5); this diagenetic evidence is associated with a deep-burial setting. The occurrence of quartz can also be considered evidence of enrichment relative to the original marine depositional environment by a detrital sediment influx (Figure 4) where a transitional depositional setting took place. This could be the impact of volcanic activity, faulting, or uplift within or near the area, causing mixing between the carbonate system and volcanic, magmatic, or detrital sources of deposits. Hence, the petrographic analysis indicates that diagenesis occurred, supported by the variety of, and changes in, the depositional setting of this Paleogene carbonate. The depositional setting thus varies from a shallow-marine setting interacting with meteoric water, with diagenesis expressed as mineral replacement, dissolution, and precipitation; to a deep-marine setting, with diagenesis expressed as fracture growth forming secondary porosity and the occurrence of mylonitic dolomite; and to a transitional shallow-marine to terrestrial environment, with diagenesis evidenced by quartz, siderite, and pyrite in the carbonate rock.

Water-chemistry analysis of carbonate rocks can also serve as a tool for tracing rock sources, given knowledge of the association and dissociation reactions of the elements contained in the water; for example, the presence of Ca, Mg, and Na and the abundance of HCO3 indicate which reactions are favored. Hence, we presume that the most reactive solvent species is HCO3, followed by sulfate and chloride, while the most reacted elements are Ca, followed by Mg and Na. All of these elements lead us to confirm that the source of this formation water is sea water from the paleo-environment. The stable-isotope values reveal that the formation water in the carbonate rocks has been through significant changes during geological events (Table 2). Based on the information in Figures 8 and 9 and Table 2, the completion interval, for example in well M-11, is not always in a porous zone; flow may also occur through fractures, as seen in the FMI analysis (Figure 8).
Thus, we conclude that the deeper portion of the carbonate rock has already undergone continuous pressure and compaction. This presumption can only be thoroughly understood after the isotope values are analyzed in combination with the geochemical (ICP-OES) interpretation. The upper section of the carbonate rocks in well M-11 has good matrix and fracture porosity, whereas below 2775 mMD the carbonate has different properties, with vuggy porosity in isolated positions; the matrix, with less than 1% porosity, cannot provide a flow medium for the fluids, so fracture porosity dominates the flow of formation water.

The stable-isotope values of formation water used in this study are based on the change in the 18O values caused by the influx of meteoric/fresh water mixing with the sea water of the depositional environment (Bowen, 1994). As shown in Figure 9, there are two phases of events: an isotope depletion, in which both isotope values dropped significantly (from 2.5 ‰), followed by an enrichment phase in the Paleogene carbonate rocks. The δ18O and δ2H cross plot shows that the depositional environment and the major influences on the carbonate rocks can be divided into three zones of geological processes (Figure 10): 1. the isotope values of the samples plot far below the local meteoric water line, showing that the interval zone is not contaminated by meteoric or surface water; 2. the formation water in this Paleogene carbonate reservoir is not an isolated zone; 3. there are at least two or three condition zones for the carbonate rocks: a deep reservoir environment, a magmatic/alteration impact, and a third possibility of zones combining both conditions, i.e., a deep environment close to magmatic alteration.

Further analysis of these isotope values enables us to correlate the diagenesis of the formation water with the diagenetic processes and depositional setting of the Paleogene carbonate. It is presumed that the formation water in the three wells of the M-Field was originally contained in a deep carbonate reservoir as a result of burial and was later heavily affected by magmatic alteration through intrusion or faulting as part of a diastrophic event. The evaluation of the stable-isotope data from formation water (Figure 11) indicates two series of events in the development of this carbonate zone.

Figure caption: δ18O and δ2H isotope values from formation water, shown together with completion interval length, depth position (mku/mMD), and the distance between the top of the Paleogene/Old Carbonate and the top of the open-hole completion; the isotope plot on the right shows alternating phases of depletion and enrichment related to diastrophic events that affected the diagenesis of the formation water and the carbonate rocks.

CONCLUSIONS
The carbonate rocks of the M-Field in the Ciputat Sub-Basin, North West Java Basin, have several distinct properties and characteristics that explain the depositional settings and the related diagenetic processes that the formation water and carbonate have been through. Certain elements, namely Ca, Mg, Na, Cl, and HCO3, are found to be the most dominant elements in the rock-water interaction processes in the formation water of the Paleogene carbonate, and they typify the water type of the formation water in the M-Field.
The depositional conditions inferred from the petrographic descriptions are shallow marine, with diagenetic events forming neomorphosed limestone identified from the appearance of quartz, pyrite, and siderite. The diagenetic setting from petrography is aligned with the results of the δ2H and δ18O stable-isotope cross plot, which shows three carbonate-rock settings in this Paleogene carbonate of the M-Field, Ciputat Sub-Basin: 1. a transitional marine-terrestrial depositional environment, with diagenetic evidence in the occurrence of quartz, pyrite, and siderite; 2. a marine depositional setting with a buried/deep diagenetic environment, shown by the presence of fracture porosity and mylonitic dolomite; and 3. a shallow-marine depositional setting with a diagenetic environment influenced by meteoric water and magmatic activity, shown by mineral dissolution and precipitation and also affecting the isotope signature.

ACKNOWLEDGMENTS
The author acknowledges Pertamina EP and the special task force for oil and gas (SKK Migas) for the opportunity to complete this study. I would also like to thank my colleagues at Pertamina EP, especially Mr. Ari Wahyu and Mr. Rizki, as well as Mr. Satrio and Mr. Bungkus from BATAN, for their support in providing the isotope sample assessments.
Spin and valley degrees of freedom in a bilayer graphene quantum point contact: Zeeman splitting and interaction effects

We present a study on the lifting of the degeneracy of the size-quantized energy levels in an electrostatically defined quantum point contact in bilayer graphene by the application of in-plane magnetic fields. We observe a Zeeman spin splitting of the first three subbands, characterized by effective Landé g-factors that are enhanced by confinement and interactions. In the gate-voltage dependence of the conductance, a shoulder-like feature below the lowest subband appears, which we identify as a 0.7 anomaly stemming from the interaction-induced lifting of the band degeneracy. We apply a phenomenological model of the 0.7 anomaly to the gate-defined channel in bilayer graphene subject to an in-plane magnetic field. Based on the qualitative theoretical predictions for the conductance evolution with increasing magnetic field, we conclude that the assumption of an effective spontaneous spin splitting is capable of describing our observations, while the valley degree of freedom remains degenerate.

I. INTRODUCTION
Exploiting the quantum degrees of freedom of charge carriers offers a potential route for designing new types of quantum electronic devices. While most studied systems involve the electron's spin degree of freedom, aiming at spintronic applications [1,2], more recently the additional valley isospin available in a variety of materials has attracted growing interest for use in valleytronics [3]. However, irrespective of the system of choice, the implementation of spin- or valley-based functionalities into electronic devices requires full control of the quantum state itself. A quantum point contact, which confines charge carriers into one dimension [4], is one of the basic building blocks for efficient injection, control, and read-out.

Recently, we reported [5] on an electrostatically induced quantum point contact (QPC) in bilayer graphene (BLG) [6-14], i.e., a system with four-fold spin and valley degeneracy, where the constriction is realized by local band-gap engineering with a displacement field perpendicular to the BLG plane. We observed confinement with well-resolved conductance quantization in steps of 4e²/h down to the lowest one-dimensional (1D) subband, as well as a peculiar valley subband splitting and a merging of K and K′ valleys from two non-adjacent subbands in an out-of-plane magnetic field (see also Ref. [10]). In the present paper, we investigate the same system in an in-plane magnetic field. In this context, we became aware of the publication [11] that reported conductance measurements in a similar setup and found certain features, below the lowest plateau, in addition to the expected conductance quantization. These features were attributed [11] to the substrate-induced Kane-Mele spin-orbit coupling [15]. Since the reported value of the spin-orbit coupling in monolayer graphene is of the order of 40 µeV [16] (corresponding to temperatures of the order of 0.5 K) and there is no clear mechanism that would lead to an enhancement of spin-orbit coupling by hexagonal boron nitride (hBN), we expect another mechanism behind such features. Here, we explore alternative possibilities for the explanation of the appearance of additional features in the conductance.
One rather notorious phenomenon in which interaction effects show up in transport measurements is the appearance of an additional shoulder in the quantized conductance of QPCs below the lowest plateau. It is commonly known as the 0.7 conductance anomaly since, in systems with spin degeneracy, it is usually observed close to a conductance of 0.7 × 2e²/h. Proposed explanations include, among others, Wigner crystallization [48-50] and other interaction-based mechanisms [51-57]. In particular, there are studies investigating the influence of the QPC barrier on electron-electron interaction effects perturbatively. On a very simplistic level, considering a local interaction, only Hartree-type processes involving electrons with opposite spin contribute, leading to an effective blocking of the channel for one spin species for a certain amount of time and thus to a lowered conductance. Since interaction effects are enhanced at low densities, such effects would be strongest in the lowest quantization subband.

In this work, we study the conductance of a BLG QPC for an in-plane magnetic-field orientation. We start by presenting our experimental results (Sec. II), which were obtained on the same sample as in Ref. [5], but in another cool-down in order to change the sample orientation within the magnet. In particular, we demonstrate the importance of interaction effects in the lowest size-quantized subbands by measuring the renormalized Landé g-factor governing the Zeeman splitting of the subbands. This motivates us to employ a picture based on the interaction-induced spontaneous polarization of the spin or valley degrees of freedom to describe the shoulder-like features in the conductance. After a short reminder on the band structure of BLG and, especially, the influence of external gating on the gap and the densities (Sec. III), we discuss the conductance of the BLG QPC. In Sec. IV we detail an extension of a phenomenological model for the 0.7 anomaly proposed in Ref. [58] to BLG. Within this framework, we investigate all possible scenarios in order to find the one most likely to be present in this experiment. We do not explicitly consider any microscopic model of the anomaly but, instead, assume that some sort of interaction-induced spin and/or valley splitting is present at zero magnetic field and investigate the consequences of the possible types of splitting for the conductance in an increasing magnetic field. In fact, the assumed polarization does not need to be static; it just needs to fluctuate slowly compared to the traveling time through the constriction, which according to Ref. [59] is indeed fulfilled. By comparing our experimental results with these scenarios (Sec. V), we conclude that our sample shows spontaneous spin polarization but no valley splitting. Our findings are summarized in Sec. VI, and technical details are described in the Appendices.

A. Fabrication and characterization
For this experiment, we used the same BLG device as presented in Ref. [5], see Fig. 1. The chosen gate configuration is V_BG = 10 V (back-gate voltage) and V_SG = −12 V (split-gate voltage). This setup differs from the one used in Ref. [8] for the study of supercurrent confinement in a BLG QPC by the addition of an overall top gate. The device consists of an hBN-BLG-hBN heterostructure, which is edge-contacted with Ti/Al electrodes. The thicknesses of the top and bottom hBN layers of the sandwich are 38 nm and 35 nm, respectively.
The sandwich is placed onto a pre-patterned back gate, which is defined on a sapphire substrate that is, in turn, covered by an additional layer of the dielectric Al2O3. The magnetic field was applied in the plane of the BLG layer. The measurements were performed under the same experimental conditions as in Ref. [5], but in a different cool-down, with the magnetic field oriented in the plane of the BLG (at approximately 45° from the current direction). The QPC in BLG is engineered electrostatically by means of the split gate placed on top of the device; the whole sample is covered with an extra 30-nm-thick layer of Al2O3 before adding the overall top gate made from Ti/Cu. The measurements were performed at either 20 mK or 4 K in a 3He/4He dilution refrigerator (BlueFors BF-LD250). A two-terminal configuration was used, employing the standard low-frequency (≈13 Hz) lock-in technique with an AC excitation ranging from 1 to 20 µV. For further details of the characterization of the sample, the reader is referred to the Supplemental Material of Ref. [5]. Figure 8 of the Supplemental Material of Ref. [5] also shows the finite-bias measurements used to extract the gate-coupling parameter.

To the best of our knowledge, two papers by other groups have investigated similar setups, namely Refs. [60] and [12]. While both papers also studied transport through a BLG QPC, the confinement conditions there were different from those in our setup. This difference might be crucial for observing interaction effects, including the 0.7 anomaly. Specifically, in the present work, the QPC is formed by split gates of a physical width w ≈ 65 nm. Because of the additional layers of Al2O3, the distances between the channel and the global back and top gates are 55 nm and 68 nm, respectively. In Ref. [60], the physical width of the split gates is 120 nm, while the distance to the back gate and split gate is not specified. Since Ref. [60] did specify that the BLG is encapsulated in hBN, the distance to the back gate and the split gate is likely of the order of 30 nm, with an additional 35 nm of Al2O3 between the split gates and a local top gate. Similarly, Ref. [12] stated a width of 250 nm, a distance of 25 nm to the back gate (and, probably, a similar one to the split gates), and an additional 25 nm of Al2O3 between the split gates and a local top gate. This means that our channel is much narrower and the confinement much stronger, so that the density of states is considerably larger, which enhances all interaction effects. Moreover, interaction effects in Refs. [12,60] should be more strongly suppressed by the top and bottom gates, which are closer than the typical distance between interacting electrons within the constriction. It is worth noting that Ref. [61] stated that gates need to be closer than a few nanometers to fully suppress electron-electron interaction in graphene and BLG. At this point, it should be mentioned that, depending on the exact shape of the constriction, the 0.7 shoulder can appear at different conductance values (for example, at 0.5 e²/h [55,56]), which would fit with the alleged spin-orbit gap of Ref. [12]. The global back gate, which also covers parts of the leads in our device, leads to a smoother coupling in the QPC region, while also modifying the band structure and gap in the non-QPC regions. As has been shown, for example, in Refs.
[55,56], both the presence and the shape of the 0.7 anomaly depend rather strongly on the exact constriction profile, so that a smoother constriction region might be necessary for its appearance. This also applies to the larger parameter space we explore by varying our split gate and back gate not only along the direction of zero displacement field. Lastly, we want to point out that most of our reported results are based on the three lowest size-quantized levels, which are not even resolved in Ref. [60], while Ref. [12] does not reach full pinch-off.

B. Conductance
We start by investigating the dependence of the conductance on the magnetic field and the top-gate voltage. Figures 2(a)-(d) show the experimental data at a temperature of 20 mK. In Fig. 2(d), the conductance is shown as a function of the top-gate voltage V_TG for two different values of the in-plane magnetic field B_∥. The black curve corresponds to B_∥ = 0.2 T and the light-blue one to B_∥ = 6 T, as marked in Fig. 2(a). The light-blue curve highlights the appearance of additional half-step conductance plateaus at high in-plane magnetic fields. The black curve contains a shoulder, marked by the arrow, which we will attribute to the 0.7 conductance anomaly. We note that the valley degeneracy is apparently not affected by the application of the in-plane magnetic field, and the Zeeman spin-split subbands remain degenerate in the two valleys K and K′. Since the aluminum leads are superconducting at 20 mK, a finite magnetic field is needed to suppress superconductivity, and curves below 0.2 T show the influence of the superconducting leads, cf. Appendix A. Cubic-spline fits of the conductance for all measured values of the magnetic field between 0.2 T and 6 T are shown in Figs. 2(b) and 2(c) for temperatures of 20 mK and 4 K, respectively. The curves in both figures are shifted vertically for clarity and colored according to their first derivative. For both temperatures, there are two regions of steep incline (orange-red) at high magnetic field, corresponding to the chemical potential crossing the spin-split bands. The splitting is both sharper and larger at the lower temperature, and the plateaus are flatter there as well. The lower spin subband stays at roughly the same value of V_TG. Figure 2(a) shows a grayscale map of the differentiated differential conductance dG/dV_TG as a function of top-gate voltage V_TG and in-plane magnetic field B_∥ for T = 20 mK. Transitions across 1D subband edges appear as dark lines, while conductance plateaus are visible as light regions in between. One clearly sees the four well-resolved conductance plateaus. These are separated by the three regions corresponding to the 1D subbands, which split roughly symmetrically with the applied in-plane field for the higher bands. This corresponds to the evolution from spin-degenerate into spin-split energy levels. The lifting of the spin degeneracy occurs for the lowest three subbands, where the confinement and interactions are the strongest. Figure 2(e) shows the same data as 2(a), but as a function of B_∥ and G. The bright horizontal lines at multiples of 4e²/h correspond to the spin- and valley-degenerate conductance quantization plateaus at zero magnetic field; the additional half-integer multiples at higher magnetic fields correspond to the spin-split plateaus due to the Zeeman effect.

FIG. 2. Measured conductance of the QPC in BLG for V_BG = 10 V and V_SG = −12 V.
(a): Differentiated differential conductance as a function of the top-gate voltage V_TG and in-plane magnetic field B_∥ for a temperature of 20 mK. Plateaus of the conductance correspond to bright regions, steps to dark regions. The map scale is cut at 0 e²/(h·V) and 8 e²/(h·V) to bring out the details. The black dashed line corresponds to B_∥ = 0.2 T, the lines of blue tones to 2, 4, and 6 T. The dots of different shades of pink mark the development of the spin subbands used to extract the Zeeman splitting and the effective Landé g-factors. (b) and (c): Cubic-spline fits of the differential conductance G as a function of V_TG for increasing B_∥, at 20 mK and 4 K, respectively. The curves are shifted vertically with α = 2e²/(h·T) and colored according to their first derivative. (d): Differential conductance G as a function of V_TG at 20 mK for B_∥ = 0.2 T (black curve) and B_∥ = 6 T (light blue). The arrow marks the additional shoulder, which we identify as a 0.7 anomaly. (e): Differentiated differential conductance as a function of B_∥ and conductance G for 20 mK. Plateaus of the conductance correspond to bright regions, slopes to dark regions.

C. Extra features of the conductance
Additionally, we note the presence of a shoulder-like feature below the lowest conductance plateau at about G = 2.5 e²/h, similar to the 0.7 structure described in many other materials [62], which develops into the lowest spin-split subband at G = 2 e²/h. This feature is well visible in the black curves in Figs. 2(a) and 2(d). Since flatter parts of the conductance correspond to brighter colors in Figs. 2(c) and 2(f), it appears as a bright region between the zeroth and first plateau, i.e., within the darker region to the left of V_TG = −12 V, making it look like a spin splitting of the 1D subbands at zero magnetic field. This additional feature is also visible in Fig. 3, which shows cadence plots of the conductance at 20 mK and 4 K in Figs. 3(a) and (b), respectively, and of the derivative of the conductance at 20 mK and 4 K in Figs. 3(c) and (d), respectively. In all cases, only the lowest band is shown. Cadence plots for a larger range of conductance variation are shown in Appendix A. The colored curves correspond to the values of the magnetic field marked in Fig. 2(a). In the black curves in both Figs. 3(a) and (b) there is an additional shoulder at around 2.5 e²/h, which develops into the spin-split plateau at higher magnetic fields. In the cadence plots of the derivative of the conductance, Figs. 3(c) and (d), this shoulder corresponds to an additional peak, which clearly develops into the spin-split peak at 4 K, whereas at 20 mK this transition is somewhat obscured by yet another feature. We identify this obscuring feature as part of a larger oscillation pattern discussed later. Similar plots are shown in Ref. [45] for GaAs, where the observed behavior was attributed to the 0.7 structure. The extra feature cannot be an effect caused by the finite magnetic field needed to suppress superconductivity, since it is not located on the imaginary line extending the Zeeman splitting down to small magnetic fields. Instead, a finite magnetic field is needed to bring this feature down to the spin-split value. Moreover, this feature is seen already at zero magnetic field in Figs. 2(c), 3(b), and 3(d) at the higher temperature, where the contacts are not superconducting.
At stronger magnetic fields, B_∥ ≳ 4 T, this feature merges with the shoulder that, at the lowest magnetic fields, splits off from the lowest main conductance quantization plateau at G = 4e²/h and moves down to form a plateau slightly below G = 2e²/h. This behavior is clearly observed as the evolution of the red region above V_TG ≈ −12 V in Fig. 2(b). The merging of the two shoulders is also evident in Fig. 2(a) as an intersection of the two bright regions at B_∥ ≈ 4 T and V_TG ≈ −12 V. Finally, there are additional oscillations in the conductance (of which the obscuring feature in Fig. 3(c) is one), which are most visible close to the conductance plateaus in Fig. 2(d). These appear as vertical lines in Fig. 2(a) and are less visible at the higher temperature in Fig. 3(b). Most notably, a maximum of such an oscillation is seen to go straight through one of the spin-split bands of the lowest 1D subband in Figs. 2(a) and (b) and Fig. 3(c), starting at around −12 V and 0 T in the lowest plateau, crossing one spin subband at around 3 T, and ending up in the 0.5 e²/h plateau at higher magnetic field. Similar oscillations appear at other voltages in a regular fashion.

D. Effective Landé g-factor
From the spin splitting of the 1D subbands marked in pink in Fig. 2(a) we extract the Zeeman energy splitting ∆E_Z by converting the top-gate voltage V_TG into energy, using the splitting rate of the energy levels in source-drain bias measurements [5], as described in Refs. [63-66]. The confinement in this cool-down, V_BG = 10 V and V_SG = −12 V, does not exactly correspond to the setup of the source-drain measurement, where V_SG = −11.6 V. We observed good agreement between the two measurements in Ref. [5], which had a bigger difference in the confining potentials. Most importantly, the extracted gate coupling is the same for all nine visible subbands. Thus, we expect this value to be a very good fit here as well and use it in the following. The obtained value of ∆E_Z for each of the three lowest subbands is plotted in Fig. 4(a) as a function of magnetic field, revealing linearly increasing Zeeman energy splittings. Remarkably, in the case of the N = 0 subband, the Zeeman splitting shows a linear behavior only for B_∥ ≳ 5 T, whereas at smaller fields an almost constant splitting is observed. This saturation effect can be linked to the additional shoulder observed in the conductance curves in Figs. 2(b) and 2(c) at not too strong magnetic fields. The plateau in the Zeeman splitting corresponds to the magnetic fields below 4 T in Fig. 2(a), where the bright region to the left of V_TG = −12 V disappears. One can fit the dependence of ∆E_Z on the magnetic field either requiring a vanishing splitting when extrapolated to zero field or not (using then the best linear fit at high magnetic field). In the latter case, a finite intercept of ∆E_Z ≈ 1.7 meV is observed for the N = 0 subband at B_∥ = 0, unlike the cases N = 1 and N = 2, which extrapolate close to zero energy splitting. This suggests that a spontaneous spin splitting occurs for the N = 0 subband, where the effects of interaction and confinement are expected to be the most prominent. Fitting with a finite intercept, as was done, e.g., in Ref. [45], establishes a bound on the zero-field splitting without interaction effects. One should note that this splitting is fully obscured by the much larger, interaction-induced 0.7 anomaly, which produces a much larger value of the zero-field splitting.
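As a concrete illustration of how the effective g-factors follow from the measured splittings, the short Python sketch below performs the two linear fits described above, once forcing the line through the origin and once allowing a finite intercept, based on ∆E_Z = g* µ_B B_∥ (+ offset). The data points are invented placeholders, not the measured values of Fig. 4(a).

```python
# Illustrative extraction of an effective Lande g-factor from Zeeman splittings.
import numpy as np

MU_B = 0.05788  # Bohr magneton in meV/T

# Hypothetical (B [T], Zeeman splitting [meV]) pairs, standing in for Fig. 4(a)-type data.
B = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
dE = np.array([0.8, 1.1, 1.5, 1.8, 2.2])

# Fit without intercept: minimize |dE - g*mu_B*B|^2  ->  g = sum(B*dE) / (mu_B * sum(B^2))
g_no_offset = np.sum(B * dE) / (MU_B * np.sum(B**2))

# Fit with a finite intercept: dE = g*mu_B*B + dE0
slope, intercept = np.polyfit(B, dE, 1)
g_offset = slope / MU_B

print(f"g* (no offset)     = {g_no_offset:.2f}")
print(f"g* (finite offset) = {g_offset:.2f}, zero-field offset = {intercept:.2f} meV")
```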
From the slopes of the Zeeman splitting in Fig. 4(a), we find the (magnetic-field-independent) values of the effective Landé g-factors g*_N for each of the subbands, shown in Fig. 4(b) and listed in Table I.

FIG. 4(b) (caption): Effective Landé g-factors obtained for the in-plane magnetic field. The gray line indicates the bare 2D g-factor g = 2 for BLG. The error bars mark the 1σ intervals from the two fits shown in panel (a). The dotted points correspond to the dotted lines above, the crosses to the dashed lines. All parameters are given in Table I.

TABLE I.
subband:             0         1         2
g* (no offset):      6.04(6)   4.22(4)   3.73(4)
g* (finite offset):  4.91(40)  4.14(17)  3.72(22)

These values are obtained by a linear fit to the splitting with and without a finite intercept. For both fits we use the 1σ intervals to obtain error bars. The obtained g*_N values are increasingly enhanced for lower subbands compared to the bare 2D g-factor g = 2, with a maximum enhancement by a factor of about 2-3 for N = 0. This observation also supports the idea of an enhanced role of interaction effects for the N = 0 subband. Independent of the exact value of the gate coupling, this enhancement relies only on the gate coupling being the same for the three subbands, and it is seen both for a fit with a finite intercept and for one without. Since the reported Kane-Mele spin-orbit gap of 0.04-0.08 meV lies between the finite and the vanishing intercept, it would also not change the resulting enhancement of the Landé g-factor by more than a few percent.

III. THEORETICAL MODEL
The quantization of conductance in a QPC is a well-known experimental proof of the possibility of confining charge carriers, and it clearly shows their quantum nature [67]. What makes BLG an interesting platform for such measurements is its additional valley degree of freedom and the high electrostatic tunability of its band gap [6,68]. In this section, we discuss the effects of the applied gate voltages on the band structure and, thus, on the observed conductance within the essentially noninteracting model (interactions here are taken into account only through the self-consistent screening of the gate potentials).

A. Effective Hamiltonian and dispersion of BLG
We describe the low-energy properties of BLG relevant for the transport measurements in the QPC geometry by the effective two-band Hamiltonian, see Ref. [69]. The details of this approximation are given in Appendix C. The two-band matrix Hamiltonian of Eq. (1), acting in the space of the pseudospin degree of freedom (Pauli matrices σ̂) combined with the Zeeman interaction in the spin space (Pauli matrices ŝ), contains the kinetic momentum π = ξp_x + ip_y, with ξ = ± referring to the K_± valley. Here, we disregard possible spin-orbit coupling, which is a small effect at the energy scales of the experiment and is not capable of explaining the zero-field splitting or the magnetic-field behavior we observe, as seen from the zero-field splitting in Fig. 4(a). We will return to this issue below. In what follows, we disregard the Mexican-hat term H_M, Eq. (3), that develops for finite layer asymmetry U, as discussed in Ref. [70]. We also neglect the skew interlayer hopping, which leads to trigonal warping [70,71]. The effect of these subtle features of the BLG spectrum on the conductance of a QPC in an in-plane magnetic field will be discussed elsewhere. Here, we adopt the simplest model that, as we demonstrate below, is capable of describing the salient features of the conductance. Clearly, we have to distinguish two spatial regions in our physical sample.
Away from the split gates there is no confinement, and the electrons feel an approximately constant top-gate and back-gate voltage. Close to the split gates, the shape of the confinement leads to a nontrivial, spatially dependent effective top-gate voltage. The dispersion of the spin-σ (σ = ↑,↓) band for the low-energy Hamiltonian (1) without the Mexican-hat feature (3) is given by Eq. (4). This corresponds to the 2D density for spin projection σ given in Eq. (5), where a factor of 2 accounts for the valley degree of freedom and the chemical potential µ is measured with respect to the middle of the asymmetry gap. For a small Zeeman splitting, ∆E_Z ≪ √(4µ² − U²), one can use the expansion of Eq. (6). This expansion tells us that the effect of the Zeeman splitting on the density is enhanced when the chemical potential is close to the gap. The total density n_2D = Σ_σ n_2D,σ is, to first order in ∆E_Z, independent of the magnetic field, and we obtain the chemical potential in weak fields from Eq. (7).

B. Controlling BLG with gates
In the 2D regions away from the QPC, the effect of a constant back-gate and top-gate voltage is described by the self-consistent gap equation [70,72]. The total density n = n_↑ + n_↓ is electrostatically determined by the gates and given by Eq. (8). Here, ε_0 is the vacuum permittivity, L_BG (L_TG) is the distance from the BLG plane to the back gate (top gate), and ε_BG, ε_TG are the relative dielectric constants of the material between BLG and the back gate and top gate, respectively. In the absence of screening, the interlayer asymmetry factor U is given by Eq. (9), where c_0 is the distance between the two BLG planes and ε_r is the relative permittivity between these sheets. Since the two layers of BLG screen the effect of the closer gate for the other BLG plane, depending on their density and thus on the voltage they feel, the actual asymmetry as a function of the density is given by the self-consistent equation (10) [70]. Thus, changing the top-gate voltage tunes the density n according to Eq. (8), which, in turn, influences the asymmetry factor U according to Eq. (10) and hence the dispersion (4) and the chemical potential according to Eq. (7). This chemical potential remains constant over the whole sample, including the QPC constriction, where the density is no longer given by Eq. (5). Here, the chemical potential depends on V_BG and V_TG through the corresponding dependence of the 2D density, Eq. (8), and the dependence of U_ext, Eq. (9). In the experiment, the combination of back-gate and split-gate voltages is used to open a gap U under the constricted region and to tune the chemical potential inside this gap, as shown in Ref. [5], and thus form the QPC, see Fig. 5. The overall top gate is used to tune into the low-density regime, where the observation of conductance quantization is possible [5]. Importantly, for fixed back-gate and split-gate voltages, as in the experimental setup, the top-gate voltage tunes the electronic density in the sample linearly [70]. As proposed in Ref. [73], we model the QPC by projecting the 2D problem onto a one-dimensional one. The procedure for a standard Schrödinger equation is described in Appendix E 1, and a generalization of the method to BLG is discussed in Appendix E 2. The quantization of conductance is already visible in the simplest approximation of hard-wall boundary conditions, as we show now. For a channel of width W, the dispersion relation for the longitudinal wavevector k resulting from Eq. (1) takes the form of Eq. (13) [5], where N = 0, 1, 2, . . . labels the size-quantized bands.
While the case N = 0, strictly speaking, requires a different choice of boundary condition, we still choose to investigate the effect of the resulting k⁴ dispersion, which one would also obtain in the 2D setup. It will turn out that the choice of any nonlinear dispersion does not have qualitative consequences for the 0.7 effect. Note that U in Eq. (13) differs from the 2D expression (10), since the screening in a 1D channel differs from that in the unconfined regions of BLG. We also note that the channel width is affected in a nontrivial way by V_BG and V_TG. The lowest band is, to leading order, quartic in the momentum, so that the zero-temperature density resulting from Eq. (13) follows a quarter-power dependence on the chemical potential, as opposed to the square-root dependence of the 2D density (5). The total density in the constriction is again determined electrostatically by the gates, but the stray fields of the split gates make the evaluation of the dependence of the density on the gate voltages harder. Since the split-gate voltage is applied additionally in the constricted region, the gap there is larger and the density inside the QPC is lower than away from the barrier (Fig. 5), enabling the observation of the very lowest size-quantized bands.

C. Conductance quantization
We describe the conductance of the system by means of the Landauer-Büttiker formula (16), where T_σ,ξ(ε) is the transmission of a subband with spin σ and valley ξ. Assuming an idealized step-function transmission coefficient, where a band contributes to G as soon as it starts to be filled, the Landauer-Büttiker conductance takes a form in which the factor of 2 accounts for the valley degeneracy, ε⁰_N is the lower band edge of band N at zero magnetic field, and the Zeeman interaction is written explicitly. The splitting inside the constriction is larger, since the Landé g-factor is enhanced there; as observed experimentally, the Zeeman splitting inside the constriction is not symmetric. To lowest order in a weak magnetic field, the chemical potential for a fixed total 2D density is independent of the magnetic field, see Eq. (6). Every time the chemical potential crosses another lower band edge at finite magnetic field, the conductance makes a step of ∆G = 2e²/h and, at zero magnetic field, a step of ∆G = 4e²/h. Each step has the shape of the Fermi function. The steps are separated by conductance plateaus, giving rise to the staircase structure seen in Figs. 2 and 3. This is the conventional conductance quantization for a QPC, with the appropriate degeneracy of the bands. In contrast to the case of an out-of-plane magnetic field [5,71], the in-plane magnetic field does not couple to the valley degree of freedom. As discussed in Ref. [74], its direct effect on the band structure is also negligible at experimentally accessible magnetic fields. Therefore, at arbitrary fields, the steps of the non-interacting conductance carry a factor of two corresponding to the two valleys of BLG.

D. Screening and electron-electron correlations
Electrons in the device are subject to the Coulomb interaction, which is screened by the electrons themselves, by the metallic gates, and by the dielectric material. Let us first discuss the screening effect of the gates. There are three relevant length scales in the system. The first one is the physical distance between the split-gate fingers, w ≈ 65 nm; the electrostatically induced channel is narrower than that.
The width of the split-gate fingers is of the order of L ≈ 300 nm, so that we can distinguish two ranges of length scales relevant to electrostatic screening in our device. On scales smaller than or of the order of w, the system is truly 2D; only at larger distances does it cross over to 1D. Another relevant scale is the distance to the back gate and top gate, which are both of the order of d ≈ 55 nm. Here, we also take the dielectric screening into account by assuming, for simplicity, that the insulating layers in between have the same relative dielectric constant ε_r (the vacuum permittivity is denoted below by ε_0). The bare, only dielectrically screened Coulomb interaction is given by its Fourier component at wave vector q (different in the 1D and 2D cases). The gate-screened interaction can be found by summing up the infinite series of mirror charges. In the 2D case this leads to the expression with l = L_TG + L_BG and l′ = L_BG, where in the last line we assumed L_TG = L_BG = d. This means that screening strongly alters the interaction if qd ≪ 1. But in the 2D case we require r < w, i.e., q > 1/w, and thus qd ≳ d/w ≳ 1, so that the interactions are not strongly altered by the screening of the gates. A closer look, including the screening effects on the interaction for monolayer graphene, is given in Ref. [61] and reveals that gates need to be closer than a few nanometers to really alter the interaction, which is not experimentally accessible and certainly not the case here. There, it has also been stressed that for BLG the distances need to be even smaller. In the 1D case the presence of the gates is relevant only on scales x > d and q < 1/d. In this case, we get a constant interaction strength, which is in agreement with our phenomenological model.

One effect of the electron-electron interaction is an enhancement of both the Landé g-factor and the spin-orbit coupling, as discussed in Refs. [75-77]. Introducing the Fermi-liquid constants F_0 and F_1, we can express the Landé g-factor enhancement as g̃ = g/(1 − F_0). Spin-orbit coupling has an additional linear momentum dependence, which means that F_1 enters instead of F_0. Since F_0 > F_1, the g-factor will always be more strongly enhanced than the spin-orbit coupling. The enhancement is largest for a large density of states, so that strong confinement further enhances this effect.

The 0.7 anomaly cannot be explained within a single-particle picture [62,97], and it is commonly accepted that it is directly linked to spin [41,78]. In addition, this effect appears to be thermally activated and therefore not a ground-state property [79]. Moreover, experiments show that the confinement potential plays a crucial role in the strength and the position of this conductance feature [87,88]. Recently, various explanations have been suggested to capture the physical origin of the 0.7 anomaly, such as dynamical spin polarization or spin-gap models due to electron-electron interaction [58,79,80,95,98,99], the Kondo effect [42-44,94,100-103], Wigner crystallization [104-106], or charge density waves [49]. To our knowledge, no comprehensive study of the interaction-induced 0.7 anomaly in systems where both the spin and valley degrees of freedom are degenerate has been reported so far. In contrast to the conductance quantization, the shoulder-like feature appearing in the conductance cannot be explained by the non-interacting theory presented in Sec. III.
In this section, we explore the possibility of explaining this special feature in the context of the interaction-induced 0.7 conductance anomaly. As already discussed above, several microscopic theories have been used to describe the 0.7 anomaly. Here, we do not specify any microscopic mechanism behind the anomaly but, instead, just assume that there is some mechanism that effectively leads to spin and/or valley polarization. Based on this assumption, we extend the phenomenological model of Ref. [58] to four bands and the BLG band structure. The required polarization does not have to be static; it can fluctuate slowly compared to the typical traveling time through the constriction. For simplicity, we nevertheless describe the model for a static situation. The "classic" 0.7 effect is only seen in the lowest conductance step, Fig. 2, so that below we restrict our consideration to the lowest size-quantized band shown in Fig. 3.

A. Phenomenological model
Following the general idea of Ref. [58], we again use the Landauer-Büttiker formula (16) for the conductance, here for the quantized band N = 0 from Eq. (13). The 0.7 effect requires a finite temperature. Assuming that the energy scale for the variation of the transmission probability is smaller than that of the thermal distribution function, we approximate the former as a step function. A spin-valley subband contributes to the conductance as soon as the chemical potential reaches its lower band edge ε⁰_{σ,ξ} within the temperature window. In this section, we develop a phenomenological model to describe how interaction effects may influence these lower band edges beyond the self-consistent screening. There are two ways in which they can differ from the non-interacting single-particle ones. The first one is the spontaneous polarization mentioned above, which is assumed to be arbitrary in the space of spin and valley. Already when the chemical potential is far below any of the relevant subbands, these subbands may be spontaneously split to different energies. The arrangement of these values, acquired at very low chemical potential and zero magnetic field, is referred to as the initial subband configuration. All subbands that lie above the lowest subband are called minority bands; those characterized by the lowest band edge are majority bands. The second effect is the dependence of the subbands on the chemical potential when it is close to the band edge. A particular type of this dependence, namely the pinning of the band edge to the chemical potential, gives rise to additional plateaus in the conductance. It is this interaction-induced dependence of the lower band edges of minority bands on the chemical potential that our phenomenological model describes for any assumed initial configuration. We then consider the corresponding evolution of the conductance with increasing in-plane magnetic field and, by comparing the resulting behavior with the experimentally observed one (Sec. II), infer the initial splitting configuration.

General four-band model
For a system with four degrees of freedom, like BLG, we label the subbands by their spin and valley indices, i.e., ε⁰_{σ,ξ}. Moreover, we assume that the lower band edges of minority bands start to depend on the chemical potential once it reaches a certain value µ_{σ,ξ}, i.e., ε⁰_{σ,ξ} = ε⁰_{σ,ξ}(µ). All possible initial spontaneously polarized configurations of the band edges are shown in Fig. 6(a).
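Counting the qualitatively distinct initial configurations is straightforward if one only tracks how many of the four subbands share each distinct band-edge energy, ordered from the lowest edge upward: these groupings are the compositions of 4, of which there are eight, consistent with the eight cases a-h referenced in the text for Fig. 6(a). The short Python sketch below enumerates them; the enumeration order is ours and need not coincide with the labeling used in the figure.

```python
# Enumerate possible initial band-edge configurations of four subbands,
# distinguishing only how many subbands share each distinct energy,
# ordered from the lowest band edge upward (compositions of 4).

def compositions(n):
    """Yield all ordered tuples of positive integers summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

if __name__ == "__main__":
    configs = list(compositions(4))
    for comp in configs:
        print(comp)   # e.g. (4,) = fully degenerate, (2, 2) = two degenerate pairs
    print("number of configurations:", len(configs))  # prints 8
```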
For the analysis of the dependence of the band edges on µ, we consider the local spin-valley energy-density functional

F[{n_{σ,ξ}}] = E[{n_{σ,ξ}}] − µ Σ_{σ,ξ} n_{σ,ξ}.

Here, F is the free energy of the system and E is its internal energy. A diagrammatic approach to obtaining such a free energy, and a corresponding analysis of possible instabilities in models with multiple species of quasiparticles, is discussed, e.g., in Refs. [107,108]. The lowest bands in Fig. 6 are majority bands with a fixed lower band edge, and we decompose their density as n = n_0 + δn. All changes with the chemical potential are included in δn. For minority bands, we do not make this decomposition but assume that n = 0 for µ < µ_{σ,ξ}. We approximate the free-energy functional F as bilinear in all partial majority-density contributions δn and minority densities n, Eq. (24), where α_{σ,ξ}, β_{σ,ξ}, and γ are phenomenological constants to be determined experimentally and n_{σ,ξ} is understood as δn_{σ,ξ} for majority bands. The minimum of the energy functional is achieved when

∂F/∂n_{σ,ξ} = α_{σ,ξ} − µ + β_{σ,ξ} n_{σ,ξ} + γ Σ_{(σ′,ξ′)≠(σ,ξ)} n_{σ′,ξ′} = 0,

which leads to solutions of the form of Eq. (26) for minority bands, where µ_{σ,ξ} is the critical chemical potential of the minority subband, which depends on the parameters of the free energy, Eq. (24).

Application to BLG
At this point, we have to specify the band structure in order to get access to the single-particle densities entering the energy functional (24). For this purpose, we use the results of Sec. III B for the non-interacting 1D dispersion in the BLG QPC and modify them to include the interaction effects on a phenomenological level. Without a magnetic field, we consider a quartic dispersion relation of the form of Eq. (27), where a is a constant. This effective form of the single-particle energies in the BLG QPC, modified by interactions, is based on the fourth-order expansion of the dispersion relation (13) for N = 0. As shown above, the gap magnitude U depends on the chemical potential through the self-consistent electrostatic screening. Specifically, U = U(n) according to Eq. (10), with additional effects of the split gates, and n = n(µ) according to Eq. (15). The chemical potential is set by the 2D density according to Eqs. (5) and (8). Within the lowest band, these dependences are very smooth and do not lead to any additional features. The main role in our consideration is played by the interaction-induced band gap that determines the band edge ε⁰_{σ,ξ}(µ). For this reason, we neglect all electrostatic contributions [effectively fixing U = U(µ_{σ,ξ})] and introduce a new band gap instead of U/2. One could just as well assume that this band gap is applied on top of the fixed gap U(µ_{σ,ξ})/2, since this would only lead to an overall shift and a redefinition of the origin. For the dispersion (27), we obtain a one-dimensional density of the form n_{σ,ξ} ∝ [(µ − ε⁰_{σ,ξ})/a]^{1/4}. By combining this with Eq. (26), we thus obtain the dependence of the band edge on the chemical potential, where C_{σ,ξ} is a phenomenological constant depending on the parameter a as well as on the parameters of the free-energy functional F. This means that once the chemical potential reaches the lower band edge of a minority band, the two become pinned together over a certain energy range. For continuity reasons we require ε⁰_{σ,ξ} = µ_{σ,ξ}, i.e., the initial configuration determines the critical chemical potentials. It is worth emphasizing that the enhanced density of states at the bottom of the almost flat (quartic in momentum) band in the BLG QPC, Eq. (27), is expected to enhance the role of interactions compared to the case of conventional parabolic bands.
Resulting conductance
With the step-function transmission probabilities, the conductance takes the form of Eq. (31). At this point, one should note that the Fermi function f(x) is close to 1/2 for |x| ≪ k_B T. For fixed lower band edges, this corresponds to a very small region and does not lead to conductance anomalies, but for the pinned band edges and finite temperature considered here it does. If we tune the chemical potential through all band edges, the crossing of a fixed majority band corresponds to a plateau of 1 e²/h, while for every minority band we get an additional, less sharp one at 0.5 e²/h. Any additional plateau from a majority band at 1 e²/h can be smeared by temperature. If there are several minority band edges at different initial energies, the distance between the bands compared to the temperature determines whether the lower minority bands already contribute fully or not, cf. Ref. [58]. The conductance corresponding to the initial splitting configurations of Fig. 6(a) is shown in Fig. 6(b). The values of the additional shoulders are summarized in Table II. In the experimental conductance curves in Fig. 2(d), there is one additional shoulder at around 2.5 e²/h and another one at around 3.5 e²/h for zero magnetic field. Thus we can rule out case a, because it does not have any additional shoulder, and cases f, g, and h, which have a first shoulder that is too low from the very beginning. Only cases b, c, d, and e from Fig. 6 are relevant here.

B. Behavior of conductance in magnetic field
In order to distinguish between the cases of spin, valley, or spin-valley splitting, we consider the behavior of the conductance in a parallel magnetic field B_∥. This is incorporated by the replacements of Eqs. (32) and (33). There are in total six possibilities of assigning 2 × 2 spins to the four subbands. Since one cannot distinguish different valley indices this way, after this assignment the spins are still degenerate in their valley index; we only know that each valley hosts two opposite spins. The permutation of spins within subbands with the same lower band edge does not change the outcome. From this we get in total 26 different arrangements with distinct evolution in magnetic field for the eight initial cases shown in Fig. 6. These are listed in Table III. The magnetic-field behavior of the relevant cases b, c, d, and e is shown in Fig. 7, in analogy to Fig. 3(a). Here one should note that the initial spontaneous splitting is a spontaneous symmetry breaking: if the magnetic field is tuned adiabatically, it will always favor the initial splitting in the direction of the magnetic field. Behavior like that of case c2, where the initial spontaneous splitting is opposite to the Zeeman splitting, will only be observed if the magnetic field is switched on very fast. A comparison of the experimental data with the theoretical curves for the symmetric splitting of Eqs. (32) and (33) and for a phenomenological asymmetric one, Eqs. (34) and (35), in which we assume that the spin-up band is energetically higher, is shown in Fig. 8. From this we see that the asymmetric splitting, Eqs. (34) and (35), yields better agreement with the experimental observations in this particular case, which will turn out to be the most relevant one. However, owing to the special dependence of the minority bands on the chemical potential, this asymmetric replacement rule may lead to unphysical half-integer plateaus at high magnetic fields for some initial configurations. Therefore, we have used the symmetric splitting introduced in Eqs. (32) and (33) to produce Fig. 7.
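To make the counting in the step model above explicit, the following Python sketch evaluates a pinning-free version of the four-band Landauer-Büttiker sum: four spin-valley subbands, an assumed spontaneous spin splitting at zero field, a Zeeman term with an effective g-factor, and Fermi-function-shaped steps. It is a qualitative illustration under stated assumptions, not the authors' Eq. (31); it omits the band-edge pinning that produces the additional 0.5 e²/h shoulders discussed above, and all parameter values are placeholders.

```python
# Minimal sketch of a four-band step model for the lowest size-quantized band:
# G(mu, B) = (e^2/h) * sum over spin and valley of f((eps0 - mu)/kT),
# with an assumed spontaneous spin splitting at B = 0, degenerate valleys,
# and no band-edge pinning. Parameter values are illustrative, not fitted.
import numpy as np

MU_B = 0.05788   # Bohr magneton, meV/T
KB = 0.0862      # Boltzmann constant, meV/K

def fermi(x):
    """Fermi function f(x) = 1/(1 + exp(x))."""
    return 1.0 / (1.0 + np.exp(np.clip(x, -60, 60)))

def conductance(mu, B, T=4.0, g=4.0, delta0=1.0):
    """Conductance in units of e^2/h for the lowest size-quantized band.

    mu     : chemical potential (meV), measured from the mean band edge
    B      : in-plane magnetic field (T)
    g      : effective Lande g-factor (assumed field-independent)
    delta0 : assumed spontaneous spin splitting at B = 0 (meV)
    """
    splitting = delta0 + g * MU_B * B          # total spin splitting
    edges = []
    for spin in (+1, -1):                      # spin up / down
        for _valley in (+1, -1):               # two degenerate valleys
            edges.append(0.5 * spin * splitting)
    kT = KB * T
    return sum(fermi((e0 - mu) / kT) for e0 in edges)

if __name__ == "__main__":
    mu = np.linspace(-4, 4, 9)
    for B in (0.0, 6.0):
        vals = [conductance(m, B) for m in mu]
        print(f"B = {B} T:", [round(float(v), 2) for v in vals])
    # At B = 0 the assumed splitting produces a shoulder near 2 e^2/h in this
    # pinning-free sketch; at high B the Zeeman term turns it into a clear plateau.
```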
V. DISCUSSION

Let us now compare the results of our phenomenological model with the experimental results reported in Sec. II. Many, but not all, features in Fig. 2, e.g., the conventional conductance quantization, can be explained without considering interaction effects. Other features, e.g., the additional shoulders in the conductance curves and the behavior of the g-factor, are compatible with the phenomenological model presented in Sec. IV.

A. Conductance plateaus

Every time the chemical potential, tuned by the top-gate voltage, reaches a new lower band edge, the conductance makes a step of 1 e²/h per spin and valley. For zero magnetic field, the plateaus are at multiples of 4 e²/h, which can be clearly seen in the cadence plots in Fig. 3. This is in contrast to Ref. [13], where valley splitting was observed in a similar setup upon changing the split-gate voltage, but at a much higher back-gate voltage. The Zeeman coupling of the spin to the in-plane magnetic field leads to the appearance of steps at multiples of 2 e²/h for higher magnetic fields. The additional plateaus become visible when the Zeeman-split bands have a spacing that can be experimentally resolved, which occurs in our case above 2 T.

B. Effective g-factor

The Zeeman splitting in the first three subbands shown in Fig. 4(a) for 20 mK is very close to linear at sufficiently high magnetic fields. For the lowest subband, a nearly constant splitting is observed up to nearly 4 T. The extracted g-factors show a strong enhancement compared to the bare value of g = 2 for BLG. We attribute this enhancement to the strong confinement and interaction effects, similar to those discussed in Ref. [109]. These effects are strongest for the lowest subbands because of the lower densities in the almost flat (quartic) band, which is consistent with the enhancement increasing as the band index is lowered. This effect should only be present for electrons going through the constriction. Electrons that bounce back and stay in the 2D region are at too high densities for the interaction-induced enhancement to be visible. This effect, combined with the peculiar low-field behavior of the lowest subband, strongly hints at the importance of interaction effects in this experiment.

C. Additional features

There are two additional features below the conventional conductance quantization step at G = 4 e²/h, which thus only involve the lowest quantized subband. The first one starts at around 3 e²/h at zero magnetic field, and the second one at around 2.5 e²/h.

0.7 anomaly

One might try to identify both these features with case d in Fig. 6, where there are two additional shoulders at zero magnetic field. However, considering the conductance behavior in magnetic field shown in Fig. 7, one sees that when these two additional shoulders move down with magnetic field, as in case d4, the additional plateau at 3 e²/h persists for higher magnetic fields. We do not see such a plateau in the experimental data. The feature starting at around 2.5 e²/h in Fig. 3(a) and moving down into the 2 e²/h plateau is visible in Fig. 2(a) as a splitting of the spin-valley subbands at vanishing magnetic field, which makes it a strong contender for the 0.7 effect. It also leads to the non-linear behaviour of the Zeeman splitting in the first subband in Fig. 4. Since we have ruled out case d, only the cases b, c, and e in Table II are left.

FIG. 7. Conductance for the initial splitting configurations of Fig. 6 in an in-plane magnetic field. Conductance curves for magnetic fields between 0 T and 8 T are shown with a horizontal shift parametrized by α = 1.5 meV/T. The thick black curve corresponding to B = 0 T is not shifted. The blue lines correspond to 2 T, 4 T, and 6 T, as in Fig. 3. Without any initial splitting (case a), there is no continuous development of a shoulder; the additional plateau appears as soon as it can be resolved, and the particular assignment of spin to the subbands is irrelevant: all six possibilities are indistinguishable. In case c, three different scenarios are possible; each occurs for all four possible valley assignments. For cases b and e, there are two distinguishable spin configurations; for case d, four. Same parameters as in Fig. 6; according to the measured Landé g-factor, g = 4 was chosen.

Comparing the magnetic field behavior with Fig. 7, we conclude that we are in either case c1 or e1. One might not be convinced by the value of the theoretical shoulder of case c, which is 3 e²/h compared to the experimental one at 2.5 e²/h, exactly the value obtained in case e, as shown in Fig. 6. Case e, however, would require the spin-up state of one valley to be split to the same energy as the non-spin-split bands of the other valley, which implies an accidental fine-tuning. If, instead, only one valley were spin split, cases d, f, and g would be much more probable, but these have already been ruled out. Thus, we identify the experimentally observed behavior as case c1, which assumes an initial spin splitting, but no valley splitting. It is also clear that, in contrast to Ref. [12], we do not see any crossing of Zeeman-split bands. A shoulder similar to ours, but at 2 e²/h, was attributed to substrate-enhanced Kane-Mele spin-orbit coupling in Ref. [12]. We note that such effects of the weak spin-orbit coupling can be observed only at very low temperatures, whereas we still see a similar effect at 4 K. Finally, the proposed Kane-Mele spin-orbit splitting would lead to opposite spin splitting in the two valleys, so that there is no net spin splitting, as detailed in Appendix C. However, the observed Zeeman splitting at low magnetic fields suggests the presence of a spontaneous net spin splitting in our case, while the enhancement of the effective g-factor points towards rather strong interaction effects. We thus identify this feature in the conductance as an interaction-induced 0.7 anomaly. As mentioned in Ref. [56], the exact value of the shoulder may depend on the exact QPC geometry, so that it may also appear very close to the value of 0.5 × 4 e²/h.

Fabry-Pérot resonances

We identify the upper feature in the lowest subband conductance at low magnetic field, corresponding to an additional peak in the low-temperature plot in Fig. 3(c) at around −11.8 V that goes vertically through the right spin-split band, as a Fabry-Pérot resonance [110-116]. In Fig. 2(e), this additional feature is seen as a faint bright curve moving down from the 4 e²/h plateau (at weak fields) and merging with the 0.7 feature to form the spin-split 2 e²/h plateau at a magnetic field around 4 T.

FIG. 8. Comparison of the experimental data with conductance curves as in Fig. 7, but with the non-symmetric splitting (sp.) introduced in Eqs. (34) and (35). Same parameters as in Fig. 6; according to the measured Landé g-factor, g = 4 was chosen. The behavior of (a) is qualitatively replicated. (c): Same as in (b), but with the symmetric Zeeman splitting introduced in Eqs. (32) and (33). There is a fixed (crossing) point, clearly absent in the experimental data. As is apparent from comparing the distance of the blue plateaus in chemical-potential space in (b) and (c), a symmetric splitting enhances the g-factor even more.
Note that at this same value of magnetic field, the Zeeman splitting of the lowest subband starts growing linearly with magnetic field, see Fig. 4(a). With increasing temperature, this feature disappears, in contrast to the 0.7 anomaly, see Fig. 3(d). The Fabry-Pérot resonances in our geometry emerge from interference of electronic waves in the 2D region, which are back-scattered from the interface with the contact, on the one hand, and from the barrier created by the split gates, on the other; see Ref. [5]. In a parallel magnetic field, there are two Fermi wavevectors, one for each spin, so that the minima and maxima of these oscillations disperse with the magnetic field. Since the Fabry-Pérot resonances correspond to electrons bouncing back and forth between the contacts and the split gate, these electrons are inherently two-dimensional and are not affected by the enhancement of the g-factor in the QPC region. A closer look reveals additional Fabry-Pérot peaks at other values of the top-gate voltage, which do not move to different plateaus in the considered range of magnetic fields. For a discussion of this effect, see App. B.

VI. CONCLUSIONS

In conclusion, we have studied an electrostatically defined QPC in BLG which shows a zero-field quantized conductance in steps of 4 e²/h owing to the spin and valley degeneracy. In an in-plane magnetic field, a splitting of the first three subbands at 20 mK is observed that results from the Zeeman spin splitting, while the valley degeneracy is not affected. Additionally, a 0.7-like structure is located below the lowest size-quantized energy level, which develops into the lowest spin-split subband at 2 e²/h. This additional feature is also observed in the 4 K data, where only the splitting of the lowest band is clearly resolved. On top of the quantized conductance we observe Fabry-Pérot resonances. Because of the higher densities in the 2D region and the relatively small bare Zeeman splitting in BLG, these stay at fixed top-gate values with increasing magnetic field. From the Zeeman energy splitting, the effective 1D g-factors in an in-plane magnetic field are found to be increasingly enhanced for lower subbands compared to the bare 2D Landé g-factor g = 2 in BLG. Moreover, the fact that the linear fitting of the Zeeman energy splitting for N = 0 does not extrapolate to zero at B = 0 further indicates the spontaneous spin polarization of the lowest subband. The behavior of the Zeeman splitting is a clear sign of the importance of interaction effects and confinement in this experiment. Based on this, we also attribute the observed shoulder below the lowest subband to the 0.7 anomaly stemming from the interaction-induced lifting of the band degeneracy. We employ a phenomenological model to qualitatively describe the behavior of this feature in the applied in-plane magnetic field. In this model we assume that each spin-valley subband can be spontaneously split by electron-electron correlations. By comparing the development of the resulting features in a magnetic field, Fig. 7, with the experimental conductance curves in Fig. 3, we conclude that the observed behavior can be explained by the assumption of an effective spontaneous spin splitting, while the valley degree of freedom is not affected. This is in full agreement with the picture of spontaneous spin polarization inferred from the measured Zeeman splitting. Our experimental findings, supported by phenomenological calculations and combined with those of Ref.
[5] for out-of-plane magnetic field, establish the exquisite tunability of spin and valley degree of freedom by the application of gates or external magnetic fields. Furthermore, our results also demonstrate relevance of electronelectron correlations in BLG QPC geometry, as well as a possibility to control the effective strength of interactions by means of electrostatic spatial confinement by a combination of external gates. Apart from developing the microscopic theory of 0.7 anomaly in BLG QPC, several questions regarding the phenomenological model still remain. While it is straightforward to obtain a free energy of the form of Eq. (24) for a 2D model with quadratic band structure (cf. Ref. [107]), the corresponding calculation for a hybrid 2D-1D geometry is much more involved. Within such a derivation, it would be especially interesting to show the microscopic origin of the parameters of the phenomenological description. In particular, such a calculation would yield their explicit dependence on the applied magnetic field. There are several additional ingredients that could be combined with this sort of setup. In particular, in Ref. [117] it was observed that, at least if there is a substantial gap and the trigonal warping is relevant, electrons might predominately orient along the lattice directions and not take the shortest path, which is expected to affect the conductance of the QPC in the corresponding regime of gate voltages. Further, while the intrinsic spin-orbit coupling in BLG is very weak, using an additional layer with strong spin-orbit coupling, e.g., a layer of a transition metal dichalcogenide, should induce noticeable proximity spin-orbit related effects [118,119] and may lead to topologically nontrivial states. In addition, the introduction of a finite twist between the layers may also lead, at certain fillings and twist angles, to topological states [119,120]. In order to open a gap in such a system, spin-orbit coupling has to be added as well. To what extent these states can be manipulated with gates and external magnetic fields and what role interaction effects play in such engineered sample are questions worth exploring. The analysis of the present paper serves as the starting point for further studies in this direction. ACKNOWLEDGMENTS We thank Alexander Dmitriev, Angelika Knothe, Ralph Krupke, Alex Levchenko, Christoph Stampfer, and Figures 9(a) and 9(b) show the differential conductance as a function of the applied topgate voltage with horizontal shift linear in the applied in-plane magnetic field. The additional 0.7-shoulder is seen in the lowest step for both temperatures. The main difference between the two temperatures is the smoother and flatter behavior for higher temperatures. Moreover, there are two additional features that are only visible in the 20 mK case, Fig. 9(a). For magnetic fields below 0.2T, the aluminum leads are still superconducting, so that the conductance is affected by superconducting fluctuations. Additionally, one sees Fabry-Pérot resonances, which are most clear on top of plateaus. Since the aluminum leads are superconducting at 20 mK, a finite magnetic field is needed to kill this effect and curves below 0.2T show a higher conductance than the quantized values. This should be contrasted with the data shown in Fig. 3(b) for 4K, where there are no superconducting effects even at vanishing magnetic field. 
The superconducting proximity effect for the QPC in BLG is beyond the scope of the present paper; the analysis of conductance curves affected by superconducting fluctuations is an interesting task from both the experimental and theoretical points of view (for a related analysis of the supercurrent in this geometry, see Ref. [8]). Since it is well known that the 0.7 anomaly is very sensitive to the exact shape of the constriction, we include data of the same sample in a different cooldown at T = 20 mK and with a perpendicular magnetic field of 20 mT. The back-gate voltage is again V BG = 10 V and the split-gate voltage ranges between −12 V and −11.5 V. Figure 10 shows a cubic spline fit of the obtained conductance data as a function of V TG, in a form similar to Fig. 2(b) and (c), but with vertical shifts corresponding to different split-gate voltages, starting at V SG = −12 V for the lowest curve and ending with V SG = −11.5 V. The curves are colored according to their derivative. The thick solid lines mark the onset of the conductance plateaus, showing their dependence on the exact confinement condition, i.e., the split-gate voltage. The lowest curve corresponds to the same split-gate and back-gate configuration as the data in the main text, where we have identified the 0.7 anomaly by its magnetic field dependence, see Fig. 2(b) and (e). In this cooldown, we see a similar feature, marked by the arrow. When following the split-gate dependence of this feature (black dotted line), one observes that it stays parallel to the onset of the lowest plateau, which verifies that it is a feature of the QPC modes. Additionally, we again see Fabry-Pérot oscillations on top of the 4 e²/h and 8 e²/h plateaus (black dashed lines). Since they are generated by the lead modes, they show a different dispersion with the split-gate voltage. They always appear at the same electronic 2D density, which is only slightly tuned by the split-gate voltage. The onset of the conductance steps (and the 0.7 anomaly) is much more strongly dependent on the exact gate configuration, which makes the two effects clearly distinct.

FIG. 10. Cubic spline fit of the differential conductance G of the same sample in a different cooldown for V BG = 10 V and a perpendicular magnetic field of 20 mT, as a function of V TG for increasing V SG, at 20 mK. The curves are shifted vertically with α = 10 e²/(h V) and colored according to their first derivative. The onset of the conductance plateau (black solid lines serving as guides for the eye) shows a clear dependence on the exact gate configuration. The black dotted line shows the dispersion of a feature that we identify as a 0.7 shoulder, with a dispersion parallel to the onset of the plateau, in agreement with its quasi-1D nature. Fabry-Pérot oscillations are marked by black dashed lines and show a different (weaker) split-gate voltage dependence, since they are generated by the 2D lead modes.

Appendix B: Fabry-Pérot resonances

In our experiment (Fig. 2), the 0.7 shoulder in the conductance of the lowest subband merges, at a magnetic field of about 4 T, with an additional conductance feature, which was identified with the Fabry-Pérot resonances in the main text. Here, we present additional details supporting this identification. The transmission coefficient accounting for the Fabry-Pérot resonance can be described by Eq. (B1), where F is the finesse and θ is the angle of incidence of the electron wave. At very low temperatures, the contribution of the resonance to the conductance is given by Eq. (B2).

FIG. 11. Differentiated differential conductance at 20 mK for different magnetic fields with a vertical shift parametrized by α = 4 e²/(h V T). Only magnetic fields at multiples of 0.2 T are shown for clarity. The blue curves correspond to the magnetic field values given in Fig. 2(a). For all magnetic field values, and over the full plotted voltage range, repetitive behavior corresponding to Fabry-Pérot oscillations is visible.

One should note that, according to Ref. [121], the Zeeman splitting in BLG is around 1.1 meV for 10 T. Using the conversion formula from top-gate voltages to energies from the Supplemental Material of Ref. [5], for the same device at slightly different voltages, the distance between V TG = −12 V and V TG = −8 V corresponds to a band splitting of 15.2 meV. Moreover, the density in the 2D region is not as low as in the constriction, since the split gates do not cover this region. Therefore, the total spin polarization of these 2D bands cannot be achieved and, since we only observe faint oscillations on top of the plateaus, the finesse F is small. As a result, this dependence of the conductance on the magnetic field is not experimentally resolved. These Fabry-Pérot oscillations are clearly visible in the differentiated differential conductance in Fig. 11, where they appear as small oscillations over the full top-gate and magnetic field range. An experimental example of the dependence of this conductance contribution on V TG and magnetic field is shown in Fig. 12(a) for the case of vanishing back-gate and split-gate voltages, and a theoretical plot based on Eq. (B2) is shown in Fig. 12(b). The Fabry-Pérot resonances are seen for all magnetic field values and over the whole top-gate voltage range. These peaks, in contrast to the Zeeman-split subbands, depend only weakly on magnetic field, since the Zeeman splitting in the 2D bulk is smaller than in the QPC region. While the plots in Fig. 12 agree qualitatively, there are two points to keep in mind. The theoretical plot was obtained using no residual density, which is certainly not the case in the experiment, and it does not account for the peculiarity of the screening in the experimental setup in the presence of the split gates. Even if the split-gate voltage is zero, the split gates affect the electrostatics of the setup by locally screening the top gate and developing mirror charges for carriers in this region. In addition, the dielectric layer in the split-gate region is noticeably thinner. This introduces an inhomogeneity in the middle of the sample, and the length scale corresponding to the distance between the leads and the split gates naturally appears. In order to fully reproduce the experimentally observed pattern, one would need a full electrostatic simulation. One very apparent difference is the fact that the period stays nearly constant in the experiment but not in the theory. While we did take into account the influence of the top-gate voltage on the density and the gap, the voltage also changes the boundary conditions at the contacts and close to the split-gate fingers. Moreover, we neglected any residual density present, which might change the position within the spectrum and thus the top-gate dependence. Finally, the presence of the split gates naturally breaks translational symmetry, as do the side boundaries; neither effect was taken into account. For additional Fabry-Pérot interference data in the same sample, the reader is referred to the Supplemental Material of Ref. [5].
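As a rough illustration of how a weak Fabry-Pérot resonance produces faint, only weakly field-dependent oscillations of the kind discussed above, the sketch below uses a standard Airy-type transmission with a small finesse. Both this functional form and the parameter values (cavity length, finesse, and the cos θ flux weighting in the angle average) are assumptions made for illustration; they are not the paper's Eqs. (B1) and (B2).

```python
# Hedged sketch of a Fabry-Perot-type transmission and its angle-averaged
# conductance contribution. The Airy form and all parameters are assumptions.
import numpy as np

F_finesse = 0.3        # small finesse: only faint oscillations on top of the plateaus
L = 1.0                # effective cavity length (leads <-> split gate), arbitrary units

def transmission(kF, theta):
    phase = 2.0 * kF * L * np.cos(theta)          # round-trip phase at incidence angle theta
    return 1.0 / (1.0 + F_finesse * np.sin(phase / 2.0) ** 2)

def conductance_contribution(kF, n_theta=200):
    # angle average with a cos(theta) flux weight, normalized so that
    # perfect transmission gives 1 (in units of e^2/h)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    return np.trapz(transmission(kF, thetas) * np.cos(thetas), thetas) / 2.0

for kF in np.linspace(5.0, 8.0, 7):
    print(f"kF*L={kF * L:.1f}  dG~{conductance_contribution(kF):.3f} e^2/h")
# The oscillation period in kF is set by L; a spin-dependent kF in a parallel field
# would shift the maxima for the two spins, but for small finesse the effect is weak.
```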
A more thorough study of Fabry-Pérot interferences in BLG can be found in Ref. [115]. Appendix C: Effective low-energy theory In order to derive the conductance of the system considered in the main text, we start with the effective twoband model for BLG, Eq. (1). In this Appendix, we discuss how to obtain this low-energy approximation and how it is affected by the magnetic field, as well as by possible terms describing spin-orbit interaction. Close to the K point, the full four-band Hamiltonian is given by It acts on the four-component wave-function according to The spin degree of freedom is included through in the spin-degenerate sector, and the Zeeman splitting is introduced via where µ, σ and s are the Pauli matrices in layer, sublattice, and spin space, respectively. According to [122], the two types of intrinsic spin-orbit interaction allowed by the symmetry of the problem are where ξ = ± corresponds to valley K (K ′ ). Following [12], we additionally introduce an extrinsic spin-orbit interaction of the form For the effective low-energy theory, E ≪ γ 1 , we follow the derivation of Ref. [69]. The basis states are reordered to (ψ A1 , ψ B2 , ψ A2 , ψ B1 ) ⊗ (↑, ↓), where the first half corresponds to the low-energy, non-dimer states and the second half to the dimer states, which are coupled by the large energy γ 1 . From now on, all terms in the Hamiltonian are reordered according to this basis. Then, we can define the Greens function of the total Hamiltonian H = H i + H Z + H so i as follows: The goal is to find a closed expression for G 11 which is then used to define the new Hamiltonian H 2 according to We find and, thus, We now expand G (0) Applying this procedure to the reordered full Hamiltonian produces, to linear order in U, ∆ ′ , δ AB , v 4 , v 3 , λ, ∆E Z , the effective two-band Hamiltonian: In the main text, we restrict ourselves to the terms h 0 , h U , and h Z . This is exactly equation (1) and (3). For all calculations, we furthermore neglect the second term of h U that describes the Mexican-hat feature of the spectrum. The only terms capable of lifting spin degeneracy are h Z and the spin-orbit term for asymmetry between the layers λ u ≠ λ d , which can be caused by the lack of mirror symmetry of the whole stack [123]. Because of the valley index ξ in this expression, the splitting is opposite in the two valleys, so that there is no net spin splitting due to spin-orbit interaction at all. If such a term is present in the Hamiltonian, it would lead to full spin-valley splitting in an applied magnetic field, i.e., four steps of 1 e 2 h. This is, however, not seen in the experiment. This type of effect of spin-orbit coupling on the first conductance plateau in in-plane magnetic fields for the parameters specified in Ref. [12] is shown in Fig. 13. Appendix D: Effect of tilted magnetic field We have investigated the effect of a perpendicular magnetic field on the quantized conductance in the same device in Ref. [5]. Large out-of-plane magnetic fields lead to a valley splitting, similar to the Zeeman spin splitting, with characteristic braiding behavior. Since we see neither a lifting of the valley degree of freedom, which would lead to a full resolution of conductance steps of 1 e 2 h for large magnetic fields, nor any hint at a non-linear splitting, we can exclude a large out-of-plane component of the magnetic field. 
The presence of an appreciable outof-plane component of the magnetic field would also show up in a curving of the Fabry-Pérot oscillations, which is also not observed here. For small out-of-plane components the valley splitting is roughly linear [71] and can be easily included into our model by adding a term τ g v B to the energy spectrum, where τ = ±1 corresponds to the valleys K and K ′ , respectively, and g v contains both the angular dependence on the tilt angle and the magnetic moment due to the non-trivial Berry curvature. The expected effect of the tilt on the conductance traces is shown in Fig. 14. We see that a very small tilt does not lead to any noticeable difference, while a bigger one leads to a full lifting of all degeneracies at strong magnetic fields. This is in contrast to quantum dots in BLG, see, e.g., Ref. [124], where all four single-particle energies can be extracted at all values of magnetic field due to their additional charging energies and one can construct an effective g-factor by combining spin and valley splitting in a specific way, that would get enhanced over the bare spin Landé g-factor for one combination, while reducing it for the other combination. Since we do not observe a valley splitting, this effect would exactly average out in our case. consistent treatment of this electrostatic problem is very involved, the main features can be described by the inclusion of a local potential in the corresponding Schrödinger equation, thus neglecting the coupling between the Schrödinger and the Poisson equations. There are two slightly different ways to include this potential profile. We start by summarizing the most common one for a normal two-dimensional Schrödinger equation, and proceed with an alternative method for BLG. The obtained energy spectrum would extend dispersion (13) in the main text and thus also the density (15). Projection procedure for the Schrödinger equation In this section, we briefly recapitulate the projection procedure for a QPC in a conventional 2D electron gas, as discussed in Ref. [73]. The Hamiltonian is given by V (x, y) = V QPC (x, y) + V lead (x), (E2) where one models the QPC as a harmonic potential in the transverse direction and V l corresponds to the potential difference at the leads. We expand the wavefunction in transverse modes χ nx (y) according to where the complete orthonormal basis χ nx (y) satisfies − ̵ h 2 2m ∂ 2 y + V (x, y) χ nx (y) = n (x)χ nx (y). For the given potential, these eigenvalues are given by n (x) = ̵ hω y n + 1 2 and from this we get the projected 1d problem At low energies, only the lowest transverse modes contribute, and we can approximate the full solution as Ψ(x, y) = φ 0 (x)χ 0x (y). The exact form of the top of the effective 1D potential n (x) determines at what position the additional 0.7 shoulder appears; for the usual parabolic barrier top it appears very close to 0.7 × 2 e 2 h [56]. Procedure for BLG In the present case of BLG it is easier to include the constriction by means of boundary conditions than a real local potential. The Hamiltonian (1) acts aŝ on the four-component wave-function in the spin and sublattice space. These four coupled second-order equations can be decoupled into a fourth-order one and we get for the first two components: Here, we have used that, without a magnetic field, the momentum operators commute. 
We expand the wave function ψ_A1σ in transverse modes χ_nxσ(y) as ψ_A1σ(x, y) = Σ_n φ_nσ(x) χ_nxσ(y), which leads to a new differential equation in which the x and y components are still coupled. We assume that χ_nxσ(y) ∝ sin(k_nxσ y) [cos(k_nxσ y)] if the solution is antisymmetric [symmetric], describing the QPC by imposing hard-wall boundary conditions along the y direction. The width of the channel W(x) depends smoothly on x, and we get standing waves with wavevector k_nxσ = nπ/W(x). We decouple the components by neglecting all x derivatives of χ_nxσ(y), which leads to an effective 1D equation in which the constriction W(x) acts as an effective 1D potential E_n(x) = ħ²n²π²/[2mW(x)²]. At low energies, only the lowest transverse mode contributes, and we can approximate the full solution as ψ_A1σ = φ_1σ(x) χ_1xσ(y). Choosing, for example, W(x) = W_0 cosh(x/L), we get a very realistic 1D potential containing terms of the form 1/cosh²(x/L).
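To give a feeling for the scale of the effective 1D barriers obtained from this projection, the short sketch below evaluates E_n(x) = ħ²n²π²/(2mW(x)²) for a cosh-shaped width profile. The minimal width W_0, the smoothing length L, and the effective mass used here are illustrative assumptions rather than the device parameters, so the numbers indicate orders of magnitude only.

```python
# Sketch of the effective 1D subband barriers generated by a smooth constriction,
# using the hard-wall result E_n(x) = hbar^2 n^2 pi^2 / (2 m W(x)^2) derived above.
# The width profile, W0, L and the effective mass are illustrative assumptions.
import numpy as np

hbar = 1.054571817e-34              # J*s
m_eff = 0.033 * 9.1093837015e-31    # an often-quoted BLG effective mass (assumption), kg
W0, L = 100e-9, 200e-9              # minimal width 100 nm, smoothing length 200 nm

def width(x):
    return W0 * np.cosh(x / L)

def E_n(x, n):                      # subband energy in meV
    return (hbar**2 * n**2 * np.pi**2 / (2 * m_eff * width(x)**2)) / 1.602176634e-22

x = np.linspace(-3 * L, 3 * L, 601)
for n in (1, 2, 3):
    barrier = E_n(x, n)
    print(f"n={n}: barrier top {barrier.max():.2f} meV at x=0, "
          f"far-lead value {barrier[0]:.2f} meV")
# The barrier tops (at x = 0) set the gate voltages at which successive conductance
# steps open; the 1/cosh^2 profile gives a smooth, nearly parabolic barrier top.
```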
Railroads and Reform: How Trains Strengthened the Nation State Abstract This paper examines the relationship between the coming of the railroads, the expansion of primary education, and the introduction of national school curricula. Using fine-grained data on local education outcomes in Sweden in the nineteenth century, the paper tests the idea that the development of the railroad network enabled national school inspectors to monitor remote schools more effectively. In localities to which school inspectors could travel by rail, a larger share of children attended permanent public schools and took classes in nation-building subjects such as geography and history. By contrast, the parochial interests of local and religious authorities continued to dominate in remote areas school inspectors could not reach by train. The paper argues for a causal interpretation of these findings, which are robust for the share of children in permanent schools and suggestive for the content of the curriculum. The paper therefore concludes that the railroad, the defining innovation of the First Industrial Revolution, mattered directly for the state's ability to implement public policies. We combine fine-grained data on the provision of primary education in different localities in Sweden in 1868 with geographic-information-system data on the location of each train station and the home address of each Swedish school inspector in that year. This research design allows us to study the association between railroads and state capacity: the ability of state officials to enforce and implement government policies. Controlling for many important confounders and relying on data that allow us to describe the travel options of each school inspector with great precision, we argue for a causal interpretation of our findings. The evidence we present suggests that the development of the Swedish railroad network enabled school inspectors to monitor schools more effectively, strengthening the implementation of national school policies. National and local authorities disagreed over the provision of public education since the national government wanted local authorities to pay for permanent public schools whereas local authorities preferred less expensive ambulatory schools. They also disagreed over the content of the curriculum, for nineteenth-century national-local conflicts were also state-church conflicts: in Sweden, the modernizing nation state sought to mold children into loyal citizens by teaching them subjects such as geography and history, but local priests mainly wanted children to learn the Lutheran catechism. We show that if a school inspector could get to remote schools by train, the proportion of permanent schools in a school district was considerably higher than if the school inspector depended on other means of transportation. The proportion of geography and history in the curriculum was also higher where remote schools could be reached by train. We are less confident in the causal interpretation of this finding than the finding regarding permanent schools, but our results are at least suggestive that the railroad also had an effect on what was taught in the schools. In addition to its indirect effects via cultural and socioeconomic modernization, we thus argue that the railroads had direct, positive effects on the capacity of the central state. This argument has important implications for theories of state capacity, education, and comparative political development. 
Trains, States, and Schools Scholars have long believed that modern technology transformed politics in the nineteenth and twentieth centuries. The political scientist Samuel Finer wrote in his magnum opus The History of Government (1997, Book III, 1610-1618 that the development of the modern state in the nineteenth century was only possible because of technological changes associated with the Industrial Revolution. The sociologist Michael Mann observed in his magnum opus, The Sources of Social Power (1993), that the increase in the state's 'infrastructural power' in the nineteenth and twentieth centuries was a consequence of new technologies that allowed the state to penetrate civil society, including new modes of transport and new administrative practices that were made possible by modern communications and high levels of literacy. Long before that, the economist John Hicks noted in his Theory of Economic History (1969) that new technologies have had profound effects on public administration: 'Modern governments, one would guess, overuse the aeroplane,' Hicks wrote, 'but where would they be without the telephoneand the typewriter?' (Hicks 1969, 99). But empirical studies of the relationship between technology and political development are few and far between. The studies that do exist typically treat the effects of technology as indirectoperating through cultural and socioeconomic modernizationand not as direct effects on the state itself (see, for example, Schram (1997) on railroads in Italy and Clark (1998) on railroads in Ecuador). What interests us here is whether and how a new technology, the railroad, increased state capacity: the 'institutional capability of the state to carry out various policies' (Besley and Persson 2011, 6) and the 'degree of control that state agents exercise over persons, activities, and resources within their government's territorial jurisdiction' (McAdam, Tarrow, and Tilly 2001, 78). When it comes to the relationship between technology and state capacity, the specific mechanism we are interested in is that the railroad improved the ability of the school inspectors to monitor local schools in remote areas. Before the introduction of modern forms of transportation (notably the railroad) and modern forms of communication (such as the telegraph), both longdistance travel and the long-distance exchange of messages were very time-consuming. It was therefore difficult for state agents to establish effective control and implement national policies throughout the state's territory. As Soifer (2015) shows in his recent book on state building in Latin America, the implementation of national policies has historically been more effective where bureaucrats were sent out by, and responsive to, the central governmentas opposed to being appointed by, and responsive to, local elites. Before the introduction of modern forms of transportation and communication, this form of centralization was often not possible at all. We concentrate on education for two reasons. First of all, the creation of national education systems was an exceptionally important event, which transformed the relationship between states and citizens and which had many other cultural and economic effects besides (Aghion et al. 2019;Lindvall 2013, Ansell andBenavot et al. 1991;Benavot and Riddle 1988;Lindert 2004;Meyer, Ramirez, and Soysal 1992;Soysal and Strang 1989). 
Second, the expansion of primary education led to one of the nineteenth century's defining political conflicts: the struggle between the 'centralizing, standardizing, and mobilizing Nation-State' and the 'historically established corporate privileges of the Church' (Lipset and Rokkan 1967, 14-15, emphasis in original). As Lipset and Rokkan note, 'the fundamental issue between church and state was the control of education.' 1

1 It is also important to keep in mind the role that education was expected to play in the development of national defense capabilities (Aghion et al. 2019), a connection that is especially relevant in the context of this paper, since investments both in the railroad and in the school system were promoted by nineteenth-century reformers in the interest of strengthening the military.

The struggle over the control of education was a church-state conflict and a local-national conflict at the same time. The 'school wars' of nineteenth- and early-twentieth-century Europe involved both the question of secularization (whether the responsibility for primary education should be shifted from religious institutions such as parishes and dioceses to secular bureaucracies) and the question of centralization (whether schools should be administered by local or regional authorities, or by the central government). The famous French school reforms of the 1880s, the 'Ferry Laws,' are a particularly clear example. When France's Republican reformers introduced new legislation that made education secular (deepening the conflict between the French state and the Catholic Church that ended with the adoption of the Law of 9 December 1905 concerning the Separation of the Churches and the State), they also made the central government financially and administratively responsible for primary education. It is a curious fact that these sorts of conflicts only began in earnest in the second half of the nineteenth century (Ansell and Lindvall 2013), even if the latent conflict between modernizing elites in national capitals and conservative and religious authorities in the periphery existed well before that time. In our view, the best explanation for the increasing salience of local-national conflicts and church-state conflicts in the second half of the nineteenth century is that before the construction of the railroads, national governments were simply unable to establish the sort of direct control that is required to run something as complicated as a school system. In other words, without modern technology, states would not have been able to centralize, or even secularize, primary education. The ideas and findings we report in this paper are mainly concerned with the ability of the state, or agents of the state, to monitor the implementation of public policies by local-level decision makers. They are likely to be generalizable to other situations in which governments seek to monitor local authorities. There are many such examples, both historically and in the contemporary world. For example, already in the nineteenth century, many governments appointed health and public-health inspectors that oversaw the implementation of national policies concerning, for example, vaccinations (cf. Ansell and Lindvall 2021, Part IV). The paper's ideas and findings are also generalizable to situations in which national governments seek to monitor private organizations that carry out public functions. An important nineteenth-century example here is mental-health institutions.
In nineteenth-century England, for instance, the national 'Commissioners of Lunacy' inspected both local, county, and private mental institutions (Jones 1972). In policy areas where state capacity does not primarily rely on the ability of state agents to monitor other institutions and organizations, transportation technologies such as the railroad or indeed the automobileare not likely to matter as much. But other technologies may matter more. The effects of internet-based technologies on state capacity today, for instance, are not primarily a result of the state's ability to monitor other decision makersat least in liberal, democratic states, what matters is the ability of citizens to interact with the state, as in countries where citizens can claim government benefits through web-based interfaces. This paper's general arguments about the crucial relationship between technology and state capacity are therefore likely to have broad application. Our study is related to other recent studies that have examined the political effects of transportation technologies and networks. For example, Nall (2018) has recently described the relationship between the development of the Interstate Highway System in the United States during the 1940s to the subsequent evolution of American politics. In his book, Nall argues that the highways reshaped the American political landscape by aiding the movement of the white middle-and upper-middle classes to the suburbs. The resulting spatial sorting of households also affected subsequent local public transportation policies. In a similar vein, Baum-Snow (2007) has studied the relationship between the Interstate Highway System and urbanization. His analysis shows that the main cities in metropolitan areas declined by 17 per cent on average, despite an aggregate population growth of 72 per cent, which suggests that the population shifted to suburban areas, as described by Nall. These changes in political geography also mattered for the education system due to increased school segregation. Baum-Snow and Lutz (2011) suggest that a decline of 6 to 12 per cent in white public school enrollment, due to the desegregation policy, manifested primarily as an effect of suburban migration, and infer that part of the effect was also related to an increase in private schooling. We share, with these studies, an interest in the role of transportation technology, but our framework emphasizes the struggle between national political authorities and recalcitrant local and religious communities, not the economic and political responses of individual citizens to changes in transportation. We concentrate on how the prospect of being monitored by state officials affects local resistance to top-down political change. Swedish Schools in the 1860s Sweden's nationwide compulsory public school system was established in 1842, when the Swedish parliament, the Diet of the Estates, adopted the Education Ordinance, folkskolestadgan. Under the Education Ordinance, primary schools were funded and administered by local governments, which were, at the time, coextensive with the parishes of the Lutheran state church. The school system that was established under the Education Ordinance contained two conflicts of interest between national and local authorities. First of all, there was a conflict of interest over the provision of primary education per se. § 1 of the Education Ordinance provided that each parish must have at least one school, preferably a permanent school that employed a skilled teacher. 
But § 2 of the Education Ordinance provided that at least for some time, parishes could opt to operate ambulatory schools instead of establishing permanent schools with their own purpose-built buildings. Parents were often reluctant to release their children from what they saw as more important work around the home, and local tax payers were often reluctant to pay for teachers and school buildings. Where resources were thin and resistance was great, local authorities therefore often preferred to set up ambulatory schools, as a cheaper, temporary alternative to constructing permanent school buildings and employing teachers on regular contracts. Second, there was a conflict of interest over the content of the curriculum. § 6 of the Education Ordinance provided that all teachers must be able to teach the Catechism of the Lutheran church, biblical history, natural and political geography, history, arithmetic, geometry, and natural sciences. In addition, writing, drawing, physical education, and singing were taught in the schools. 2 But § 7 provided that children did not have to pass exams in all these subjects; they were allowed to graduate if they had enough knowledge in the Swedish language, sufficient knowledge of biblical history and the Catechism to be confirmed in the Lutheran church, and adequate skills in arithmetic, writing, andexcept in truly hopeless casessinging. Again, the provisions of the Education Ordinance made it possible for local decision-makers to avoid implementing some of the national government's more ambitious policies. In 1861, parliament decided to appoint national school inspectors to promote the implementation of the provisions of the Education Ordinance throughout the country. In particular, the school inspectorate was meant to contribute to a greater expansion of permanent public schools (Nilsson 2018, 11), but they were also committed to increasing the number of children who learned geography and history, which were seen as subjects that were essential to a modern, 'citizen-oriented' education system (Evertsson 2012, 639). Before 1861, schools were inspected by local authorities that sorted under the parishes themselves; starting in 1861, the state sought to increase its control over local schools. As Thelin (1994) notes, the inspectors were meant to be the 'government's eyes in the parishes.' They were selected from the region in which the inspected school districts were locatedthe bishops submitted recommendations to the Ministry of Education in Stockholmbut the final appointment decisions were made in the capital. The idea was to appoint individuals whose reputation would ensure that the inspections were not met with 'a contemptuous grin' among local decision-makers and teachers (Thelin 1994, 14-15). The appointment of national school inspectors was an important step toward the centralization and secularization of the Swedish school system in the late nineteenth century and in the beginning of the twentieth (Tegborg 1969). School inspectors were expected to visit local schools regularly, and each year submitted reports to the Ministry of Education. They promoted adherence to the Education Ordinance, inspected the school facilities, and advised teachers on how to improve their methods of teaching. From time to time, inspectors also taught their own classes. Moreover, starting in 1864, a few years before the specific period we examine, inspectors were expected to examine all students once per semester. 
Anecdotal evidence suggests that the school inspectors were widely feared: thinking back to his childhood in the late nineteenth century, the son of one station master with the railroads remembered that as soon as the school inspector had exited the train station, 'my father promptly ran to the telephone to warn the schools of his arrival' (Thelin 1994, 157). When it comes to the two conflicts of interest we described above, the school inspectors typically promoted the policies of the national government. First of all, the inspectors fought the ambulatory schools energetically and pushed for the construction of permanent public schools (Ekholm and Lindvall 2008). Second, the school inspectors had a broader concept of learning than local decision-makers. Whereas local priests typically held the view that all children needed was enough knowledge of the Catechism and biblical history to be confirmedor, in other words, that the basic knowledge and skills specified in § 7 of the Education Ordinance were enoughmost school inspectors sought to convince local priests, school boards, and teachers to teach subjects that were associated with civil citizenshipparticularly geography and historyand not only religion (Evertsson 2012). In the period we are studying, the inspectors largely exercised 'soft power.' Only later, in the early twentieth century, were school inspectors granted more direct authority over local school policies. For example, from the 1914 school-inspectorate reform onward, they had a veto on the appointment of teachers (Jägerskiöld 1959, 82). In 1868, the year we study, the main instrument the inspectors had available was to name and shame school districts that performed badlyin their annual reports and through other, more informal channels. Research Design and Data The argument behind our idea that trains strengthened the nation state is straightforward: where school inspectors were able to travel more easily, they were better able to enforce national policies. In Sweden, this meant pushing for children to attend permanent schools and to be taught subjects such as geography and history in addition to the Catechism. Before the arrival of the railroads, travel conditions were difficult. One historical study tells of a school inspector in 1862, Inspector Rudenschöld, who spent 262 days away from home, 'often on long journeys by horsedrawn carriage on lousy roads' (Nilsson 2018, 13, our translation). To test the idea that the railroad made a big difference to the effectiveness of the inspectors, we conduct an empirical investigation of official data from 1868, a decade and a half after the parliament's 1854 decision to create a national railroad network and a few years after the 1861 decision to appoint national school inspectors for all school districts. In 1868, Statistics Sweden compiled detailed data on schools and education outcomes in each of the 174 deaneries of the Swedish church (Statistiska Centralbyrån, 1870). The deanery, or kontrakt, is a level of church governance between the parish (the lowest level) and the diocese (the seat of a bishop). Our investigation is based on a cross-sectional comparison of 170 deaneries (for reasons that we explain below, we exclude four deaneries). We collect data from official sources on the share of children being educated in different types of schools and in different subjects in the year 1868. Unfortunately, data were not reported in this detailed form either before 1868 or after. 
Consequently, we only have access to a cross section of all deaneries in 1868. The cross-sectional research design permits a detailed econometric analysis of the correlations between railroad expansion and education outcomes. We also argue, however, that the specific historical circumstances surrounding the gradual roll-out of the railroad network, in combination with geography-specific controls, allow us to make causal claims on the basis of the estimated correlations. In the absence of detailed panel data, we are fortunate to have data from 1868, when the railroad network had not yet been extended to all the major towns and cities, which means that our main explanatory variable varies across all types of localities. Moreover, by 1868, the national school inspectorate had only existed for a few years, which allows us to study the tension between national authorities and local, religious authorities at an early stage of this political struggle. We concentrate on two types of data: the share of school-aged children in each locality attending permanent public schools (to test the effect of the railroads on the provision of education per se) and the number of children who received teaching in each subject (to test the effect of the railroads on the content of the curriculum). Table 1 lists the share of students in all school forms in 1868 in the 170 deaneries in our study. As seen from the table, public schools (folkskolor) were the most prominent school form. However, a significant share of children were in elementary schools (24 per cent) or home schooled (15 per cent). To be untaught was uncommon (only 2 per cent of all school-aged children did not attend school at all). Among the public schools, permanent schools were clearly preferred by the national government, and they were also more common (35 per cent) than the cheaper ambulatory schools (21 per cent). When it comes to the relative importance of the different subjects that are listed in § 6 of the Education Ordinance, official statistics for the year 1868 report the number of children that were taught each subject. Unfortunately, we do not have information about how many hours the students were taught per subject; the inspectors only took note of the number of students that had, at some point during the semester, received education in each of the subjects. Since the Catechism was taught to virtually all students, we compare the number of students who received education in geography and history to the number of students who were taught the Catechism. The railroad revolutionized overland travel in Sweden, as in other countries. On many routes, travel times were reduced up to ten times (Sjöberg 1956). Travelling between the two largest cities, Stockholm and Gothenburg, had previously taken several days, involving frequent stops and changing modes of transportation; the railroad made this journey possible to undertake in a single day. In 1868, the Swedish railroad network was still in its infancy, which is why there is a great deal of cross-sectional variation in railroad connectivity that we can exploit in our analyses. Beginning in 1856, a first wave of state-sponsored trunk lines was built (Oredsson 1969). The original plan of the network consisted of five main trunk lines from north to south, connecting the entire country.
Because of military concerns, however, and because of a desire to stimulate economic development in backward areas away from the prosperous coasts, the network was rerouted through the interior of the country, avoiding many important towns and transport hubs. This new plan did not impress the local representatives in the parliament, many of whom saw their home towns bypassed by the railroad lines and who regarded the plan's alleged 'fear of waterways and towns' as irrational (Heckscher 1954, 241). This political infighting delayed railroad construction, especially since the business cycle turned downward in the late 1860s. Hence, by 1868, Sweden had a network of railroads that connected the three largest population centers (Stockholm, Gothenburg, and Malmö), but many important parts of the country remained unconnected to the network. It was not until after the business cycle turned upward again during the 1870s and 1880s that many of the proposed lines were built. The fact that some lines had been held up due to political infighting in the Riksdag is plausibly unrelated to relative differences in local growth and educational outcomes. As we explain below, we therefore think of deaneries with planned but yet unbuilt railroads as a reasonable placebo group that helps us see what school outcomes looked like absent the railroads. Figure 1A describes the railroad network in the year 1868. Many of the areas through which the railroads were drawn were not particularly prosperous (darker colors represent high levels of gross domestic product per capita), and many cities remained unconnected. Berger and Enflo (2017, 128-129) show more formally that towns with and without rail by 1870 had equal access to domestic urban markets and a similar sectoral structure, measured by employment shares in artisanal, trade, military, manufacturing, and service occupations prior to the construction of the network. Moreover, railroad towns did not exhibit any differences in their pre-rail urban growth patterns. Thus, it can be concluded that the early development of the Swedish railroad network was not a function of the prior level of economic development in different localities. Taken together, we argue that the low correlation between the railroad network and variables such as GDP per capita and urbanisation, measured prior to the network, alleviates the obvious concern that any observed relationship between railroad access and education outcomes might be a result of underlying, pre-existing differences in prosperity, urbanization, or class structure. But there are other potential confounders that are political, not economic: the railroads might have been routed through areas that were easier for planners to access due to pre-existing differences in local state capacity. As Dunlavy (1994) and others have shown, existing political structures typically shaped railroad development in the nineteenth century. We therefore need to consider the possibility that any observed correlation between the railroad network and education outcomes might reflect pre-existing political differences among localities. Measuring preindustrial local state capacity is not easy, but in the Swedish case, the early-modern state postal routes are a meaningful proxy. During Sweden's involvement in the Thirty Years' War, in the seventeenth century, the state's need for an efficient postal system became acute.
The Postal Regulation of 1636 specified the obligations of the postal operators (who were in fact peasants living at an appropriate distance from each other). The postal routes were directed over land and water in summer and over snow and ice in the winter. The mail was supposed to be carried at all times of the day, and those who did not run the expected 10 kilometers an hour were condemned to eight days in prison on water and bread (Boije and Prenzlau-Enander 2003). Panel B in Fig. 1 compares the railroad network with the early-modern postal routes, which are digitized on the basis of a map from the middle of the eighteenth century. There were some areasparticularly in the north-western, mountainous part of the countrythat still lacked both railroads and postal routes, but there were also many parts of the country that were included in the postal network but lacked railroads. This alleviates the concern that our rail variables might simply pick up pre-existing local variation in state capacity. But the railroad network was clearly not entirely independent of previous economic and political structures, for the three main cities -Stockholm, Gothenburg, and Malmöwere, by design, the first to get a rail connection. We therefore drop the deaneries covering these three cities from our analysis, reducing the likelihood of biased results due to endogeneity. 3 We also include numerous control variables in our statistical analyses, as explained in the next section. Empirical Strategy We are interested in the effect of the railroad on the ability of state officials to implement public policies. The ideal experiment for testing this idea in the context of nineteenth-century school inspections in Sweden would have been to randomly assign railway connectionsbetween school districts and inspectors' homesamong the deaneries that were remote from the homes of the inspectors, and then compare the school outcomes in remote deaneries to which school inspectors could and could not travel by train. Absent such experimental data, we instead rely on a historical case that has several compelling features, as we discussed in the previous section. A first, rather naive approach to modelling the effect of the railroad would be to estimate the equation where y i is an outcome we wish to explain in deanery i (that is, either the share of children in permanent schools or the proportion of children taking classes in geography and history), the variable Railroad takes the value 1 if there was a train station in deanery i and 0 otherwise, and X is a vector of controls (which we discuss in more detail below). But we can clearly improve on equation (1). First of all, if there was no train station near the inspector's home, the fact that there was a train station in the inspected deanery was of little help to the inspector. The first equation we actually estimate in the paperfirst without and then with the controls includedis therefore equation (2), which replaces the variable Railroad with the variable (Railroad) Connection, which takes the value 1 if there was a train station in deanery i and in the deanery where the inspector lived. The coefficient β 1 ; which is an estimate of the effect of railroad connectivity, is the main quantity of interest in our analyses. We must also take into account that the railroad only mattered if the schools the inspector was tasked with inspecting were so remote that they were difficult to reach otherwiseby foot, on horseback, or using a horse-drawn carriage. 
To account for this, the next few models we estimate include the variable Within 30 km, which indicates school districts that were not remote, and we then interact that variable with the variable Connection.

Footnote 3: In fact, the national school inspectors did not even inspect the schools in the two largest cities, Stockholm and Gothenburg. In the case of the capital, Stockholm, the city's school board arranged its own inspections and reported directly to the government. In the case of Gothenburg, the school inspector responsible for the deaneries of Kind, Falkenberg, and Halmstad was formally responsible also for Gothenburg, but he did not carry out inspections, relying instead on reports that were prepared by the city authorities. Due to a border change in the diocese of Linköping, we are also forced to drop the deanery of Lysing from our sample, which leaves us with a sample of 170 deaneries for the empirical analysis.

Previous research on Swedish market towns in the pre-industrial era suggests that approximately 65 kilometers was the maximum distance that a horse-drawn cart could travel in a day (Bergenfeldt 2014, 131-133). Adding the assumption that an inspector would want to return to his home within a day, we assume that deaneries further away than 30 kilometers from the school inspector's home were very difficult to reach comfortably, especially considering the fact that the inspector also needed some time to perform the actual inspection. Since the 30-kilometer cut-off is, if anything, likely to be overly generous, we also try an alternative cut-off of 20 kilometers. We expect the coefficient β2 to be positive, since a nearby inspector was more effective than a remote one, and we expect the coefficient β3 to be negative, since the railroad is unlikely to have mattered much for school outcomes in areas that were easily accessed by other means of transportation. The last model that we estimate offers us the most conservative test of our main hypotheses. As we discussed earlier, the railroad had many different sorts of effects in the connected villages, towns, and cities, and we wish to distinguish between all those other, more indirect effects and the direct effect of the railroad on the effectiveness of the school inspectors. Here, we exploit the fact that some deaneries with a train station within their borders were inspected by school inspectors who did not themselves live in a deanery that was connected to the rail network. By including both the Railroad variable from equation (1) and the Connection variable that we otherwise rely on in the same model, we are able to estimate the effect of a railroad connection to the inspector while controlling for the presence of the railroad as such. Including both Connection and Railroad in the model is a powerful test, since it is hard to see why the effects of the railroad in locality A should depend on whether locality B, where the school inspector lived, was also connected to the railroad, unless, of course, the railroad mattered to the school inspector's activities, as we believe it did. Figure 2 explains the construction of the main variables of interest, using the example of Peter Wingren, a school inspector who lived in the city of Lund in 1868. Inspector Wingren was responsible for the inspection of six deaneries in the diocese of Lund (but unfortunately not the one in Lund itself, where he lived, since it was the responsibility of another inspector, who also lived in Lund).
Assuming that Wingren was willing to travel up to 30 kilometers by horse or on foot, as we discussed above, he could reach three of the six deaneries comfortably without relying on the railroad: Skytts, Wemmenhögs, and Oxie (but note that Oxie, to which Wingren could also travel by train, includes the major city of Malmö, and is thus excluded from our analyses). The other three deaneries were further away, but Inspector Wingren was luckier than some other inspectors, for he could use the railroad to reach two of them: the Luggude deanery and the Ljunits och Herrestads deanery both had train stations. Unfortunately, it was not at all easy for Inspector Wingren to reach Södra Åsbo, which was located 44 kilometers from his home in Lund and which did not have a train station. We will now proceed to discuss the control variables that are included in the vector X. Our estimation strategy relies, in part, on the assumption that the development of the railroad network in 1868 was largely independent of the previous level of economic and political development in the connected localities, as suggested by Fig. 1. But we nevertheless control for several potential confounders in our regression analyses. In spite of the evidence in panel A in Fig. 1, we control for prior levels of economic development. If railroad lines were at least in some cases drawn through richer or faster-growing areas, higher demand for education in some regions might be caused by those innate economic differences. To control for these economic effects, we rely on data on regional GDP per capita at the county level in 1860 from Enflo, Henning, and Schön (2014); we adapt the data for Sweden's twenty-four counties to the borders of our deaneries using GIS methods. Moreover, we control for urbanization by adding a dummy variable that takes the value 1 if there was a town holding administrative township rights within the deanery (and 0 otherwise). Although Swedish towns were tiny by international standards (most of them did not reach the population threshold of 5,000 inhabitants often used in the international literature to define a 'town'), urban areas might have been more modern and hence prone to supply more education than rural deaneries, as well as being more likely to have railroad access. We also control for prior political development, in spite of the evidence in panel B in Fig. 1. Specifically, we include a measure of the number of postal rays per deanery to proxy for preindustrial state capacity. Since this variable is not a perfect proxy for pre-treatment state capacity, we also control for two geographical variables: the ruggedness of terrain, specifically the standard deviation of elevation in each deanery (Nunn and Puga 2012), and the distance, in hundreds of kilometers, from Sweden's capital, Stockholm. We do not think that the ruggedness variable influenced the quality of teaching by increasing travel times for the school students; instead, we wish to use ruggedness as another proxy for pre-treatment levels of state capacity. In addition, we add a control variable that captures the power of landed elites over the provision of schooling. The literature about the potential effects of elites on schooling is large but inconclusive. Important studies suggest that landed elites often blocked the introduction of public schooling when they had the power to do so (see, for example, Engerman and Sokoloff 1994, Lindert 2004, and Galor, Moav, and Vollrath 2009).
However, a recent study of rural parishes in Sweden in the late nineteenth century suggests that local elites in fact promoted investments in primary schooling (Andersson and Berger 2018). To control for the power of landed elites, we collect data on the share of the rural population with voting rights by digitizing a map from official statistical publications. We also wish to consider the possibility that the main town in each diocese, the seat of the bishop, might have differed from other localities in the willingness of local decision-makers to supply different forms of state-sponsored education. We therefore control for the seats of Sweden's bishops in 1868. Finally, we add controls for railroad lines that were planned but not yet built by the year 1868. As we discussed earlier, the inclusion of this variable allows us to perform a placebo test. According to our theory, the mere expectation that a deanery would soon become connected to the railroad network should not influence school outcomes much; what mattered, in our view, was the effect an actual railroad had on the school inspector's ability to carry out his duties. We therefore expect the future-railroad placebo variable to be only weakly related to the outcome variables of interest. There were 36 inspectors in total during our period of study. They were either priests or had a career in teaching and teaching administration. One potential methodological concern for our study is that if the assignment of inspectors to deaneries was based on the political preferences of local, regional, and national decision makers, this process, and not the railroad, might explain the patterns we observe. But we have found no evidence of this sort of political bargaining process in the historical literature we have consulted. The inspectors were largely chosen on the basis of their high social prestige and their ability to devote time and resources to their tasks. Nevertheless, since the unobserved individual characteristics of the inspectors are likely to have mattered to how they performed their duties, we cluster the standard errors at the inspector level in all our models. The clustering also helps to address another important aspect of the problem we are studying: the fact that the performance of the inspections depended on the time-allocation budget of each school inspector. If a railroad connection increased the efficiency with which an inspector could oversee one connected school district, this likely also increased the time left for inspecting other schools, including schools without a rail connection. In that important sense, school districts that were inspected by the same inspector were clearly not independent observations.

Permanent Schools

The first of the two main ideas we wish to test is that having a school inspector who was able to travel via the railroad network increased the proportion of children who attended permanent public schools. Note that the idea is not that the railroad increased the likelihood that students made it to school, for students did not in fact travel very far, even in the nineteenth century (data from 1868 suggest that the share of children who walked more than half a mile to get to school was only 9.4 per cent in deaneries that were not connected to the railroad network and 13.2 per cent in deaneries that were).
Our idea is rather that the presence of a school inspector who could easily get to the inspected schools increased the likelihood that education was provided in the favored form of permanent public schools. 4 For descriptive evidence, see the map in Fig. 3, which shows that the proportion of children who attended permanent public schools was typically higher in areas that were close to the first railroads. In this figure, the share of school-age children who went to public schools is visibly higher in deaneries that the school inspector could get to easily than in deaneries that were remote and had no railroad connection. The results of our regression analyses can be found in Table 2. In column 1, we estimate a model without control variables that simply includes the dummy variable 'Railroad Connection,' which, as we have mentioned, takes the value 1 if there was a train station in the inspected deanery and in the deanery where the school inspector lived. As Table 2 shows, the estimated coefficient is approximately 17: in other words, this simple analysis suggests that the proportion of children in permanent schools was approximately 17 percentage points higher if the inspector could get to a school district by train. In column 2, we include the control variables, which behave very similarly across specifications. High levels of GDP per capita are consistently correlated with the provision of public education. The estimated difference between towns and other localities is small, but in the expected (positive) direction. None of the three variables that we have included to control for pre-treatment levels of state capacity (early-modern postal-system density, rugged terrain, and the distance to Stockholm) is clearly associated with a higher share of children in permanent schools. Neither is political participation, and although being in the vicinity of the bishop's office is associated with more children attending permanent schools, this correlation is very imprecisely estimated. Finally, it is reassuring to find that there is no clear relationship at all between planned railways (the placebo test we discussed earlier) and the share of children in permanent public schools. The model in column 2 does not take into account the important fact that some school inspectors lived so close to the schools they inspected that they could reach at least some of their assigned districts easily without traveling by train. In column 3, we add a variable that takes the value 1 if the inspector lived within 30 km of the centroid of the inspected deanery. We then interact the 'Railroad Connection' variable with this new dummy variable, since we expect different forms of travel to be substitutes, not complements. As expected, the coefficient for the interaction term is negative, which means that if the inspector lived nearby, outcomes were not improved much by the fact that he could also travel by train. Meanwhile, the coefficient for the 'Railroad Connection' variable and the coefficient for the 'Inspector Within 30 km' variable are both positive, suggesting that education outcomes were improved if the inspector either lived nearby or could travel to the inspected deanery by train. In column 4, we run the same specification, but we now use a 20-kilometer threshold instead of a 30-kilometer threshold. This decreases the size of the coefficient slightly, but the qualitative interpretation remains the same.
Finally, in columns 5 and 6, we take into account that deaneries with stations that were connected to the railroad network might have had larger shares of children in permanent schools due to the general modernizing effects of the new transportation technology, not because the school inspector had better travel options. To account for such general effects of the railroad, we now add a dummy for the presence of a railway station in the inspected deanery as a control.

Footnote 4: Although historical evidence suggests that teachers were recruited locally, a railroad connection could have been important for broader recruitment and teacher quality. While this alternative mechanism might have influenced the curriculum taught in the schools, there is less reason to believe that it directly affected our main variable of interest: the share of children in permanent schools.

The coefficient for this variable is small and imprecisely estimated, which strongly suggests that merely having a train station did not contribute much to education outcomes. What seems to matter in these models, as before, is whether the school inspector was able to travel by train to the deaneries he inspected: the variable 'Railroad Connection' remains large, positive, and relatively precisely estimated. Again, the estimated effect of a railroad connection turns out to be somewhat larger when we allow for a more generous definition of living nearby (30 kilometers, as opposed to 20). As Table 3 shows, the relationship between a railway connection allowing an inspector to travel to remote schools and the proportion of children in ambulatory schools (the school form that the inspectors were most critical of and that the national government wished to phase out) is the exact opposite of the relationship between a railway connection and the proportion of children in permanent schools. The other school forms listed in Table 1 are less relevant for our argument, but we include analyses of each of these school forms in the Supplementary Material. There is no clear relationship between the railroads and those school forms, except for the elementary schools (småskolor), which were preparatory schools for children who were, for various reasons, not deemed ready to enroll in a regular public school. These schools were usually staffed by less well-educated teachers, and historical studies suggest that they were often introduced as a second-best solution to reduce salary costs and increase public acceptance in areas where schooling was resisted (see especially Evertsson 2012). These findings strengthen our interpretation of the effects of the railroad: when the inspector could travel to remote schools via the railroads, fewer children attended elementary schools, just as fewer children attended ambulatory schools.

The Curriculum

The historical literature on the Swedish education system suggests that school inspectors encouraged local teachers to teach secular, state-building subjects, especially geography and history, while downplaying the role of the Catechism. There were widespread concerns, at the time, that the church-run public schools merely encouraged repetitive memorization of the Catechism at the expense of modern forms of teaching in non-religious subjects. For descriptive evidence on the relationship between the location of the school inspectors, railroad access, and the proportion of children who studied geography and history relative to the proportion of children who studied the Catechism, see the map in Fig. 4.
As in Fig. 3, there appears to be a strong correlation between railroad access and education outcomes (here, the share of children who were taught geography and history, relative to the number who were taught the Catechism). Interestingly, the higher relative share of students who studied geography and history in the railroad-connected deaneries was not because the share of children who studied the Catechism was lower. Almost all children studied the Catechism, but children in deaneries that the inspector could get to easily also studied other things. Table 4 analyzes the evidence in Fig. 4 using statistical methods. The positive coefficients for the variables Railroad Connection and Within 30 kilometers suggest that, as expected, children in deaneries whose inspector was able to travel by train, or whose inspector lived close enough to walk or ride, were taught more geography and history, subjects that school inspectors were particularly keen to promote. But the coefficients are estimated with less precision than in our previous analyses, making us less confident in a causal interpretation of the observed correlations. When analyzing the data in more detail, we follow the same empirical strategy that we followed in the previous section. Thus, in column 1, we simply include the Railroad Connection variable without adding controls.

[Regression-table fragment; coefficients with standard errors in parentheses: Railroad connection −10.3* (5.7), −11.5** (5.4), −15.6*** (5.6), −13.6*** (5.0), −14.0** (6.3), −12.4** (6.1); Inspector within 30 km −9.0* (4.9), −9.2* (5.0); Connection × Within 30 km 12.0* (6.6), 12.3* (6.6); Inspector within 20 km −7.8 (5.7), −7.9 (5.8); Connection × Within 20 km 9.7 (6.8), 9.9 (6.9); Railroad −2.6 (4.5), −2.0 (4.6); Regional GDP per capita in 1860 −9.6** (3.7), −10.1**, −9.9**, −10.0**, −9.8**.]

The coefficient for having a railroad connection on the proportion of children who followed the broader curriculum that was promoted by the national government is large and statistically significant. 5 In column 2, we add the control variables. The magnitude of the coefficient for Railroad Connection decreases somewhat when the controls are included. (The mean of the dependent variable is 37, and its standard deviation is 13; the interpretation of the coefficient is that deaneries within reach of school inspectors were on average 9.2 points higher on this relative scale.) In column 3, we add information about the distance between the inspected deanery and the inspector's home. The coefficient remains large and is relatively precisely estimated (although the p-value is greater than .05). The negative interaction term again suggests that different forms of travel were substitutes, not complements. In column 4, we show that the effects are essentially similar when we use the 20-kilometer cut-off instead of the 30-kilometer cut-off. Finally, in columns 5 and 6, we take into account that access to the railroad network might have influenced education outcomes through the generally modernizing effects that were brought by this new transportation technology, not because inspectors had better travel options. By including the variable Railroad as a control, we separate the association that is due to the inspector's travel options from many other potential effects of the railroad in the inspected deanery. As in the previous section, most of the estimated effect seems to be due to the fact that the inspector could get to the schools easily, not to the mere presence of the railroad, but the coefficient for Railroad Connection nevertheless drops in size and becomes less precisely estimated (p ≈ 0.2).
This means that the results regarding the content of the curriculum are significantly less robust than the results regarding the provision of education per se, which we discussed in the previous section, although it remains more likely than not that the railroad strengthened the nation state in this respect as well. In the Supplementary Material, we include similar analyses of the other subjects in the curriculum: arithmetic, geometry, natural science, physical education, religious history, reading, singing, and writing. The results are consistently less robust than the results for geography and history (which were the subjects that school inspectors were most eager to promote, according to the historical literature we discussed earlier). It is worth noting, however, that a railway connection for the inspector is positively associated with more students taking geometry, natural science, physical education, and writing, but with fewer students taking reading, religious history, and singing. The school inspectors were known to favor the subjects in the first group, whereas the subjects in the second group were compulsory for all students (with the exception of singing in 'hopeless cases'), and they were favored by local priests since they were essential parts of religious upbringing (reading the Bible was encouraged in the Lutheran tradition, which explains why the estimated effects of the railroad are different for reading and writing). We do not wish to make too much of these observations, however, since the coefficients are small and not precisely estimated.

Conclusions

In this paper, we have examined the relationship between one of the defining technological innovations of the First Industrial Revolution, the railroad, and one of the most momentous social and political changes of the nineteenth century, the expansion of primary education and the introduction of national school curricula. To accomplish this goal, we have combined geographic-information-system data on the extent of the Swedish railroad network in the second half of the nineteenth century with fine-grained, official data on the provision of primary education in different localities in the late 1860s, allowing us to examine how the coming of railroads related not only to the provision of education per se, but also to the content of the curriculum. Our results strongly suggest that the coming of the railroad strengthened the nation state vis-à-vis the local, religious authorities that had long controlled primary education. By comparing nearby school districts that national inspectors could get to easily from their homes, remote school districts that were reachable by train, and remote school districts that were not reachable by train, we have provided evidence on how the railroad mattered for the effectiveness of state bureaucracies in the nineteenth-century world. We find that the railroad was positively associated with the proportion of students in permanent public schools. Because of the specific historical circumstances during the railroad network roll-out, the detail of the geographical data, and the inclusion of several control variables, we argue that it is possible to make causal claims on the basis of the estimated correlations. We also find that the railroad was positively associated with the relative share of children who were taught the subjects that the modernizing nation state was keen to promote: geography and history. The causal interpretation of those findings is more uncertain, however.
There are strong reasons to believe that if these sorts of results hold for Sweden, where the relationship between church and state was comparatively harmonious since the Swedish church had been a state church since the sixteenth century (cf. Morgan 2002), they can also be generalized to other countries in Western Europe and elsewhere. In other words, our paper strongly suggests that the railroad, a nineteenth-century technological innovation, had important political effects, confirming the ideas of social scientists such as Samuel Finer, Michael Mann, and John Hicks about the modern nation-state's dependence on quintessentially modern technologies. More generally, our findings suggest that technological innovations in the nineteenth century had a powerful direct effect on state capacity: the ability of state agents to exercise control over persons, activities, and resources, and to enforce government policies. Existing theories of state capacity in economics and political science rightly emphasize the strategic interaction among political parties (Besley and Persson 2011) and the political struggle for control between local and national elites (Soifer 2015). Our results suggest that modern technologies such as the railroad sharpened those conflicts by making it technically feasible for the state's agents to exercise control in the first place. These ideas and findings about communication technologies are likely to be generalizable to other situations in which state capacity depends on the government's ability to monitor local authorities or private organizations that carry out public functions. Technology is also likely to matter for state capacity in other ways, but that more general idea awaits further development and testing.

Author contributions. The authors contributed equally to the article.

Conflicts of interest. There are no conflicts of interest.

Ethical standards. Ethical standards have been met, since the information used is publicly available.

Supplementary Material. Online appendices are available at https://doi.org/10.1017/S0007123420000654.

Data Availability Statement. The data and analysis files necessary to replicate the findings in the article and in the supplementary materials are available in the Harvard Dataverse at: https://doi.org/10.7910/DVN/XMEDZD.
Phenotypic and Genomic Analysis of Hypervirulent Human-associated Bordetella bronchiseptica

Background

B. bronchiseptica infections are usually associated with wild or domesticated animals, but infrequently with humans. A recent phylogenetic analysis distinguished two distinct B. bronchiseptica subpopulations, designated complexes I and IV. Complex IV isolates appear to have a bias for infecting humans; however, little is known regarding their epidemiology, virulence properties, or comparative genomics.

Results

Here we report a characterization of the virulence of human-associated complex IV B. bronchiseptica strains. In in vitro cytotoxicity assays, complex IV strains showed increased cytotoxicity in comparison to a panel of complex I strains. Some complex IV isolates were remarkably cytotoxic, resulting in LDH release levels in A549 cells that were 10- to 20-fold greater than complex I strains. In vivo, a subset of complex IV strains was found to be hypervirulent, with an increased ability to cause lethal pulmonary infections in mice. Hypercytotoxicity in vitro and hypervirulence in vivo were both dependent on the activity of the bsc T3SS and the BteA effector. To clarify differences between lineages, representative complex IV isolates were sequenced and their genomes were compared to complex I isolates. Although our analysis showed there were no genomic sequences that can be considered unique to complex IV strains, there were several loci that were predominantly found in complex IV isolates.

Conclusion

Our observations reveal a T3SS-dependent hypervirulence phenotype in human-associated complex IV isolates, highlighting the need for further studies on the epidemiology and evolutionary dynamics of this B. bronchiseptica lineage.

Keywords: B. bronchiseptica, Hypervirulence, Cytotoxicity, Bordetella evolution, Host adaptation, Pathogenomics

Background

Human pathogens often evolve from animal reservoirs, and changes in virulence sometimes accompany acquisition of the ability to infect humans [1]. Examples include smallpox virus, HIV, enterohemorrhagic E. coli, and Bordetella pertussis. Understanding how these events occur requires the ability to reconstruct evolutionary history, and this can be facilitated by the identification of evolutionary intermediates.
An experimentally tractable opportunity to study human adaptation is provided by Bordetella species. The Bordetella genus currently includes nine closely related species, several of which colonize respiratory epithelial surfaces in mammals. B. pertussis, the etiological agent of pertussis (whooping cough), is exclusively adapted to humans; B. parapertussis refers to two groups, one of which infects only humans while the other infects sheep [2,3]; and B. bronchiseptica establishes both asymptomatic and symptomatic infections in a broad range of mammalian hosts, which sometimes include humans [4-7]. Numerous studies have implicated B. bronchiseptica as the closest common ancestor of the human-adapted bordetellae, with B. pertussis and B. parapertussis hu evolving independently from different B. bronchiseptica lineages [8-10]. The genomes of these three species differ considerably in size, and B. pertussis and B. parapertussis have undergone genome decay, presumably as a consequence of niche restriction [6]. Most mammalian bordetellae express a common set of virulence factors, which include putative adhesins such as filamentous hemagglutinin (FHA), fimbriae, and pertactin, and toxins such as a bifunctional adenylate cyclase/hemolysin, dermonecrotic toxin, and tracheal cytotoxin. B. pertussis additionally produces pertussis toxin [7]. Of particular significance here is the bsc type III secretion system (T3SS) locus, which encodes components of the secretion machinery, associated chaperones, and regulatory factors. Remarkably, only a single T3SS effector, BteA, has been identified to date [11-13]. BteA is an unusually potent cytotoxin capable of inducing rapid, nonapoptotic death in a diverse array of cell types [14-16]. The T3SS and bteA loci are highly conserved in B. pertussis, B. parapertussis, and B. bronchiseptica [14,15]. A seminal phylogenetic analysis using multilocus sequence typing (MLST) of 132 Bordetella strains with diverse host associations led to the description of a new B. bronchiseptica lineage, designated complex IV, which differs in several respects from the canonical complex I B. bronchiseptica cluster [10]. Complex I strains are most commonly isolated from non-human mammalian hosts, whereas the majority of complex IV strains were from humans, many with pertussis-like symptoms. Complex IV strains were found to exclusively share IS1663 with B. pertussis, suggesting a close evolutionary relationship among these lineages. Complex IV strains and B. pertussis are proposed to share a common ancestor, although the genes encoding pertussis toxin (ptxA-E) and the ptl transport locus were found to be missing in the majority of complex IV strains that were sampled [10]. Additionally, several other B. pertussis virulence genes were found to be absent or highly divergent, including those encoding dermonecrotic toxin, tracheal colonization factor, pertactin, and the lipopolysaccharide biosynthesis locus. Differences between virulence determinants expressed by B. pertussis and complex IV strains have been suggested to be driven by immune competition in human hosts [10], a model also proposed for differences observed between B. pertussis and B. parapertussis hu [17]. Given the apparent predilection of complex IV B. bronchiseptica isolates for human infection, we have initiated a systematic analysis of their virulence properties and mechanisms. We found that complex IV strains, on average, display significantly elevated levels of cytotoxicity in comparison to complex I isolates.
Several complex IV strains are also hyperlethal in mice, and hyperlethality in vivo, as well as cytotoxicity in vitro, is dependent on the BteA T3SS effector protein [11,12]. Comparative whole-genome sequence analysis of four complex IV isolates was used to identify similarities and differences between B. bronchiseptica lineages. Results from genome comparisons did not identify significant genomic regions that are unique to complex IV strains but missing from complex I isolates. This implies that complex IV-specific phenotypes are determined by polymorphisms in conserved genes, differential regulation [18], or other epigenetic mechanisms rather than by the acquisition or retention of unique genomic determinants.

Sample preparation, protein electrophoresis and immunoblotting

For SDS-polyacrylamide gel electrophoresis (SDS-PAGE) sample preparation, bacteria were cultured in SS media overnight and harvested by centrifugation at 10,000 x g at 4°C for 10 min. The resulting supernatant, containing secreted proteins, was filtered through a 0.2-μm membrane to remove contaminating bacterial cells. Protein from supernatants (the equivalent of 3.75 OD600 units) was precipitated with 15% trichloroacetic acid (TCA) for 1 h on ice, and samples were centrifuged at 15,000 x g for 15 min at 4°C. After centrifugation, TCA was removed and the pellet was resuspended in 1 x SDS-loading dye with 25 mM freshly prepared DTT. To neutralize the acidic pH of the samples, a few crystals of Tris base were added. Protein pellets were dissolved by shaking on a bench-top shaker for 30 min at room temperature prior to fractionation on fixed-percentage or gradient (as indicated) pre-cast SDS-polyacrylamide gels (Bio-Rad). The pellet samples, after normalization to 12.5 OD600/ml, were boiled for 10 min in 1 x SDS-loading dye as above. After the run, proteins were either Coomassie stained or transferred onto a polyvinylidene difluoride (PVDF) membrane (Immobilon P, Millipore) using a semi-dry blotting apparatus. BvgS, a non-secreted protein control, was detected using polyclonal mouse antiserum at a dilution of 1:1000 [21]. Pertactin (PRN), which is secreted by a non-T3SS-dependent pathway, was identified using a monoclonal mouse antibody at a dilution of 1:1000 [22]. Bsp22, a T3SS substrate control, was detected using polyclonal mouse serum at a dilution of 1:10,000 [23]. Immunodetection was carried out by chemifluorescence [15] using horseradish peroxidase-labeled goat anti-mouse IgG and the ECL Plus detection substrate (GE Healthcare). Chemifluorescent signals were visualized using a Typhoon scanner (GE Healthcare).

[Strain/plasmid table fragment: pRE112-ΔbteA, a pGP704-based suicide plasmid harboring an in-frame deletion of bteA codons 4-653, pir-dependent, oriT, oriV, sacB, CmR [11]; pBBR1MCS-5, lacPOZ', mob+, broad-host-range cloning vector, GmR [50]; pbteA, bteA cloned into pBBR1MCS-5, GmR [11].]

Construction of bscN and bteA in-frame deletion mutants

To construct in-frame deletions of codons 171-261 in the bscN locus, allelic exchange was performed using pEGBR1005 suicide plasmid derivatives as previously described by Yuk et al. [15]. For construction of bteA in-frame deletions (codons 4-653), the suicide plasmid pRE112-bteA was used as previously described by Panina et al. [11]. All mutants were verified by sequencing the target open reading frames.
Cell lines

Cell lines used in this study were obtained from the American Type Culture Collection (ATCC).

Cytotoxicity assays

Bacteria were cultured in SS media overnight and were then sub-cultured in SS media to an optical density of ~0.5 at 600 nm. For cytotoxicity assays, bacteria were added to previously seeded cell monolayers in 12- or 24-well tissue culture plates at the indicated MOIs. The plates were centrifuged for 5 min at 60 x g and incubated for up to 4 h at 37°C with 5% CO2. To measure cell cytotoxicity, lactate dehydrogenase (LDH) release was used as a surrogate marker for cell death. LDH release into the supernatant media was assayed using a CytoTox 96 non-radioactive cytotoxicity assay kit (Promega, Madison, WI), according to the manufacturer's instructions. The maximal LDH release was defined as 100% and was determined by adding lysis solution to uninfected monolayers, determining the absorbance at 490 nm, and then subtracting the background value. Each sample was measured in triplicate in at least three independent experiments.

Animal infection experiments

Wild-type female C57BL/6NCr (B6) mice, 4-6 weeks of age, were purchased from Charles River Breeding Laboratories (Wilmington, MA). The animals were lightly sedated with isoflurane (Novation Laboratories, TX) prior to intranasal infection with the indicated number of CFU of bacteria in a total volume of 40 μl of phosphate-buffered saline (PBS, Mediatech Inc, VA). Bacteria were cultured in SS media overnight and were then sub-cultured in SS media to an optical density of 0.5 at 600 nm. Inocula were confirmed by plating serial dilutions. For survival curves, groups of four mice were inoculated with the indicated dose, and the percent survival was monitored over a 30-day period. Mice with lethal bordetellosis, indicated by ruffled fur, labored breathing, and diminished responsiveness, were euthanized to alleviate unnecessary suffering [24]. To enumerate the number of bacteria in respiratory organs, groups of three to four mice were sacrificed at the indicated time points, and bacterial numbers in the lungs and tracheas were quantified by plating dilutions of tissue homogenates on BG plates with appropriate antibiotics, following incubation at 37°C for 2 days. The mean ± the standard error was determined for each group. The statistical significance of differences between groups was calculated by Student's two-tailed t-test, with the significance level set at P values of ≤0.05. All animal experiments were repeated at least three times with similar results. Murine survival percentages were analyzed with the log-rank (Mantel-Cox) test. All mice were maintained in UCLA animal research facilities according to National Institutes of Health and University of California Institutional Animal Care Committee guidelines. Animals were housed under specific pathogen-free conditions with free access to food and water. All experiments were approved by the UCLA Chancellor's Animal Research Committee.

Histopathological analysis

Lungs were inflated with 10% neutral buffered formalin at the time of necropsy. Following fixation, tissue samples were embedded in paraffin, sectioned at 5 μm, and stained with hematoxylin-eosin, Giemsa, and Warthin-Starry stains for light microscopic examination at the Translational Pathology Core Laboratory of UCLA. Sections were scored for pathology by a veterinarian with training and experience in rodent pathology who was blinded to the experimental treatment.
The degree of inflammation was assigned an arbitrary score of 0 (normal = no inflammation), 1 (minimal = perivascular, peribronchial, or patchy interstitial inflammation involving less than 10% of lung volume), 2 (mild = perivascular, peribronchial, or patchy interstitial inflammation involving 10-20% of lung volume), 3 (moderate = perivascular, peribronchial, patchy interstitial, or diffuse inflammation involving 20-50% of lung volume), or 4 (severe = diffuse inflammation involving more than 50% of lung volume).

In vitro adherence assays

Human lung epithelial (A549) cells and human cervical epithelial (HeLa) cells were grown on cover slips in standard 12-well tissue culture plates in F-12K and DMEM medium, respectively, each containing 10% fetal calf serum. Bacteria in mid-log phase were added to cell monolayers at an MOI of 200 as previously described [25]. The plates were spun at 200 x g for 5 min and then incubated for 15 min at 37°C. The cells were then washed six times with Hanks' balanced salt solution, fixed with methanol, stained with Giemsa stain (Polyscience, Warrington, PA), and visualized by light microscopy. Adherence was quantified by counting the total number of bacteria per eukaryotic cell in at least three microscopic fields from two separate experiments.

Trypsin digestion of polypeptides for mass spectrometry

For secretome analysis by mass spectrometry, bacteria were cultured in SS media overnight and were then subcultured in SS media to an optical density at 600 nm of 1.0. A 5 ml aliquot was removed and centrifuged at 10,000 x g at 4°C for 10 min to remove bacterial cells. The resulting supernatant, containing proteins secreted into the culture medium, was filtered through a 0.2 μm membrane to remove contaminating bacterial cells. The filtered supernatants were then desalted and concentrated into ~300 μl of 50 mM ammonium bicarbonate buffer using a centrifugal filter device (Amicon Ultra-3K, Millipore). The samples were reduced by incubation in 10 mM dithiothreitol (DTT) in 50 mM ammonium bicarbonate at 37°C for 1 h. They were then alkylated by adding 55 mM iodoacetamide in 50 mM ammonium bicarbonate and incubating at 37°C in the dark for 1 h. Finally, the samples were digested at 37°C overnight with the addition of 75 ng trypsin (EC 3.4.21.4, Promega) in 50 mM ammonium bicarbonate. For in-gel trypsin digestion of polypeptides, a previously described method was used [26].

Nano-liquid chromatography with tandem mass spectrometry (nLC-MS/MS)

nLC-MS/MS with collision-induced dissociation (CID) was performed on a linear trap quadrupole Fourier transform (LTQ FT) instrument (Thermo Fisher, Waltham, MA) integrated with an Eksigent nano-LC. A prepacked reverse-phase column (Microtech Scientific C18, 100 μm x 3.5 cm) containing resin (Biobasic C18, 5-μm particle size, 300-Å pore size, Microtech Scientific, Fontana, CA) was used for peptide chromatography and subsequent CID analyses. ESI conditions using the nano-spray source (Thermo Fisher) for the LTQ FT were set as follows: capillary temperature 220°C, tube lens 110 V, and spray voltage 2.5 kV. The flow rate for reverse-phase chromatography was 5 μl/min for loading and 300 nl/min for the analytical separation (buffer A: 0.1% formic acid, 1% acetonitrile; buffer B: 0.1% formic acid, 100% acetonitrile). Peptides were resolved with the following gradient: 2-60% buffer B over 40 min, increased to 80% buffer B over 10 min, and then returned to 0% buffer B for a 10 min equilibration.
The LTQ FT was operated in data-dependent mode with a full precursor scan at high resolution (100,000 at m/z 400) and six MS/MS experiments at low resolution on the linear trap while the full scan was completed. For CID, the intensity threshold was set to 5000, and the mass range was m/z 350-2000. Spectra were searched using Mascot software (Matrix Science, UK), and results with p < 0.05 (95% confidence interval) were considered significant and indicative of identity. The data were also analyzed with the Sequest database search algorithm implemented in the Discoverer software (Thermo Fisher, Waltham, MA).

Identification of the core, non-core, and pan-genome of Bordetella

"Core" regions were defined as genome sequences that were present in all 11 Bordetella genomes, while "non-core" regions were defined as genome sequences that are not present in all genomes. RB50 was used as the reference genome. Each of the other 10 genomes was mapped to the reference genome using Nucmer [27]. All 10 ".coords" output files from the Nucmer program were analyzed to identify overlap regions based on RB50 coordinates using a Perl script. Finally, "core" sequences were extracted based on the genome sequence of RB50 with the coordinates calculated above. Unshared regions were then added to the reference genome to make a "revised" reference genome, which contained the original sequence plus the unshared sequences. This process was repeated until all of the genomes had been compared, so that all unshared sequences were included in the pan-genome. The core region was subtracted from the pan-genome of all 11 genomes, and the remaining regions were identified as non-core regions.

Hierarchical clustering using Cluster and Java TreeView

844 non-core fragments of more than 1000 bp were identified. An 844-row x 11-column matrix, in which 1 means "present" and 0 means "absent" for each non-core region, was entered into the Cluster program (http://bonsai.hgc.jp/~mdehoon/software/cluster/software.htm#ctv) [28]. Average linkage was used for clustering. The Java TreeView program [28] was used to display the clustering result.

Hypercytotoxicity of complex IV isolates in vitro

Cytotoxicity against a broad range of cell types is a hallmark of B. bronchiseptica infection in vitro [11,12,14,16,23]. To measure relative levels of cytotoxicity, human epithelial cells (HeLa), murine monocyte-macrophage-derived cells (J774A.1), or human pneumocyte-derived cells (A549) were infected with an array of complex I or complex IV B. bronchiseptica isolates (Figure 1A-C). These strains represent different multilocus sequence types (STs), and they were isolated from both human and non-human hosts (Table 1). Lactate dehydrogenase (LDH) release was used as a surrogate marker for cell death, and RB50, an extensively characterized complex I rabbit isolate classified as ST12, was used as a positive control for cytotoxicity [20]. An isogenic RB50 derivative with a deletion in bscN, which encodes the ATPase required for T3SS activity [15], served as a negative control. For HeLa (Figure 1A) and J774A.1 cells (Figure 1B), single-time-point assays showed a distinct trend, in which complex IV strains displayed higher levels of cytotoxicity than complex I isolates. For A549 cells the results were more dramatic (Figure 1C). Unlike other cell types previously examined [11,16,29], A549 cells are nearly resistant to cell death mediated by the RB50 T3SS (see RB50 vs. RB50ΔbscN; Figure 1C).
Similarly, other complex I strains displayed little or no cytotoxicity against these cells. In striking contrast, incubation with complex IV isolates resulted in significant levels of cell death (p < 0.0001; Figure 1C). For A549 cells, strains D444 (ST15), D445 (ST17), D446 (ST3) and Bbr77 (ST18) were 10- to 15-fold more cytotoxic than RB50. Parallel assays measuring bacterial attachment to A549 cells did not detect significant differences between complex I and complex IV isolates, indicating that relative levels of adherence are not responsible for the observed differences in cytotoxicity (Additional file 1: Table S1). Kinetic studies were performed next to increase the resolution of the analysis. We examined relative levels of cytotoxicity conferred by five complex IV strains towards HeLa, J774A.1 or A549 cells as a function of time, using RB50 as a representative complex I strain and RB50ΔbscN as a negative control. Following infection at an MOI of 50, cultures were sampled over a 4 h time period for measurements of LDH release. Using HeLa cells, which are exceptionally sensitive to T3SS-mediated killing, only minor differences were detected between strains (Figure 2A). Although two of the complex IV strains, Bbr77 and D444, displayed slightly elevated cytotoxicity at intermediate time points, all strains reached maximum lysis by the end of the 4 h time course. For J774A.1 cells, differences between complex IV strains and RB50 were apparent throughout the experiment, with Bbr77 and D444 showing the highest levels of activity (Figure 2B). As expected, the most dramatic differences were seen with A549 cells (Figure 2C). Most complex IV strains displayed a marked hypercytotoxicity phenotype compared to RB50, with the exception of Bbr69, which had an intermediate phenotype. Interestingly, Bbr69 is a dog isolate, whereas all of the other complex IV strains tested were cultured from human infections.

Roles of the bsc T3SS and the BteA effector in hypercytotoxicity by complex IV B. bronchiseptica isolates

To examine the hypercytotoxicity phenotype in detail, two representative highly toxic complex IV strains of human origin, D445 (ST17) and Bbr77 (ST18), were chosen for further analysis. To measure the contribution of the bsc T3SS, nonpolar in-frame deletions were introduced into the bscN loci of D445 and Bbr77. As shown in Figure 3A, the bscN mutations eliminated in vitro cytotoxicity against all three cell types, demonstrating an essential role for type III secretion. We next examined the involvement of the BteA effector in hypercytotoxicity. Previous studies have shown that BteA is essential for T3SS-mediated cell death induced by RB50, and it is sufficient for cytotoxicity when expressed in mammalian cells [11]. For both complex IV strains, bteA deletion mutations had a similar effect to the ΔbscN mutations and abrogated cytotoxicity (Figure 3A). The BteA proteins expressed by Bbr77 and D445 are identical except for the absence of two amino acids at the extreme carboxyl end of D445 BteA (Figure 3B). In contrast, when compared to RB50 BteA, the complex IV effectors from Bbr77 and D445 differ at 22 or 24 positions, respectively (Figure 3B). Interestingly, the BteA sequences from complex IV strains were more closely related to BteA in B. parapertussis hu Bpp12282 than to the homologs in B. bronchiseptica RB50 or B. pertussis Tohama I.
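As an aside for readers who want to tabulate such differences themselves, the snippet below is a minimal sketch, not the authors' analysis pipeline: it counts position-wise amino-acid differences between two aligned BteA sequences, using placeholder strings rather than the real ~650-residue proteins, and it simply treats any length difference (such as the two missing carboxyl-terminal residues of D445 BteA) as additional differences.

```python
def count_differences(seq_a: str, seq_b: str) -> int:
    """Count position-wise differences between two aligned protein sequences.

    Any overhang left by a length difference (e.g. residues absent from one
    carboxyl terminus) is counted as additional differences.
    """
    mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return mismatches + abs(len(seq_a) - len(seq_b))


# Placeholder fragments only; a real comparison would use the full-length sequences.
bteA_rb50_fragment = "MDSIKQTLSA"
bteA_d445_fragment = "MDSIKHTLS"   # one substitution and one residue shorter

print(count_differences(bteA_rb50_fragment, bteA_d445_fragment))  # -> 2
```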
To determine whether BteA polymorphisms are responsible for the differences in cytotoxicity phenotypes, bteA deletion derivatives of all three strains were complemented with the RB50 bteA allele on a medium-copy vector (Figure 3A) [11]. In each case, complemented levels of cytotoxicity were similar to those of the wild-type isolate. Most importantly, complemented ΔbteA derivatives of strains D445 and Bbr77 regained cytotoxicity against A549 cells, whereas RB50 ΔbteA/pbteA remained non-cytotoxic against this cell line. Taken together, these results demonstrate that the bsc T3SS and the BteA effector are essential for cytotoxicity by D445 and Bbr77. The hypercytotoxicity phenotypes of the complex IV isolates, however, are not due to polymorphisms in BteA. This is consistent with the conserved nature of this effector, both within and between Bordetella species [11]. Differential regulation of T3SS activity, the presence of novel secretion substrates, or alterations in accessory factors could account for the phenotypic differences between strains (see Discussion).

T3SS secretome analysis

We next compared the polypeptide profiles of proteins secreted into culture supernatants by the isolates examined in Figure 3. Strains D445, Bbr77, and RB50 were grown to early mid-log phase in liquid medium under conditions permissive for type III secretion (Bvg+ phase conditions, see Methods) [15]. To specifically identify T3SS substrates, ΔbscN derivatives were examined in parallel. Culture supernatants were TCA-precipitated, digested with trypsin, and separated by reverse-phase nano-liquid chromatography on a C18 column followed by tandem mass spectrometry (nLC-MS/MS). Peptide profiles were queried against the RB50 protein database. Nearly identical sets of peptides were detected in supernatants from strains D445, Bbr77 and RB50, and these included peptides corresponding to T3SS substrates previously identified using RB50 (Table 2). Bsp22, which polymerizes to form an elongated needle tip complex [30], BopB and BopD, which form the plasma membrane translocation apparatus [14,29,31], BopN, a homolog of Yersinia YopN which functions as a secreted regulator [32], and the BteA effector were present in supernatants from the wild-type strains but absent from supernatants of the ΔbscN derivatives. In the course of this analysis we discovered a novel T3SS substrate, encoded by a conserved hypothetical ORF (BB1639) and herein named BtrA, in supernatant fractions from RB50, D445 and Bbr77 but not from their ΔbscN derivatives. Importantly, examination of the complex IV secretion substrates failed to identify unique polypeptides that were not expressed by RB50 or did not match the RB50 protein database. The relative amounts of T3SS substrates released into culture supernatants, as assessed by SDS-PAGE and western blot analysis, also failed to correlate with relative levels of cytotoxicity (Additional file 2: Figure S1). Although these observations did not reveal obvious differences in the T3SS secretome that could account for the hypercytotoxic phenotypes of D445 and Bbr77, it is important to consider that the activity of the bsc T3SS and its substrate specificity are regulated at multiple levels, and results obtained using broth-grown cells provide only a crude approximation of T3SS activity during infection (see Discussion).
Virulence of complex IV strains during respiratory infections

To determine if the relative levels of cytotoxicity measured in vitro correlate with virulence in vivo, we used a murine intranasal respiratory challenge model [24]. Groups of 4-6 week old female specific-pathogen-free C57BL/6NCr mice were intranasally infected with 5 x 10^5 CFU. At this dose, RB50 establishes nonlethal respiratory infections that generally peak around day 10 post-inoculation and are gradually cleared from the lower respiratory tract, while persisting in the nasal cavity [33]. As shown in Figure 4A, the complex IV strains segregated into two groups. The first caused lethal infections in some (D444, Bbr77) or all (D445) of the infected animals. The second group (D446, Bbr69) caused nonlethal infections similar to RB50. In the experiment shown in Figure 4B, animals were intranasally inoculated with 5 x 10^5 CFU of RB50 or the two most virulent complex IV isolates, D445 and Bbr77, and sacrificed three days later. Both complex IV isolates were present in the lungs at levels that were 10- to 30-fold higher than RB50 (p < 0.001). Histopathological examination of lung tissue from mice infected with D445 or Bbr77 showed severe and widespread inflammation, affecting nearly the entire volume of the lung for D445 and up to 40% of the tissue for Bbr77 (Figure 4C and D). Extensive migration of lymphocytes, macrophages, and neutrophils resulted in severe consolidation of large areas of lung parenchyma. Alveolar and interstitial edema as well as extensive perivascular and peribronchiolar inflammation were also observed. In contrast, lungs from animals infected with RB50 displayed only mild inflammation that covered less than 25% of the total lung volume. We also examined the relative roles of the bsc T3SS and the BteA effector in the in vivo virulence phenotypes of D445 and Bbr77. As shown in Figure 4A, deletions in bscN or bteA abrogated lethality following infection by either strain. Consistent with these observations, the ΔbscN and ΔbteA mutants also showed significantly decreased numbers of bacteria in the lungs at day 3 post infection (Figure 4B) and a corresponding decrease in histopathology (Figure 4C). These results demonstrate that, in comparison to the prototype complex I strain RB50, D445 and Bbr77 are more virulent in mice following respiratory infection, and that hypervirulence is dependent on type III secretion and BteA. We next carried out a comparative analysis of the non-core genome to identify potential loci shared only by complex IV strains. Although some sequences are shared by more than one complex IV isolate, we did not identify complex IV genomic sequence(s) that uniquely differentiate complex IV from complex I strains. Strains D445, Bbr77 and D444 do, however, contain clusters of shared genes that are not present in other Bordetella genomes (Figure 5B, yellow boxes). Although these loci are missing in BBF579, the virulence properties of this isolate have not been reported, raising the possibility that one or more of these loci may contribute to hypervirulence in a subset of complex IV strains. Blastn analysis of the overlap regions revealed a diverse set of genes involved mainly in signal transduction, metabolism, adhesin/autotransporter expression and type IV secretion of unknown substrates (Figure 5B).
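To make the logic of this screen concrete, the sketch below shows one way the presence/absence matrix of non-core fragments described in the Methods (844 fragments x 11 genomes) could be queried for complex IV-specific or complex IV-enriched regions and clustered by average linkage. It is a minimal illustration under stated assumptions rather than the authors' actual scripts: the input file name, column labels, strain-to-lineage assignments, and enrichment thresholds are hypothetical, and scipy is used here in place of the Cluster and Java TreeView programs named in the Methods.

```python
# Minimal sketch (not the authors' pipeline): screen a binary presence/absence
# matrix of non-core fragments for regions enriched in complex IV genomes and
# cluster genomes by average linkage.
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Hypothetical input: rows = non-core fragments (>1 kb), columns = 11 genomes,
# entries 1 = present, 0 = absent.
matrix = pd.read_csv("noncore_matrix.csv", index_col=0)

# Hypothetical assignment of sequenced genomes to lineages.
complex_iv = ["D445", "Bbr77", "D444", "BBF579"]
complex_i = [c for c in matrix.columns if c not in complex_iv]

# Fragments present in every complex IV genome but absent from all complex I
# genomes would count as "complex IV-specific"; the paper reports none were found.
specific = matrix[(matrix[complex_iv].min(axis=1) == 1) & (matrix[complex_i].max(axis=1) == 0)]
print(f"complex IV-specific fragments: {len(specific)}")

# Fragments shared by most complex IV genomes but rare in complex I (e.g. qseBC-like loci).
enriched = matrix[(matrix[complex_iv].mean(axis=1) >= 0.75) & (matrix[complex_i].mean(axis=1) <= 0.25)]
print(f"complex IV-enriched fragments: {len(enriched)}")

# Average-linkage clustering of the genomes on their binary profiles
# (analogous to the Cluster / Java TreeView step in the Methods).
tree = linkage(pdist(matrix.T.values, metric="jaccard"), method="average")
dendrogram(tree, labels=list(matrix.columns), no_plot=True)
```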
One locus of potential interest, found in two out of four sequenced complex IV isolates (Bbr77 and D444) but none of the other Bordetella genomes, is predicted to encode homologs of the QseBC two-component regulatory system found in numerous bacterial pathogens [39]. In enterohemorrhagic E. coli (EHEC) and Salmonella sp., QseBC has been shown to sense host stress hormones (epinephrine and norepinephrine) and regulate virulence gene expression [40][41][42][43][44]. The qseBC open reading frames from Bbr77 and D444 are identical, and their predicted products share 47% amino acid identity and 63% similarity with EHEC QseB, and 34% identity and 51% similarity with EHEC QseC, respectively. Using a PCR-based assay, we screened for the presence of qseBC in a larger collection of B. bronchiseptica isolates. As shown in Figure 5C, this locus is present in 7 out of 9 complex IV isolates, but only 1 out of 10 complex I isolates. Sequence analysis of PCR amplicons revealed high levels of nucleotide identity (>97%) between B. bronchiseptica qseBC alleles. Although highly enriched in complex IV strains, qseBC is unlikely to represent a single, conserved pathway for hypervirulence since it is absent from strain D445. Nonetheless, the potential role of QseBC in Bordetella-host interactions warrants further study. In addition to examining gross genomic differences, we also analyzed polymorphisms in virulence loci. Nearly all of the virulence genes shared a high degree of homology (Additional file 3: Table S2). The bsc T3SS locus, the btr genes involved in T3SS regulation, as well as their upstream promoter regions had greater than 97% sequence conservation between RB50 and complex IV strains. Additionally, our analysis confirms the absence of ptx/ptl loci and divergence in tcfA and prn genes in sequenced complex IV isolates, as previously described by Diavatopoulos et al. [10]. Discussion The existence of a distinct lineage of B. bronchiseptica strains associated with human infections was described several years ago [10]; however, little is known regarding the virulence properties of complex IV isolates or their epidemiological significance. Here we present evidence that complex IV isolates display significantly higher levels of cytotoxicity against a variety of cell lines in vitro. For a subset of complex IV strains that were isolated from humans with respiratory illness and represent distinct sequence types, we also demonstrate that hypercytotoxicity in vitro correlates with hypervirulence in vivo, and that both phenotypes are dependent on the bsc T3SS and the BteA effector. To investigate the mechanistic basis for the quantitative differences in BteA-dependent cytotoxicity observed between complex I and complex IV strains, we took a genetic approach that is both simple and definitive. In the experiment in Figure 3A, we show that when the RB50 bteA allele is expressed in ΔbteA derivatives of RB50 or hypercytotoxic complex IV strains (D445 and Bbr77), the cytotoxicity profile of the parental strain is maintained. Thus, hypercytotoxicity is not due to differences in the specific activity of the bteA products. Additionally, the examination of culture supernatants also failed to detect differences in the T3SS secretome that could account for increased virulence. Although it is possible that one or more novel effectors that augment cytotoxicity are expressed by complex IV strains at levels that escape detection, it is also possible, and perhaps even likely, that differences in regulation are at play.
We have previously shown that loci encoding bteA and bsc T3SS apparatus components and chaperones are regulated by the BvgAS phosphorelay through an alternative ECF-sigma factor, BtrS [11,23]. In addition to transcriptional control, the partner-switching proteins BtrU, BtrV and BtrW regulate the secretion machinery through a complex series of protein-protein interactions governed by serine phosphorylation and dephosphorylation [23,45]. Comparative expression analysis shows that differential expression of the BvgAS regulon correlates with human adaptation by B. pertussis and B. parapertussis [18]. In a similar vein, it seems reasonable to suspect that T3SS regulatory systems may be adapting to the evolutionary pressures that are shaping B. bronchiseptica lineages. Although both cytotoxicity and virulence are known, or likely, to be T3SS-dependent phenotypes in all strains examined, the correlation between lethality in mice and LDH release in vitro was not absolute. Strain D446 was highly cytotoxic to all cell lines examined (Figure 1), yet it was relatively avirulent following respiratory infection (Figure 4A). This is not unexpected given the fact that type III secretion is only one of many virulence determinants required for pathogenesis [7], and B. bronchiseptica isolates are known to have diverse phenotypic properties despite their high degree of genetic similarity. A recent study by Buboltz et al. [46] identified two complex I isolates belonging to ST32 which also appeared to have heightened virulence when compared to RB50. In particular, the LD50 of these strains was 40- to 60-fold lower than that of RB50 and, based on transcriptome analyses, hypervirulence was associated with upregulated expression of T3SS genes. The authors also observed a T3SS-dependent increase in cytotoxicity towards cultured J774A.1 macrophage cells. It will be important to determine whether complex IV isolates do indeed share common virulence properties, or if the observations reported here represent heterogeneity distributed throughout B. bronchiseptica lineages. Numerous studies have demonstrated the ability of the bsc T3SS to exert potent cytotoxicity against a remarkably broad range of mammalian cell types, regardless of their species or tissue of origin [11,12,14,15]. This was considered to be a defining feature of the B. bronchiseptica T3SS. A549 cells, derived from human alveolar epithelial cells, are, to our knowledge, the first cell line shown to be resistant to intoxication by RB50. The finding that complex IV isolates kill these cells with high efficiency provides particularly compelling evidence for their hypercytotoxicity. To begin to address the comparative genomics of B. bronchiseptica lineages, we analyzed the genome sequences of 4 complex IV and 3 complex I strains. The observation that homologs of the qseBC locus are present in multiple complex IV strains was an intriguing discovery, as these genes encode a catecholamine-responsive virulence control system in E. coli and Salmonella [39][40][41][42]. Since the locus is missing in two complex IV strains (A345, D445), one of which is also hypervirulent (D445), qseB and qseC do not satisfy the criteria for either complex IV-specific or hypervirulence-associated genes. No loci were found to be uniquely present in all complex IV isolates, and we also failed to identify loci that are present in all members of the hypervirulent subset of complex IV strains and are predicted to encode factors involved in virulence.
It is probable that there are multiple pathways to hypervirulence, and that polymorphisms in conserved virulence and regulatory genes play a role in this phenotype as well as in the apparent predilection of complex IV isolates for human infectivity. A particularly relevant question that remains to be addressed involves the burden of human disease currently caused by B. bronchiseptica. Diagnostic methods in common use that rely on PCR-based identification efficiently detect B. pertussis and B. parapertussis, but not B. bronchiseptica [47]. It is therefore possible that B. bronchiseptica respiratory infections are more common than previously appreciated, and it is intriguing to speculate that complex IV isolates may be responsible for undiagnosed respiratory infections in humans. Conclusions This work provides an initial characterization of the virulence properties of human-associated B. bronchiseptica. In in vitro cytotoxicity assays using several mammalian cell lines, wild type complex IV isolates showed significantly increased cytotoxicity as compared to a panel of complex I strains. Some complex IV isolates were remarkably cytotoxic, resulting in LDH release levels that were 10- to 20-fold greater than the prototype complex I strain RB50. While infection of C57BL/6 mice with RB50 resulted in asymptomatic respiratory infection, a subset of complex IV strains displayed hypervirulence characterized by rapidly progressive pneumonia with massive peribronchiolitis, perivasculitis, and alveolitis. Although in vitro cytotoxicity and in vivo hypervirulence are both dependent upon T3SS activity and the BteA effector, the exact mechanistic basis for the quantitative differences in cytotoxicity observed between complex I and complex IV strains remains to be determined. Figure 5. Comparative genome analysis. A. Cluster analysis of non-core genome sequences of 11 Bordetella strains. The results are displayed using TREEVIEW. Each row corresponds to a specific non-core region of the genome, and each column represents an analyzed strain. Yellow indicates presence and blue indicates absence of particular genomic segments. Abbreviations: Bp = B. pertussis, Bpp h = human B. parapertussis, Bb IV = complex IV B. bronchiseptica, Bb I = complex I B. bronchiseptica, Bpp o = ovine B. parapertussis. B. Zoomed image of the non-core region in panel A marked with a red bracket, showing complex IV-specific regions. On the right, blastn with default settings was used to query the nucleotide collection (nr/nt) from the National Center for Biotechnology Information, and homology designations are indicated. C. Distribution of qseBC alleles among complex I and complex IV B. bronchiseptica isolates based on PCR amplification and sequencing.
Evaluation of mitochondrial DNA copy number estimation techniques Mitochondrial DNA copy number (mtDNA-CN), a measure of the number of mitochondrial genomes per cell, is a minimally invasive proxy measure for mitochondrial function and has been associated with several aging-related diseases. Although quantitative real-time PCR (qPCR) is the current gold standard method for measuring mtDNA-CN, mtDNA-CN can also be measured from genotyping microarray probe intensities and DNA sequencing read counts. To conduct a comprehensive examination of the performance of these methods, we use known mtDNA-CN correlates (age, sex, white blood cell count, Duffy locus genotype, incident cardiovascular disease) to evaluate mtDNA-CN calculated from qPCR, two microarray platforms, as well as whole genome (WGS) and whole exome sequence (WES) data across 1,085 participants from the Atherosclerosis Risk in Communities (ARIC) study and 3,489 participants from the Multi-Ethnic Study of Atherosclerosis (MESA). We observe that mtDNA-CN derived from WGS data is significantly more associated with known correlates compared to all other methods (p < 0.001). Additionally, mtDNA-CN measured from WGS is on average more significantly associated with traits by 5.6 orders of magnitude and has effect size estimates 5.8 times more extreme than the current gold standard of qPCR. We further investigated the role of DNA extraction method on mtDNA-CN estimate reproducibility and found mtDNA-CN estimated from cell lysate is significantly less variable than traditional phenol-chloroform-isoamyl alcohol (p = 5.44x10-4) and silica-based column selection (p = 2.82x10-7). In conclusion, we recommend that the field move towards more accurate methods for mtDNA-CN and re-analyze trait associations as more WGS data becomes available from larger initiatives such as TOPMed. Introduction Mitochondrial dysfunction has long been known to play an important role in the underlying etiology of several aging-related diseases, including cardiovascular disease (CVD), neurodegenerative disorders and cancer [1]. As an easily measurable and accessible proxy for mitochondrial function, mitochondrial DNA copy number (mtDNA-CN) is increasingly used to assess the role of mitochondria in disease. Several population-based studies have shown higher levels of mtDNA-CN to be associated with decreased incidence of CVD and its component parts: coronary artery disease (CAD) and stroke [2,3]; neurodegenerative disorders such as Parkinson's and Alzheimer's disease [4,5]; as well as several types of cancer including breast, kidney, liver and colorectal [6][7][8]. Furthermore, mtDNA-CN measured from peripheral blood has consistently been shown to be higher in women, to decline with age, and to correlate negatively with white blood cell (WBC) count [9][10][11]. Although the mtDNA-CN field is relatively young, the number of publications has been steadily increasing at an average rate of 12% per year since 2015 [12]. However, there has yet to be a rigorous examination of the various methods for measuring this novel phenotype and the factors which may influence its accurate estimation. Without such an examination, studies may be severely underestimating or misrepresenting the relationship of mtDNA-CN with their traits of interest. Quantitative real-time PCR (qPCR) has been the most widely used method for measuring mtDNA-CN, partly due to its low cost and quick turnaround time.
However, recent work has demonstrated the feasibility of accurately measuring mtDNA-CN from preexisting microarray, whole exome sequencing (WES) and whole genome sequencing (WGS) data [2,10,13]. With these advances, it is important for the field to evaluate these methods in the context of the current gold standard. In addition to the method for determining mtDNA-CN, it is important to consider the impact of DNA extraction method on mtDNA-CN, particularly due to the small size and circular nature of the mitochondrial genome. Previous research has shown organic solvent extraction is more accurate than silica-based methods at measuring mtDNA-CN, which is unsurprising as column kit parameters are typically optimized for DNA fragments ≥50 kb [14]. However, as all DNA extraction methods have bias in the DNA which they target, measuring mtDNA-CN from direct cell lysate may prove to be a more accurate method. In the present study, we assess the relative performance of various methods for measuring mtDNA-CN and the effects of DNA extraction on mtDNA-CN estimation accuracy. We leverage mtDNA-CN calculated across 4,574 individuals from two prospective cohorts, the Atherosclerosis Risk in Communities study (ARIC) and the Multi-Ethnic Study of Atherosclerosis (MESA). Using mtDNA-CN estimates calculated from qPCR, WES, WGS, and two microarray platforms (the Affymetrix Genome-Wide Human SNP Array 6.0 and the Illumina HumanExome BeadChip genotyping array), we compare associations with known correlates of mtDNA-CN, including age, sex, white blood cell count, the Duffy locus and incident CVD, to determine the optimal method for calculating copy number. We additionally determined the reproducibility of mtDNA-CN measurements in vitro from three separate DNA extraction methods: silica-based column selection, organic solvent extraction (phenol-chloroform-isoamyl alcohol), and direct cell lysis without a traditional DNA extraction. We hypothesized that mtDNA-CN calculated from WGS data would outperform other estimation methods and that mtDNA-CN measured from direct cell lysate would be more accurate than traditional DNA extraction methods. Study populations The ARIC study recruited 15,792 individuals between 1987 and 1989 aged 45 to 65 years from 4 US communities. DNA for mtDNA-CN estimation was collected from different visits and was derived from buffy coat using the Gentra Puregene Blood Kit (Qiagen). Relevant covariates were derived from the same visit in which DNA was collected. Our analyses were limited to 1,085 individuals with mtDNA-CN data available across all four platforms performed within ARIC: the Affymetrix Genome-Wide Human SNP Array 6.0, the Illumina HumanExome BeadChip genotyping array, WES and WGS. Eighty-eight percent of our final ARIC participants were Black. The MESA study recruited 6,814 individuals free of prevalent clinical CVD from 6 US communities across 4 ethnicities. Age range at baseline was 45 to 84 years and the baseline exam occurred between 2000 and 2002. DNA for mtDNA-CN analyses was isolated from exam 1 peripheral leukocytes using the Gentra Puregene Blood Kit. Our analyses were restricted to 3,489 White and Black (36%) individuals with data available across the three platforms performed within MESA at the time of analysis: qPCR, the Affymetrix Genome-Wide Human SNP Array 6.0 and the Illumina HumanExome BeadChip genotyping array. Exam 1 DNA for the exploratory dPCR pilot study was derived from packed red blood cells. Measurement of mtDNA-CN qPCR.
mtDNA-CN was determined using a multiplexed real-time qPCR assay as previously described [11]. Briefly, the cycle threshold (Ct) values of a mitochondrial-specific (ND1) and a nuclear-specific (RPPH1) target were determined in triplicate for each sample. The difference in Ct values (ΔCt) for each replicate represents a raw relative measure of mtDNA-CN. Replicates were removed if they had Ct values for ND1 >28, Ct values for RPPH1 >5 standard deviations from the mean, or ΔCt values >3 standard deviations from the mean of the plate. Outlier replicates were identified and excluded for samples with a ΔCt standard deviation >0.5. The sample was excluded if the ΔCt standard deviation remained >0.5 after replicate removal. We corrected for an observed linear increase in ΔCt value due to the pipetting order of each replicate via linear regression. The mean ΔCt across all replicates was further adjusted for plate effects as a random effect to represent a raw relative measure of mtDNA-CN. Microarray. mtDNA-CN was determined using the Genvisis [15] software package for both the Affymetrix Genome-Wide Human SNP Array 6.0 and the Illumina HumanExome BeadChip genotyping array. A list of high-quality mitochondrial SNPs was hand-curated by employing BLAST to remove SNPs without a perfect match to the annotated mitochondrial location and SNPs with off-target matches longer than 20 bp. The probe intensities of the remaining mitochondrial SNPs (25 Affymetrix, 58 Illumina Exome Chip) were determined using quantile sketch normalization (apt-probeset-summarize) as implemented in the Affymetrix Power Tools software. The median of the normalized intensity, or log R ratio (LRR), for all homozygous calls was GC corrected and used as the initial estimate of mtDNA-CN for each sample. Technical covariates such as DNA quality, DNA quantity, and hybridization efficiency were captured via surrogate variable analysis or principal component analysis as previously described [2]. Surrogate variables or principal components were applied to the BLAST-filtered, GC-corrected LRR of the remaining autosomal SNPs (43,316 Affymetrix, 47,512 Exome Chip). These autosomal SNPs were selected based on the following quality filters: call rate >98%, HWE p value >0.00001, PLINK mishap for non-random missingness p value >0.0001, association with sex p value >0.00001, linkage disequilibrium pruning (r^2 <0.30), and a maximal spacing between autosomal SNPs of 41.7 kb. WES. Whole exome capture was performed using Nimblegen's VChrome2.1 (Roche) and sequencing was performed on the Illumina HiSeq 2000. Sequence reads were aligned to the hg19 reference genome using the Burrows-Wheeler Aligner (BWA) [16]. Variant calling and quality control were performed as previously described [17]. mtDNA-CN was calculated using the mitoAnalyzer software package, which determines the observed ratio of sequence coverages between autosomal DNA and mtDNA [18,19]. Due to large batch effects observed in our raw mtDNA-CN calls, alignment summary, insert size, quality score, base distribution, sequencing artifact and quality yield metrics were collected using Picard tools (version 1.87) to take into account differences in capture efficiency as well as sequencing and alignment quality [20]. Picard sequencing summary metrics to incorporate into our final model were selected through a stepwise backward elimination procedure (S1 Table).
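Before turning to WGS, a minimal R sketch of the qPCR-based estimate described under "qPCR" above is given below. The data frame and column names are assumptions, the sign convention for ΔCt (nuclear minus mitochondrial Ct, so larger values reflect more mtDNA) is assumed, and the replicate-level QC is simplified relative to the full procedure; this is an illustrative sketch, not the study's exact pipeline.

```r
# Minimal sketch (hypothetical column names): one row per replicate with
# sample_id, plate, ct_nd1, ct_rpph1.
library(dplyr)
library(lme4)

per_sample <- qpcr %>%
  mutate(dCt = ct_rpph1 - ct_nd1) %>%            # assumed sign convention for dCt
  filter(ct_nd1 <= 28) %>%                       # drop poor mitochondrial amplifications
  group_by(sample_id, plate) %>%
  summarise(mean_dCt = mean(dCt),
            sd_dCt   = sd(dCt), .groups = "drop") %>%
  filter(sd_dCt <= 0.5)                          # simplified replicate QC

# Adjust the per-sample mean dCt for plate as a random effect; the residuals
# serve as the raw relative measure of mtDNA-CN.
fit <- lmer(mean_dCt ~ 1 + (1 | plate), data = per_sample)
per_sample$mtDNA_CN_raw <- resid(fit)
```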
Whole genome sequencing data was generated at the Baylor College of Medicine Human Genome Sequencing Center using Nano or PCR-free DNA libraries on the Illumina HiSeq 2000. Sequence reads were mapped to the hg19 reference genome using BWA [16]. Variant calling and quality control were performed as previously described [21]. A count for the total number of reads in a sample was scraped from the NCBI sequence read archive using the R package RCurl [22] while reads aligned to the mitochondrial genome were downloaded directly through Samtools (version 1.3.1). A raw measure of mtDNA-CN was calculated as the ratio of mitochondrial reads to the number of total aligned reads. Unlike WES, we did not observe large batch effects in our WGS raw mtDNA-CN calls, obviating the need for adjustment for Picard sequencing summary metrics. Digital PCR. mtDNA-CN was calculated using a multiplexed digital plate-based PCR (dPCR) method utilizing the ND1 and RPPH1 qPCR probes previously described. Samples were divided into 36,000 partitions on a 24-well plate and the fluorescence for each probe was measured with the Constellation Digital PCR System (Formulatrix, Boston MA). Fluorescence intensity was evaluated with the Formulatrix software and thresholds were based on visual inspection of the aggregate data for each plate. Thresholds were then used to determine the number of positive and negative partitions. Positive counts were fitted to a Poisson distribution to determine copy number [23]. mtDNA-CN was represented as the ratio between the number of ND1 copies/ μL and the number of RPPH1 copies/μl. Samples were included if they had fewer than 30,000 positives for ND1 and between 5 and 2,000 positives for RPPH1. Samples were filtered if the observed ratio was not between 15 and 300 ND1:RPPH1. The initial mtDNA-CN ratio was adjusted for plate as a random effect to represent a raw absolute measure of mtDNA-CN. Cardiovascular disease definition and adjudication Event adjudication through 2017 in ARIC and 2015 in MESA consisted of expert committee review of death certificates, hospital records and telephone interviews. Incident cardiovascular disease (CVD) was defined as either incident coronary artery disease (CAD) or incident stroke. Incident CAD was defined as first incident MI or death owing to CAD while incident stroke was defined as first nonfatal stroke or death due to stroke. Individuals in ARIC with prevalent CVD at baseline were excluded from incident analyses. Genotyping and imputation Genotype calling for the WBC count locus was derived from the Affymetrix Genome-wide Human SNP Array 6.0 in ARIC and MESA. Haplotype phasing for both cohorts was performed using ShapeIt [24] and imputation was performed using IMPUTE2 [25]. Genotypes were imputed to the 1000G reference panel (Phase I, version 3). Imputation quality for the Duffy locus lead SNP (rs2814778) was 0.95 and 0.92 in ARIC and MESA, respectively. DNA extraction method All DNA used in the DNA extraction comparison were derived from HEK293T cells grown in a single 150T flask to minimize variation due to clonality and cell culture procedures. Extraction were performed with 15 replicates each containing one million cells. mtDNA-CN was determined using qPCR as described previously. To account for the inherent variability in mtDNA-CN estimation, qPCR was run in triplicate. 
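As an illustration of the WGS-based estimate described above, the following R sketch computes the ratio of mitochondrial to total aligned reads from a coordinate-sorted, indexed BAM. The file name and the mitochondrial contig label ("chrM") are assumptions, and the sketch is not the study's exact pipeline.

```r
# Minimal sketch: raw WGS mtDNA-CN as mitochondrial reads / total aligned reads.
# Assumes samtools is on the PATH and "sample.bam" is indexed (hypothetical name).
idx <- read.table(
  text = system2("samtools", c("idxstats", "sample.bam"), stdout = TRUE),
  col.names = c("contig", "length", "mapped", "unmapped"),
  stringsAsFactors = FALSE
)

mt_reads     <- idx$mapped[idx$contig == "chrM"]  # assumed mitochondrial contig label
total_reads  <- sum(idx$mapped)                   # total aligned reads across all contigs
mtDNA_CN_raw <- mt_reads / total_reads
```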
Silica-based column extraction We performed a silica-based column extraction using the AllPrep DNA/RNA Mini Kit (Qiagen) according to the manufacturer's instructions for fewer than 5 x 10^6 cells. Briefly, HEK293T cells were lysed and the resulting lysate was pipetted directly onto the DNA AllPrep spin column for homogenization and DNA binding. The bound DNA was then washed and eluted. Organic solvent extraction An aliquot of cells was lysed with 350 µL of RLT Plus Buffer (Qiagen), and one volume of phenol:chloroform:isoamyl alcohol (25:24:1) (PCIAA) was added to the sample and mixed until it turned milky white. The solution was centrifuged and the upper aqueous phase containing DNA was transferred to a separate tube. We proceeded with an ethanol precipitation protocol using 3 M sodium acetate to complete the DNA extraction. Direct cell lysis Cells were pelleted at 500 g for 5 minutes and the supernatant was removed. The cell pellet was agitated in 100 µL of QuickExtract DNA Solution (Lucigen) to disrupt the pellet and placed in a thermocycler for 15 minutes at 68°C followed by 10 minutes at 95°C. The cell lysate was then centrifuged at 17,000 g for 15 minutes to pellet any insoluble inhibitors and the supernatant was transferred to a clean tube. The supernatant containing DNA was finally diluted 1:30 with water to limit the impact of any soluble inhibitors on qPCR. Statistical analyses The final mtDNA-CN phenotype for all measurement techniques is represented as the standardized residuals from a linear model adjusting the raw measure of mtDNA-CN for age, sex, DNA collection center, and technical covariates. Additionally, mtDNA-CN in ARIC was adjusted for WBC count, and the 14.9% of individuals with missing WBC data were imputed to the mean. WBC count was not available in MESA for the visit in which the DNA was obtained. As mtDNA-CN was standardized, the effect size estimates are in units of standard deviations, with positive betas corresponding to an increase in mtDNA-CN. For analyses involving outcomes which also served as covariates in our final phenotype model (age, sex, WBC count), mtDNA-CN was calculated using the full model minus the outcome variable. For example, when exploring the relationship between mtDNA-CN and age, our mtDNA-CN phenotype would represent the standardized residuals from a model controlling for sex, sample collection center, WBC count and any technical covariates. We would then use this phenotype to explore the association between age and mtDNA-CN, such that effect sizes for all comparisons remain in standard deviation units. The Duffy locus is highly associated with WBC count in Blacks [26] due to its role in conferring a selective advantage against malaria; however, this association is limited or absent in other ethnicities [27]. As such, single SNP regression for mtDNA-CN on the Duffy locus was limited to Blacks. Due to the association of mtDNA-CN with WBC count, the Duffy locus acts as another independent external validator for mtDNA-CN unadjusted for WBC count. In ARIC, mtDNA-CN not adjusted for WBC count was used as the independent variable. Single SNP regression models were additionally adjusted for age, sex, sample collection site, and genotyping PCs. Regression analyses were performed with FAST [28]. Cox proportional hazards regression was used to estimate hazard ratios (HRs) for incident CVD outcomes. Follow-up time was defined from DNA collection through death, loss to follow-up, or the study end point (through 2017 in ARIC and 2015 in MESA).
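A minimal R sketch of the phenotype construction and incident-CVD model described above follows. The data frame `dat` and its column names are assumptions, complete data are assumed, and the WBC adjustment applies to ARIC only; the sketch illustrates the modeling logic rather than the study's full covariate set.

```r
# Minimal sketch (hypothetical column names; complete cases assumed).
library(survival)

# Standardized residuals from a linear model adjusting raw mtDNA-CN for covariates.
dat$mtDNA_CN <- as.numeric(scale(resid(
  lm(mtDNA_CN_raw ~ age + sex + center + wbc, data = dat)
)))

# Cox proportional hazards model for incident CVD; follow-up runs from DNA
# collection to event, loss to follow-up, or the study end point.
fit <- coxph(Surv(follow_up_years, incident_cvd) ~ mtDNA_CN, data = dat)
summary(fit)   # hazard ratio per 1-SD higher mtDNA-CN
```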
Pairwise F-tests were used to test the null hypothesis that the ratio of variances between the DNA extraction methods is equal to one. All statistical analyses were performed using R (version 3.3.3). Ethics statement The Johns Hopkins IRB approved this study (NA_00091014 / CR00027367). All participants provided written informed consent and all centers obtained approval from their respective institutional review boards. Results The study included 1,085 participants from ARIC with mtDNA-CN data from the Affymetrix 6.0 microarray, the Illumina Exome Chip microarray, WES, and WGS, while MESA included 3,489 participants with mtDNA-CN data available from qPCR, the Affymetrix 6.0 microarray, and the Illumina Exome Chip microarray (combined N = 4,574). mtDNA-CN estimation method comparison To determine the optimal method for measuring mtDNA-CN, we ranked the performance of each technique based on the strength of the association, as measured by p values, with the relevant mtDNA-CN correlate (S2 Table). Kendall's W tests [29] show significant agreement in method rankings across the correlates (Table 2). To additionally quantify performance, we created a scoring system for each method using negative log transformed p values standardized to the least significant method for each correlate. These values were then summed across the correlates for each method to achieve an overall rating of performance (S3 Table). These ratings were compared to 1,000 permutations of a random sampling of the standardized and transformed p values for each correlate across the different estimation techniques. In ARIC, WGS had a significantly higher performance score compared to all other methods (p < 0.002), while the Illumina Exome Chip had a significantly lower score (p = 0.03) (S1A Fig). In MESA, Affymetrix had a significantly higher score than qPCR and the Illumina Exome Chip (p = 0.002) (S1B Fig). When removing the contribution of WGS in ARIC, the Affymetrix array had a significantly higher score than the Illumina Exome Chip and WES (p = 0.01) (S1C Fig). As WGS and Affymetrix performed similarly, we sought to further parse out their performance by evaluating the 2,746 ARIC samples which contained mtDNA-CN from both platforms. On average, WGS performed 2.2 orders of magnitude more significantly than the Affymetrix array (S4 Table). Due to the recent emergence of digital PCR (dPCR) as a viable method for calculating mtDNA-CN, we performed an additional exploratory analysis in 983 individuals of the MESA cohort comparing the performance of dPCR to qPCR and the Affymetrix genotyping array (S5 Table). While mtDNA-CN calculated from dPCR was more significantly associated with age than either qPCR or the Affymetrix array, dPCR was the least significantly associated metric with sex, and the observed association with incident CVD was in the opposite direction from that expected (S6 Table). DNA extraction comparison Raw mitochondrial estimates from qPCR were mean-zeroed to the plate average, and the mean value across the triplicate plates was used to determine the variance across the 15 replicates for each method (Fig 1). The variance for our novel Lyse method was significantly lower, at 0.02, compared to 0.17 and 0.59 for the PCIAA and Qiagen Kit extractions, respectively (F = 0.13, p = 5.44x10-4; F = 0.04, p = 2.82x10-7). Additionally, our findings support previous work [14] demonstrating PCIAA had significantly lower variability compared to the Qiagen Kit (F = 0.29, p = 0.03).
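The pairwise variance comparisons above can be reproduced with base R. In this sketch, `lyse`, `pciaa`, and `kit` are hypothetical numeric vectors holding the 15 plate-centered replicate means for each extraction method; they are illustrative placeholders, not the study's data objects.

```r
# Pairwise F-tests of equal variances between extraction methods (stats::var.test).
var.test(lyse, pciaa)   # H0: ratio of variances = 1
var.test(lyse, kit)
var.test(pciaa, kit)
```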
Discussion We explored several methods for measuring mtDNA-CN in 4,574 self-identified White and Black participants from the ARIC and MESA studies. We found that mtDNA-CN estimated from WGS read counts and Affymetrix Genome-Wide Human SNP Array 6.0 probe intensities was more significantly associated with known mtDNA-CN correlates compared to mtDNA-CN estimated from WES, qPCR and the Illumina HumanExome BeadChip. When observing the relative performance of these methods, mtDNA-CN calculated from either WGS or the Affymetrix array is, respectively, 5.6 and 5.4 orders of magnitude more significant than the current gold standard of qPCR (Fig 2). These results are not limited to significance, as we see similar trends when exploring effect size estimates (Fig 3). For example, when looking at incident CVD, mtDNA-CN measured from WGS shows a substantial HR of 0.63 (0.54-0.74), whereas mtDNA-CN measured from qPCR has an HR of only 0.93 (0.82-1.05), a marked difference. As a result, when exploring the relationship between mtDNA-CN and a trait of interest, on average one could expect a result 5.6 orders of magnitude less significant and 6 times less extreme when using mtDNA-CN estimated from qPCR data as opposed to WGS. Several recent reports have touted dPCR as the new gold standard for mtDNA-CN estimation due to its ability to quantify absolute copy number [30][31][32]. In a small subset of MESA samples, we found mtDNA-CN estimates from dPCR were on average 1.15 and 0.55 orders of magnitude less significant than Affymetrix and qPCR, respectively (S7 Table). These results suggest dPCR may not measure mtDNA-CN as accurately as the current gold standard and other recently developed methods. However, it is important to note these findings were derived from a subset of samples one-fifth the size of that used for the main analyses of the overall study, and thus should be interpreted with caution. Additionally, whereas the dPCR data were derived from DNA from packed red blood cells, the qPCR and Affymetrix data were obtained from peripheral leukocytes, potentially explaining the poor performance of dPCR relative to other methods. Interestingly, mtDNA-CN measured from two seemingly similar microarray platforms differed drastically (S2 Fig). However, this finding is unsurprising when exploring the underlying biochemistry of sample preparation for each microarray platform. While the Affymetrix protocol starts with two restriction enzyme digests prior to whole genome amplification (WGA), the Illumina Exome Chip requires WGA with a processive polymerase prior to sonication. As a result, the mitochondrial genome undergoes rolling circle amplification, which occurs at a significantly faster rate than linear WGA [33]. Lower mtDNA-CN has been found to be associated with an increased incidence of several diseases, including end stage renal disease, type 2 diabetes, and non-alcoholic fatty liver disease [34][35][36]. However, such studies have relied on mtDNA-CN estimated from qPCR data. Our findings suggest much of the current literature may be severely underestimating disease associations with mtDNA-CN as well as its potential as a predictor of disease outcomes. Despite this, at <$2 per sample, qPCR may remain the principal method for measuring mtDNA-CN due to the prohibitive costs of WGS. Furthermore, absolute quantification of mtDNA-CN through the use of standard curves may improve upon the performance of qPCR, furthering its continued use [37].
We additionally showed that DNA extraction method affects mtDNA-CN estimate reproducibility, with copy number measured directly from cell lysate significantly outperforming silica-based column extraction and organic solvent extraction. Although several other studies have explored the impact of DNA isolation protocol on mtDNA-CN estimation [14,38,39], to our knowledge, this is the first study to interrogate the possibility of measuring mtDNA-CN directly from cell lysate. In addition to the superior performance of direct cell lysis, this method is cheaper and has less hands-on time than PCIAA or Qiagen Kit extractions. However, the authors recognize DNA from cell lysate has less downstream utility than DNA from traditional extraction procedures, potentially limiting its adoption within the mtDNA-CN field when sample availability is limited. Additionally, as our application of the lyse method was limited to cultured cells, it is important to further validate this method in the context of different sample types, which may have higher concentrations of inhibitors. Furthermore, it is important to note the various DNA extraction methods resulted in significantly different mtDNA-CN estimates (p = 3.56x10-11, 0.02, and 2.85x10-7 for Lyse:PCIAA, Lyse:Qiagen Kit, and PCIAA:Qiagen Kit, respectively). As such, when choosing an extraction method, it is important to remain consistent across the study. Fig 3. Effect size and hazard ratio estimates for mtDNA-CN with known correlates. Data points and their corresponding 95% confidence intervals represent the effect size or hazard ratio estimates for mtDNA-CN with age, sex, white blood cell (WBC) count, Duffy locus, and incident cardiovascular disease (CVD). Effect size estimates are in standard deviation units. The significance of each estimate is represented as '*' for P < 0.05, '**' for P < 0.01, and '***' for P < 0.001. WBC, white blood cell. In conclusion, our study demonstrates mtDNA-CN calculated from WGS reads or Affymetrix microarray probe intensities significantly improves upon the current gold standard method of qPCR. Furthermore, we show direct cell lysis introduces less variability to mtDNA-CN estimates than popular DNA extraction methods. Despite the relative infancy of using mtDNA-CN as a novel risk marker, these findings highlight the need for the field to adapt to current technologies to ensure disease and trait associations are fully realized, with a move toward more accurate microarray and WGS methods. Furthermore, due to the prevalence of qPCR in the literature, the authors recommend re-analyzing trait associations as more WGS data becomes available from large initiatives such as TOPMed.
NMR-Based Metabolic Profiles of Intact Zebrafish Embryos Exposed to Aflatoxin B1 Recapitulates Hepatotoxicity and Supports Possible Neurotoxicity Aflatoxin B1 (AFB1) is a widespread contaminant of grains and other agricultural crops and is globally associated with both acute toxicity and carcinogenicity. In the present study, we utilized nuclear magnetic resonance (NMR), and specifically high-resolution magic angle spin (HRMAS) NMR, coupled to the zebrafish (Danio rerio) embryo toxicological model, to characterize metabolic profiles associated with exposure to AFB1. Exposure to AFB1 was associated with dose-dependent acute toxicity (i.e., lethality) and developmental deformities at micromolar (≤ 2 µM) concentrations. Toxicity of AFB1 was stage-dependent and specifically consistent, in this regard, with a role of the liver and phase I enzyme (i.e., cytochrome P450) bioactivation. Metabolic profiles of intact zebrafish embryos exposed to AFB1 were, furthermore, largely consistent with hepatotoxicity previously reported in mammalian systems, including metabolites associated with cytotoxicity (i.e., loss of cellular membrane integrity), glutathione-based detoxification, and multiple pathways associated with the liver including amino acid, lipid, and carbohydrate (i.e., energy) metabolism. Taken together, these metabolic alterations enabled the proposal of an integrated model of the hepatotoxicity of AFB1 in the zebrafish embryo system. Interestingly, changes in amino acid neurotransmitters (i.e., Gly, Glu, and GABA), as key modulators of neural development, support a role in recently reported neurobehavioral and neurodevelopmental effects of AFB1 in the zebrafish embryo model. The present study reinforces not only toxicological pathways of AFB1 (i.e., hepatotoxicity, neurotoxicity), but also multiple metabolites as potential biomarkers of exposure and toxicity. More generally, this underscores the capacity of NMR-based approaches, when coupled to animal models, as a powerful toxicometabolomics tool. Embryonic and other early life (e.g., larval) stages of the zebrafish are well established as a toxicological model, in general, and have been specifically shown in several previous studies [11][12][13] to be an effective model for assessment of AFB1 toxicity. It is worth noting that, in addition to contamination of agricultural products, AFB1 is one of the most common contaminants of aquaculture (i.e., fish) feeds [14]. As such, assessment of the toxin in the zebrafish system not only represents a model for human and mammalian toxicity, but may also have direct relevance to the field of aquaculture. Herein, we couple the zebrafish as a model system to high-resolution magic-angle spinning (HRMAS) NMR techniques, which have been recently shown to enable both highly quantitative and qualitative (i.e., metabolite identification) analyses of major metabolites in the developing zebrafish embryo [15][16][17][18][19]. Application of this metabolomics approach to the zebrafish embryo model has, in fact, been previously demonstrated with respect to other naturally-occurring toxicants and in these prior studies shown to facilitate both characterization of toxicological pathways and identification of possible biomarkers of toxin exposure [18,19].
The present study exploits the power of this technique to both characterize toxicological pathways, toward better understanding of how AFB1 toxicity translates to adverse health outcomes, and identify potential metabolic biomarkers of exposure (and toxicity), toward improved means of tracking exposure and human health impacts. Toxicity of AFB1 in the Zebrafish Embryo Model Acute toxicity of AFB1, based on embryo lethality, was evaluated over a range of concentrations (up to 2 µM) in zebrafish embryos, specifically following 24 h of exposure at representative developmental stages (i.e., 24, 48, 72, and 96 hpf). Dose-dependent toxicity was observed (at all exposure stages) below 2 µM; moreover, toxicity was clearly stage-dependent and specifically increased with the age of the embryos (Figure 1). AFB1 was, for example, nearly an order of magnitude more toxic for embryos exposed at 96 hpf compared to 24 hpf, and there was, more generally, a stage-dependent decrease in the calculated LC50 (after 24 h exposure) of 2.1, 1.8, 1.1, and 0.5 µM at 24, 48, 72, and 96 hpf, respectively.
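For readers wishing to reproduce this type of calculation, a minimal R sketch of a 24-h LC50 estimate from lethality counts is given below. The input counts are purely illustrative (not the study's data), and a probit dose-response model is assumed rather than being the authors' stated method.

```r
# Minimal sketch: probit dose-response model and LC50 (illustrative counts only).
library(MASS)

tox <- data.frame(
  dose_uM = c(0.25, 0.5, 1.0, 2.0),
  dead    = c(2, 5, 11, 17),     # hypothetical numbers dead out of 20 embryos
  total   = 20
)

fit  <- glm(cbind(dead, total - dead) ~ log10(dose_uM),
            family = binomial(link = "probit"), data = tox)
lc50 <- 10^dose.p(fit, p = 0.5)  # back-transform from log10(dose) to µM
lc50
```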
Figure 1. Concentration-dependent toxicity of aflatoxin B1 (AFB1) to zebrafish embryos as measured by lethality. Embryos at 24, 48, 72, and 96 hours post-fertilization (hpf) were exposed (N = 6 replicates, N = 20 embryos per replicate) to a range of concentrations of AFB1 (i.e., 0.25, 0.5, 1.0, and 2 µM in DMSO) for 24 h. Percentage of survival of the embryos was recorded after 24 h of treatment. At sub-lethal concentrations (below the LC50), AFB1 impaired development, resulting in embryo deformity. Deformities were generally observed among approximately 20-30% of surviving embryos at or below lethal concentrations. At 72 hpf, for example, embryos exposed to concentrations below the approximate LC50 (i.e., 1 µM) were consistently characterized by malformation of the head and bending of the tail and upper body (Figure 2). Figure 2. Developmental deformities of zebrafish embryos exposed at 72 h post-fertilization (hpf) to 1.0 µM aflatoxin B1 for 24 h (compared to solvent, i.e., DMSO, only ("control")). Images were taken at 96 hpf. Deformities include malformation of the head (H) and bending of the upper body (UB) and tail (T). The scale bar represents 500 µm (10× magnification). NMR-Based Metabolic Profiles of Zebrafish Exposed to AFB1 High-resolution magic angle spin NMR resolved several metabolites in intact zebrafish embryos (Figure 3) and, when coupled to principal components analysis (PCA), enabled statistical discrimination (Figure S1) of quantitative differences in metabolites between AFB1-exposed and control (i.e., DMSO only) embryos. Comparisons (based on 1D NMR chemical shifts) to the Human Metabolome Database (HMDB), along with 2D NMR techniques (i.e., 1H-1H COSY), enabled unambiguous identification, and subsequent quantitation, of metabolites (Figure 4). In total, 28 metabolites were identified, quantified, and statistically evaluated. Of these, 19 metabolites were shown to increase or decrease significantly (p < 0.05) following 24 h exposure to AFB1 at 72 hpf (Figure 4 and Table 1).
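A minimal R sketch of the PCA step described above follows; `metab` is a hypothetical samples-by-metabolites matrix of quantified NMR intensities and `group` a factor of exposure labels, neither of which is part of the original study's code.

```r
# Minimal sketch: autoscaled PCA of metabolite profiles and a 2-D scores plot.
pca <- prcomp(metab, center = TRUE, scale. = TRUE)
summary(pca)                                   # variance explained per component

plot(pca$x[, 1:2], col = as.integer(group), pch = 19,
     xlab = "PC1", ylab = "PC2", main = "HRMAS NMR metabolite profiles")
legend("topright", legend = levels(group),
       col = seq_along(levels(group)), pch = 19)
```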
A significant increase of several amino acids, including phenylalanine (Phe, p < 0.01), tryptophan (Trp, p < 0.001), and tyrosine (Tyr, p < 0.0001), as well as isoleucine (Ile, p < 0.05), glutamate (Glu, p < 0.05), glutamine (Gln, p < 0.05), and glycine (Gly, p < 0.05), was observed, whereas a highly significant (p < 0.0001) decrease in cysteine (Cys) was measured. Chemical shifts used to identify amino acids are specific for amino acids that are not incorporated into protein, and quantification, therefore, reflects the concentration of the "free" amino acid pool for each. Notably, the non-proteinogenic amino acid neurotransmitter, γ-aminobutyric acid (GABA), also significantly increased (p < 0.05). Numerous metabolites associated with carbohydrate metabolism and cellular energetics were significantly altered by AFB1 treatment, including: (1) decreases in glucose-1-phosphate (G1P) and glucose-6-phosphate (G6P), as well as glucose (Glc; p < 0.001) itself; (2) highly significant increases in lactate (Lac, p < 0.0001) as the product of lactate dehydrogenase and/or anaerobic glycolysis; and (3) increases in several metabolites associated with cellular energetics, including ATP, NADH, and NAD+ (p < 0.05). Significant increases in fatty acids (FA, p < 0.001) and cholesterol (Chol, p < 0.01) were observed, alongside a concomitant increase in acetate as both an intermediate of lipid metabolism and anaplerotic catabolism (i.e., β-oxidation) into the Krebs cycle (as acetyl CoA). Alongside changes in lipids, significant increases in the polar headgroups, choline (Cho) and myo-inositol (m-Ins), of the phospholipids characteristic of cellular membranes were observed. Finally, a significant decrease (p < 0.05) in glutathione (GSH), as a phase II detoxification mechanism, was measured following AFB1 exposure (compared to controls). Figure 3. Representative high-resolution magic angle spin (HRMAS) NMR spectra of (A) control (i.e., DMSO-only) and (B) AFB1-exposed (1 µM) zebrafish embryos exposed at 72 hpf for 24 h. Integrated peak areas and chemical shifts of 1D NMR spectra were used to quantify and identify metabolites. Red arrows indicate increases (↑) and decreases (↓) of metabolites.
Table 1. Relative (i.e., percent) change in metabolites and ratio (i.e., fold-change) of aromatic amino acids (AAA) to branched-chain amino acids (BCAA) of zebrafish embryos exposed to AFB1 compared to controls. Embryos were exposed to 1 µM AFB1 at 72 hpf (for 24 h), and the concentrations of metabolites (relative to total Cr) measured by HRMAS NMR were compared to vehicle (DMSO)-only controls. For statistically significant changes, p-values are given; "n.s." indicates that differences are not significant. Contamination of crop plants by aflatoxinogenic Aspergillus has been clearly linked to both acute intoxication (i.e., "aflatoxicosis") and carcinogenicity, although a complete picture of the pathways of toxicity remains to be fully clarified. To elucidate pathways and potential exposure biomarkers, we utilized HRMAS NMR to characterize alterations in the metabolic profiles of intact zebrafish embryos following exposure to AFB1. Solution-state NMR techniques have been previously applied ex vivo [20][21][22][23] to metabolomics studies of AFB1 in mammalian systems; however, the present study represents the first to utilize HRMAS NMR of an intact organismal model to understand the toxicology of AFB1. This approach in the zebrafish system has, indeed, been previously demonstrated and shown to be highly effective with respect to other environmental toxicants including, in particular, aquatic (i.e., algal) biotoxins [18,19]. Similar to these prior studies, when coupled to toxicological assessment, this approach enabled the development of a holistic and integrated model of AFB1 toxicity. Toxicity of AFB1 in the Zebrafish Embryo Model Aligned with previous studies [11][12][13], ambient exposure to AFB1 was lethal to zebrafish embryos in the micromolar range (Figure 1). Developmental effects (Figure 2), within this same exposure range, were both quantitatively and qualitatively consistent with these previous studies. Prior reports of the embryotoxicity of AFB1 in the zebrafish model [11] determined comparable lethal concentrations (LC50 = 2.3 µM, versus 1.1 µM in the present study; see Figure 1) for 72 hpf embryos exposed to AFB1 for 24 h and observed similar developmental deformities including, in particular, deformity of the head, tail, and body axis (Figure 2).
Micromolar exposure concentrations associated with lethality, and various other developmental endpoints, have been similarly confirmed in the zebrafish embryo model by more recent studies [12], and the time dependence (>72-96 hpf) of zebrafish embryotoxicity at sub-micromolar concentrations, as observed in the present study (Figure 1), has, likewise, been very recently reported [13]. Notably, however, in the present study, toxicity was specifically observed with a sequential exposure regime (i.e., 24 h at 24, 48, 72, and 96 hpf; Figure 1), rather than continuous exposure (as in all prior studies), clearly indicating stage-dependent susceptibility (as opposed to possible cumulative toxicity). The observed concentration- and stage-dependent toxicity was, in turn, used to select appropriate exposure concentrations and stages for NMR analyses (see below). Specifically, these subsequent studies utilized 72 hpf embryos exposed (for 24 h) to a concentration of 1 µM, which approximates the LC50 of AFB1 (with additional replicates to provide a sufficient number of embryos; see Materials and Methods). Given the reported role of hepatocytes and, moreover, cytochrome P450 enzymes (which are primarily localized to the liver) in the metabolic bioactivation of AFB1 to reactive epoxides (e.g., AFBO), it is proposed that the observed stage-dependence of toxicity in the zebrafish embryo is related to the development of the liver and the associated expression of these phase I detoxification enzymes. Although initial differentiation (i.e., "budding") of the liver in zebrafish embryos begins at 24 hpf, complete development (i.e., "outgrowth and expansion") does not occur until approximately 72-96 hpf [24]. Moreover, significant expression of relevant cytochrome P450 enzymes (e.g., CYP1A and CYP3A), and corresponding xenobiotic-induced activity, in the zebrafish embryo does not occur until approximately 72 hpf and is primarily localized to the developing liver [25,26]. Also relevant to the present results (see below), phase I metabolism by CYP enzymes is coupled, in turn, to subsequent phase II detoxification, and specifically conjugation of glutathione (GSH) to electrophilic AFBO by glutathione-S-transferase (GST) [27] (see Figure 5). Toxicity of AFB1 is, therefore, linked to levels of GSH and GST activity. It has been shown, for example, that the elevated sensitivity of certain poultry (e.g., domestic turkey and "turkey X disease") to AFB1 is due to a combination of highly efficient CYP1A and CYP3A enzymes (and conversion to reactive epoxides) and simultaneously deficient GST activity in the hepatocytes of these species [3]. Alteration of Metabolic Profiles of Zebrafish Embryos by AFB1: Development of a Toxicological Model Metabolic profiling of zebrafish embryos by HRMAS NMR (Figure 4 and Table 1) in the present study is remarkably consistent with the hepatotoxicity of AFB1 and recapitulates many of the ex vivo observations in previous metabolomics studies in other (i.e., mammalian) models [20][21][22][23]. One of the key cellular aspects of hepatotoxicity is the swelling of cells (i.e., hepatocytes), which, in turn, results in the release and subsequent hydrolysis of cell membrane components [20,28]. In the present study, HRMAS NMR accordingly measured significant increases in both lipids (i.e., FA and Chol) and the primary polar headgroups (i.e., m-Ins and Cho) associated with the phospholipids that are essential to cell membranes (Figure 4).
In addition, increased lactate (Lac), in our study, is consistent with elevated lactate dehydrogenase (LDH) activity, which is widely recognized as a general measure of cytotoxicity and, in the current model, a possible indicator of hepatic cytotoxicity of AFB1 [29]. At the same time, the observed decrease in GSH aligns with the essential role of phase II hepatic detoxification of AFB1. Following phase I metabolic bioactivation of AFB1 to AFBO, conjugation of GSH via GST facilitates removal of these reactive species, and observed decreases in GSH would, therefore, be consistent with metabolic consumption of the glutathione pool in response to the formation of the reactive AFB1 epoxide (Figure 5). Alongside these general indicators of hepatotoxicity, in our study, the measured alterations of three major metabolic pathways, namely amino acid, lipid, and carbohydrate (i.e., energy) metabolism (Figure 4 and Table 1), which are primarily localized to the liver, are, likewise, highly consistent with both hepatic targeting of AFB1 and previous studies in mammalian systems [20-23]. Based on the observed alterations in metabolic profile, an integrated model of the toxicity of AFB1 in the zebrafish embryo model is proposed (Figure 5). Central to these metabolic changes is a significant alteration of the levels of several relevant amino acids following exposure of zebrafish embryos to AFB1. Exposure of 72 hpf embryos to AFB1 altered levels of several amino acids including significant increases of Phe, Tyr, Trp, Ile, Glu, Gln, and Gly and a significant decrease of Cys (Figure 4 and Table 1). Altered amino acid levels have, indeed, been consistently identified in relation to hepatic pathology, in general, and hepatotoxicity of AFB1 specifically, in effectively all previous metabolomics studies [20-23]. The ratio of aromatic amino acids (AAA; i.e., Tyr, Phe, and Trp) to branched-chain amino acids (BCAA; i.e., Leu, Ile, and Val), an established metabolic biomarker of liver damage coined the "Fischer ratio", has, for example, long been known to increase during hepatic failure [30-32]. This is related, in part, to the fact that catabolism of BCAA, in contrast to AAA (and, indeed, all other amino acids), is not localized to the liver, but rather to peripheral systems, and particularly skeletal muscle [33]. The normalized AAA:BCAA and Tyr:BCAA ratios (the latter a variant of the Fischer ratio [32]) for 96 hpf embryos significantly increased with exposure to AFB1 relative to controls (i.e., 1.5-fold and 1.8-fold, respectively; Table 1). Of the BCAA, a statistically significant change (an increase) was observed only for Ile; however, highly significant increases in all AAA (i.e., Tyr, Phe, and Trp) were measured for 72-hpf AFB1 exposures (Figure 4), which resulted in the elevated AAA:BCAA and Tyr:BCAA ratios (Table 1).
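For clarity, the short sketch below shows how such AAA:BCAA and Tyr:BCAA ratios are computed from tCr-normalized metabolite levels; the numbers used are placeholders for illustration, not the values reported in Table 1.

```python
# Illustrative calculation of the (modified) Fischer ratio from metabolite levels
# normalized to total creatine (tCr); all values below are placeholders.
aaa = {"Tyr": 0.42, "Phe": 0.38, "Trp": 0.15}      # aromatic amino acids (AAA), /tCr
bcaa = {"Leu": 0.30, "Ile": 0.22, "Val": 0.28}     # branched-chain amino acids (BCAA), /tCr

aaa_sum = sum(aaa.values())
bcaa_sum = sum(bcaa.values())

aaa_bcaa = aaa_sum / bcaa_sum        # AAA:BCAA ratio
tyr_bcaa = aaa["Tyr"] / bcaa_sum     # Tyr:BCAA variant of the Fischer ratio

print(f"AAA:BCAA = {aaa_bcaa:.2f}, Tyr:BCAA = {tyr_bcaa:.2f}")
# The fold-change reported for exposed vs. control embryos would then be
# ratio_exposed / ratio_control for each of these two ratios.
```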
The observed increase in Ile is very notable since this particular BCAA has been specifically demonstrated in several studies [34-38] to increase glucose uptake and utilization, and to decrease gluconeogenesis, by the liver. The observed increase in Ile would, therefore, be consistent with the increased utilization of glucose (from glycogen) observed here (Figure 5; see the discussion below). Both AAA and BCAA are known to be essential amino acids for teleost fish [39], such as zebrafish, and are necessarily derived from diet (or, in the case of embryos, protein-rich yolk), such that any change in their levels is presumably due to alterations in their catabolism, rather than anabolism (i.e., biosynthesis). In addition, levels of the non-essential amino acids Glu, Gln, and Gly were, likewise, elevated in AFB1-exposed embryos. Elevated Glu is noteworthy, in this regard, as conversion of Glu to α-ketoglutarate (αKG) via either transamination or glutamate dehydrogenase represents a key alternative entry point (to pyruvate/acetyl CoA) into the Krebs cycle to meet cellular energy demands (Figure 5). Furthermore, "upstream" deamination of Gln via glutaminase, likewise, provides the substrate (i.e., Glu) for transamination to αKG (and subsequent entry into the Krebs cycle), and it has been asserted that so-called glutaminolysis (alongside glycolysis; see below) is essential to metabolic homeostasis during embryo development [40,41]. It is, therefore, proposed that cellular damage to hepatocytes, associated with AFB1 exposure, leads to a loss of glutaminase and transaminase activity in these cells and, consequently, impaired amino acid catabolism (and homeostasis, more generally), which contributes to toxicity in zebrafish embryos. This notion is supported, for example, by previous studies of the hepatotoxicity of acetaminophen in the zebrafish embryo, whereby a similar stage- and concentration-dependent embryotoxicity is correlated with the elevation of serum levels, and consequent loss in hepatocytes, of transaminase activity [42]. Interestingly, neither Asp nor Ala levels were significantly altered (Figure 4). This is notable given their well-known role in amino acid biosynthesis and catabolism, and specifically transamination reactions (including conversion of Glu to αKG), in the liver. Plasma levels of aspartate and alanine transaminases (i.e., AST and ALT, respectively) and, moreover, their ratio (i.e., AST/ALT) are some of the best-established biomarkers of liver damage (although other amino acid transaminases have been, likewise, linked to hepatic pathology) [43]. Specifically, increased plasma levels of AST and ALT are linked to damage to the liver, as the primary location of these enzymes, which leads to their release into plasma. Although ALT and AST were not directly measured in the present study, decreased transaminase capacity of hepatocytes (due to hepatocytotoxicity) might be expected to alter Ala and Asp (as substrates of these enzymes). However, although levels of both Asp and Ala were elevated, neither was increased significantly (p > 0.05; Figure 4 and Table 1). The lack of a significant change in Ala and Asp is proposed to be due, in part, to reduced transamination of Glu to αKG by mitochondrial ALT and AST (i.e., mALT and mAST) to supply the Krebs cycle, a reaction for which these two amino acids are the primary products.
Specifically, the previously demonstrated disruption of mitochondrial function, including the Krebs cycle and coupled oxidative phosphorylation ([44], see below), would, in this proposed mechanism, lead to a loss of mAST and mALT activity and, thus, both an accumulation of Gln and Glu and a reduction in mAST/mALT-derived Asp and Ala; the latter reduction would be offset by increases expected from the loss of cytoplasmic transaminases in hepatocytes (Figure 5). Notably, the only amino acid for which significantly decreased levels were observed was Cys (Figure 4). In hepatocytes, Cys is essential to GSH biosynthesis and, indeed, the major determinant of GSH availability [45]. Decreased Cys, accompanied by a decrease in GSH, therefore, likely reflects increased utilization of the latter, as part of phase II detoxification, to remove reactive AFB1 epoxides. In light, however, of the apparent effect of AFB1 on both energy and lipid metabolism (as discussed below), depletion of Cys may, alternatively or additionally, be related to demands for coenzyme A, for which Cys is, likewise, an essential biosynthetic building block (Figure 5). Dysfunction in amino acid metabolism intersects with the observed effects of AFB1 on both lipid and carbohydrate/energy metabolism (Figure 5). Anaplerotic catabolism of amino acids, specifically following deamination, leads to α-ketoacid products, which serve as both intermediates for entry into the Krebs cycle and as substrates for gluconeogenesis and, indirectly, for lipid biosynthesis (i.e., via acetyl-CoA), for which the liver is the centrally functioning organ. Amino acids for which levels were significantly increased by AFB1 exposure include both strictly gluconeogenic representatives (i.e., Glu, Gln, Gly) and essential amino acids (i.e., AAA) that can be either gluconeogenic or ketogenic. In fact, both lipid metabolism and cellular energetics (and related carbohydrate metabolism) have, likewise, been clearly linked to hepatic pathology including AFB1 hepatotoxicity, and all previous metabolomics studies have identified similar alteration of these metabolic pathways [20-23]. With respect to carbohydrate metabolism and associated cellular energetics, one of the most striking metabolic effects is a significant decrease in glucose-1-phosphate (G1P), glucose-6-phosphate (G6P), and glucose (Glc), accompanied by concomitant increases in Lac, ATP, NADH, and NAD+ (Figure 4). Given the unique role of G1P in glycogenolysis (and, in reverse, glycogenesis), the decreased levels of this intermediate, in concert with the decreases in G6P and Glc, are highly suggestive of a breakdown of glycogen to supply glucose for energetic metabolism, with the mobilized glucose being consumed as rapidly as it is produced. Decreases in Glc and glycogen-derived intermediates occur alongside an increase in Lac that would be indicative of either anaerobic glycolysis and/or elevated LDH activity. Lactate dehydrogenase (as a stable and ubiquitous cellular enzyme) is, in fact, widely recognized as an indicator of cytotoxicity, in general, and of AFB1 hepatocytotoxicity specifically [29]. Release of this enzyme, following hepatic cell death, would be expected to lead to the production of Lac from pyruvate (derived, in turn, from glycogenolysis and subsequent glycolysis, with the attendant increase in ATP and NADH) and a concomitant increase in NAD+, as observed (Figures 4 and 5). That said, LDH is also functionally associated with anaerobic glycolysis.
Anaerobic glycolysis is, in fact, an essential energetic pathway during early embryonic stages, and a requisite shift in metabolism from anaerobic glycolysis to oxidative phosphorylation accompanies programmed embryo development [46]. As such, the increase in Lac (via anaerobic glycolysis) would be consistent with general impairment of embryo development, as previously observed in HRMAS NMR studies of zebrafish exposed to developmental toxins [19]. Either way, shunting of Glc to glycolysis and subsequent consumption of pyruvate (by either released LDH or anaerobic glycolysis) would be consistent with reduced entry into the Krebs cycle (via acetyl CoA). At the same time, the loss of amino acid transaminase activity, in association with hepatotoxicity (as discussed above), would reduce levels of αKG, as a second entry point into the Krebs cycle, which would further compound energetic stress on the developing embryo. It is proposed that the stage-dependent (and presumably hepatocyte-dependent) toxicity of AFB1, and the alterations of metabolic pathways observed here, result from impairment of energy metabolism and associated anaplerotic reactions (e.g., amino acid catabolism, lipid metabolism) during embryo development (Figure 5). The effects of AFB1 on cellular energy, and particularly the Krebs cycle, have, indeed, been previously demonstrated in mammalian (i.e., dairy goat) models [22]. Moreover, recent studies have shown that AFB1 targets mitochondria and, specifically, uncouples oxidative phosphorylation [44]. Both disruption of mitochondria, in general, and inhibition of oxidative phosphorylation (as a key developmental transition) could, therefore, explain the impairment of development and general toxicity. Mitochondrial disruption would limit the utility of the Krebs cycle, and of subsequent oxidative phosphorylation (which is coupled to the Krebs cycle via succinate and NADH), in the mitochondrial matrix, and would shift energetic demand to cytosolic glycolysis (leading to the production of Lac). This is further supported by increased acetate, as a proxy for acetyl CoA (the primary substrate for the Krebs cycle), which may suggest a build-up of this intermediate following disruption of mitochondria (and loss of energetic functionality in the Krebs cycle and oxidative phosphorylation). It has, in fact, been shown that inhibition of the transition from glycolysis to oxidative phosphorylation during embryo development leads to apoptotic cell death in progenitor cells [46], which would, therefore, explain both acute toxicity (i.e., lethality) and developmental defects (Figures 1 and 2). Alongside carbohydrate/energy metabolism, hepatocytes are the primary location of lipid metabolism, including both biosynthetic and catabolic functions. Indeed, hepatic damage is clinically associated with lipid accumulation (i.e., hepatic steatosis) or so-called "fatty liver." Hepatotoxicity of AFB1 would, therefore, also closely correlate with loss of lipid metabolic function and, thus, with the observed increases in FA and Chol (Figure 4). Indeed, numerous studies (in other model systems) have, likewise, reported increases in lipids (both FA and Chol) in association with AFB1 exposure and toxicity [21,47]. Although acetyl CoA (as the catabolic product of β-oxidation of FA and the substrate for lipid biosynthesis) was not directly measured in the present study, acetate may serve as a proxy for impairment of lipid metabolism.
The observed increase in acetate may, in this regard, indicate both catabolic breakdown of lipids and the subsequent accumulation of this Krebs cycle substrate (due to loss of mitochondrial energy production), as well as its role as a building block of lipid biosynthesis (aligned with the observed increases in FA and Chol). At the same time, the liver produces bile acids, which are primarily derived from cholesterol, and increases in Chol may consequently reflect, in addition, a loss of this function (due to hepatotoxicity). Similarly, alongside disruption of lipid metabolism, the observed elevation of lipids (including both FA and Chol) in the present study may be additionally augmented by release from hepatic membranes (i.e., swelling, release, and hydrolysis of phospholipids) in association with hepatic damage (Figure 5; discussed above).

Alteration of Metabolic Profiles in Relation to Neurotoxicity of AFB1

Finally, in addition to its recognized hepatotoxicity, AFB1 has very recently been found to impair locomotor function and disrupt neural development in zebrafish embryos and larvae [13]. Although no such neurobehavioral or neurodevelopmental effects were directly observed or measured in the present study, the significant increases in GABA, Glu, and Gly (Figure 4) would support such effects. Indeed, the developing zebrafish embryo is enriched in several metabolites associated with the CNS, including neurotransmitters (such as Glu, Gly, and GABA), likely due to their role in development including, in particular, the neural crest as a key population of progenitor cells. Thus, alongside their roles in neuronal function (as neurotransmitters), all three of these amino acids have been found to have a role in the development of the CNS during embryogenesis [48-50], and alterations of their levels may indicate a contribution to the neurobehavioral and neurodevelopmental effects of AFB1 in zebrafish. Of these, Glu and GABA are particularly notable given their shared pathways of "recycling" (between neurons and glial cells) via the glutamate/GABA-glutamine cycle and their synchronized association with the Krebs cycle (Figure 6). With respect to glutamatergic neurons, synaptic Glu is taken up (via excitatory amino acid transporters) into astrocytes, where it is converted to Gln by glutamine synthetase (GS), and subsequently transported to neuronal cells, where mitochondrial phosphate-activated glutaminase (PAG) recycles Gln to the neurotransmitter Glu. In GABAergic neurons, GABA is, likewise, recovered from synapses (though to a lesser extent compared to Glu) by astrocytes and similarly converted to Gln. In this case, however, conversion to Gln involves transamination (i.e., by GABA transaminase) of α-keto acids (i.e., αKG, pyruvate, or glyoxylate) to amino acids (i.e., Glu, Gly, and Ala) and involvement of the Krebs cycle (via succinate semialdehyde, succinate, and αKG, sequentially; Figure 6) to produce Glu and, in turn, Gln (via GS); Gln is then transported to neurons, where PAG converts it back to Glu and subsequently GABA (via glutamic acid decarboxylase (GAD)). In both cases, Glu can be shunted to the mitochondria to meet energy demands. Thus, the proposed impairment of mitochondrial energy production (i.e., the Krebs cycle and subsequent oxidative phosphorylation, as discussed above) would, likewise, lead to increased levels of Glu, GABA, and Gln in neurons. Biosynthesis of Gly, on the other hand, proceeds by way of 3-phosphoglycerate and, subsequently (following transamination by Glu), serine.
As the former is a key intermediate in glycolysis, elevated levels of Gly may reflect increased diversion of Glc to this energetic pathway (Figure 5). Regardless of the source, impaired homeostasis of the three neurotransmitters could, in turn, explain possible dysfunction in both neural function and development observed in zebrafish embryos [13]. Based on these findings, a proposed model for AFB1-induced neurotoxicity is summarized in Figure 6.

Conclusions

Coupling of NMR-based techniques to early life stages of the zebrafish was demonstrated in the present study (and, indeed, other recent studies [15-19]) to provide unique access into the integrated metabolome of an intact organism. Herein, we specifically demonstrated that this approach (when coupled to this established toxicological system) not only revealed a remarkable level of consistency with previous ex vivo metabolomics studies in mammals [20-23], but simultaneously enabled a holistic model (Figure 5) with respect to the hepatotoxicity of AFB1. As such, this underscores the potential of the technique toward otherwise inaccessible insight regarding pathways and biomarkers of toxicity. As a further demonstration of this potential, alterations of metabolites associated with function and development of the CNS (i.e., neurotransmitters) additionally revealed previously unknown biochemical effects on cellular homeostasis, which may explain the previously proposed neurotoxicity of AFB1, including the recently reported neurobehavioral and neurodevelopmental impairment in the zebrafish model [13,51].

Materials and Methods

Chemicals

All chemicals, including AFB1, were obtained from Sigma-Aldrich (St. Louis, MO, USA), unless otherwise mentioned.

Zebrafish Embryos

Adult wild-type zebrafish (Danio rerio) were maintained in recirculating aquarium systems according to established rearing procedures [16], and breeding and embryo collection were performed following the standard procedure as described earlier [16]. Husbandry and experimental procedures (i.e., exposures, collection of embryos) involving zebrafish embryos were performed in accordance with the local animal welfare regulations and maintained according to standard protocols [52]. This local regulation serves as the implementation of the guidelines on the protection of experimental animals by the Council of Europe, Directive 86/609/EEC, which allows zebrafish embryos to be used up to the moment of free-living (5 days after fertilization). Since embryos used in this study were no more than 5 days old, no license is required by the European Union, Directive 2010/63/EU (1 January 2010), or the Leiden University ethics committee.
Zebrafish Embryo Toxicity Assays

To evaluate acute toxicity and developmental defects caused by AFB1, zebrafish embryos at representative stages (24, 48, 72, and 96 hpf) were treated with varying concentrations of AFB1 (0, 0.25, 0.5, 1.0, and 2.0 µM) for 24 h in 35 mm-diameter polystyrene dishes (N = 20 embryos per replicate, i.e., dish, and N = 6 replicates per treatment group). Embryos were evaluated for percentage survival and scored for lethal or teratogenic effects using a CKX41 inverted microscope with phase-contrast optics, a mounted time-lapse recorder, and the associated analysis software (Olympus, Hamburg, Germany). Lethal and teratogenic effects were recorded according to Weigt et al. [11]. Teratogenic effects were considered valid if the following criteria were fulfilled: (i) a concentration-response relationship was observed, and (ii) the endpoint was observed in ≥50% of embryos in all replicates. Median lethal concentrations (LC50) were calculated by probit analysis.

HRMAS NMR

Metabolic profiling by HRMAS NMR was performed as adapted from previous studies [18,19]. Embryos (72 hpf) were exposed to 1 µM AFB1 for 24 h; control embryos were exposed to the solvent (i.e., DMSO) carrier only. As the exposure concentration for AFB1 was approximately equal to the LC50, additional exposure replicates were done in order to generate a sufficient number of embryos (N = 100) and replicates (N = 3) for quantitative NMR analyses. Accordingly, ~100 embryos were collected (after 24 h) from both controls (N = 3) and pooled (N > 3) AFB1 exposures. Following washing (3 times with MilliQ water) to remove residual AFB1, embryos were transferred to 4-mm zirconium oxide rotors (Bruker BioSpin AG, Switzerland) for replicate (N = 3) measurements by NMR. As a reference (1H chemical shift at 0 ppm), deuterated phosphate buffer (10 µL of 100 mM, pH 7.0) containing 0.1% (w/v) 3-(trimethylsilyl)-2,2,3,3-tetradeuteropropionic acid (TSP) was added, and the rotor was transferred immediately to the NMR spectrometer. All HRMAS NMR experiments were performed on a Bruker DMX 600 MHz spectrometer (proton resonance frequency of 600 MHz) equipped with a 4-mm HRMAS dual 1H/13C inverse probe with a magic angle gradient, at a spinning rate of 6 kHz. Measurements were carried out at a temperature of 277 K using a Bruker BVT3000 control unit. Acquisition and processing of data were done with Bruker TOPSPIN software (Bruker Analytische Messtechnik, Germany). A rotor-synchronized Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence with water suppression was used for one-dimensional 1H HRMAS NMR spectra. Each one-dimensional spectrum was acquired with a spectral width of 8000 Hz, 16k time-domain data points, 512 averages with 8 dummy scans, a constant receiver gain of 2048, an acquisition time of 2 s, and a relaxation delay of 2 s. The relaxation delay was set to a small value to remove short T2 components due to the presence of lipids in intact embryo samples. All spectra were processed with an exponential window function corresponding to a line broadening of 1 Hz and zero-filled before Fourier transformation. NMR spectra were phased manually and automatically baseline-corrected using TOPSPIN 2.1 (Bruker Analytische Messtechnik, Germany). The total analysis time (including sample preparation, optimization of NMR parameters, and data acquisition) of 1H HRMAS NMR spectroscopy for each sample was approximately 20 min.
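As a generic illustration of the 1D processing steps just described (exponential line broadening, zero-filling, and Fourier transformation), independent of the TOPSPIN software actually used, a numpy sketch might look as follows; the synthetic FID and its resonance positions are arbitrary.

```python
# Rough sketch of 1D NMR processing: 1 Hz exponential line broadening,
# zero-filling, and Fourier transform, applied to a synthetic FID.
import numpy as np

sw = 8000.0                      # spectral width (Hz), as in the acquisition above
npts = 16384                     # 16k time-domain points
t = np.arange(npts) / sw         # dwell time = 1/sw

# Synthetic FID: two decaying resonances (purely illustrative).
fid = (np.exp(2j * np.pi * 300 * t) + 0.5 * np.exp(2j * np.pi * -750 * t)) * np.exp(-t / 0.3)

lb = 1.0                                         # line broadening (Hz)
apodized = fid * np.exp(-np.pi * lb * t)         # exponential window function
zero_filled = np.pad(apodized, (0, npts))        # zero-fill to twice the acquired points
spectrum = np.fft.fftshift(np.fft.fft(zero_filled))
freq_axis = np.fft.fftshift(np.fft.fftfreq(zero_filled.size, d=1.0 / sw))
```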
Two-dimensional (2D) homonuclear correlation spectroscopy (1H-1H COSY), specifically in magnitude mode, was performed using a standard pulse program library. For COSY, 2048 data points were collected in the t2 domain over the spectral width of 4k, and 512 t1 increments were collected with 16 transients, a relaxation delay of 2 s, an acquisition time of 116 ms, and presaturation of the water resonance during the relaxation delay. Data were zero-filled with 2048 data points and weighted with a sine bell window function in both dimensions (prior to Fourier transformation). To exclude the possibility of sample degradation during COSY experiments, 1D 1H HRMAS spectra were measured before and after the 1H-1H COSY measurements (Figure S2).

1H NMR Data Analysis

All of the spectra were referenced, baseline- and phase-corrected, and analyzed using MestReNova v.8.0 (Mestrelab Research S.L., Santiago de Compostela, Spain). Quantification of metabolites was performed with Chenomx NMR Suite 8.2 (Chenomx Inc., Edmonton, Alberta, Canada). This enabled qualitative and quantitative analysis of an NMR spectrum by fitting spectral signatures from the HMDB database to the spectrum. The concentrations of metabolites were subsequently calculated as ratios relative to total creatine (tCr), since an external reference may lead to misleading results, and the Cr resonance has previously been shown to be a reliable internal reference in a wide range of animal studies. Statistical analysis of the NMR quantification was done by one-way analysis of variance (ANOVA) using OriginPro v. 8 (OriginLab, Northampton, MA, USA), and calculated F-values larger than 2.8 (p < 0.05) were considered significant. For multivariate analysis, AMIX (Version 3.8.7, Bruker BioSpin, The Woodlands, TX, USA) was used to generate bucket tables from the one-dimensional spectra of control and AFB1-treated embryos, excluding the region between 4.20 and 6.00 ppm to remove the large water signal. The one-dimensional CPMG spectra were normalized to the total intensity and binned into buckets of 0.04 ppm. The data were mean-centered and scaled using the Pareto method in the SIMCA software package (Version 14.0, Umetrics, Umeå, Sweden). Unsupervised principal component analysis (PCA) was performed on the data using the SIMCA software as described earlier [19].

Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6651/11/5/258/s1: Figure S1: PCA scores plot (A) and loading plot (B) for control and aflatoxin B1-treated embryos. Spectra derived from the same group have the same color. A total of 75% of the variables were used to generate the scores plot. Figure S2: A representative 1H-1H COSY spectrum for 96 hpf embryos treated with 1 µM aflatoxin for 24 h, recorded in magnitude mode. The parameters used for COSY were 2048 data points collected in the t2 domain over the spectral width of 4k, 512 t1 increments collected with 16 transients, a relaxation delay of 2 s, an acquisition time of 116 ms, and presaturation of the water resonance during the relaxation delay. The resulting data were zero-filled with 512 data points and weighted with squared sine bell window functions in both dimensions prior to Fourier transformation. Application of gradient pulses along with the traditional 1H-1H COSY sequence provides resolution comparable to liquid-state NMR.
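As an illustration of the multivariate workflow described in the data analysis section above (0.04 ppm bucketing with the 4.20-6.00 ppm water region excluded, total-intensity normalization, Pareto scaling, and PCA), a minimal Python sketch is given below; it uses random placeholder spectra and scikit-learn's PCA in place of the AMIX/SIMCA tools actually employed.

```python
# Sketch of the bucketing / normalization / Pareto scaling / PCA workflow.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ppm = np.linspace(10.0, 0.0, 8192)           # chemical-shift axis (ppm)
spectra = rng.random((6, ppm.size))          # 3 control + 3 exposed spectra (placeholders)

keep = (ppm > 6.00) | (ppm < 4.20)           # exclude the residual water region
edges = np.arange(0.0, 10.0 + 0.04, 0.04)    # 0.04 ppm buckets

buckets = np.array([
    [spec[keep & (ppm >= lo) & (ppm < hi)].sum() for lo, hi in zip(edges[:-1], edges[1:])]
    for spec in spectra
])

buckets /= buckets.sum(axis=1, keepdims=True)              # normalize to total intensity
centered = buckets - buckets.mean(axis=0)
pareto = centered / np.sqrt(buckets.std(axis=0) + 1e-12)   # Pareto scaling (sqrt of SD)

scores = PCA(n_components=2).fit_transform(pareto)
print(scores.shape)   # (6, 2) -> PC1/PC2 scores for a plot like Figure S1
```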
Summary: Chicago Quantum Exchange (CQE) Pulse-level Quantum Control Workshop

Quantum information processing holds great promise for pushing beyond the current frontiers in computing. Specifically, quantum computation promises to accelerate the solving of certain problems, and there are many opportunities for innovation based on applications in chemistry, engineering, and finance. To harness the full potential of quantum computing, however, we must not only manufacture better qubits, advance our algorithms, and develop quantum software; to scale devices to the fault-tolerant regime, we must also refine device-level quantum control. On May 17-18, 2021, the Chicago Quantum Exchange (CQE) partnered with IBM Quantum and Super.tech to host the Pulse-level Quantum Control Workshop. At the workshop, representatives from academia, national labs, and industry addressed the importance of fine-tuning quantum processing at the physical layer. The purpose of this report is to summarize the topics of this meeting for the quantum community at large.

I. INTRODUCTION

Quantum computing today: The present era of quantum computing is characterized by the emergence of quantum computers (QCs) with dozens of qubits. Although these devices are not fault tolerant, new algorithms exist that have innate noise resilience and modest qubit requirements. There are promising indications that near-term devices could be used to accelerate or enable solutions to problems in domains ranging from molecular chemistry [1] to combinatorial optimization [2] and machine learning [3].

Why take a pulse approach: The underlying evolution of a quantum system is continuous, and so are the control signals. These continuous control signals offer much richer and more flexible controllability than the gate-level quantum instruction set architecture (ISA). Control pulses can drive the QC hardware to the desired quantum states by varying a system-dependent and time-dependent quantity called the Hamiltonian. The Hamiltonian of a quantum system is an operator corresponding to the total energy of the system. Thus, the system Hamiltonian determines the evolution path of the quantum states. The ability to engineer the real-time system Hamiltonian allows us to navigate the quantum system to the quantum state of interest by generating accurate control signals. Quantum computation can be done by constructing a quantum system in which the system Hamiltonian evolves in a way that aligns with a QC task, producing the computational result with high probability upon final measurement of the qubits.

Pulse-level challenges: While the benefits of the pulse approach are clear, there are several challenges stemming from the inherent complexities of a full-stack approach, some of which are discussed below:

1) Machine Hamiltonian: Pulse-level optimization typically requires an extremely accurate model of the quantum system or machine, i.e., its Hamiltonian. Hamiltonians are difficult to measure experimentally and, moreover, they drift significantly over time between daily recalibrations. Experimental quantum optimal control (QOC) papers incur considerable overhead associated with pre-execution calibration to address this issue.

2) Programming Model: A traditional gate-level quantum programming model is simply an abstraction of the real quantum hardware execution in a form which is amenable to users familiar with classical programming models.
Thus, execution of a program represented via the quantum circuit model requires translating circuit instructions to pulses which enact the desired state transformations and/or measurements. This translation is often sub-optimal due to the heavy abstractions imposed across the software stack, resulting in a pulse-level instruction that closely approximates the ideal gate within some margin of error. Masking pulses with gate-level programming prevents exposing too much information to the end user, simplifying computation. Low-level quantum programming, if not done in a systematic manner, can considerably increase the complexity of an algorithm's specification.

3) Optimization and Compilation Overheads: Generating optimal pulses is an arduous task. Compilation and optimization overheads are especially prohibitive in applications such as variational quantum algorithms, wherein the circuit and pulse construction process is part of the critical execution loop. This could potentially amount to several weeks of total compilation latency over the course of thousands of iterations.

4) Simulation: Quantum systems are dynamic and evolve with time. As a result, simulation tools for quantum mechanical systems need to take variation into consideration, whether that variation stems from intentional gate application or from unintentional drift or environmental coupling. If quantum systems are modeled with enough granularity, pulses required for low-level control can be developed.

Role of workshop and report: There are many challenges, as previously mentioned, associated with low-level quantum control. Overcoming these difficulties, however, could contribute to achieving quantum advantage in the near term. Additionally, accelerated quantum processing could emerge as a significant benefit of custom architectures produced as a result of software-hardware codesign. The Chicago Quantum Exchange (CQE) Pulse-level Quantum Control Workshop was organized with the intent of informing the broader community of the advantages of pulse-level quantum programming. Included sessions did not all specifically focus on pulse-level software or hardware optimization, but the role of low-level control in improving near-term QCs was an integral theme throughout the workshop. Talks featured many types of quantum technologies and information encodings, showing the potential for a wide range of improved control across the hardware-diverse quantum space. The purpose of this report is to summarize the topics discussed during the workshop and to provide a reference for the quantum community at large.

II. QUANTUM INFORMATION

Quantum information science (QIS) redefines the classical computational model through the use of a type of information that can hold many values at once. Most frequently, radix-2 or base-2 quantum computation is implemented within algorithms and quantum computer architectures. This type of quantum computation uses quantum bits, or qubits, that have two basis states represented as |0⟩ = [1 0]^T and |1⟩ = [0 1]^T. Qubits, unlike classical bits that hold a static value of either 0 or 1, demonstrate states of superposition of the form α0|0⟩ + α1|1⟩ with probability amplitudes α0, α1 ∈ ℂ such that |α0|^2 + |α1|^2 = 1. Superposition enables n qubits to represent states in a 2^n-dimensional Hilbert space, and this phenomenon, along with the ability for quantum states to interfere and become entangled, allows certain problems to be solved with significant reductions in complexity.
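As a concrete, elementary illustration of these conventions, the numpy sketch below builds the two basis vectors and a normalized superposition, and shows how the state-vector dimension grows as 2^n; the amplitudes and qubit count are arbitrary choices, not values from the workshop.

```python
# Qubit states as column vectors, a normalized superposition, and 2^n scaling.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)      # |0> = [1 0]^T
ket1 = np.array([0, 1], dtype=complex)      # |1> = [0 1]^T

a0, a1 = 1 / np.sqrt(2), 1j / np.sqrt(2)    # amplitudes with |a0|^2 + |a1|^2 = 1
psi = a0 * ket0 + a1 * ket1
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)   # normalization check

n = 3
state = psi
for _ in range(n - 1):                      # n-qubit product state via Kronecker products
    state = np.kron(state, psi)
print(state.size)                           # 2**n = 8 basis amplitudes
```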
Qubits hold large quantities of information for processing while in superposition, but upon measurement, the quantum state collapses; only classical values of either 0 or 1 are observed. Radix-d computation where d > 2 is seen occasionally in classical systems, particularly in domain-specific applications. However, the benefits of high-dimensional encoding are often outweighed by the advantages provided through the continuous scaling of bistable transistors. Similarly, higher-dimensional quantum computation is not infeasible, and the trade-offs associated with its application are actively being explored. The qudit, or quantum digit, is the multi-level quantum unit that can be used as an alternative to the base-2 qubit. A qudit is described by a d-dimensional vector and is written as |ψ⟩ = α0|0⟩ + α1|1⟩ + ... + α(d-1)|d-1⟩, where the αi values are the probability amplitudes corresponding to the basis states |i⟩. As with the qubit, the qudit probabilities must all sum to one. Under the no-cloning theorem, unknown qubit and qudit states cannot be copied without destroying superposition. In other words, any attempt to duplicate quantum information essentially acts as a measurement operation, resulting in a basis state. As a result, quantum error correction and information storage methods that preserve state cannot be implemented like their classical analogs, because classical processing often exploits the ability to efficiently copy data. The no-cloning theorem imposes a serious architectural constraint for quantum information processing. Quantum computation is implemented with operators, or gates, that cause the probability amplitudes associated with each basis state in the quantum state to evolve. These operators are represented by a unitary transformation matrix, U, of size d^n × d^n, where n is the number of radix-d units of quantum information that the operation transforms. Quantum operations are reversible since gates are unitary: UU† = U†U = I, where the symbol † indicates the conjugate transpose (which, for a unitary U, is its inverse) and I is the identity operation that preserves the quantum state. Measurement is not reversible since it causes quantum states to collapse to classical information. Quantum operations are necessary for qubits and qudits to demonstrate special quantum properties. Single-input operations can be used to create superpositions of basis states. If entangled states are desired, multi-qubit or multi-qudit gates must be available. When qubits or qudits are entangled, they act as an inseparable system where any action on one part of the system impacts the other(s). Cascades of quantum operations create quantum circuits or algorithms. The set of gates employed by the circuit depends on the abstraction level used in the description. For example, higher-level circuits that are technology-independent may use complex, multi-qubit or qudit operators, whereas lower-level circuits targeted for execution on quantum hardware will implement a set of elementary single- and two-qubit or qudit gates. The set of basis gates used for a specific QC is technology-dependent, as certain quantum platforms implement some gates more efficiently than others. A basis gate set usually consists of a set of single-qubit or qudit operations that implement arbitrary rotations within a small margin of error, along with a multi-qubit or qudit operation, such as the radix-2 CX or CZ gates, to form a universal gate set.
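The unitarity and reversibility properties stated above are easy to check numerically; the short sketch below does so for the standard Hadamard and CX matrices (a generic illustration, not code presented at the workshop).

```python
# Numerical check of unitarity (U†U = I) and reversibility for H and CX.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

for U in (H, CX):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))   # U†U = I (unitary)

# Reversibility: applying a gate and then its inverse restores the input state.
psi = np.array([1, 0, 0, 0], dtype=complex)                  # |00>
assert np.allclose(CX.conj().T @ (CX @ psi), psi)
```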
III. QUANTUM ALGORITHMS

Search, simulation, optimization, and algebraic problems have all been proposed for QCs, but the breadth of quantum applications will depend on the size and capability of available machines. The subset of computations, as well as their corresponding quantum kernels, for which QCs will promise an advantage is still being defined, and over time, it is likely that this class of problems will evolve. A key challenge for quantum researchers is developing efficient methods for categorizing problems for the best-suited hardware, either classical or quantum, and then running subroutines derived from partitioned algorithms accordingly.

A. Variational Algorithms

Variational quantum algorithms (VQA) provide an exciting opportunity for near-term machines to demonstrate quantum advantage. VQAs use a one-to-one mapping between logical qubits in algorithms and physical qubits in QCs. Physical qubits will be discussed more in Section IV-A. Proposed applications of VQAs include ground state energy estimation of molecules [4] and MAXCUT approximation [2]. VQAs are hybrid algorithms since they comprise classical and quantum subroutines. VQAs are well suited for near-term hardware since they adapt to the intrinsic noise properties of the QC they run on. The VQA circuit is parameterized by a vector of angles that correspond to gate rotations on qubits. The vector of angles is optimized by a classical optimizer over many iterations to either maximize or minimize an objective function that represents the problem that the VQA implementation hopes to solve. Two notable VQAs are the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA). The former is often used for quantum chemistry while the latter is applied to combinatorial optimization.

B. Fault Tolerant Algorithms

Fault tolerant quantum algorithms offer exciting computational gains over their classical analogs, but they require extremely low error rates. Because of this, error correcting codes (ECCs) are required that enforce a one-to-many mapping between logical qubits in an algorithm and physical qubits on a QC. This encoding allows quantum information to be shielded from errors that arise due to uncontrolled coupling with the environment or due to imperfect operations at the physical level. There are many kinds of quantum ECCs [5]. At minimum, it is estimated that millions of high-quality physical qubits will be needed to implement fault-tolerant quantum computation [6]. Once fault tolerance is reached, disruptive implementations of Shor's algorithm for quantum factoring [7], Grover's quantum database search [8], and quantum phase estimation [9] can be implemented.

A. Physical Qubits

Even if the theory is well defined, information must have a physical realization for computation to occur. Currently, a variety of quantum technologies can encode logical qubits within different media. We call a physical implementation of a radix-2 unit of information a physical qubit. Since today's quantum devices lack error correction, each logical qubit within an algorithm is implemented with one physical qubit in a machine. These devices, sometimes called Noisy Intermediate Scale Quantum (NISQ) devices, are error prone and are up to hundreds of qubits in size [10].
Some physical qubit examples include energy levels within superconducting circuits [11]-[13], ions trapped by surrounding electrodes [14]-[16], neutral atoms held with optical tweezers [17], [18], and photons travelling through free space or waveguides [19], [20]. Each of these platforms has its unique strengths, but none has become the obvious choice for the standard quantum computing platform.

B. Physical Architectural Constraints

Although many promising quantum technologies exist, they are imperfect and demonstrate attributes that prevent scaling in the near term. For example, quantum state preparation, gate evolution, and measurement are all characterized by nontrivial infidelity rates and must be implemented with a limited set of instructions that depend on the control signals that are available for a specific qubit technology. Additionally, restricted connectivity among device qubits is a key limiting factor. Qubit-qubit interaction is necessary for quantum entanglement, but unfortunately, many of today's QCs are limited in the number of qubits that are able to interact directly. Even with technologies that allow for all-to-all connectivity, such as trapped ions, QCs are limited by the total number of qubits that can be actively used during computation [21]. Careful design of qubit layout on devices as well as intelligent algorithm mappers and compilers can assist with qubit communication overheads. Qubit communication is required for quantum algorithms, but unintended communication, or crosstalk, can cause errors during computation. Crosstalk errors experienced during quantum algorithm execution often arise from simultaneously stimulating two neighboring qubits. If the qubits are strongly coupled and "feel" each other, the fidelity of the two simultaneous computations can decrease. Fortunately, qubit crosstalk errors are systematic. Once identified, sources of crosstalk can be mitigated with software [22], [23] or with improved device design [24]. Today's qubits are extremely sensitive, and as a result, crosstalk with neighboring qubits is not the only accidental signal interference that must be considered for quantum systems. Qubits also tend to couple with their surrounding environment, causing quantum information to decohere. Amplitude damping and dephasing cause qubit decoherence errors. Amplitude damping describes the loss of energy from |1⟩ to |0⟩, and T1 is used to denote the exponential decay time from an excited qubit state to the ground state. Dephasing error refers to the gradual loss of phase coherence between |0⟩ and |1⟩ and is described by the time T2. Quantum decoherence sets rigid windows for compute time on current QCs. If the qubit runtime, or period spanning the first gate up until measurement, surpasses the QC's T1 or T2 time, the final result of computation may resemble a random distribution rather than the correct output.

C. Superconducting Circuits - A Case Study on Progress

Superconducting circuits have emerged as a leading physical qubit implementation [25]. IBM has dedicated much research to the development of the transmon-based superconducting QC, and since 2016, the company has allowed their prototype devices to be used by the public via the IBM Quantum Experience [26]. Since 2016, IBM QCs have increased from 5 to 127 qubits [27]. The significance of increasing the number of qubits on-chip to a total of n is that more complex algorithms can be run that explore an exponentially larger state space, of dimension 2^n.
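As a back-of-the-envelope illustration of the decoherence windows introduced above, the sketch below compares a hypothetical circuit runtime against assumed T1 and T2 values; the numbers are illustrative only, not measured device parameters.

```python
# Exponential decay of population (T1) and coherence (T2) over a circuit's runtime.
import numpy as np

T1, T2 = 100e-6, 80e-6        # seconds (hypothetical device values)
runtime = 50e-6               # time from the first gate to measurement

population_retained = np.exp(-runtime / T1)   # amplitude-damping survival factor
coherence_retained = np.exp(-runtime / T2)    # dephasing envelope

print(f"population retained: {population_retained:.2f}, "
      f"coherence retained: {coherence_retained:.2f}")
```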
Although the debut of larger QCs with over a hundred qubits is an impressive engineering development, limited fidelity, qubit communication, and coherence times prohibit these devices from reliably executing circuits that require all of a device's physical qubits to work in synergy. Progress in quantum hardware is not measured simply by the consistency and count of device qubits. The gate error must also be considered, as it influences the depth of circuits that a QC can run. IBM QCs have seen the average two-qubit infidelity decrease in magnitude from order 10^-2 to 10^-3 over the last five years [27]. Quality of multi-qubit gates is an important metric to consider when rating a QC because it enables the entanglement required for many quantum algorithms to demonstrate advantage. The importance of device size, qubit quality, and operation success is clear; thus, improving quantum systems as a whole requires a multi-targeted approach. To assist with scaling into more robust devices, IBM developed a metric referred to as Quantum Volume (QV) that aims to benchmark quantum hardware in the near term. QV is a scalar value, where higher is better, that assesses the largest random circuit of equal width and depth m that a QC can successfully run [28]. As QV = 2^m, the metric provides perspective on system performance as a general-purpose QC. Coherence, calibration, state initialization, gate fidelity, crosstalk, and measurement quality are all captured by QV. QV is also influenced by design aspects such as chip topology, compilation tools, and basis gate set. Recently, IBM extended the QC frontier to achieve a QV of 64, and then later 128, through the combined application of improved compiler optimizations, shorter two-qubit gates, excited state promoted readout, higher-accuracy state discrimination, and open-loop correction via dynamical decoupling [13], [29]. These breakthroughs were accomplished on 27-qubit devices, demonstrating that the compute power of a quantum system is heavily influenced by compilation and low-level control; it does not depend on qubit count alone. Quantum compilation and low-level control will be discussed further in Section V. By focusing research to innovate in these domains, along with making developments in other areas of QC architecture, IBM anticipates scaling QCs to one million qubits and beyond [30].

V. PROGRAMMING FOR PULSE-LEVEL CONTROL

To target real-world quantum use cases, efficient programming languages and supporting software tool-flows are required to represent complex classical and quantum information processing in quantum algorithms and then efficiently execute them both in simulation and on real quantum devices. To build such an efficient software stack, balancing between abstraction and detail is key. On the one hand, a transparent software stack that exposes device specifics helps programmers write tailored code, but on the other hand, it dramatically increases the complexity of the tool-flow. Considering this trade-off, advancements in building an efficient software stack have been a critical driver in pushing the field of quantum computing beyond the laboratory. Managing these trade-offs is particularly important for pulse-level control because it inherently requires programmer exposure to the finer details of the device. In this Section, we discuss state-of-the-art tools and models for quantum programming and device execution.
Quantum programming languages are designed to be user-friendly, with sophisticated control flow, debugging tools, and strong abstraction barriers between target operations and the underlying quantum hardware. Operations are thought of as "black-box" in the sense that the details of the physical implementation of quantum gates are hidden from the end user. This allows for modularity, since a technology-independent quantum program written at the gate level can be compiled for execution on multiple QCs of different qubit types. The most successful languages have been implemented as Python packages, such as IBM's Qiskit [31], Google's Cirq [32], Rigetti's PyQuil [33], and Xanadu's Strawberry Fields [34]. Others are written as entirely new languages, such as Scaffold [35], which is based on LLVM infrastructure; Quipper [36], which is a functional language embedded in Haskell; and Q# [37], which is Microsoft's quantum domain-specific language. Here, we restrict ourselves to Qiskit and provide an overview of Qiskit's programming model and its support for pulse-level control. We encourage readers to consult further references on Qiskit [31] and its pulse support [38], [39]. Documentation of other programming frameworks and their device-level control support can be found on their corresponding webpages.

A. Qiskit and the Pulse Programming Model

Qiskit is an open-source quantum computing framework from IBM that provides tools for creating, manipulating, and running quantum programs on quantum systems independent of their underlying technology and architecture. The commonly used quantum programming paradigm is the circuit model. Such a model abstracts the physical execution of a quantum algorithm on a quantum system into a sequence of unitary gate operations on a set of qubits followed by qubit measurements. The gates manipulate the qubit states, while measurements project these qubits onto a particular measurement basis, with the outcomes extracted as classical bitstrings. Qiskit supports this programming model via a quantum assembly language called OpenQASM [40]. OpenQASM is simply an abstraction of the real quantum hardware execution in a form which is amenable to users familiar with classical programming models. The hardware is not capable of natively implementing or executing the quantum instructions from this model and must instead compose these operations via the control hardware. At the device level, quantum system execution is implemented by steering the qubit(s) through a desired unitary evolution, which is achieved by careful engineering of applied classical control fields. Thus, execution of a program represented via the quantum circuit model on a quantum system requires translating or compiling gate-level circuit instructions to a set of microwave control instructions, or pulses, which enact the desired state transformations or measurements. This translation is often suboptimal due to the heavy abstractions imposed across the software stack. In the circuit programming model, an atomic circuit instruction is agnostic to its pulse-level implementation on hardware, and unfortunately, the vast majority of program optimization is done at the gate level in the standard circuit model. Extracting the highest performance out of quantum hardware would require the ability to craft a pulse-level instruction schedule for the optimization of circuit partitions. Qiskit Pulse [38], [39] was developed to describe quantum programs as a sequence of pulses, scheduled in time.
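The following is a minimal, self-contained sketch of what such a hand-written pulse schedule can look like using the Qiskit Pulse builder interface (available in Qiskit releases that ship the qiskit.pulse module, which has been deprecated in more recent versions); it is not the workshop's Fig. 1 code, and the channel index, amplitudes, and durations (in units of the backend sample time dt) are arbitrary placeholders.

```python
# Minimal sketch of a hand-written pulse schedule with the Qiskit Pulse builder.
from qiskit import pulse
from qiskit.pulse.library import Gaussian

with pulse.build(name="toy_schedule") as sched:
    d0 = pulse.DriveChannel(0)                                   # drive line of qubit 0
    pulse.play(Gaussian(duration=160, amp=0.1, sigma=40), d0)    # first microwave pulse
    pulse.shift_phase(1.57, d0)                                  # virtual-Z style phase shift
    pulse.play(Gaussian(duration=160, amp=0.1, sigma=40), d0)    # second pulse
    pulse.delay(100, d0)                                         # idle the channel for 100 dt

print(sched.duration)   # total schedule length in dt samples
```

In the same spirit, a gate-level circuit can be lowered to a pulse schedule for a given backend (as in Fig. 2), and custom schedules like the one above can be supplied in place of the backend's default calibrations.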
Qiskit Pulse adds to the Qiskit compilation pipeline the capability to schedule a quantum circuit into a pulse program intermediate representation, perform analysis and optimizations, and then compile to Qiskit Pulse object code to execute on a quantum system. Programming such systems at the pulse level in a hardware-independent manner requires the user-level instruction set to be target-compiled to the underlying system hardware components, each of which may have a unique instruction set and programming model. Qiskit Pulse provides a common and reusable suite of technology-independent quantum control techniques that operate at the level of an analog stimulus, which may be remotely re-targeted to diverse quantum systems. Fig. 1 shows an example of how a pulse schedule can be explicitly coded in Qiskit. While this is a trivial example, readers can refer to [39] for an example of implementing a high-fidelity CX gate based on the calibrated Cross-Resonance (CR) pulse. The CR gate is an important microwave-activated, two-qubit entangling operation that is performed by driving one qubit, the control, at the frequency of another, the target [41]. Fig. 2 provides a code snippet for scheduling a quantum circuit into a pulse schedule using Qiskit. Qiskit Pulse users may create pulse programs to replace the default pulse programs of the native gate set provided by the backend and pass them as an argument to the scheduler.

B. Pulse Support via OpenQASM3

OpenQASM has become a de facto standard, allowing a number of independent tools to interoperate using OpenQASM as the common interchange format. While OpenQASM2 uses the circuit model described previously, some quantum paradigms require going beyond the circuit model, incorporating primitives such as teleportation and the measurement model of quantum computing. OpenQASM3 [40] describes a broader set of quantum circuits with concepts beyond simple qubits and gates. Chief among them are arbitrary classical control flow, gate modifiers (e.g., control and inverse), timing, and microcoded pulse implementations.

Figure caption [39]: Visualization of the mapping between circuit instructions (b) and the composite pulse sequences that will implement the circuit elements (c).

To use the same tools for circuit development as well as for the lower-level control sequences needed for calibration, characterization, and error mitigation, it is necessary to control timing and to connect quantum instructions with their pulse-level implementations for various qubit modalities. This is critical for working with techniques such as dynamical decoupling [42], [43] as well as for better characterization of decoherence and crosstalk. These are all sensitive to time and can be programmed via the timing features in OpenQASM3. One such potential compilation and execution flow is shown in Fig. 3. To control timing, OpenQASM3 introduces "delay" statements and "duration" types. The delay statement allows the programmer to specify the relative timing of operations. Timing instructions can use the duration type, which represents amounts of time measured in seconds. OpenQASM3 includes features called "box" and "barrier" to constrain the reordering of gates, where the timing of those gates might otherwise be changed by the compiler. Additionally, without these directives, the desired gates could be removed entirely as a valid optimization on the logical level. OpenQASM3 also allows specifying relative timing of operations rather than absolute timing.
This allows more flexible timing of operations, which can be helpful in a setting with a variety of calibrated gates with different durations. To do so, it introduces a new type, "stretch", representing a duration of time which is resolvable to a concrete duration at compile time once the exact durations of calibrated gates are known. This increases circuit portability by decoupling the circuit timing intent from the underlying pulses, which may vary from machine to machine or even from day to day. OpenQASM3 has also added support for specifying instruction calibrations in the form of "defcal" (short for "define calibration") declarations, which allow the programmer to specify a microcoded implementation of a gate, measure, or reset instruction. While only a few features of OpenQASM3 are discussed here, the overall design is intended to be a multi-level intermediate representation (IR), where the focus shifts from target-agnostic computation to a concrete implementation as more hardware specificity is introduced. An OpenQASM circuit can also mix different abstraction levels by introducing constraints where needed, while allowing the compiler to make decisions where there are no constraints.

VI. CROSS-LAYER COMPILER OPTIMIZATIONS FOR EFFICIENT PULSE CONTROL

The abstractions introduced in the layered approach of current QC stacks restrict opportunities for cross-layer optimization. For near-term quantum computing, maximal utilization of the limited quantum resources and reconciliation of quantum algorithms with noisy devices are of particular importance. Thus, a shift of the quantum computing stack towards a more vertically integrated architecture is promising. In this Section, we discuss optimizations that break the ISA abstraction by exposing pulse-level information across the compiler stack, resulting in improvements to pulse-level control as well as its more efficient implementation.

A. Optimized Compilation of Aggregated Instructions for Realistic Quantum Computers

The work [44] proposes a quantum compilation technique that optimizes for pulse control by breaking across existing abstraction barriers. Doing so reduces the execution latency while also making optimal pulse-level control practical for larger numbers of qubits. Rather than directly translating one- and two-qubit gates to control pulses, the proposed framework first aggregates these small gates into larger operations. Then the framework manipulates these aggregates in two ways. First, it finds commutative operations that allow for much more efficient schedules of control pulses. Second, it uses quantum optimal control on the aggregates to produce a set of control pulses optimized for the underlying physical architecture. In all, the technique greatly exploits pulse-level control, which improves quantum efficiency over traditional gate-based methods. At the same time, it mitigates the scalability problem of quantum optimal control methods. Since the technique is software-based, these results can see practical implementation much faster than experimental approaches for improving physical device latency. Compared to traditional gate-based methods, the technique achieves a mean execution speedup of 5x, with a maximum speedup of 10x. Two novel techniques are implemented: a) detecting diagonal unitaries and scheduling commutative instructions to reduce the critical path of computation; and b) blocking quantum circuits in a way that scales the optimal control beyond 10 qubits without compromising parallelism.
For quantum computers, achieving these speedups, and thereby reducing latency, is do-or-die: if circuits take too long, the qubits decohere by the end of the computation. By reducing latency 2-10x, this work provides an accelerated pathway to running useful quantum algorithms, without needing to wait years for hardware with 2-10x longer qubit lifetimes. An illustrative example is shown in Fig. 4 (partial caption: "... is much shorter in duration and easier to implement than that of (c)" [44]).

B. Partial Compilation of Variational Algorithms for Noisy Intermediate-Scale Quantum Machines

Each iteration of a variational algorithm depends on the results of the previous iteration; hence, the compilation must be interleaved with the computation. The noise levels in current quantum machines and the complexity of the variational use cases that are useful to solve result in a very complex parameter-tuning space for most algorithms. Thus, even small instances require thousands of iterations. Considering that circuit compilation is on the execution critical path and cannot be hidden, the compilation latency for each iteration becomes a serious limitation.

[Figure caption [45]: Hyperparameter optimization is used to precompute good hyperparameters (learning rate and decay rate) for each subcircuit. When gate angles are specified at runtime, the tuned hyperparameters quickly find optimized pulses for each subcircuit.]

To cope with this limitation on compilation latency, past work on VQAs has performed compilation under the standard gate-based model. This methodology has the advantage of extremely fast compilation: a lookup table maps each gate to a sequence of machine-level control pulses, so that compilation simply amounts to concatenating the pulses corresponding to each gate. The gate-based compilation model is known to fall short of the GRadient Ascent Pulse Engineering (GRAPE) [46], [47] compilation technique, which compiles directly to the level of the machine-level control pulses that a QC actually executes. As has been the theme of this paper, pulse-level control provides for considerably more efficient execution on the quantum machine. GRAPE has been used to achieve 2-5x pulse speedups over gate-based compilation for a range of quantum algorithms, resulting in lower decoherence and thus increased fidelity. However, GRAPE-based compilation has a substantial cost: compilation time. This can amount to several weeks or months of total compilation latency over the thousands of iterations required even for small instances, and millions of iterations will be needed for larger problems of significance. By contrast, typical pulse times for quantum circuits are on the order of microseconds, so the compilation latency imposed by GRAPE is untenable.

This proposal [45] introduces the idea of partial compilation, a strategy that approaches the pulse-duration speedup of GRAPE, but with a manageable overhead in compilation latency. This compiler capability enables a realistic architectural choice of pulse-level control for more complex near-term applications. Two variations are proposed: a) strict partial compilation, a strategy that pre-computes optimal pulses for parametrization-independent blocks of gates; and b) flexible partial compilation, a strategy that performs as well as full GRAPE, but with a dramatic speedup in compilation latency via precomputed hyperparameter optimization.
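As a rough sketch of the boundary exploited by the strict variant (our own illustration, not code from [45]), the snippet below walks a parametrized Qiskit circuit and splits it into maximal runs of parameter-free gates, whose pulses could be precomputed once, and parameter-dependent gates, whose pulses must wait for the runtime angles. Attribute access on circuit.data assumes a recent Qiskit release that yields CircuitInstruction objects.

```python
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter, ParameterExpression

theta = Parameter("theta")
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.rz(theta, 1)   # parameter-dependent: its pulse can only be finalized at runtime
qc.cx(0, 1)
qc.h(0)

def is_parametrized(instruction):
    """True if any gate parameter is still an unbound symbolic expression."""
    return any(isinstance(p, ParameterExpression) and p.parameters
               for p in instruction.operation.params)

# Greedily split the circuit into maximal blocks sharing the same "parametrized" status.
blocks, current, current_kind = [], [], None
for inst in qc.data:
    kind = is_parametrized(inst)
    if current and kind != current_kind:
        blocks.append((current_kind, current))
        current = []
    current.append(inst.operation.name)
    current_kind = kind
blocks.append((current_kind, current))

for parametrized, names in blocks:
    label = "compile at runtime" if parametrized else "precompute pulses"
    print(label, names)
```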
An illustration of the flexible partial compilation is shown in the figure captioned above.

C. Optimized Quantum Compilation for Near-Term Algorithms with Qiskit Pulse

While pulse optimization has shown promise in previous quantum optimal control (QOC) work, noisy experimental systems are not entirely ready for compilation via QOC approaches. This is because QOC requires an extremely accurate model of the machine, i.e., its Hamiltonian. Hamiltonians are difficult to measure experimentally and, moreover, they drift significantly between daily recalibrations. Experimental QOC papers incur significant pre-execution calibration overhead to address this issue. By contrast, this work [48], [49] proposes a technique that is bootstrapped purely from the daily calibrations already performed for the standard set of basis gates. The resulting pulses are used to create an augmented basis gate set. These pulses are extremely simple, which reduces control error and also preserves intuition about the underlying operations, unlike traditional QOC. This technique leads to optimized programs, with a mean 1.6x error reduction and a 2x speedup for near-term algorithms. The proposed approach can target any underlying quantum hardware. An overview is shown in Fig. 6 (caption: Like classical programs, quantum programs undergo a compilation process from a high-level programming language to assembly. However, unlike the classical setting, quantum hardware is controlled via analog pulses. This work optimizes the underlying pulse schedule by augmenting the set of basis gates to match the hardware. The compiler automatically optimizes user code, which therefore remains hardware-agnostic. [48]).

Four key optimizations are proposed, all of which are enabled by pulse-level control: (a) access to pulse-level control allows implementing any single-qubit operation directly with high fidelity, circumventing inefficiencies of standard compilation; (b) although gates give the illusion of atomicity, the true atomic units are pulses, and the proposed compiler performs new cancellation optimizations that are otherwise invisible; (c) two-qubit operations are compiled directly down to the two-qubit interactions that the hardware actually implements; (d) pulse control enables d-level qudit operations beyond the two-level qubit subspace.

VII. SIMULATION OF PULSES AND WITH PULSES

In this Section, we highlight some works on simulation. Two forms of simulation are relevant. The first is classical simulation of quantum devices to better understand device behavior. The second is simulating the quantum physical aspects of a complex system on a quantum machine itself. Both of these simulation domains intersect with pulse-level control. On the one hand, effective classical simulation of quantum devices requires accurate capture of pulse-level device phenomena. On the other hand, effectively modeling complex quantum physical systems also requires precise control of execution on the quantum device, which is enabled by pulse-level control.

A. Classical Simulation: Capturing the Time-varying Nature of Open Quantum Systems

Classical simulation of quantum devices is critical to better understand device behavior. This is especially true in the near term for noisy prototype architectures. Simulation tools accomplish a variety of tasks, including modeling noise sources, validating calculations and designs, and evaluating quantum algorithms.
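As a generic, hedged illustration of what such an open-system, pulse-level simulation involves (dedicated tools such as the QuaC simulator discussed below have their own interfaces), the following QuTiP sketch evolves a single driven qubit under a Lindblad model with amplitude damping and dephasing. The pulse envelope, relaxation times, and drive strength are invented placeholder values, not device parameters.

```python
import numpy as np
import qutip as qt

# Single qubit in the rotating frame: a shaped resonant drive plus open-system noise.
sx, sz = qt.sigmax(), qt.sigmaz()
sm = qt.destroy(2)

T1, T2 = 50e-6, 30e-6                      # illustrative relaxation/coherence times (s)
gamma1 = 1.0 / T1
gamma_phi = 1.0 / T2 - gamma1 / 2.0        # pure dephasing rate

def drive(t, args):
    """Gaussian drive envelope (rad/s) centred in a 100 ns window."""
    t0, sigma, omega_max = 50e-9, 15e-9, 2 * np.pi * 5e6
    return omega_max * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

H = [[0.5 * sx, drive]]                    # time-dependent Hamiltonian in QuTiP list form
c_ops = [np.sqrt(gamma1) * sm,             # amplitude damping
         np.sqrt(gamma_phi / 2.0) * sz]    # dephasing

tlist = np.linspace(0.0, 100e-9, 201)
result = qt.mesolve(H, qt.basis(2, 0), tlist, c_ops=c_ops, e_ops=[qt.num(2)])
print("final excited-state population:", result.expect[0][-1])
```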
Since quantum simulation tools are classical, they often require supercomputers to simulate quantum devices that are large in terms of either qubit count or the hardware components they contain. Quantum simulators imitate the operation of quantum devices and can be used to build the Hamiltonians required to derive control pulses for the qubit or qudit space. As a result, quantum simulators are an important tool in control engineering, because complex quantum operations require the generation of fine-tuned drive signals. Quantum systems are dynamic and evolve with time. As a result, simulation tools for quantum mechanical systems need to take variation into consideration, whether that variation stems from intentional gate application or from unintentional drift or environmental coupling. QuaC, or "Quantum in C," was developed to capture the time-varying nature of open quantum systems, including realistic amplitude damping and dephasing along with other correlated noise and thermal effects [50]. The simulator is not limited to a specific type of qubit or qudit technology, and although QuaC is a classical simulator, the tool can model quantum systems with enough granularity to develop the pulses required for low-level control. QuaC has been used for many applications, such as the discovery of improved error models to better understand noisy quantum systems [51] and the study of the coupling required between quantum dots and photonic cavities for entanglement transfer [52]. Additionally, QuaC was applied in the comparison of different quantum memory architectures to discover optimum features, such as encoding dimension [53].

B. Quantum Simulation: Hardware-Efficient Simulation with Qiskit Pulse

The proposal in [54] simulates a quantum topological condensed matter system on an IBM quantum processor. The simulation is done within the qubits' coherence times using pulse-level instructions provided by Qiskit Pulse. Ideally, capturing the system characteristics would require simulation by continuous time evolution of the qubits under the appropriate spin Hamiltonian, obtained from a transformation of the fermion Hamiltonian. In practice, however, this is run on quantum devices by decomposing this "analog" simulation and mapping it onto the calibrated native basis gates of a quantum computer, making it "digital". This digital implementation on noisy quantum hardware limits precision and flexibility and is thus unable to avoid the accumulation of unnecessary errors. Pulse-level control improves this by allowing for a "semi-analog" approach. The proposal shows a pulse-scaling technique that, without additional calibration, gets closer to the ideal analog simulation.

Topologically protected quantum computation works by moving non-Abelian anyons, such as Majorana zero modes (MZMs), around each other in two dimensions to form three-dimensional braids in space-time [55]. Thus far, there has been no definitive experimental evidence of braiding due to dynamical state evolution [56]. This work simulates a key part of a topological quantum computer: the dynamics of braiding of a pair of MZMs on a trijunction. Braiding is implemented by parametrically adjusting the Hamiltonian parameters; the time evolution is implemented using the Suzuki-Trotter decomposition, with each time step implemented by one- and two-qubit gates.
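The trijunction Hamiltonian and braiding protocol of [54] are not reproduced here; the sketch below only illustrates the digitization step just mentioned, one first-order Suzuki-Trotter step compiled into one- and two-qubit gates, using a simple transverse-field Ising-type model with made-up parameters rather than the MZM Hamiltonian of [54].

```python
from qiskit import QuantumCircuit

def trotter_step(qc, dt, J, h, qubits):
    """Append one first-order Trotter step of H = J * sum_i Z_i Z_{i+1} + h * sum_i X_i."""
    for a, b in zip(qubits[:-1], qubits[1:]):
        qc.rzz(2.0 * J * dt, a, b)   # exp(-i J dt ZZ); RZZ(theta) = exp(-i theta/2 Z⊗Z)
    for q in qubits:
        qc.rx(2.0 * h * dt, q)       # exp(-i h dt X);  RX(theta) = exp(-i theta/2 X)

# Total evolution time t = n_steps * dt, digitized into native one- and two-qubit gates.
qubits, dt, n_steps = [0, 1, 2], 0.1, 4
qc = QuantumCircuit(3)
for _ in range(n_steps):
    trotter_step(qc, dt, J=1.0, h=0.5, qubits=qubits)
print(qc.count_ops())
```

Each RZZ in such a decomposition is ultimately implemented from cross-resonance interactions, which is why scaling the CR pulses directly, as discussed next, shortens the overall schedule.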
Fidelity is significantly improved by using pulse-level control to scale cross-resonance (CR) gates derived from those precalibrated on the backend, thereby enabling the coupling of qubit pairs with shorter CR gate times. Specifically, using native CX gates, only 1/6th of a full braid can be performed, whereas the pulse-enabled scaled gates allow a complete braid to be performed.

A. Error-robust Single-qubit Gate Set Design

The work [57] proposes analog-layer programming on superconducting quantum hardware to implement and test a new error-robust single-qubit gate set. To build this gate set, analog pulse waveforms are numerically optimized using Boulder Opal, a custom TensorFlow-based package [58]. The optimized pulses enact gates that are more resilient against dephasing, control-amplitude fluctuations, and crosstalk. Before implementation on real hardware, these pulses go through a calibration protocol that can be fully automated [59] to account for small distortions that may occur in the control channels. The experiments are performed on IBM quantum machines and programmed via the Qiskit Pulse API, which translates the pulses designed using Boulder Opal into hardware instructions. The experiments show that when pulses are optimized to be robust against amplitude or dephasing errors, they can outperform the default calibrated Derivative Removal by Adiabatic Gate (DRAG) operations under native noise conditions. The optimized pulses incorporate both a 30 MHz-bandwidth sinc smoothing function and temporal discretization to match the hardware programming. Using optimized pulses, single-qubit coherent error rates originating from sources such as gate miscalibration or drift are reduced by up to an order of magnitude, with average device-wide performance improvements of 5x. The same optimized pulses reduce gate-error variability across qubits and over time by an order of magnitude.

B. Reinforcement Learning for Error-Robust Gate Set Design

As discussed above, it has been demonstrated that the use of robust and optimal control techniques for gate-set design can lead to dramatic improvements in hardware performance and computational capabilities. The design process is straightforward when Hamiltonian representations of the underlying system are precisely known, but it is considerably more difficult in state-of-the-art large-scale experimental systems. A combination of effects introduces challenges not faced in simpler systems, including unknown and transient Hamiltonian terms, control signal distortion, crosstalk, and temporally varying environmental noise. In all cases, complete characterization of Hamiltonian terms, their dependencies, and their dynamics becomes unwieldy as the system size grows.

The work [60] proposes a black-box approach to designing an error-robust universal quantum gate set using a deep reinforcement learning (DRL) model, as shown in the inset of Fig. 7(a). The DRL agent is tasked with learning how to execute high-fidelity constituent operations that can be used to construct a universal gate set. It iteratively constructs a model of the relevant effects of a set of available controls on quantum computer hardware, incorporating both targeted responses and undesired effects. It constructs an RX(π/2) single-qubit driven rotation and a ZX(−π/2) multi-qubit entangling operation by exploring a space of piecewise-constant (PWC) operations executed on a superconducting quantum computer programmed using Qiskit Pulse.
The constructed single-qubit gates outperform the default DRAG gates in randomized benchmarking, with up to a 3x reduction in gate duration. Furthermore, the use of DRL-defined entangling gates within quantum circuits for the SWAP operation shows 1.45x lower error than calibrated hardware defaults. These gates exhibit robustness against common system drifts, providing weeks of performance without the need for intermediate recalibration.

IX. ADVANCED ARCHITECTURES WITH CAVITY SYSTEMS

Improved characterization and understanding of quantum devices leads to improved low-level control, and this opens the door to discovering novel hardware uses that push the state of the art in quantum computing forward. As the level of precision with which we manipulate quantum devices increases, fine-tuned drive signals become available for the realization of complex Hamiltonians and gates. This Section discusses how the codesign of quantum hardware and low-level control has enabled innovative architectures based on cavities. Cavity-Transmon architectures are particularly exciting for quantum computing, as they have been proposed for use in quantum memory and error correction schemes.

A. Robust Quantum Control over Cavity-Transmon Systems

Oscillator cavities with photonic or phononic modes have long coherence times, extending to tens of milliseconds, but unfortunately, these devices are difficult to control, prohibiting the generation of the arbitrary quantum states needed for quantum computation [61]. However, if the storage cavity is coupled directly to a Transmon qubit, quantum states can be prepared in the Transmon and swapped into the storage cavity for universal control. After the quantum SWAP occurs, the oscillator holds a bosonic qubit. This idea led to the development of the superconducting cavity qubit module. Superconducting cavity qubit modules are built using superconducting circuits comprising a Transmon qubit, a storage cavity, and a readout cavity connected to a coupler port [62]-[64]. In this system, the Transmon qubit is used for quantum information processing, while the storage cavity, or oscillator, can interact with the Transmon to encode state within an oscillator mode [65]. These systems are characterized by long-lived superconducting storage-cavity coherence, a large Hilbert space for representing states, and fast, high-fidelity measurement and readout of the qubit state. All of these features make cavity qubit modules an attractive choice for storing encoded qubits. They also have potential within future quantum error correction protocols.

The work in [61], [65] presents the SNAP gate, which improves on previous efforts at Transmon-cavity interaction for quantum computation. This gate efficiently enables the construction of arbitrary unitary operations, offering a scalable path towards performing quantum computation on qubits encoded in oscillators. Combined cavity and Transmon drive pulses provide the control of the system needed to implement the SNAP gate. These pulses can be generated by gradient-based optimal control, or GRAPE [66]. The superconducting cavity module consisting of a storage cavity and a Transmon qubit is an example of an ancilla-assisted system. The main goal of the system is to take advantage of the long coherence times of oscillator modes. The challenge of limited control associated with the storage cavity is alleviated through Transmon qubit coupling. Employing a Transmon in the cavity qubit module, however, does not come without cost.
The ancilla qubit injects noise into the system due to the shorter coherence windows associated with the Transmon (tens of microseconds). Thus, there is room to further improve the quantum control of the superconducting cavity module. Quantum error correction techniques are required to protect cavity systems from ancilla-introduced errors. Fortunately, a solution based on path independence has been proposed to develop fault-tolerant quantum gates that are robust to ancilla errors [67], [68].

B. Specialized Architectures for Error Correction

Quantum control elicits the desired quantum device behavior from carefully designed classical signals. Control techniques that continue to reduce error in near-term QCs are still under development, but it must be kept in mind that the goal of quantum science is to eventually build architectures that can implement fault tolerance. As a result, new device architectures and supporting control systems should be designed concurrently. Looking forward, there are many design bottlenecks that must be addressed to scale current qubit technology. For example, superconducting qubits face issues associated with crosstalk between qubits, limited area for control wires, and inconsistencies during fabrication. These challenges must be sidestepped, and one approach is to reduce the amount of hardware scaling required, minimizing the burden on engineering and materials science efforts. New architectures based on superconducting cavity technology have been proposed that aim to implement fault tolerance while reducing the requirements on the physical hardware [69]. With reduced hardware, some of the challenges associated with scaling quantum control mechanisms will be resolved.

The approach in [69] aims to move towards scalable fault-tolerant architectures by combining compute qubits with memory qubits in a 2.5D device design. The combined compute and memory unit is constructed from a Transmon coupled with a superconducting cavity. The cavity is characterized by coherence times that are much longer than those of the Transmon qubit. This advantage allows the cavity memory unit coupled to the Transmon compute unit to be used for random access to error-corrected, logical qubits stored across different memories, creating a simple virtual and physical address scheme. Error correction is performed continuously by loading each qubit from memory. The error correction used in [69] includes two efficient adaptations of the surface code, Natural and Compact. The 2.5D architecture has many architectural advantages that provide avenues for simplified control engineering. First, the combined Transmon and cavity module requires only one set of control wires. Compared to traditional 2D architectures that require dedicated control wires and signal generators, there is potential to reduce the amount of control hardware by a factor of n, where n is the number of modes, and thus the number of qubits, that can be stored in the storage cavity. Additionally, the 2.5D architecture allows for transversal CX operations that can extend into the z-plane of the cavity to act on qubits stored within modes. This CX operation allows for lattice surgery operations with improved connectivity between logical qubits, and these operations can be executed 6x faster than standard lattice-surgery CX operations. Faster gates are a huge win for device control when qubits are constrained by a coherence time.

X. APPLYING QUDITS FOR ACCELERATION

Improved low-level control has increased access to higher-dimensional encoding.
In this Section, we describe work that employs qudits for quantum processing gains during gate operation and entangled-state preparation.

A. Toffoli Gate Depth Reduction in Fixed-Frequency Transmon Qubits

Quantum information encoding is typically binary, but unlike classical computers, most QCs have multiple accessible energy levels beyond the lowest two used to realize a qubit. Accidental use of these upper energy levels can insert errors into a computation. However, the energy levels that transform a qubit into a qudit can be used intentionally, with careful quantum control, to achieve performance gains during computation. Control pulses that access the qudit space must be carefully designed, taking hardware constraints such as the shorter decay times associated with higher energy levels into consideration [70]. Implementing high-dimensional encoding for quantum information provides the benefit of data compression, since qubit states can be represented in qudit formalism. Condensed information storage with qudits has been proposed for use in efficient applications of quantum error correction [71], communication protocols [72], and cryptography [73]. Another exciting application of high-dimensional encoding is gate-depth reduction for multi-qubit gates [74], [75]. Multi-qubit gates are required in many quantum algorithms, but they must be decomposed into smaller operations that agree with both the basis gate library and the connectivity graph of a QC. These decompositions can be costly in terms of total single- and two-qubit operations, so opportunities for gate-depth reduction can directly improve the overall circuit fidelity on today's noisy QCs.

Qiskit Pulse allows users to access higher energy levels on Transmon-based QCs [76]. Using this low-level control of the hardware, gates for qutrits, or radix-3 quantum information that uses the basis states |0⟩, |1⟩, and |2⟩, can be defined. Recent work using Qiskit Pulse experimentally demonstrated that extending into the qutrit space can provide advantages for qubit-based computing [77], [78]. In these schemes, qudits were used for intermediate computation during gate realization. The overall algorithm, however, still processes in radix-2, maintaining its "qubits in, qubits out" structure.

In [78], the authors propose a Toffoli gate decomposition that is improved by intermediate qutrits. The Toffoli gate, or controlled-controlled-X (CCX), is an important quantum gate that has been targeted by optimal control techniques, since it is widely used in reversible computation, error correction, and chemistry simulation, among other quantum applications [79]. The authors introduce a decomposition that uses single- and two-qutrit gates to achieve an order-preserving Toffoli decomposition that requires only four two-Transmon interactions. The value of being order-preserving is that the operation is friendly to near-term devices with nearest-neighbor connections. Ref. [78] describes the control pulses required to realize the qutrit operations on IBM hardware. The successful demonstration of this Toffoli decomposition provides a significant gate reduction compared to the optimal qubit-based alternative, which requires eight two-Transmon interactions. The average gate fidelity of the qutrit Toffoli execution was measured to be about 78%. The qutrit Toffoli gate was benchmarked against the optimal qubit-based decomposition, showing a mean fidelity improvement of around 3.82% and an execution time reduction of 1 µs.
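A hedged sketch of how such qutrit control can be expressed in Qiskit Pulse: the drive-channel frequency is offset by (an estimate of) the transmon anharmonicity so that a pulse addresses the |1⟩ to |2⟩ transition, and the result is registered as a custom "x12" gate. The fake backend, anharmonicity value, and pulse parameters are placeholders; a usable gate requires calibrating all of them experimentally, as in [76]-[78].

```python
from qiskit import QuantumCircuit, pulse
from qiskit.circuit import Gate
from qiskit.compiler import schedule
from qiskit.pulse.library import Gaussian
from qiskit.providers.fake_provider import FakeValencia  # import path varies across Qiskit versions

backend = FakeValencia()
anharmonicity = -330e6   # Hz; placeholder for the measured anharmonicity of qubit 0

# Drive the 1<->2 transition by offsetting the drive frequency by the anharmonicity.
with pulse.build(backend, name="x12") as x12_sched:
    d0 = pulse.drive_channel(0)
    with pulse.frequency_offset(anharmonicity, d0):
        pulse.play(Gaussian(duration=160, amp=0.1, sigma=40), d0)  # uncalibrated amplitude

# Expose the pulse as a gate so it can be mixed with ordinary qubit gates in a circuit.
x12 = Gate("x12", num_qubits=1, params=[])
qc = QuantumCircuit(1)
qc.x(0)               # |0> -> |1> using the backend's calibrated X gate
qc.append(x12, [0])   # |1> -> |2> via the custom pulse
qc.add_calibration("x12", qubits=[0], schedule=x12_sched)

qutrit_program = schedule(qc, backend)  # the scheduler picks up the attached calibration
```

Readout of the resulting |2⟩ population additionally requires a three-level discriminator, as described for the GHZ experiment below.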
Additional error-suppression techniques based on quantum control, such as dynamical decoupling, were also employed for further performance gains with the qutrit Toffoli.

B. High-dimensional GHZ Demonstration

Quantum entanglement is a hallmark that distinguishes quantum computation from classical processing techniques. It has a wide range of applications, from secure communication protocols to high-precision metrology. There have been many experimental demonstrations of entanglement, but they have primarily focused on radix-2 systems. An advantage of exploring higher-dimensional entanglement is the increased bandwidth it offers in quantum communication protocols, such as those for superdense coding and teleportation. At least two systems are needed for quantum entanglement; a GHZ state is a special type of entanglement, called multipartite entanglement, that is shared between three or more subsystems, i.e. qubits or qudits. High-dimensional entanglement has been demonstrated on numerous occasions with photonic devices [80], but with careful pulse-level control using Qiskit Pulse, a qutrit GHZ state shared between three qudits was prepared on an IBM Transmon QC [81]. This demonstration was the first of its kind on superconducting quantum technology.

In [81], the GHZ circuit for three qutrits was built using IBM calibrated gates, RY(θ) and CX, along with specially programmed and calibrated single-qutrit gates. To detect qutrit states, a custom discriminator was developed that is sensitive to the three basis states during measurement. Experiments to generate entanglement were run using IBM Quantum Cloud resources, and GHZ states were produced around 30,000 times faster than in leading photonic experiments [82]. Tomography confirmed the fidelity of entangled-state preparation on a five-qubit IBM machine to be approximately 76%. Additionally, the three-qutrit GHZ state was further verified using the entanglement witness protocol.

XI. EXPANDING THE QUANTUM HARDWARE COMMUNITY

Exciting breakthroughs have been seen in the NISQ era, but reaching the full promise of quantum-accelerated chemical simulation, data processing, and factoring will require large-scale and preferably fault-tolerant quantum computers [10]. Refinement of algorithms, devices, and processes is still necessary to unlock the computational advantages of quantum information processing. Unfortunately, it is anticipated that the demand for a quantum workforce to pursue these goals will greatly outweigh the supply in the near term, as the number of individuals pursuing advanced degrees in the traditional backgrounds of quantum physics and engineering is not increasing [83]. Thus, it is of critical importance to 1) develop a quantum-aware workforce across all age ranges and educational backgrounds in STEM and 2) improve the general population's overall understanding of quantum technology, so that QC frontiers continue to advance. As end-to-end quantum solutions mature, it is imperative to assemble a multidisciplinary network with a broad set of skills that is ready to face all challenges relating to quantum scalability.

A. Quantum Hardware Education

Reaching fault-tolerant QCs depends heavily on dedicating adequate resources to developing improved hardware and control. This involves building a quantum community with skill sets that range from quantum-aware to quantum-specialist. A first step to expanding the quantum hardware community is lowering the barriers to entry.
The quantum net could be cast over a wider audience by increasing efforts to introduce concepts related to quantum computing and technology at all education levels. Additionally, expanding quantum optimal control research will involve engaging a diverse set of willing students of all ages. STEM has largely overlooked many demographic groups, but as quantum research is still in its infancy compared to many classical fields, there is an opportunity to ensure that engineers and scientists from underrepresented groups feel a sense of belonging in the quantum community and feel that they can contribute to its growth. Expanding diversity and inclusion in quantum computing will continue to advance the field in both academic and industrial settings.

In this Section, we discuss three key strategies for allowing quantum information to reach a wider audience by tailoring instruction to different levels of education. First, if students become familiar with core quantum concepts at an early age, they are better prepared to pursue careers in quantum hardware and quantum control. Second, as the demand for a quantum workforce increases, higher education will gain a better perspective on what is required for degree plans in quantum science at both the undergraduate and graduate levels. Finally, it is possible to train seasoned scientists and engineers from classical areas of study so that their refined expertise can be applied to the research and development of quantum technology.

1) Early Education, K-12: Introducing quantum information concepts at early ages might help lower the barrier to entering a career in quantum science. The theory behind quantum computing contains significant mathematics content, but it is possible to introduce core concepts in a non-mathematical manner. For example, educators at this level could work towards demystifying the field, focusing on the "what" rather than emphasizing and theoretically proving the reasoning behind purely quantum phenomena. It is often the attempt to deliver a crash course on the complex answers to "how" quantum works that leaves many students lost. For example, rather than teaching the details of superposition and its implications for quantum states, young learners can be taught at a higher level to get comfortable with the idea that some items, like the combination of spoon and fork that makes the spork, can be two things at once [84].

Recent advances in quantum science are largely driven by improvements made to supporting technology. For example, without the parallel development of cryogenic technology, many quantum computing platforms would not support computation, due to the lack of a stable operational environment. Similar strides in refining quantum control have also allowed QCs to become more robust. Indirectly, concepts relating the fundamental nature of quantum hardware and optimal control to the development of quantum informatics can be introduced to a younger audience. Scientific discovery includes many moving parts that, at first glance, may not seem significant but are a critical part of development and experimentation. Every day, tools and methods are used to accomplish a bigger goal, just as quantum hardware and optimal control enable quantum computing, and younger scientists can understand this by thinking about all of the equipment and instructions needed to complete a simple task or experiment.
As a basic example, a cake cannot result from ingredients without the use of bowls, measuring cups, spoons, a working oven, and a detailed recipe for the baker to follow.

2) Higher Education: Select undergraduate colleges and universities, along with graduate schools, offer exposure to coursework in quantum computing and quantum technologies. These programs, however, must expand to clearly define the quantum workforce pipeline so that its output can be maximized to meet the needs of academia, government, and industry. When considering education in the space of quantum informatics, a doctorate in quantum physics immediately comes to mind. Such expert individuals, however, are in limited supply, while there is projected to be a substantial demand for a quantum-ready workforce. Additionally, not every quantum scientist needs to be well versed in algorithms and theory. The future workforce is expected to be populated by individuals with varied backgrounds and levels of training, as opportunities for employment within the quantum industry range drastically. The more technical roles include engineers (electrical, software, optical, systems, materials, etc.), experimental scientists, theorists, technicians, and application researchers. Less technical roles include those handling business development, sales, and legal strategies related to emerging technologies. Because of the wide breadth of backgrounds valuable to quantum computing, all of which are necessary to further progress on quantum devices and control, many have considered how to structure quantum-related majors and minors at the undergraduate level. These types of programs would help satisfy the demand for quantum-aware and quantum-proficient professionals in the workforce. Focusing on the more technical portion of the workforce, quantum training programs could easily take advantage of existing programs relating to engineering, computation, and science. Training in dynamics, chemistry, electromagnetism, and programming languages are just a few examples of courses that would supply individuals with the skills needed to contribute to the field of quantum technology. Along with a general education, application-specific areas of quantum science, such as devices, algorithms, software, and protocols, would be taught depending on the student's targeted career path. In-depth discussions of the development of quantum engineering programs can be found in [83], [85], [86].

3) Continuing Education as a Professional: In quantum education, much emphasis has been placed on algorithms, but this does not reflect the breadth of needs for quantum research in industry and academia. In actuality, a much more multifaceted group of researchers, composed of different backgrounds, is needed to push quantum technology toward the ultimate goal of fault tolerance. For example, many strides in quantum technology can be made by recruiting from the classical fields of science, as many classical problems appear on the roadmap to scalable quantum technology. There are many ways in which quantum devices can improve, but some key areas include the development of better fabrication techniques, more efficient refrigeration technologies, and more accurate control methods and hardware. These example design problems could be approached by engineers who are classically trained but literate in the basics of quantum science. Industry constantly evolves as new technology appears.
Thus, it is important that experienced professionals continue to broaden their horizons, develop their skills, and keep their knowledge up to date. In the space of quantum informatics, there are many resources that introduce seasoned scientists and engineers to core quantum concepts in an accessible way. There are many tools, such as the Qiskit Textbook [87], blogs [88], workshops [89], and online courses [90], that allow individuals to explore quantum computing at their own pace. When experienced professionals dive into quantum computing, the result is a greater pool of knowledge to assist with developing new, and scaling current, quantum methods and hardware.

B. Improving Quantum Technology Awareness

Many exciting developments in the field of quantum information processing are unfortunately buried in research papers, specialized conference sessions, and meetings that are inaccessible to those not immediately working on the issues at hand. So while quantum computing holds great potential, there is a risk that the emerging technology could be undervalued or, worse, misunderstood. A goal of expanding the quantum hardware community is to increase the public's awareness of reasonable expectations and timelines for QCs. By improving communication channels and trust between the QIS community and the rest of the world, more individuals may consider how quantum research may impact their lives or even how they can get involved. Initial steps, such as research groups producing news articles describing theoretical and experimental breakthroughs in layperson's terms, could make a huge difference, so that quantum progress reaches a wider audience and is understood with greater accuracy.

XII. CONCLUSION

Quantum systems evolve in a continuous manner, and their underlying low-level control signals are continuous as well. Thus, the pulse-based quantum computing approach, utilizing continuous control signals, potentially offers much richer and more flexible use than the highly popularized gate-based approach. The ability to engineer a real-time system Hamiltonian allows us to navigate the quantum system to the quantum state of interest by generating accurate control signals. While the benefits of the pulse approach are clear, there are several challenges stemming from the inherent complexities of a full-stack approach, including, but not limited to, defining the machine Hamiltonian, outlining the programming model, overheads from optimization and compilation, and both classical and quantum simulation capabilities. Overcoming these difficulties, however, could contribute significantly to achieving quantum advantage. The Chicago Quantum Exchange (CQE) Pulse-level Quantum Control Workshop was a step toward progress in low-level quantum control.
A Silurian short-great-appendage arthropod

A new arthropod, Enalikter aphson gen. et sp. nov., is described from the Silurian (Wenlock Series) Herefordshire Lagerstätte of the UK. It belongs to the Megacheira (=short-great-appendage group), which is recognized here, for the first time, in strata younger than mid-Cambrian age. Discovery of this new Silurian taxon allows us to identify a Devonian megacheiran representative, Bundenbachiellus giganteus from the Hunsrück Slate of Germany. The phylogenetic position of megacheirans is controversial: they have been interpreted as stem chelicerates, or stem euarthropods, but when Enalikter and Bundenbachiellus are added to the most comprehensive morphological database available, a stem euarthropod position is supported. Enalikter represents the only fully three-dimensionally preserved stem-group euarthropod, it falls in the sister clade to the crown-group euarthropods, and it provides new insights into the origin and early evolution of the euarthropods. Recognition of Enalikter and Bundenbachiellus as megacheirans indicates that this major arthropod group survived for nearly 100 Myr beyond the mid-Cambrian.

Some so-called short-great-appendage arthropods (=Megacheira [17]), such as leanchoiliids, are characterized by a first (great) head appendage with a short peduncle connected by a knuckle/elbow joint to a distal 'claw', the three podomeres of which each extends distally into a long flagellum [18,19]. Megacheirans have previously been recorded only from Cambrian deposits. Here, we describe a new genus and species of megacheiran with such a great-appendage morphology, Enalikter aphson, from the Silurian Herefordshire fauna, representing another major arthropod group to be recognized from this Lagerstätte. Fossils from exceptionally preserved lower Palaeozoic biotas, such as the Herefordshire example, have the greatest potential for revealing the earliest stages of arthropod diversification, the stem region of the arthropod phylogenetic tree. Phylogenetic analysis of Enalikter and the re-evaluated Devonian taxon Bundenbachiellus refines the topology of this stem region, providing new insights into immediately pre-euarthropod crown-group morphologies.

Material and methods

Specimens of Enalikter were serially ground at 20 µm intervals. Each ground surface was captured digitally and, using the SPIERS software suite, the resulting tomographic dataset was rendered and studied as a three-dimensional virtual fossil [20,21]. Interpretation of the virtual fossils on-screen was facilitated by variable magnification, unlimited rotation, virtual dissection and stereoscopic-viewing capabilities; they were also examined through hard-copy images. Analysis of the phylogenetic position of Enalikter and Bundenbachiellus was performed using a modified version (see the electronic supplementary material, note S1) of the panarthropod character matrix of Legg et al. [22], which represents the most comprehensive morphological matrix available. The Legg et al. analysis included recent re-interpretations of head appendage innervation [23,24], to which we have now also added the subsequently published conclusions of Tanaka et al. [25]. A dataset of 314 taxa and 753 characters was analysed using maximum parsimony in TNT v. 1.1 [26], which generated 36 most parsimonious trees (MPTs).
The strict consensus tree is provided (see electronic supplementary material, figure S1), and also a summary of the topologies from the phylogenetic analyses (figure 2; electronic supplementary material, figure S2). Holotype: Oxford University Museum of Natural History (OUMNH C.29631) complete outstretched specimen, length 24.4 mm from anterior margin of cephalic shield to posterior margin of telson (figure 1a-c,k,o,x). Datasets from serial-grinding tomography of the specimens are housed in the Oxford University Museum of Natural History. Other species. None. Generic and specific diagnosis. Head shield subrectangular, lacking a narrow, raised margin. Head bearing a boss-like structure ventromedially, extending anteriorly into a curved whip-like process. Trunk limb exopods with long, narrow, non-overlapping filaments lacking spines. Telson with a needle-like process medially, and two pairs of blade-like processes laterally. Description. The head shield is about 1.5 times as long as wide, subrectangular in outline and dorsoventrally shallow, partially covering the first trunk segment (figure 1e,j). Surface sculpture is apparently lacking. Appendage 1 originates at about 20% of the head length (figure 1h). It is uniramous, comprising a short peduncular section of probably two podomeres, plus three closely originating and tapering flagella (podomere numbers unresolved). One flagellum is about half as long as the other two-the ventralmost on both the best-preserved, outstretched specimens (figure 1e,h); an elbow/knuckle joint is lacking between peduncle and flagella. Appendage 2 is biramous and originates at about 55% of the head length. The limb base is very short, anteroposteriorly flattened, and bears a conspicuous spine-like endite. The endopod is finger-like, evenly tapered, and comprises at least three podomeres; the exopod is similar but much more slender (podomeres unresolved), and slightly shorter (figure 1p). Appendage 3 arises at about 85% of the head length. It is biramous and similar to appendage 2 but slightly larger, with a more robust, blunter endite; the first of the (at least four or five) podomeres of the endopod bears a median ridge (figure 1s). Eyes are absent. Ventromedially, a boss-like structure (figure 1d,h,r) extends anteriorly into a recurved, whip-like process that is subconical proximally, more slender and tapering distally, and presumably flexible, although in all three specimens it ends beneath the mouth. The more ventral part of the boss is subcylindrical and terminates in a flat, disc-like surface with a central subcircular mouth that faces posteroventrally. A short, narrow, sediment-filled space immediately inside the mouth is interpreted as a buccal cavity and/or very short oesophagus (figure 1r); it connects sharply with a broader, sediment-filled cavity, interpreted as the stomach. The latter is directed dorsally before bending posteriorly in a J-shape into the intestine/midgut (figure 1q,r,b1). The rest of the body, comprising a trunk and a telson, is about 14 times as long as wide. The trunk, which consists of 12 segments, is roughly parallel-sided, and is subcircular in transverse section in OUMNH C.29632 (figure 1m,u-w,a1), though both outstretched specimens display dorsoventral compression (see Discussion). Each tergite is dome-like (figure 1t,v) and lacks paratergal folds (tergopleurae). The sternite is a subcircular to subrectangular button-like structure, with a central node and a tuberculate marginal rim (figure 1f,i,u). 
At the anterior and posterior margin of each tergite and its associated sternite, there is a prominent, transverse, tuberculate ridge that encircles the trunk. In between these occur weaker, less persistent ridges (figure 1m,u,v,a1) representing articulations, which in places display a wedged concertina-like form, indicating segment pinching (figure 1m,u). These areas presumably represent arthrodial tissue, which enabled lateral flexure of up to at least 90° between segments (figure 1t-v). Evidence of vertical trunk flexure is limited, and is at most gently upwards posteriorly (figure 1b). The gut is preserved discontinuously along the narrow trunk, but there is no evidence of midgut glands. Transverse, soft-tissue traces are evident posteriorly, some (?tendinous bars) coinciding with segment boundaries (figure 1x).

The first trunk appendage (figure 1d,e,h) is biramous, with a short, stout, simple limb base that lacks endites. The endopod is stenopodous, similar to but larger than that of head appendage 3, with at least six or seven podomeres, the second(?) of which is raised medially. The exopod consists of a slender, tapering shaft bearing at least eight filaments (each probably from a separate podomere). The filaments are long, slender, non-overlapping and apparently suboval in section; the most proximal is the stoutest, and they become shorter distally. Trunk segments 2-12 each bear a biramous appendage pair similar to the first trunk appendage (figure 1a-c). Some endopods preserve two slender spinose/setal terminal projections, which were presumably present on all trunk limbs. The exopods are recurved dorsomedially in both outstretched specimens. They preserve from 11 to 17 filaments (see figure 1o for a typical biramous limb). These filaments are long enough to overlap at least partially those of the following appendage (figure 1b). The trunk appendages increase in size from the first to about the fifth, and are similar in length on successive segments (figure 1a,g). The endopods of the more posterior trunk appendages are slightly more slender.

The telson is ovoid in dorsal view (figure 1a,l) and about 1.3 times as long (medially) as wide; in lateral view, it is wedge-like, increasing in height posteriorly (figure 1b,n,w). Ventrally, a slightly raised, posteriorly narrowing subtriangular axial region is bounded by a very weak abaxially convex furrow (figure 1u). A narrow, prominent tuberculate ridge and parallel furrow, similar to those on the trunk segments, encircle the anterior margin of the telson. Posteriorly, the telson bears two pairs of long, blade-like processes (figure 1l,n); each originates adjacent to the midline, tapers to a point, and is laterally flattened and suboval in section. The dorsal processes project posterodorsally at about 30°. The ventral ones curve evenly dorsally through about 60°, their tips crossing immediately outside those of the dorsal pair. There is no evidence for or against mobility in any of these processes. A medial, needle-like process projects posterodorsally from between the ventral pair. The anus lies posteroventrally, as indicated by a faecal stream (figure 1w,z,a1). The telson extends parallel to the trunk (figure 1w) or may be inclined upwards at about 30° (figure 1b).

Discussion

The preservation of Enalikter (figure 1; electronic supplementary material, figure S4) in full three-dimensional form is unique for a stem euarthropod.
The trunk of OUMNH C.29632 (figure 1m,t-w,y,z,a1) is subcircular in cross-section, it bends laterally through 180°, and the exopod filaments curve around to hug the bend, in a lowered, presumed 'in repose' position (figure 1t,y). The other two specimens (figure 1a,g) have a flatter trunk section, yet retain upstanding to outstretched limbs, with straight to slightly sinuous, vertically radiating exopod filaments (figure 1k,o). Operation of the trunk and filaments by hydraulic pressure might account for such differences of inflation and disposition, though equally it might reflect the early onset of decay.

The pyritized but much larger arthropod (up to 228 mm [30]) Bundenbachiellus giganteus [29] (= Eschenbachiellus wuttkensis [31]; see [30]) from the Lower Devonian Hunsrück Slate is close in overall morphology to Enalikter. Insights from the new Silurian taxon are used here to reinterpret the younger Devonian form. Only one of the two specimens of Bundenbachiellus preserves the head ([31], text figures 11-13; electronic supplementary material, figure S3), which was previously interpreted as bearing five appendages. A comparison with the better-preserved Enalikter indicates that the structures interpreted by Briggs & Bartels ([31], p. 293) as a uniramous first (evident only on the left side) and a biramous second appendage together represent a single triflagellate limb. It is likely that the following two (more posterior) head appendages of Bundenbachiellus were biramous, although only the endopod is clearly evident (see electronic supplementary material, figure S3). Comparison with the head of Enalikter suggests that the appendage interpreted as a fifth head limb in Bundenbachiellus may belong to the trunk. There would then be 12 pairs of biramous appendages in the trunk of Bundenbachiellus (although their correspondence to tergites is uncertain), as in Enalikter, and the posteriormost spines/appendages could be interpreted as telson processes (rather than a pair of spines and a caudal furca) such as those in Enalikter. Bundenbachiellus differs from Enalikter, however, in a number of ways: the head shield was semicircular (not subrectangular), surrounded by a narrow raised margin; there is no evidence of a whip-like process ventrally on the head; the trunk exopod filaments are leaf-like (not linear) structures with fine spines on their inner margins; and there is no evidence of a medial, needle-like process on the telson. Additionally, the Devonian species is an order of magnitude larger than the Silurian one.

Enalikter and Bundenbachiellus fall in a clade of short-great-appendage (=megacheiran) arthropods [32] that includes Leanchoilia from the lower Cambrian of Chengjiang and the middle Cambrian Kaili Lagerstätte, China, and the Burgess Shale, Spence Shale and Marjum Formation of North America; Alalcomenaeus from Chengjiang and the Burgess Shale; Actaeus from the Burgess Shale; and Oestokerkus from the lower Cambrian Emu Bay Shale, Australia [32-37] (figure 2; electronic supplementary material, text S1 and figure S1). Specifically, Enalikter is recovered in a clade (Enaliktidae) together with Bundenbachiellus. More broadly, it falls within a clade that is the most derived in the euarthropod stem and sister to Euarthropoda, and which also includes the megacheirans Haikoucaris and Parapeytoia from Chengjiang, and Yohoia from the Burgess Shale.
While the tergopleurae are reduced in some stem euarthropods (for example, Haikoucaris [18]), enaliktids appear to be unique among stem euarthropods in lacking them entirely. Enaliktids are also distinguished among megacheirans by their lack (loss) of the knuckle/elbow joint between the peduncle and the podomeres of the 'claw' (flagella), a hallmark of other megacheirans [48] (although this feature is only weakly developed in at least one other purported megacheiran, Occacaris [19]).

A remarkable feature of Enalikter is the long, posteriorly recurved, whip-like anterior process on the head, which may be analogous to the spinose hypostomal structure found in parasitic eucrustaceans [49] (electronic supplementary material, text S2). The ventromedial, subventrally projecting boss-like feature to which the process is attached recalls similar structures interpreted as hypostomal homologues in the stem mandibulates Agnostus, Henningsmoenocaris and Martinssonia [50]; as in those taxa, a discrete, fully sclerotized hypostome is lacking in Enalikter. The flat, wide, circumoral disc-like surface in Enalikter bears comparison, variously, with the mouth/'Peytoia' cone of the panarthropod lobopodians Pambdelurion and Opabinia, stem euarthropod radiodontids such as Anomalocaris and Peytoia, and the great-appendage arthropod Parapeytoia [51-55] (electronic supplementary material, figure S1). In those taxa, however, the oral cone surface is rigid and plated, unlike the disc surface of Enalikter, which lacks evidence of plates and was presumably fleshy (see electronic supplementary material, text S2).

Enalikter inhabited the outer shelf/upper slope of the Anglo-Welsh Basin, where water depths might have been up to some 200 m [2]. It is likely to have been a benthic or nektobenthic scavenger/detritivore (see electronic supplementary material, text S2).

Recognition of Enalikter and Bundenbachiellus in Silurian and Devonian rocks indicates that members of the stem clade Leanchoiliida survived for nearly 100 Myr (75 and 97 Myr, respectively [56]) after the mid-Cambrian Leanchoilia? [34], the hitherto stratigraphically youngest known short-great-appendage arthropod. Enalikter and Bundenbachiellus are some 55 and 77 Myr, respectively, younger than the next youngest stem euarthropods, anomalocaridids from the lower Ordovician Fezouata Lagerstätte (ca 480 Myr BP) of Morocco [57]; and the enaliktids represent only the second record of stem euarthropods in Silurian or Devonian strata, the other being Schinderhannes from the Hunsrück Slate [47]. Data on Enalikter and Bundenbachiellus highlight the importance of rare Silurian and Devonian Konservat-Lagerstätten for revealing the much later, mid- and upper Palaeozoic history of groups such as megacheirans that have previously been considered to be restricted to the Cambrian; more accurate knowledge of their true stratigraphic range is dependent on these critical taphonomic windows. Our study also highlights the advantage of combining morphological data from different types of exceptional-preservation deposits.
Job autonomy, unscripted agility and ambidextrous innovation: analysis of Brazilian startups in times of the Covid-19 pandemic

Purpose – This study aims to analyze the influence of job autonomy and unscripted agility on ambidextrous innovation in startups in times of the Covid-19 pandemic.
Design/methodology/approach – A survey was conducted with founders and managers of Brazilian startups in the e-commerce segment, resulting in a sample of 84 startups. Symmetric (structural equation modeling) and asymmetric (fuzzy-set qualitative comparative analysis) analyses were performed. The variables external financing and institutional ties were controlled.
Findings – The symmetric findings indicate that unscripted agility is a full mediator between job autonomy and ambidextrous innovation. The asymmetric findings offer two solutions for startups to achieve high ambidextrous innovation.
Research limitations/implications – The implications of the research for the literature are discussing elements associated with ambidextrous innovation, exploring the context of innovation in startups in times of crisis, specifically the Covid-19 pandemic, and considering the role of resilience in startups.
Practical implications – The study provides informational inputs to founders and managers of startups on how job autonomy and unscripted agility can propel incremental and radical innovations.
Originality/value – This study provides new insights into and success factors for startups, based on the discussion of entrepreneurship in times of crisis, as in the case of the Covid-19 pandemic.

Introduction

Periods of crisis affect organizations and can threaten their survival (Doern, Williams & Vorley, 2019), and it is no different in the context of Covid-19 (Verma & Gustafsson, 2020), which broke out in December 2019 in China and was declared a global pandemic in March 2020 (Hua & Shaw, 2020). Even startups, which revolve around innovation and the search for quick answers to society's challenges (Spender, Corvello, Grimaldi & Rippa, 2017), may have their continuity threatened by the pandemic (Kuckertz et al., 2020). Thus, it is necessary that startups explore ways to continue fostering innovation, to ensure their survival and even to identify new opportunities (Davila, Foster & Jia, 2010). Innovation can involve small changes (incremental innovation) and/or disruptive changes (radical innovation) (Benner & Tushman, 2003). However, managing and promoting ambidextrous innovation is a challenge for organizations (Raisch & Birkinshaw, 2008; Bedford, Bisbe & Sweeney, 2019). To this end, job autonomy, understood as the freedom and latitude of individuals in the organization (Rodríguez, Bravo, Peiró & Schaufeli, 2001) to conduct and make decisions about their tasks (Cäker & Siverbo, 2018), can stimulate the creativity of startups' employees and managers (Sauermann, 2018). Organizational resilience helps organizations develop anticipation, endurance and adaptation to a given situation (Duchek, 2020). In this sense, unscripted agility is one of the dimensions of resilience that stands out in periods of crisis and has the potential to leverage innovation (Akgün & Keskin, 2014). Evidence suggests that job autonomy can instigate unscripted agility (Hackman & Oldham, 1976; Cäker & Siverbo, 2018; Gardner, 2020), and there is a possibility that it is associated with ambidextrous innovation (Akgün & Keskin, 2014; Hallak, Assaker, O'Connor & Lee, 2018; Bedford et al., 2019).
Also, there is evidence that job autonomy can exert direct and indirect effects on ambidextrous innovation (Rodríguez et al., 2001; Bysted, 2013; Cäker & Siverbo, 2018). Previous studies analyzed these constructs individually, in other contexts and types of organizations, revealing a research gap that calls for analyzing them jointly in startups, which have unique dynamics, in the period of the crisis arising from the Covid-19 pandemic. Thus, the objective of this study is to analyze the influence of job autonomy and unscripted agility on ambidextrous innovation in startups in times of the Covid-19 pandemic. This research was motivated mainly by the fact that previous studies focus on job autonomy and its relationship with innovation at the individual level (Orth & Volmer, 2017; Albort-Morant, Ariza-Montes, Leal-Rodríguez & Giorgi, 2020), which prompts examining the impact in the organizational innovation sphere, and also by the incipiency of empirical research on building resilience in startups (Haase & Eberl, 2019). It is also relevant to explore antecedents that drive and manage ambidextrous innovation (Bedford et al., 2019) in new contexts and different companies (Buccieri, Javalgi & Cavusgil, 2020). Lastly, since the e-commerce segment has grown in times of the Covid-19 pandemic (ACI Worldwide, 2020; Forbes, 2020), in part due to the imposition of social distancing (Pejić-Bach, 2021), it is relevant to consider startups in this segment. In this sense, a survey was conducted with 84 Brazilian startups in the e-commerce, retail and wholesale segments, from a population of 611 organizations registered in the StartupBase database of the Brazilian Startup Association (Abstartups - Associação Brasileira de Startups). Partial least squares structural equation modeling (PLS-SEM) was used to analyze the data, and fuzzy-set qualitative comparative analysis (fsQCA) was used complementarily. The results indicated that unscripted agility has a mediating role in the association of job autonomy and ambidextrous innovation. Furthermore, the findings point to two organizational configurations that lead startups to high ambidextrous innovation. This study provides theoretical and managerial contributions. In the theoretical context, it intends to demonstrate the possible effects of job autonomy on ambidextrous innovation, highlighting the role of unscripted agility as a possible mediating variable, which is responsible for facilitating this association. In the context of managerial practice, this study may be useful for founders and managers of startups, paving the way for reflections about freedom in conducting tasks (job autonomy) and resilient behavior (unscripted agility), which can leverage from small to more disruptive changes (ambidextrous innovation), especially in times of crisis and uncertainty.

2. Theoretical framework and hypotheses

2.1 Job autonomy and unscripted agility

Job autonomy is a characteristic of managers' work that gives them control over the performance of their activities (Rodríguez et al., 2001), in such a way that the higher the degree of autonomy, the higher the level of freedom in performing their tasks (Hackman & Oldham, 1976). Job autonomy can produce positive feelings in managers, such as trust (Cäker & Siverbo, 2018). This autonomy can be beneficial to managers and their organizations in ways that generate various effects, depending on the intensity of the resilience demonstrated (Gardner, 2020).
Resilience, in the organizational setting, can represent a meta-capability, consisting of stages of anticipating, enduring and adapting to a given situation (Duchek, 2020). One of the dimensions of organizational resilience concerns original/unscripted agility (Beuren, Santos & Bernd, 2020), henceforth referred to as unscripted agility. In turbulent contexts, this dimension plays a key role in organizations (Akgün & Keskin, 2014). In scenarios with high levels of uncertainty, organizations should encourage the development of resilience competencies (Duchek, 2020). In this perspective, the Covid-19 pandemic context requires organizations to develop resilience to ensure their business continuity (Bryce, Ring, Ashby & Wardman, 2020). It is recognized that crises directly affect organizations. However, the effect varies depending on their capabilities and resources (Doern et al., 2019). Startups are apparently exposed to failure or success faster than ever before (Salamzadeh & Dana, 2020), which may indicate distinct degrees of organizational resilience and, particularly, of unscripted agility (Akgün & Keskin, 2014). Organizational behaviors of agility and re-adaptation have the potential to promote higher levels of resilience (Kantur & Iseri-Say, 2012) and thus to instigate the rapid recognition of opportunities and new directions for conducting business (McCann, 2004). Thus, it is assumed that one of the organizational behaviors that can foster unscripted agility is job autonomy, since greater freedom and latitude for employees to perform their tasks can open new paths and instill unscripted agility to face the uncertainties stemming from the Covid-19 pandemic. Given these arguments, the following hypothesis was formulated: H1(+). Job autonomy has a direct and positive effect on unscripted agility.

Unscripted agility and ambidextrous innovation

Organizations increasingly need to implement strategies that promote ambidextrous innovation in order to provide for the management of tensions and contradictory goals (Birkinshaw & Gupta, 2013). Ambidextrous innovation simultaneously explores two opposing orientations: incremental innovation and radical innovation (Raisch & Birkinshaw, 2008). Incremental innovations involve minor readjustments and improvements, whereas radical innovations involve disruptive changes (Benner & Tushman, 2003). When organizations make modifications to their products and/or services in both incremental and radical ways, the result is ambidextrous innovation (Sarkees & Hulland, 2009). Previous studies emphasize the pertinence of analyzing ambidextrous innovation in organizations considering the dimensions of incremental and radical innovation simultaneously (Bedford et al., 2019; Monteiro & Beuren, 2020). Ambidextrous innovation is crucial for the survival of organizations in dynamic scenarios with market competition, turbulence and uncertainty (Harmancioglu & Sääksjärvi). In these scenarios, innovation is an essential element for successful entrepreneurship (Devece, Peris-Ortiz & Rueda-Armengot, 2016). Startups have innovation at their core, from new ideas, products, services and processes that can speed up/solve a diversity of problems and contexts (Spender et al., 2017). Despite being vulnerable to the impacts of the crisis caused by the Covid-19 pandemic (Kuckertz et al., 2020), they can show outstanding survivability and responsiveness.
However, organizational resilience is crucial for them to readapt and remain in the market (Doern et al., 2019). Promoting innovation in times of turbulence permeates unscripted agility (Akgün & Keskin, 2014), as in the Covid-19 pandemic crisis, given the demand for quick and agile solutions, especially in startups, which already face the continuous challenge of innovating (Kuckertz et al., 2020). Resilience is important for managing adversity, especially for promoting innovation (Hallak, Assaker, O'Connor & Lee, 2018). When facing the opportunities that may arise in periods of crisis (Doern et al., 2019), unscripted agility is a determinant of innovation (Akgün & Keskin, 2014), acting as a facilitating means for a variety of innovations to take place, especially in the technological context (Diamond, 1996). Just as other managerial characteristics are important antecedents of ambidextrous innovation (Bedford et al., 2019), unscripted agility is assumed to be pertinent as well. From this perspective, the following hypothesis is formulated: H2(+). Unscripted agility has a direct and positive effect on ambidextrous innovation.

Effects of job autonomy on ambidextrous innovation

The positive effect of job autonomy on employees' innovative behavior is supported by the literature (Orth & Volmer, 2017; Albort-Morant et al., 2020). However, the findings on the impact of job autonomy on organizational innovation or ambidextrous innovation are not conclusive. Evidence shows that job autonomy allows for flexible time and opportunities to perform the activities (Cäker & Siverbo, 2018), which may interfere with how startups' managers and employees plan, conduct and make decisions (Rodríguez et al., 2001). This freedom can stimulate the creation of new ideas and innovative solutions (Bysted, 2013). As instigating innovation is at the core of startups, not only the founders and managers tend to exhibit creative behaviors, but also the employees (Sauermann, 2018). Thus, the coexistence of an environment conducive to innovation is assumed, which can be driven by job autonomy (Bysted, 2013). It is argued that the level of freedom and latitude given to the organizational members can directly reflect on the incremental and radical innovations (ambidextrous innovation) of startups in the Covid-19 pandemic context, which demands even more from these companies (Kuckertz et al., 2020). Thus, it is assumed that: H3a(+). Job autonomy has a direct and positive effect on ambidextrous innovation. In addition to expecting a direct and positive effect of job autonomy on ambidextrous innovation, evidence suggests an indirect and positive effect through unscripted agility. Evidence suggests that job autonomy favors organizational resilience within the scope of unscripted agility (Hackman & Oldham, 1976; Cäker & Siverbo, 2018; Gardner, 2020), in addition to the likelihood of leveraging ambidextrous innovation (Diamond, 1996; Akgün & Keskin, 2014; Hallak et al., 2018; Bedford et al., 2019). Thus, it is assumed that unscripted agility can act as a mediating variable between job autonomy and ambidextrous innovation, as per the following hypothesis: H3b(+). Job autonomy exerts an indirect and positive effect on ambidextrous innovation through unscripted agility.
In line with the theoretical framework and the proposed hypotheses, the conceptual model (Figure 1) used in this research was elaborated. Two control variables (external financing and institutional ties) were added to the model.

Population and context of the research

The research population consists of startups in the e-commerce, retail and wholesale segments registered in the StartupBase. The Brazilian Association of Startups (Abstartups - Associação Brasileira de Startups) describes a startup as "a company that is born from an agile and concise business model, able to generate value for its customer by solving a real problem of the real world" (Abstartups, 2021). It also highlights that startups essentially use technologies to promote a scalable business solution. This characterization is in line with Blank & Dorf (2012), who conceptualize startups as organizations that seek a repeatable and scalable business model. The present study is based on these conceptions of startups. The Covid-19 pandemic has boosted the volume of e-commerce transactions globally (ACI Worldwide, 2020), including in the Brazilian market (Forbes, 2020). Brazil is considered a country that fosters and instigates the development of technology-based startups (Gavasa, 2018), which facilitates the emergence and market presence of this type of organization. In parallel, startups are characterized by presenting innovative solutions (Hunt, 2013) and responses to unforeseen and sudden challenges, as in the case of the restrictions and changes derived from the Covid-19 pandemic (Kuckertz et al., 2020). The innovation potential of Brazilian startups is observed in the context of e-commerce, retail and wholesale, in the face of the changes that occurred due to the Covid-19 pandemic. The research was conducted through a survey applied to founders/managers of e-commerce, retail and wholesale startups. Abstartups maintains a database, called StartupBase, in which 611 e-commerce, retail and wholesale startups that are supposedly in operation are listed (StartupBase, 2020). Thus, the 611 startups were invited to participate and make up the research population.

Research instrument and data collection

The research instrument consists of a cover letter and Informed Consent Form (ICF), latent constructs and their indicators, and demographic questions. In the cover letter, strategies were adopted to minimize common method bias (CMB), which is inherent to self-completion surveys in which the same respondents answer both the dependent and independent variables (Podsakoff & Organ, 1986). The strategies consisted of making the research objectives explicit, ensuring anonymity and emphasizing that there are no right or wrong answers (Podsakoff, MacKenzie, Lee & Podsakoff, 2003). The ICF records the respondent's agreement to participate in the survey and authorizes the use of the data for academic purposes. A five-point Likert scale was used to measure the latent constructs and indicators. In order to minimize the CMB, different weightings were used for the scale points, depending on the construct (Podsakoff et al., 2003). The construct job autonomy (three items) was adapted from Rodríguez et al. (2019), with an agreement scale (1 = strongly disagree and 5 = strongly agree). Ambidextrous innovation was measured as a second-order construct, which is composed of two first-order dimensions: incremental innovation (three items) and radical innovation (three items).
Both were adapted from Atuahene-Gima (2005), Lin, McDonough, Lin & Lin (2013), and Bedford et al. (2019), on an agreement scale (1 = strongly disagree and 5 = strongly agree). Similar to Monteiro & Beuren (2020), respondents were asked to indicate to what extent the startup introduced new incremental products/services during the pandemic period, and to what extent it introduced radically new products/services during the pandemic period compared to its main competitors. Considered as a construct that encompasses the dimensions of incremental and radical innovation, ambidextrous innovation is in accordance with previous literature (Bedford et al., 2019; Monteiro & Beuren, 2020). Two binary control variables (no/yes) were included in the research: external financing, in case the startup received or is receiving financial resources from third parties; and institutional ties, in case the startup has ties with accelerators, incubators, technology parks or similar. External financing can be a key factor for survival and growth (Davila et al., 2015), and startups with institutional ties to innovative environments tend to enjoy differentiated forms of support and conditions to develop competencies that ensure their survival (Vargas & Plonski, 2019). The survey was implemented on the QuestionPro® platform and the link to it was sent to the founders/managers of the startups through the social network LinkedIn®, from June to August 2020. A total of 84 startups (13.75% of the population) submitted their responses, making up the final sample. Regarding the profile of the respondents, 70.24% are founders (individual owner or partner), have been at the startup for an average of 2 to 3 years, are on average 37 years old, and 51.19% have a post-graduation degree (specialization, Master of Business Administration (MBA) or master's degree). Regarding the profile of the startups, 60.71% (n = 51) raised or are raising external financing, and 42.86% (n = 36) have institutional ties with an accelerator, incubator, technology park or the like.

Data analysis techniques

A post hoc test was performed in the G*Power 3 software with an average (medium) effect size (f² = 0.15), an error probability (α) of 5%, a sample of 84 respondents and 4 predictors of the dependent variable (ambidextrous innovation), obtaining satisfactory statistical power (Ringle, Silva & Bido, 2014); a computational sketch of this check is given below. The sample size is satisfactorily aligned with related studies that used this technique (Garidis & Rossmann, 2019; Theiss & Beuren, 2020). Thus, the sample size is adequate for applying PLS-SEM. PLS-SEM has been receiving attention and has been widely employed in management accounting (Nitzl, 2016) and entrepreneurship (Manley, Hair, Williams & McDowell, 2020) research. Some reasons for its acceptance in the field of business and management are its suitability for complex modeling, for relatively limited sample sizes and for samples lacking multivariate normality of the data, for analyses of relationships with a more exploratory character, and the possibility of incremental analyses (Hair, Risher, Sarstedt & Ringle, 2019). Furthermore, related studies that conducted surveys with multi-item constructs captured on Likert scales used PLS-SEM (e.g. Beuren & Santos, 2019; Kaya, Abubakar, Behravesh, Yildiz & Mert, 2020; Theiss & Beuren, 2020; Crespo, Curado, Oliveira & Muñoz-Pascual, 2021).
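As a rough cross-check of the post hoc power analysis reported above, the same inputs can be run through a short script. This is only an illustrative sketch using scipy and the G*Power convention for a fixed-model multiple regression F test (noncentrality λ = f²·N); it is not a reproduction of the authors' G*Power session.

```python
# Sketch of the post hoc power computation for a fixed-model multiple
# regression F test (R^2 deviation from zero), mirroring the inputs reported
# above; the lambda = f^2 * N convention of G*Power is assumed.
from scipy.stats import f as f_dist, ncf

f2, alpha, n, predictors = 0.15, 0.05, 84, 4
df1 = predictors                  # numerator degrees of freedom
df2 = n - predictors - 1          # denominator degrees of freedom
nc = f2 * n                       # noncentrality parameter
f_crit = f_dist.ppf(1 - alpha, df1, df2)
power = 1 - ncf.cdf(f_crit, df1, df2, nc)
print(f"Critical F = {f_crit:.3f}; post hoc power = {power:.3f}")
```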
One of the key reasons for choosing PLS-SEM is the absence of normally distributed data, which commonly occurs in social science studies (Nitzl, 2016). From the perspective of non-normal data and small sample sizes, covariance-based structural equation modeling (CB-SEM) may produce less adequate results (Reinartz, Haenlein & Henseler, 2009), while PLS-SEM is more appropriate in such cases (Sarstedt, Hair, Ringle, Thiele & Gudergan, 2016; Hair et al., 2019). The study employs mediation analysis, in which the independent variable (job autonomy) is expected to exert an indirect effect on the dependent variable (ambidextrous innovation) through a third variable (unscripted agility) (Bido & Silva, 2019). PLS-SEM was carried out in the software SmartPLS 3 (Ringle, Wende & Becker, 2015). An asymmetric analysis (fsQCA) was performed to complement the symmetric analysis (PLS-SEM). This technique involves the analysis of necessary conditions and of combinations of conditions that may be sufficient to promote the success of the dependent variable (Ragin, 2008), in this case high ambidextrous innovation. The fsQCA has been gaining prominence in entrepreneurship and innovation research (Kraus, Ribeiro-Soriano & Schüssler, 2018), also proving useful in the context of managerial accounting (Bedford, Malmi & Sandelin, 2016). This technique was carried out in the software fsQCA 3.0. The use of PLS-SEM and fsQCA provides complementary and relevant results (Kaya et al., 2020; Crespo et al., 2021).

Common method bias and non-response bias

In addition to the care taken to minimize CMB, tests were conducted to check for the possible presence of this problem (Podsakoff et al., 2003). Harman's single factor test was conducted through exploratory factor analysis (EFA), in which the first factor explained 33.80% of the total variance, below the 50% threshold (Podsakoff et al., 2003), indicating that CMB is not a problem. Since the demographic characteristics of non-respondents are not known, the first-last criterion was adopted to analyze non-response bias, under the assumption that the last respondents behave similarly to non-respondents (Mahama & Cheng, 2013). Thus, a test of means was conducted between the first 25% and last 25% of respondents, and the constructs showed no significant differences (p-values between 0.305 and 0.725), suggesting that the responses are congruent regardless of the timing of the response.

Data analysis

4.1 PLS-SEM analysis

Higher-order constructs are useful for modeling variables with higher levels of abstraction (Sarstedt, Hair, Cheah, Becker & Ringle, 2019). Therefore, in addition to the first-order constructs (job autonomy and unscripted agility), a second-order construct (ambidextrous innovation) was used, composed of two first-order constructs (incremental innovation and radical innovation). To this end, a reflective-reflective second-order construct was adopted. Table 1 presents the reliability, validity and related criteria for the first-order and second-order constructs and the control variables. The factor loadings were all above 0.7 and loaded on their respective constructs, which indicates adequacy (Hair, Hult, Ringle & Sarstedt, 2016). For internal consistency, the values of Cronbach's alpha (α), Dijkstra-Henseler's rho (ρA) and composite reliability (CR) ranged from 0.7 to 0.95, which indicates their adequacy. Convergent validity is evidenced by the average variance extracted (AVE), with values above 0.5.
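For readers less familiar with these statistics, composite reliability and AVE can be computed directly from the standardized outer loadings of a reflective construct; the loadings in the sketch below are hypothetical placeholders, not the estimates reported in Table 1.

```python
# Sketch of composite reliability (CR) and average variance extracted (AVE)
# from standardized outer loadings; the loadings are hypothetical.
import numpy as np

loadings = np.array([0.78, 0.84, 0.81])       # e.g. three indicators of one construct
error_var = 1 - loadings**2                    # indicator error variances
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())
ave = np.mean(loadings**2)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {np.sqrt(ave):.3f}")
# CR above 0.7 and AVE above 0.5 indicate adequacy; sqrt(AVE) is the diagonal
# value used in the Fornell-Larcker comparison discussed next.
```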
Discriminant validity was verified by the Fornell-Larcker criterion, with the square root of the AVE (italic values in the diagonal row) greater than the correlations of the construct with the others, and by values of the heterotrait-monotrait ratio of correlations (HTMT) below 0.85. The structural model (Table 2) used bootstrapping to calculate significance, with the 95% confidence interval (CI) computed through the bias-corrected and accelerated (BCa) bootstrap method, a two-tailed test and a resample of 5,000. Initially, attention was paid to the possible presence of multicollinearity. The internal variance inflation factor (VIF) of the constructs showed values below 3 (from 1 to 1.158), indicating the absence of this problem. Regarding the explained variance of the endogenous variables, the coefficient of determination (R²) for unscripted agility (12.2%) is close to medium (13%), while that for ambidextrous innovation (22.1%) is close to large (26%) (Cohen, 1988). In turn, the Stone-Geisser indicator (Q²) was used to assess the predictive relevance of the model, indicating small (0%) to medium (25%) predictive accuracy for unscripted agility (5.3%) and ambidextrous innovation (14.1%). The control variables show no significant association with ambidextrous innovation.

fsQCA analysis

The asymmetric fsQCA analysis requires a logical sequence of procedures. First, the fuzzification is conducted in order to convert the mean values of the constructs into fuzzy sets, with values between 0 and 1 (Ragin, 2008). To do so, three qualitative anchors should be defined: full membership (0.95), crossover point (0.50) and full non-membership (0.05) (Ragin, 2008). For the five-point Likert scale constructs (job autonomy, unscripted agility and ambidextrous innovation), the anchors were set at 4, 3 and 2 (Su, Zhang & Ma, 2019); a minimal sketch of this calibration is given below. For the binary variables (external financing and institutional ties), the values are calibrated as crisp sets, with 0 for absence and 1 for the presence of such a situation or characteristic (Ragin, 1987). Next, the necessary conditions for the success of the dependent variable, in this case ambidextrous innovation, are analyzed. A condition can be necessary (consistency above 0.90), almost always necessary (consistency between 0.80 and 0.90) or not necessary (consistency below 0.80) (Ragin, 2000). In the analysis of necessary conditions, it can be seen that unscripted agility stands out as an almost always necessary condition, as detailed in the discussion of the results. The last step of the fsQCA requires the creation of a truth table (2^k rows), where k is the number of causal conditions that can promote the success of the dependent variable (Ragin, 2008). After creating the truth table, a consistency threshold of 0.80 is set, as suggested by Ragin (2008). The parsimonious and intermediate solutions were considered together, with the conditions that are part of both (parsimonious and intermediate) being classified as core conditions, while those that appear only in the intermediate solution are classified as peripheral conditions (Fiss, 2011). Two rows (solutions) meet the consistency threshold (Table 3). The consistency of each solution and of the overall model exceeds the 0.80 threshold, demonstrating the relevance of the solutions (Ragin, 2008). Coverage refers to the proportion of cases (startups) that use a given strategy (solution) to promote high ambidextrous innovation.
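A minimal sketch of the calibration just described, together with the consistency and coverage measures referred to here and elaborated in the next paragraph, is given below; the construct means are hypothetical and fsQCA 3.0 remains the reference implementation.

```python
# Sketch of Ragin's direct calibration with the anchors reported above
# (4 = full membership, 3 = crossover, 2 = full non-membership), followed by
# the consistency and coverage measures used for the fsQCA solutions.
# All raw values are hypothetical.
import numpy as np

def calibrate(x, full_non, crossover, full):
    """Convert raw construct means into fuzzy membership scores in (0, 1)."""
    x = np.asarray(x, dtype=float)
    log_odds = np.log(0.95 / 0.05)              # log-odds tied to the 0.95 / 0.05 anchors
    up = log_odds / (full - crossover)           # slope above the crossover point
    down = log_odds / (crossover - full_non)     # slope below the crossover point
    scale = np.where(x >= crossover, up, down)
    return 1.0 / (1.0 + np.exp(-scale * (x - crossover)))

agility = calibrate([2.4, 3.2, 4.5, 3.8], full_non=2, crossover=3, full=4)
autonomy = calibrate([2.9, 3.6, 4.2, 3.1], full_non=2, crossover=3, full=4)
outcome = calibrate([2.8, 3.5, 4.6, 4.0], full_non=2, crossover=3, full=4)

# Necessity consistency of a single condition for the outcome (>0.90 = necessary):
necessity = np.minimum(agility, outcome).sum() / outcome.sum()
# Sufficiency consistency and raw coverage of a candidate solution term
# (here the fuzzy intersection, i.e. minimum, of agility and autonomy):
term = np.minimum(agility, autonomy)
consistency = np.minimum(term, outcome).sum() / term.sum()
coverage = np.minimum(term, outcome).sum() / outcome.sum()
print(f"necessity = {necessity:.3f}, consistency = {consistency:.3f}, coverage = {coverage:.3f}")
```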
In this sense, unique coverage represents the proportion of cases covered exclusively by a given solution; raw coverage indicates the proportion of cases covered together with the other possible conditions; and the overall coverage shows the total proportion of cases that use either of the two solutions (Ragin, 2008). Broadly speaking, coverage can be compared to the R² of regression techniques (Woodside, 2013). Thus, Solution 2 presents the highest raw coverage (35.3% of the cases) and is constituted by the presence of all conditions, with unscripted agility, external financing and institutional ties being core conditions, while job autonomy is a peripheral condition. This finding highlights the relevance of this set of antecedents for the promotion of high ambidextrous innovation. Solution 1 represents 4.1% of the cases and reinforces that the presence of unscripted agility is a core condition. However, in the absence of external financing and institutional ties, job autonomy must also be an absent condition in order for the cases covered by this solution to achieve success.

Discussion of the results

H1 proposes that job autonomy has a direct and positive effect on the unscripted agility of startups in times of the Covid-19 pandemic, and it had support for acceptance (β = 0.364, p < 0.05). It is inferred that the organizational behavior of instigating freedom, for the body of the organization to take control of and plan tasks, has the potential to promote higher levels of organizational resilience (Kantur & Iseri-Say, 2012), as well as boosting unscripted agility (Akgün & Keskin, 2014), a dimension of resilience (Duchek, 2020) that is essential to their survival (Bryce et al., 2020) and to the possible recognition of new opportunities (McCann, 2004). H2 proposed that unscripted agility has a direct and positive effect on ambidextrous innovation of startups in times of the Covid-19 pandemic, and it cannot be rejected (β = 0.543, p < 0.01). According to the fsQCA, unscripted agility is an almost always necessary condition (consistency = 0.886), but not sufficient on its own. From this perspective, unscripted agility is configured as a core condition in the two sufficient solutions for promoting high ambidextrous innovation. The statistical support underscores the capacity of organizational resilience, in the form of unscripted agility, to promote ambidextrous innovation (incremental and radical) in startups in the context of the crisis caused by the Covid-19 pandemic. This finding is supported by the evidence from previous studies (Diamond, 1996; Akgün & Keskin, 2014; Hallak et al., 2018) that resilient behaviors drive innovation, also in crisis contexts. The innovative profile of startups (Spender et al., 2017), combined with unscripted agility (Akgün & Keskin, 2014), in times of crisis (Doern et al., 2019), proves to be a pertinent predictor of the creation of incremental and radical innovations. H3a postulates that job autonomy has a direct and positive effect on startups' ambidextrous innovation in times of the Covid-19 pandemic, but it was not supported (β = −0.112, p > 0.10). While the literature on job autonomy highlights its impact on individual innovative behavior (Orth & Volmer, 2017; Albort-Morant et al., 2020), the impact on organizational innovation or ambidextrous innovation is not evident.
Although job autonomy instigates flexibility of time, routines and opportunities (Cäker & Siverbo, 2018), and possibly the planning and conduct of tasks (Rodríguez et al., 2001), no statistical support was obtained to argue that this autonomy is associated with ambidextrous innovation, at least not directly. H3b postulates that, through unscripted agility, job autonomy exerts an indirect and positive effect on startups' ambidextrous innovation in times of the Covid-19 pandemic; this hypothesis cannot be rejected (β = 0.198, p < 0.05). Since the direct association is not significant, there is full mediation (Bido & Silva, 2019), i.e. unscripted agility exhibits a full mediating effect between job autonomy and ambidextrous innovation. In the fsQCA, job autonomy is a condition that is always necessary, but not sufficient on its own. When its presence is combined (Solution 2) with the presence of unscripted agility, external financing and institutional ties, it promotes high ambidextrous innovation (35.3% of cases). However, when combined with the absence of external financing and institutional ties (Solution 1), it becomes a condition that needs to be absent for high ambidextrous innovation to occur (4.1% of cases). Given the larger share of cases in Solution 2, third-party financing appears to be a crucial factor, as does having institutional ties, since ambidextrous innovation can be favored by the ecosystem, the training received, the innovative environment and the availability of local resources (Vargas & Plonski, 2019). As evidenced in the third set of hypotheses, although there is no direct effect of job autonomy on ambidextrous innovation, there is an indirect effect through unscripted agility. The fsQCA shows that job autonomy, when combined with other variables, has the potential to promote high ambidextrous innovation, i.e. under the asymmetric perspective. Thus, the results indicate the respective symmetric impact of job autonomy on unscripted agility (Hackman & Oldham, 1976; Cäker & Siverbo, 2018; Gardner, 2020) and of unscripted agility on innovation (Diamond, 1996; Akgün & Keskin, 2014; Hallak et al., 2018).

Conclusions

The results of the research lead to the conclusion that job autonomy and organizational resilience, in the unscripted agility dimension, are important drivers of startups' ambidextrous innovation (incremental and radical) in times of the Covid-19 pandemic. It was observed that unscripted agility plays an important role in transmitting the effect of job autonomy onto ambidextrous innovation. Moreover, two configurations of job autonomy, unscripted agility, external financing and institutional ties promote high ambidextrous innovation. The presence of these four elements together is the main pathway the sample relies on to achieve high ambidextrous innovation. Overall, the findings point to organizational behaviors that startups in the e-commerce segment rely on during the pandemic to promote incremental and radical product and service innovation and to pursue opportunities and business continuity.

Implications

One of the implications of this study is that it contributes to research that seeks to understand the mechanisms that are positively associated with ambidextrous innovation (Bedford et al., 2019; Monteiro & Beuren, 2020) by adding job autonomy and unscripted agility into the discussion.
Another implication is that it adds new evidence on the innovation behavior of startups in pandemic times (Kuckertz et al., 2020), considering a specific segment (e-commerce). In addition to understanding the challenges faced by startups in the pandemic (Salamzadeh & Dana, 2020), the study explores elements that are associated with innovation and that may help the survival and continuity of the business. Also, it adds new evidence to entrepreneurship in times of crisis (Devece et al., 2016). This study also offers contributions to the debates on building resilience in startups (Haase & Eberl, 2019), adding evidence on how unscripted agility assumes a pertinent role in startups during times of disruption, crisis and pandemic. Its mediating role between job autonomy and innovation ambidexterity is highlighted. It also provides new evidence on the role of external financing (Davila et al., 2015) and institutional support (Vargas & Plonski, 2019) in the survival of startups, in particular for them to achieve high ambidextrous product and service innovation. Overall, the study contributes to the literature by discussing elements that are associated with ambidextrous innovation, exploring innovation in startups in times of crisis, specifically in times of the Covid-19 pandemic, and contemplating the role of resilience in startups. Regarding managerial practice, this study can provide informational inputs to startups' founders and managers. First, it demonstrates how the freedom and latitude attributed to the members of the organization (job autonomy) can stimulate creativity and the emergence of new ideas, which, despite not directly driving incremental and radical innovations (ambidextrous innovation), was shown to have an indirect effect through resilience (unscripted agility). Furthermore, it lists paths for startups to promote high ambidextrous innovation, considering the possible conditions of external financing and institutional ties with some entrepreneurial ecosystem. In summary, the study presents startups' founders and managers with ways to promote incremental and radical innovations, through autonomy and resilience, in a singular period of crisis.

Limitations and suggestions

The findings of the research have limitations inherent to the study. They refer exclusively to e-commerce, retail and wholesale startups; therefore, extrapolation to other segments should be parsimonious. The sample does not have a probabilistic nature, which does not allow extrapolation of the findings, while at the same time providing opportunities for new research in other contexts and with probabilistic samples. The low representativeness of the sample in relation to the population (13.75%) is a limiting factor, despite being consistent with response rates of similar studies (Samagaio, Crespo & Rodrigues, 2018; Balboni, Bortoluzzi, Pugliese & Tracogna, 2019). Also, the cross-sectional design limits the inferential power of the findings. Only two control variables were used; within external financing, the amount of funding was not considered; and regarding institutional ties, no distinction was made between accelerators, incubators, technology parks or the like. Although the study considered these control variables, due to the heterogeneity observed and because of the sample size, the analysis of possible unobserved heterogeneities through finite mixture partial least squares was not performed. These limitations, however, may pose new research opportunities.
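As a closing illustration of the mediation result at the heart of this study (H3b), the indirect effect can be sketched with a simple OLS-based percentile bootstrap. This is only an illustrative sketch with hypothetical file and column names; the paper's own estimates come from the PLS-SEM bootstrap (BCa, 5,000 resamples) in SmartPLS 3.

```python
# Illustrative bootstrap of the indirect effect (a*b) of job autonomy on
# ambidextrous innovation via unscripted agility, using plain OLS path models.
# The CSV file and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("construct_scores.csv")   # job_autonomy, unscripted_agility, ambidextrous_innovation
indirect = []
for _ in range(5000):                         # 5,000 resamples, matching the paper's setting
    boot = data.sample(frac=1.0, replace=True)
    a = sm.OLS(boot["unscripted_agility"],
               sm.add_constant(boot["job_autonomy"])).fit().params["job_autonomy"]
    b = sm.OLS(boot["ambidextrous_innovation"],
               sm.add_constant(boot[["job_autonomy", "unscripted_agility"]])).fit().params["unscripted_agility"]
    indirect.append(a * b)
low, high = np.percentile(indirect, [2.5, 97.5])
print(f"Indirect effect: mean = {np.mean(indirect):.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
```

A confidence interval for a*b that excludes zero, alongside a non-significant direct path, is what the full-mediation reading above corresponds to.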
The Entropy of Ricci Flows with Type-I Scalar Curvature Bounds

In this paper, we extend the theory of Ricci flows satisfying a Type-I scalar curvature condition at a finite-time singularity. In [Bam16], Bamler showed that a Type-I rescaling procedure will produce a singular shrinking gradient Ricci soliton with singularities of codimension 4. We prove that the entropy of a conjugate heat kernel based at the singular time converges to the soliton entropy of the singular soliton, and use this to characterize the singular set of the Ricci flow solution in terms of a heat kernel density function. This generalizes results previously only known under the stronger assumption of a Type-I curvature bound. We also show that in dimension 4, the singular Ricci soliton is smooth away from finitely many points, which are conical smooth orbifold singularities.

Introduction

This paper is concerned with the finite-time singularities of solutions (M^n, (g_t)_{t∈[0,T)}) to the Ricci flow ∂_t g_t = −2Rc(g_t) on a closed manifold which satisfy the Type-I scalar curvature condition (1.1). Ricci flow solutions satisfying the stronger Type-I curvature condition (1.2) have been studied in [EMT11; MM15; Nab10], where it was shown that, for any fixed q ∈ M and sequence of times t_i ր T, a subsequence of (M^n, (T − t_i)^{−1} g_{t_i}, q) converges in the pointed Cheeger-Gromov sense to a complete Riemannian manifold (M_∞, g_∞, q_∞) equipped with a function f_∞ ∈ C^∞(M_∞) which satisfies the shrinking gradient Ricci soliton (GRS) equation (recorded below). While it is unknown whether this limiting soliton is uniquely determined by the basepoint q, in [MM15] it is shown that all such solitons share a numerical invariant, called the shrinker entropy W(g_∞, f_∞) (see Section 4), which is determined by q. They also show that W(g_∞, f_∞) = 0 if and only if (M_∞, g_∞, f_∞) is the Gaussian shrinking soliton on flat Euclidean space. While interesting questions about solutions satisfying condition (1.2) remain open, this condition is often too restrictive for useful applications of Ricci flow to geometry and topology. Condition (1.1), on the other hand, is satisfied by Kähler-Ricci flow on a Fano manifold with initial metric in the canonical Kähler class by the work of Perelman (see [ST08]), and is conjectured to be satisfied for a much larger class of Kähler-Ricci flow solutions (Conjecture 7.7 of [SW12]). One of the main technical difficulties that arises when studying Ricci flows satisfying (1.1) is that one cannot expect Type-I blowups to result in a smooth limiting space. In fact, most results about Ricci flows satisfying (1.2), including [CZ11; EMT11; Nab10; MM15], depend crucially on applying the Cheeger-Gromov compactness theorems to rescaled solutions. However, in [Bam17], [Bam16], Bamler develops an extensive theory for taking weak limits of Ricci flows with uniformly bounded scalar curvature, modeled on the Cheeger-Colding-Naber-Tian theory of noncollapsed Riemannian manifolds with bounded Ricci curvature. In particular, Theorem 1.2 of [Bam16] shows that any Ricci flow satisfying (1.1) has a dilation limit which is a singular space (see Section 2), and which possesses the structure of a smooth but incomplete shrinking Ricci soliton outside of a subset of Minkowski codimension 4. The main goal of this paper is to extend Bamler's analysis of the singular limits of dilated Ricci flows satisfying (1.1), and to relate some of their properties to the original Ricci flow. The first main theorem generalizes the aforementioned results in [MM15].
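The displayed formulas for conditions (1.1) and (1.2) and for the soliton equation do not appear in this extraction. For orientation, their standard forms are recorded below; the normalization of the soliton equation matches the one quoted in Section 7, the constant C depends on the solution, and these statements should be checked against the original displays.

```latex
% Standard forms of the conditions referred to as (1.1) and (1.2) and of the
% shrinking GRS equation; the normalizations are assumptions consistent with
% the rest of the paper.
\begin{align}
  \sup_M |R(\cdot,t)| &\le \frac{C}{T-t}, \qquad t \in [0,T), \tag{1.1}\\
  \sup_M |\operatorname{Rm}(\cdot,t)| &\le \frac{C}{T-t}, \qquad t \in [0,T), \tag{1.2}\\
  \operatorname{Rc}(g_\infty) + \nabla^2 f_\infty &= \tfrac{1}{2}\, g_\infty . \notag
\end{align}
```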
In order to state this theorem, we first recall a result in [Bam16]. Assume (M^n, (g_t)_{t∈[0,T)}) is a closed, pointed solution of Ricci flow satisfying (1.1), and fix any sequence t_i ր T. According to Theorem 1.2 of [Bam16], we can pass to a subsequence to get pointed Gromov-Hausdorff convergence of (M^n, (T − t_i)^{−1} g_{t_i}, q) to a pointed singular space (X, q_∞) = (X, d, R, g_∞, q_∞). Moreover, there exists f_∞ ∈ C^∞(R), obtained as a limit of rescalings of a conjugate heat kernel based at the singular time, which satisfies the Ricci soliton equation on R. The Ricci soliton (R, g_∞, f_∞) has a well-defined entropy W(g_∞, f_∞), defined in Section 4, and there is a heat kernel density function (defined in Section 3) Θ(q) ∈ (−∞, 0] associated to the basepoint q.

Theorem 1.1. Θ(q) = W(g_∞, f_∞), with Θ(q) = 0 if and only if (R, g_∞, f_∞) is the Gaussian shrinker on flat R^n, in which case there is a neighborhood U of q in M such that sup_{U×[−2,0)} |Rm| < ∞.

In particular, all singular shrinking GRS which arise as Type-I dilation limits at a fixed point in M possess the same shrinker entropy. We recall the definition of the singular set of (M, (g_t)_{t∈[0,T)}), defined in [EMT11] as Σ := {x ∈ M : sup_{U×[0,T)} |Rm| = ∞ for every neighborhood U of x in M}. In the general Riemannian case, little is known about the regularity or structure of Σ. In the case where (M, g_0) is Kähler, it is known that Σ is actually an analytic subvariety of M, even without the Type-I assumption (see [CT15]). With a Type-I curvature assumption, it was shown in [MM15] that Σ is characterized by the density function: Σ consists exactly of the points where Θ < 0, equivalently M \ Σ = Θ^{−1}(0). We are able to generalize this result (Corollary 1.2) to the case of Type-I scalar curvature bounds. Finally, in dimension 4, we extend Bamler's results on the structure of singular shrinking GRS by giving a more precise description of the singular part of the shrinking soliton. We let (X, d, R, g_∞) be the singular space of Theorem 1.1 and the discussion preceding it.

Theorem 1.3. If n = 4, then X \ R consists of finitely many points, and X has the structure of a C^∞ Riemannian orbifold.

In particular, if X = (X, d, R, g) is the singular space in Theorem 1.3, then there exists f ∈ C^∞(R) such that (R, g, f) is an incomplete but smooth shrinking GRS, and each x ∈ X \ R admits a finite group Γ_x ⊆ O(4) acting linearly on R^4 and freely away from the origin, along with a homeomorphism ϕ_x : R^4/Γ_x ⊇ B(0^4, r_0) → B_X(x, r_0) such that, if π_x : R^4 → R^4/Γ_x is the quotient map, then ϕ_x ∘ π_x is a smooth map on B(0^4, r_0) \ {0}, and (ϕ_x ∘ π_x)^* g, (ϕ_x ∘ π_x)^* f extend smoothly to a Riemannian metric and function on B(0^4, r_0). Theorem 1.3 was proved in the setting of Fano Kähler-Ricci flow in [CW12], where it was essential that the L² norm of the curvature tensor is uniformly bounded along the flow. This fails in the general Riemannian setting (even if we assume (1.2)), so our proof must rely on different arguments. In Section 2, we collect definitions and results related to Ricci flows satisfying certain scalar curvature bounds. In Section 3, we establish Gaussian-type estimates for conjugate heat kernels based at the singular time, largely along the lines of Bamler and Zhang's heat kernel estimates. In Section 4, we define shrinker entropy, and prove an important integration-by-parts lemma for singular shrinking GRS. In Section 5, we prove the convergence of entropy and the heat kernel measure.
In Section 6, we show that the shrinker entropy of a normalized singular GRS only depends on the underlying manifold, and use this to complete the proof of Theorem 1.1 and Corollary 1.2. Finally, in Section 7, we specialize to the case of dimension 4, and prove Theorem 1.3. The author would like to thank his advisor Xiaodong Cao for his helpful feedback and support, as well as Jian Song for useful discussions. Preliminaries and Notation Given a solution (M n , (g t ) t∈[0,T ) ) of Ricci flow, we let d t : M × M → [0, ∞) be the length metric induced by g t , and define for all (x, t) ∈ M × [0, T ) and r > 0. For measurable S ⊆ M , we set |S| t := Vol gt (S). We denote the Lebesgue measure on a Riemannian manifold (M, g) as dg. If we consider a rescaled flow, for example g t = λg λ −1 t , we let d t be the length metric induced by g t , B(x, t, r) := B gt (x, r) the corresponding geodesic ball, and so on. If (X, d) is a metric space, we also set If in addition diam(X) ≤ π, then we denote by (C(X), d C(X) , c 0 ) the corresponding metric cone, with vertex c 0 . We recall Perelman's W functional, defined by for any Riemannian metric g on M , and any f ∈ C ∞ (M ),τ > 0. For any compact Riemannian manifold, Perelman's invariants Note that this definition of ν is not completely standard. We now define the class of weak limit spaces we will be considering, following the definitions in [Bam17], [Bam16]. Definition 2.1. A singular space is a tuple X = (X, d, R, g), where (X, d) is a complete, locally compact metric length space, and (R, g) is a C ∞ Riemannian manifold satisfying the following: (i) d|(R × R) is the length metric of (R, g). X is said to have singularities of codimension p 0 > 0 if, for all p ∈ (0, p 0 ), x ∈ X and r 0 > 0, there exists E p,x,r < ∞ such that |{r Rm < rs} ∩ B X (x, r) ∩ R| ≤ E p,x,r r n s p for all r ∈ (0, r 0 ), s ∈ (0, 1). X is said to have mild singularities if, for any p ∈ R, there exists a closed subset Q p ⊆ R of measure zero such that, for any x ∈ Q p , there exists a minimizing geodesic from p to x lying entirely in R. X is Y -regular at scale a if, for any x ∈ X and r ∈ (0, a) satisfying Note that conditions (iii),(iv) imply that φ i are Gromov-Hausdorff approximations. If a convergence scheme exists, we say that (M i , g i , q i ) converges to (X , q ∞ ). We will commonly rely on the the main theorem of Bamler in [Bam16], which establishes weak uniqueness properties and integral curvature bounds for Ricci flows with bounded scalar curvature. We will also need the following distortion estimate for Ricci flows with bounded scalar curvature. Estimates for Conjugate Heat Kernels Based at the Singular Time The following lemma is mostly a combination of the proofs of Theorem 1.2 in [Bam16] and Theorem 1.4 in [BZ17]. Lemma 3.1. For any A < ∞, there exists C * = C * (A, n) < ∞ such that the following holds. Let (M n , (g t ) t∈[−2,0) ) is a closed solution of Ricci flow satisfying ν[g −2 , 4] ≥ −A and |R|(x, t) ≤ A|t| −1 for all t ∈ [−2, 0). Then, for any x, y ∈ M and − 1 2 ≤ s < t < 0, we have 1 Proof. First note the reduced distance bound so by Perelman's differential Harnack inequality, K(x, t; x, s) ≥ (4π(t − s)) − n 2 e −A for all x ∈ M and −2 ≤ s < t < 0. Claim: There exists C ′ = C ′ (A, n) < ∞ such that, for − 1 2 ≤ s < 0 and t ∈ (s, 1 2 s], x, y ∈ M , we have where τ ∈ {s, t}. This will just follow from an appropriate rescaling and the corresponding Gaussian bounds for Ricci flow with bounded scalar curvature. 
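The displayed estimate in Lemma 3.1 and in the Claim above did not survive extraction. In line with the Bamler-Zhang bounds invoked in the proof, the expected shape is a two-sided Gaussian estimate of the following type; the exact constants and distance convention are assumptions and should be taken from the original.

```latex
% Expected form of the two-sided Gaussian bound in Lemma 3.1 / the Claim,
% modeled on the Bamler-Zhang heat kernel estimates; constants are schematic.
\begin{equation*}
  \frac{1}{C_*\,(t-s)^{n/2}}
  \exp\!\Big(\!-\frac{C_*\, d_\tau^2(x,y)}{t-s}\Big)
  \;\le\; K(x,t;y,s) \;\le\;
  \frac{C_*}{(t-s)^{n/2}}
  \exp\!\Big(\!-\frac{d_\tau^2(x,y)}{C_*\,(t-s)}\Big),
  \qquad \tau \in \{s,t\}.
\end{equation*}
```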
In fact, consider the rescaled flow g r := |t| −1 g t+|t|r , so the Bamler-Zhang heat kernel estimates [BZ17] give C ′ = C ′ (A, n) < ∞ such that for all x, y ∈ M and r ∈ [−1, 0) we have where τ ∈ {0, r}. Also, we know the behavior of the heat kernel under rescaling: , so the claim follows. Now consider the case where s/2 < t < 0. A special case of the reproduction formula for the heat kernel is K(x, t; y, s) = M K(x, t; z, 1 2 (t + s))K(z, 1 2 (t + s); y, s)dg 1 2 (t+s) (z) for all x, y ∈ M and −2 ≤ s < 0 and t ∈ (s/2, 0). Also, the above claim implies , so combining this with the reproduction formula gives Thus, for any x ∈ M , we have , so applying the upper bound of (3.1) with τ = 1 2 (t + s) gives Now consider the rescaled flow g r := (t − s) −1 g (t−s)r+ 1 2 (s+t) , which satisfies In terms of the unrescaled flow, this gives This and (3.1) give C * (A, n) < 0 such that, for all x, y ∈ M and − 1 2 ≤ s < t < 0, we have In particular, for any r ∈ [s, 1 2 (s + t)], we have Applying Theorem 2.4 to the rescaled flow gives for all r ∈ [s, 1 2 (s + t)]. Then the Hein-Naber concentration inequality (Theorem 1.30 of [HN14]) gives Combining this with 3.4 gives We integrate from r = s to r = 1 2 (t + s) to get . We now combine this with the on-diagonal upper bound, obtaining C = C(A, n) < ∞ such that In terms of the rescaled flow, this is The Bamler-Zhang parabolic mean value inequality for solutions to the conjugate heat equation (Lemma 4.2 in [BZ17]) applied to the rescaled flow (on the time interval [− 1 2 , 0]) gives for some C ′′ = C ′′ (A, n) < ∞, so rescaling back gives . Throughout this section, let u q,t be the conjugate heat kernel based at (q, t), and write u q,t (x, s) = (4π(t − s)) − n 2 e −fq,t(x,s) . The following lemma is essentially obtained by passing Lemma 3.1 to the limit as t ր 0, and extends Propositions 2.7 and 2.8 of [MM15]. Proof. For any closed solution of Ricci flow, a subsequence of u q i ,t i must converge in C ∞ loc (M × [−2, 0)) to some u q,0 solving the conjugate heat equation on M × [−2, 0), as shown in [MM15]. Since M is closed, M u q,0 (x, t)dg t (x) = 1 is immediate, so it suffices to prove the Gaussian bounds for any limit u q,0 . Fix α ∈ (0, 1], and let i 0 ∈ N be sufficiently large so that t i − α ≥ 1 2 α for all i ≥ i 0 . By the previously established heat kernel bounds, there exists C * = C * (A, n) < ∞ such that, for all (y, s) ∈ M × [−1, −α] and i ≥ i 0 , we have Similarly, for any (y, so the claim follows as for the lower bound. Definition 3.3. Any limit u q,0 as in the statement of Lemma 10 is called a conjugate heat kernel based at the singular time. The set of such functions u q,0 is denoted U q , as in [MM15]. Note that we are not able to establish the uniqueness of u q,0 given a point q ∈ M (in fact, this is not even known under assumption (1.2)), but the collection of such functions satisfies strong compactness properties. By the uniform Gaussian estimates and parabolic regularity on compact . By the locally uniform bounds on u q ∈ U q and their derivatives, we observe that F q is also compact in C ∞ loc . Thus Perelman's differential Harnack inequality passes to the limit to give for any f q ∈ F q , where τ := |t|. As in [MM15], we also define but the integrand is bounded on any compact subset of M × (−1, 0), by the uniform estimates for f ∈ F q . Thus θ q is locally Lipschitz. 
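Several displayed definitions used in Sections 2-5 (Perelman's W functional and its derived invariants, the pointed entropy θ_q, the density Θ, and the shrinker entropy of Definition 4.2) are missing from this extraction. As a guide to notation, their standard forms, consistent with [MM15] and with the identities quoted later in the text, are collected below; the conventions (in particular for ν, which the paper notes is not completely standard) are assumptions.

```latex
% Standard forms of the entropy quantities used in Sections 2-5, collected as
% a notation guide; conventions are assumptions where the original displays
% were lost (here \tau := |t|, f_q \in \mathcal{F}_q, and \tau = 1 for the soliton).
\begin{align*}
  \mathcal{W}(g,f,\tau) &:= \int_M \big( \tau (R + |\nabla f|^2) + f - n \big)\,(4\pi\tau)^{-n/2} e^{-f}\, dg,\\
  \mu[g,\tau] &:= \inf\Big\{ \mathcal{W}(g,f,\tau) : \int_M (4\pi\tau)^{-n/2} e^{-f}\, dg = 1 \Big\},
  \qquad \nu[g,\tau] := \inf_{0 < \tau' \le \tau} \mu[g,\tau'],\\
  \theta_q(t) &:= \mathcal{W}\big(g_t, f_q(\cdot,t), |t|\big), \qquad
  \Theta(q) := \lim_{t \nearrow 0} \theta_q(t) \in (-\infty, 0],\\
  \mathcal{W}(g_\infty, f_\infty) &:= \int_{\mathcal{R}} \big( R_{g_\infty} + |\nabla f_\infty|^2 + f_\infty - n \big)\,(4\pi)^{-n/2} e^{-f_\infty}\, dg_\infty,
  \qquad \int_{\mathcal{R}} (4\pi)^{-n/2} e^{-f_\infty}\, dg_\infty = 1 .
\end{align*}
```

After the integration by parts of Lemma 4.4, the soliton entropy can equivalently be written with integrand 2Δf_∞ − |∇f_∞|² + R_{g_∞} + f_∞ − n, which is nonpositive by the limiting form of Perelman's differential Harnack inequality; this is the expression referred to in Corollary 4.5, and the last identity above is the normalization appearing in Proposition 5.3.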
Moreover, θ q (t) ≤ 0 for all t ∈ (−1, 0) by Perelman's Harnack inequality, so we can define the heat kernel density function Fix a sequence t i ր 0, and consider the rescaled flows we may conclude that By the argument of Theorem 1.2 of [Bam16], we know that, after passing to a subsequence, (M, g i 0 , q) converge to a singular shrinking GRS, and f i (0) converge to the corresponding potential function. The only difference is that, in [Bam16], the soliton potential function is obtained from limiting a fixed conjugate heat kernel based at the singular time, whereas we are obtaining a soliton potential function from a sequence in F q . The proof is almost exactly the same, since the estimates for elements of F q are uniform, but we rewrite the relevant parts of the argument in [Bam16] here for completeness, and because we would like to pass the heat kernel bounds of Lemma 3.2 to the limit. By Bamler's compactness theorem (Theorem 2.3), we can pass to a subsequence so that (M, g i 0 , q) converge to a pointed singular space (X , q ∞ ) = (X, d, R, g, q ∞ ), with associated convergence scheme Φ i : U i → V i . For any x ∈ R, we have r := r X Rm (x) > 0, so by Proposition 4.1 in [Bam16], , 4] ≥ −A, backwards pseudolocality (Theorem 1.5 in [BZ17]) gives α = c(n, A) > 0 such that r Rm (y, s) > αr for all (y, s) ∈ B g i (Φ i (x), 0, αr) × [−α 2 r 2 , 0] for all i ∈ N. By 9 3.2, we have the uniform bounds for all k ∈ N. Along with the locally uniform upper bound for f i , we get similar bounds for f i , so that we can pass to a subsequence such that f i (0) converges in C ∞ loc to some f ∞ ∈ C ∞ (R). Suppose by way of contradiction that there exists x * ∈ R such that Then this quantity is at least 1 2 c 0 on some ball B X (x * , r) ⊆ R, so for x ∈ B g i 0 (Φ i (x * ), 1 2 r) and sufficiently large i, we have However, this along with backwards pseudolocality and parabolic regularity give , where δ > 0 is small, (depending on x * but not on i) contradicting (3.5). The estimate (3) passes to the limit to give for all x ∈ R. Integration by Parts on the Singular Ricci Soliton Now let (X, d, R, g ∞ , f ∞ ) be a singular shrinking GRS as obtained in the previous section. Lemma 4.1. There exists T = T (A, n) < ∞ such that, for all r > 0, we have |B X (q ∞ , r) ∩ R| ≤ T r n . Proof. By Proposition 6, we obtain C = C(A) < ∞ such that |B(x, t, r)| t ≤ Cr n for all r ∈ (0, 1] and (x, t) ∈ M × [−1, 0). For the rescaled flows g i t := |t i | −1 g t i +|t i |t , this means that |B g i (x, t, r)| g i t ≤ Cr n for all r ∈ (0, |t i | − 1 2 ) and (x, t) ∈ M × [−2|t i | −1 , 0). Now let (U i , V i , Φ i ) be a convergence scheme for the convergence (M, g i 0 , q) → (X , q ∞ ). Let K be any compact subset of B X (q ∞ , r) ∩ R. Then, for sufficiently large i ∈ N, we have K ⊆ U i and Since K was arbitrary, this means |B X (q ∞ , r) ∩ R| ≤ 2 n+1 Cr n . Definition 4.2. The shrinker entropy of the singular shrinking GRS (X, d, This integral is finite by the previous lemma, since |R ∞ | is bounded, f ∞ has quadratic growth, In order to prove convergence of entropy, it is essential to use Perelman's differential Harnack inequality, so that the entropy can be rewritten as the integral of a nonpositive quantity. However, it is then necessary to prove that the integration by parts formula To this end, we recall the following integration by parts formula Then R (div Z)dg = 0. The hypotheses of this lemma will follow from various identities for soliton potential functions. Lemma 4.4. R ∆f ∞ e −f∞ dg ∞ = R |∇f ∞ | 2 g∞ e −f∞ dg ∞ . 
Proof. Now fix r > 0, and let φ ∈ C ∞ (R) be a smoothing of a radial function, chosen such that φ|B(q ∞ , r) = 1, 0 ≤ φ ≤ 1, |∇φ| ≤ 4, and supp(φ) ⊆ B X (q ∞ , r + 1). We want to apply the previous lemma to Z := φ∇e −f∞ . Note that R g∞ ≥ 0 since R gt is uniformly bounded below. Also, the bound |R|(x, t) ≤ A|t| −1 passes to the limit to give R g∞ ≤ A. We know that R g∞ + |∇f ∞ | 2 − f ∞ = C for some constant C ∈ R. For the purpose of this section, we may assume that C = 0, so that f ∞ ≥ 0 and |∇f ∞ | 2 ≤ f ∞ . Also, R g∞ + ∆f ∞ = n 2 implies that |∆f ∞ | ≤ A + n 2 . The quadratic growth estimates (3.7) give for x ∈ R. Both of these terms are locally bounded on R, so we may apply the previous lemma to Z to obtain 0 = R div(φ∇e −f∞ )dg ∞ . Using the volume upper bound, we can conclude The claim then follows by taking r → ∞, and using the dominated convergence theorem. Corollary 4.5. The soliton entropy can also be expressed as which has nonpositive integrand by passing Perelman's differential Harnack inequality to the limit. for all t ∈ [−2, 0). Let (X , q ∞ ) = (X, d, R, g ∞ , q ∞ ) be a singular space obtained as a pointed limit of (M, g i 0 , q), where t i ր 0, and g i t : Proof of Entropy Convergence Proof. The first equality is by definition. Let (U i , V i , Φ i ) be the convergence scheme for (M, g i 0 , q) → (X , q ∞ ). Then, for any compact subset K ⊆ R, we have for large enough i ∈ N that Taking the infimum over all compact subsets K ⊆ R gives W(g ∞ , f ∞ ) ≥ Θ(q). Now fix ǫ > 0, and choose K ⊆ R compact such that Then, for any K ′ ⊆ R compact with K ⊆ K ′ , we have In order to show W(g ∞ , f ∞ ) ≤ Θ(q), it therefore suffices to find some K ′ ⊆ R compact (possibly depending on ǫ) with K ⊆ K ′ and Since f i (0) have uniform quadratic growth, and because |R g i 0 | ≤ A, we can find D = D(A, n) < ∞ uniform such that for all i ∈ N. Moreover, Bamler's upper bound (Theorem 2.3) on the size of the quantitative singular set gives us E = E(A, n) < ∞ such that for all s ∈ (0, 1]. We also know that the entropy integrand is bounded uniformly from below on B g i (q, 0, D), and that for i ∈ N sufficiently large we have Thus we can choose s = s(A, n, ǫ) > 0 sufficiently small so that Finally, by the definition of a convergence scheme, we can choose K ′ ⊆ R such that K ⊆ K ′ and Φ i (K) ⊇ {r g i Rm (·, 0) ≥ s} ∩ B g i (q, 0, 2D) (in fact, this will follow by taking K ′ = U i for some large i ∈ N). Recall that R + |∇f | 2 − f is some constant c ∈ R and R + ∆f = n 2 , we can write That is, for a normalized soliton, we know R + |∇f | 2 = f − W(g, f ). Proposition 5.3. The singular shrinking GRS (R, g ∞ , f ∞ ) of Theorem 5.1 is normalized: Proof. For any compact subset K ⊆ R, we have so it suffices to prove that R (4π) − n 2 e −f∞ dg ∞ ≥ 1. In fact, fix ǫ > 0. By the uniform volume upper bound (Proposition 2.4) and heat kernel lower bound (Lemma 3.2), we have some D = D(ǫ) < ∞ such that for all i ∈ N. Moreover, since e −f∞ is uniformly bounded on B g i (q, 0, 2D) (independently of i) we also have for any s > 0, when sufficiently large i. By taking s > 0 sufficiently small, the upper bound on the size of the quantitative singular set (as in the previous section) tells us that the right hand side is less than 1 2 ǫ. This means that and the claim follows. Remark 5.4. As in the Type-I curvature case [MM15], we note that Proposition 5.3 and the entropy convergence part of Theorem 5.1 also hold if the sequence f i is replaced by f (·, t i + |t i |t) for some fixed f ∈ F q . 
The equality W(g ∞ , f ∞ ) = Θ(q) could fail a priori in that setting, though equality will follow from the results of Section 6. Entropy Rigidity of the Gaussian Soliton The following result extends Lemma 2.1 of [Nab10] to the setting of singular shrinking GRS. The proof of that lemma used essentially the fact that the underlying Riemannian manifold is complete, which in our setting is only true if the singular set X \ R is empty. However, we will see that the proof can be modified to work when X \ R has singularities of codimension strictly greater than 3, using the arguments of Claim 2.32 of [CW14]. In fact, the part of the following proof establishing the flow properties of a function f ∈ C ∞ (R) with ∇ 2 f = 0 is taken from this claim, but since the setting of [CW14] is somewhat different, we rewrite the part of this claim we need. Proposition 6.1. Suppose X = (X, d, R, g, f i ), i = 1, 2 are normalized singular shrinking GRS with singularities of codimension 4. Then Proof. We can assume that f 1 − f 2 is not constant, otherwise the normalization condition gives the claim. Set f := |∇(f 1 − f 2 )| −1 (f 1 − f 2 ), so that |∇f | = 1 and ∇ 2 f = 0 on R. Let ϕ t (x) be the flow of ∇f starting at x ∈ R for t ∈ R such that this is defined. Fix p ∈ (2, 4), s ∈ (0, 1]. We first show that, for any q ∈ X, s ∈ (0, 1], and D < ∞, the set has Minkowski codimension at least p − 1. We denote by H n−1 the (n − 1)-dimensional Hausdorff measure on R, which coincides with the Lebesgue measure on any hypersurface. Because r X Rm is 1-Lipschitz, we can find h ∈ C ∞ (R) such that |∇h| ≤ 2 and 1 2 r X Rm < h < 2r X Rm on R. This along with |S 2D,2s | ≤ 4 p+10 E q,6D,p s p−1 implies the Minkowski dimension claim. In particular, the set S of x ∈ R such that ϕ t (x) does not exist for all time satisfies |S| = 0 and H n−1 (S ∩ f −1 (0)) = 0. Define N := f −1 (0) ∩ R, and let U ⊆ R × N be the (open) maximal subset where ψ(t, x) := ϕ t (x) is defined. Then R \ S ⊆ ψ(U ), since for any x ∈ R \ S, we have x)). In particular, |R \ ψ(U )| = 0. By an computation similar to that in Claim 1, and noting that now (∇f (ϕ t (x))) ⊥ = ∇f (ϕ t (x)), where ⊥ denotes the projection T ϕt(x) R → (T ϕt(x) ϕ t (N )) ⊥ , we get that ψ(U ) is a Riemannian isometry (U, dt 2 + g) → (ψ(U ), g), where g is the Riemannian metric g on N := f −1 (0) ∩ R induced from g. In particular, Claim 3: There are a i ∈ R such that for all (t, x) ∈ U . In fact, the pulled back soliton equation gives ∂ 2 t f i = 1 2 everywhere, so for (t, x) ∈ U . Moreover, for any X ∈ X(N ), we have ∇ X ∂ t = 0 , so the Riemannian product structure and the soliton equation give This means that ∇(∂ t f i − 1 2 t) = 0 on U , hence ∇( ∇f, ∇f i − 1 2 f ) = 0 on the dense open subset ψ(U ) of R. Because f, f i are smooth and R is connected, we get that ∇f, ∇f i − 1 2 f is constant on ψ(U ), hence ∂ t f i − 1 2 t is constant on U . In particular, ∂ t f i is constant on {0}×N , and the claim follows. Next, we address the rigidity statement of Theorem 1.1. |R(·, t)|(T − t) < ∞, and let (X , q ∞ ) be a singular shrinking GRS obtained as a Type-I limit. If W(g ∞ , f ∞ ) = 0, then X is the Gaussian shrinker. If this occurs, there is a neighborhood U of q in M such that Thus, after passing to a subsequence, u Φ i (x),t i converges to a conjugate heat kernel at the singular time u ∈ U q in C ∞ loc (M × (−1, 0)). 
Writing u(y, s) = (4π|s|) − n 2 e −f (y,s) , we know from previous sections that, if f i (s) := f (t i + |t i |s), then f i (0) • Φ i converges in C ∞ loc (R) to a normalized soliton function f ∞ , which must satisfy W(g ∞ , f ∞ ) = W(g ∞ , f ∞ ) = 0 by Remark 5.4 and Proposition 6.1, hence (again using Remark 5.4) by Theorem 5.1. Now let ǫ = ǫ(n, C) > 0 be the constant from Theorem 2.6. Then there exists −1, 0)), we know that for any fixed t ∈ (−1, 0), we have In particular, W(g −δ+t i , f Φ i (x),t i (−δ + t i ), δ) ≥ −ǫ for sufficiently large i ∈ N. By Theorem 2.6, so by backwards Pseudolocality, it follows that (M, |t i | −1 g t i , q) actually converges in the C ∞ Cheeger-Gromov sense to the Gaussian shrinker on flat R n . Now apply a version of Perelman's pseudolocality theorem (Theorem 1.2 of [Lu10]) to the ball B(q, t i , D |t i |), with D < ∞ and i ∈ N sufficiently large, to conclude that |Rm|(x, t) ≤ C for all x ∈ B(q, t i , |t i |), t ∈ (t i , 0), (see also Lemma 2.4 of [EMT11]). Proof of Theorem 1. By Section 3, we can pass to a further subsequence in order to assume that f i (0) converge to another smooth soliton potential function f ′ ∞ ∈ C ∞ (R), which satisfies W(g ∞ , f ′ ∞ ) = Θ(q). By Proposition 6.1, we have W(g ∞ , f ′ ∞ ) = W(g ∞ , f ∞ ). The remaining claim is Proposition 6.2. Removable Singularities In this section, we specialize to the four-dimensional case, where we first sharpen Bamler's Minkowski dimension estimates for the singular set, obtaining that the limiting singular GRS is actually smooth outside of a discrete set of points. Using this, we are able to show the singularities are conical C 0 orbifold singularities, without knowing that the global L 2 norm of the curvature tensor on the regular set is finite (this is not true in general, even if we assume (1.2) so that X is smooth). In fact, it is not clear how one can prove local L 2 estimates for the curvature on the rescaled Ricci flow. This is because the L 2 curvature bound in dimension 4 is usually proved using the Chern-Gauss-Bonnet formula, but the argument relies crucially on the (rescaled) flow having uniformly bounded diameter. Moreover, it is not clear how to effectively localize the Chern-Gauss-Bonnet formula in this situation: applying the formula on a subdomain results in boundary terms which depend on the principal curvatures of the boundary. In [HM11], this difficulty was overcome by using properties of level sets of a shrinking GRS, which suggests that it may be easier to prove the L 2 curvature estimate on the limiting singular space rather than on the Ricci flow itself. Therefore, we aim to prove a local L 2 bound for |Rm| near the singular points of X , and then apply the removable singularity techniques of [Tia90], [CS07], [Uhl82]. We achieve this by estimating separately the traceless Ricci and the Weyl parts of the curvature tensor, using ideas of Haslhoffer-Muller [HM11] and Donaldson-Sun [DS14], respectively. After overcoming this difficulty, the proof is fairly standard, and Uhlenbeck's theory [Uhl82] of removable singularities along with the ǫregularity theorem proved in [Hua20], and later [GJ17], let us conclude that in fact the singular GRS has a C ∞ orbifold structure. 
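For reference, the standard cone computation underlying Lemma 7.1 below is the following (with m := dim Σ; this is the usual warped-product identity and is recorded here as a sketch, not quoted from the source). For the cone metric $g_Z = dr^2 + r^2 g_\Sigma$ on $(0,\infty)\times\Sigma^m$,

\[
\mathrm{Rc}_{g_Z}(\partial_r,\cdot) = 0,
\qquad
\mathrm{Rc}_{g_Z}\big|_{T\Sigma} = \mathrm{Rc}_{g_\Sigma} - (m-1)\,g_\Sigma ,
\]

so vanishing of $\mathrm{Rc}_{g_Z}$ away from the vertex forces $\mathrm{Rc}_{g_\Sigma} = (m-1)\,g_\Sigma$. For $m = 3$ an Einstein metric has constant sectional curvature, so the link is a spherical space form $S^3/\Gamma$ and the cone is $C(S^3/\Gamma) = \mathbb{R}^4/\Gamma$.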
Throughout this section, we suppose that ( , we may pass to a subsequence so that (M, g i 0 , q) converges to a pointed singular space (X , q ∞ ) = (X, d, R, g, q ∞ ) with singularities of codimension 4, that is Y -regular at all scales, for some Y = Y (A) < ∞, and satisfies the shrinking soliton equation Rc + ∇ 2 f = 1 2 g on the regular part R, where f ∈ C ∞ (R) is the obtained from a sequence of rescaled conjugate heat kernels based at the singular time. We recall that |R| ≤ A on R, and that f satisfies quadratic growth estimates (3.7), which combine with the equation R + |∇f | 2 = f − W(g, f ) to give a locally uniform gradient estimate for f . Lemma 7.1. X \ R is discrete, and every tangent cone at x ∈ X \ R is isometric to R 4 /Γ for some finite subgroup Γ ≤ O(4, R) (which may depend on x and the choice of rescalings). Moreover, there exists N = N (A) > 0 such that |Γ| ≤ N . Proof. Fix x 0 ∈ X \ R, and let (Z, d Z , c Y ) be a tangent cone at x 0 , with λ i → ∞ such that (X, λ i d X , x 0 ) → (Z, d Z , c Z ) in the pointed Gromov-Hausdorff sense. By Corollary 1.5 of [Bam16], Z is a metric cone. Choose x i ∈ M such that x i → x 0 as i → ∞. By definition of the convergence (M, g i 0 , q) → (X , q ∞ ), for each i ∈ N, we can choose j = j(i) ≥ i such that (M, λ 2 i g j(i) 0 , x j(i) ) is λ i i −1 -close in the pointed Gromov-Hausdorff topology to (X, λ i d X , x 0 ). Setting g i t := λ 2 i g , 4] ≥ −A, which converges in the pointed Gromov-Hausdorff sense to (Z, d Z , c Z ). In particular, (Z, d Z , c Z ) has the structure of a singular space Z = (Z, d Z , R Z , g Z , c Z ) with mild singularities of codimension 4, such that Rc g Z = 0 on R Z . However, Z = C(Σ) is a metric cone, so the link Σ of Z is a smooth 3-dimensional Riemannian manifold. That is, Z \ {c Z } is a smooth metric cone g Z = dr 2 + r 2 g Σ for some smooth Riemannain metric g Σ on Σ. However, Rc g Z = 0 implies Rc g Σ = (n − 1)g Σ , and since dim(Σ) = 3, (Σ, g Σ ) must be a disjoint union of spherical space forms. Because R Z = Z \ {c Z } is connected, Σ must be connected. Thus, Z = C(S 3 /Γ) = R 4 /Γ for some finite subgroup Γ ≤ O(4, R). Moreover, because Z is Y -tame for some Y = Y (A) < ∞ (by Proposition 4.2 of [Bam17]), we have It remains to show that x 0 is an isolated point of X \ R. Suppose by way of contradiction that there exist y i ∈ X \ (R ∪ {x 0 }) such that y i → x 0 . Set λ i := 1/d(x 0 , y i ). By passing to a subsequence, we can assume (X, λ i d X , x 0 ) converges in the pointed Gromov-Hausdorff sense to a tangent cone (Z, d Z , c Z ) as above. For any α ∈ (0, 1), we can pass to a further subsequence so that (B X (y i , αλ −1 i ), λ i d, y i ) converges in the pointed Gromov-Hausdorff sense to (B Z (y ∞ , α), d Z , y ∞ ) for some y ∞ ∈ Z with d(c Z , y ∞ ) = 1. By possibly shrinking α > 0, we can assume that B Z (y ∞ , α) is isometric to a ball in R n . Applying Theorem 2.37 of [TZ16] (see the appendix of this paper), we have |B X (y i , αλ −1 i ) ∩ R| g ≥ (ω n − ǫ i )(αλ −1 i ) 4 for some sequence ǫ i → 0. However, the Y (A)-regularity of X then implies r Rm (y i ) > 0, contradicting y i ∈ X \ R. Theorem 7.2. X has the structure of a C ∞ Riemannian orbifold with finitely many conical orbifold singularities, such that in orbifold charts around the singular points, f extends smoothly across the singular points, and satisfies the gradient Ricci soliton equation everywhere. 
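For reference, one standard form of the four-dimensional orthogonal decomposition (7.1) used at the start of the following proof is (the constants below are our reading of the usual identities, cf. [Bes08], and are not quoted from the source), with $\odot$ the Kulkarni-Nomizu product and $n = 4$:

\[
\mathrm{Rm} \;=\; \frac{R}{2n(n-1)}\, g \odot g \;+\; \frac{1}{n-2}\Bigl(\mathrm{Rc} - \frac{R}{n}\,g\Bigr) \odot g \;+\; W ,
\]

with the three summands pointwise orthogonal, so that

\[
|\mathrm{Rm}|^2 \;=\; \frac{R^2}{6} \;+\; 2\,\Bigl|\mathrm{Rc} - \frac{R}{4}\,g\Bigr|^2 \;+\; |W|^2 \qquad (n = 4).
\]

Consequently a pointwise bound on $R$ together with local $L^2$ bounds on the traceless Ricci tensor and on $W = W^+ + W^-$ yields the local $L^2$ bound on $\mathrm{Rm}$ needed for the removable singularity argument.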
In four dimensions, the curvature tensor Rm admits the orthogonal decomposition Because |R| ≤ A on R, the first term of (7.1) is bounded pointwise. We use the method of [HM11] to estimate the second term of (7.1). Fix β ∈ (0, 1),and let φ ∈ C ∞ c (A(βr 0 , r 0 )) be a cutoff function with |∇φ| ≤ C(n)(βr 0 ) −1 on A(βr 0 , 2βr 0 ), |∇φ| ≤ C(n)r −1 0 on A( 1 2 r 0 , 2r 0 ), and φ = 1 on A(2βr 0 , 1 2 r 0 ). Then, setting E := sup B (e −f + |∇f |), we get Taking β → 0, and recalling that f is locally bounded above, we obtain B * |Rc| 2 dg < ∞. Finally, to estimate the third term of (7.1), we further decompose W into the self-dual and anti-self-dual parts W ± , and then employ the strategy of [DS14]. Let A + be the connection on the bundle Λ + of self-dual forms on R induced by the Levi-Civita connection of (R, g) (see section 6.D of [Bes08] for definitions). Then, because W + is self-dual, (Rc − R 4 g) g is anti-self-dual, and Λ + , Λ − are orthogonal, we have A(s,r) is the Chern-Simons invariant (associated to the first Pontryagin class) of a connection ∇ = d+B on a trivial bundle over a 3-manifold Σ, once we have chosen an arbitrary global section of the bundle. However, by the Cheeger-Gromov convergence of r −2 g|∂B(x * , r) to a flat bundle metric on (T R 4 )|S 3 as r → ∞, we may conclude that A + |∂B(x * , r) converge (after pulling back by diffeomorphisms ψ i : S 3 → ∂B(x * , r)) to the Euclidean connection D on the trivial bundle of self-dual 2-forms of (R 4 \ {0}) restricted to S 3 , which has Chern-Simons invariant 0. This means CS(A + , ∂B(x * , r)) = CS(ψ * i A + , S 3 ) → CS(D, S 3 ) = 0 as r ց 0 , so we can choose r ∈ (0, r 0 ] sufficiently small such that |CS(A + , ∂B(x * , s))| ≤ is continuous, we conclude that the integral is bounded uniformly (in R) for all s < r. In particular, we can take s ց 0 to obtain B * |W + | 2 dg < ∞, and the proof of B * |W − | 2 dg < ∞ is similar. We can now argue as in [CS07], [Tia90], to conclude that in fact B * has the structure of a C ∞ orbifold at x * . Note that, because we have bounds on f, |∇f | on B * ,the only difference in our setting is that we must use the ǫ-regularity theorem that is Theorem 1.1 of [GJ17] or Theorem 1.2 of [GJ17] (note that the completeness condition can be replaced with the condition that a larger geodesic ball is locally compact). Also, R + |∇f | 2 = f − W(g, f ), |R| ≤ A, and the quadratic growth of f imply that all critical points of f must occur in some bounded set. On the other hand, any orbifold point of X must be a critical point: if ϕ : R 4 /Γ ⊇ U → B X (x, δ) is an orbifold chart, and π : R 4 → R 4 /Γ is the quotient map, then ∇ (π•ϕ) * g (π • ϕ) * f must be fixed by all of Γ, so must be the zero vector. Since X \ R is discrete and bounded, it must be finite. Proof of Theorem 3. This is immediate from Theorem 7.2. Appendix In this section, we give further details for the claim in Lemma 7.1 that |B X (y i , αλ −1 i ) ∩ R| g ≥ (ω n − ǫ i )(αλ −1 i ) 4 for some sequence ǫ i → 0. The main idea is to use the fact that, for i ∈ N sufficiently large, (B X (y i , αλ −1 i ), λ i d, y i ) is arbitrarily close to a Euclidean ball in the pointed Gromov-Hausdroff sense, and to then appeal to a volume convergence theorem for Riemannian manifolds with integral Ricci lower bounds. 
Observe that, by Lemma 6.1 of [BZ17], we have |Rc| g i (·, 0) ≤ C(A)(r g i Rm ) −1 (·, 0), so combining this with the integral estimate for the curvature scale (Theorem 1.7 of [Bam16]) gives Note that we actually have a local L p bound for Rc for any p < 4, and the following arguments will work for any p ∈ (2, 4), but we choose p = 3 for convenience. Let H 4 d = H 4 be the 4-dimensional Hausdorff measure on the metric space (X, d). Because H 4 (X \ R) = 0, and because H 4 agrees with the Riemannian volume measure on any 4-dimensional Riemannian manifold (in particular, on R), we have H 4 (S) = |S ∩ R| for any subset S ⊆ X. Thus H 4 λ i d (B X (y i , αλ −1 i )) = λ 4 i |B X (y i , αλ −1 i ) ∩ R| g , H 4 d Z (B Z (y ∞ , α)) = ω n α n . We now restate the modification of Theorem 2.37 of [TZ16] that we will be using. Denote by |Rc − |(x) the absolute value of the smallest negative eigenvalue of Rc(x) (if Rc(x) ≥ 0, then |Rc − | = 0). Lemma 8.1. For any κ > 0, Λ < ∞, n ∈ N, and p > n, there exist r 0 = r 0 (n, p, κ, Λ, ǫ) > 0 such that the following holds. Suppose (M n i , g i , x i ) is a sequence of complete Riemannian manifolds satisfying: (i) B(x,1) |Rc − | p dg ≤ Λ for all x ∈ M i , (ii) |B(x, r)| ≥ κr n for all r ∈ (0, 1], x ∈ M . Assume that (M n i , g i , x i ) converge in the pointed Gromov-Hausdorff sense to the complete metric length space (X, d, p). Then, for any r ∈ (0, r 0 ], we have The difference between this lemma and Theorem 2.37 of [TZ16] is that we only require a local integral Ricci bound (i) rather than the global bound M |Rc − | p dg ≤ Λ assumed in [TZ16]. However, in [TZ16], the objects under consideration are time slices of a normalized Ricci flow on a Fano threefold, which have uniformly bounded diameter. The proof of Theorem 2.37 is stated to be a modification of volume convergence for noncollapsed Riemannian manifolds with Ricci curvature bounded below, given in [Col97;CC96]. A careful examination of the proof shows that only the conditions (i), (ii) are used, essentially due to the fact that the involved arguments are all local. The following elementary lemma is essentially a consequence of Lemma 22 and a diagonal argument. Lemma 8.2. Let (X k , d k , p k ) be a sequence of limit spaces as in Lemma 22, converging in the pointed Gromov-Hausdorff sense to (X,d,p), and suppose r ≤ r 0 (n, p, κ, Λ). Then H n (B(p k , r)) → H n (B(p, r)). Proof. For each k ∈ N, let (M k,i , g k,i , x k,i ) be a sequence of complete, pointed Riemannian manifolds satisfying (i), (ii) of Lemma 8.1, which converge in the pointed Gromov-Hausdorff sense to (X k , d k , x k ) as i → ∞. Also let (M i , g i , x i ) be a sequence of such manifolds converging to (X, d, p) in the pointed Gromov-Hausdorff sense. By Lemma 8.1, we know that lim i→∞ |B(x k,i , r)| g k,i = H n (B(x k , r)) for each k ∈ N. Thus, for each k ∈ N, we can find i(k) ∈ N such that |B(x k,i(k) , r)| g k,i(k) − H n (B(x k , r)) ≤ 2 −k , d GH (B(x k,i(k) , α k r), d g k,i(k) , x k,i(k) ), (B(x k , α k r), d k , x k ) ≤ r2 −k , where α k → ∞. In particular, (M k,i(k) , g k,i(k) , x k,i(k) ) converge in the pointed Gromov-Hausdorff sense to (X, d, p), so |B(x k,i(k) , r)| g k,i(k) − H n (B(x, r)) → 0. Combining expressions gives the claim.
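For completeness, the estimate obtained at the start of this appendix by combining Lemma 6.1 of [BZ17] with Theorem 1.7 of [Bam16] is presumably a chain of the following form (the exact constants, and the precise formulation of the integral estimate for the curvature scale, are assumptions on our part):

\[
\int_{B_{g^i}(q,0,D)} |\mathrm{Rc}|^3_{g^i(0)}\, dg^i_0
\;\le\; C(A)^3 \int_{B_{g^i}(q,0,D)} \bigl(r^{g^i}_{\mathrm{Rm}}\bigr)^{-3}(\cdot,0)\, dg^i_0
\;\le\; C(A,n,D) \;<\; \infty ,
\]

using $|\mathrm{Rc}| \le C(A)\, r_{\mathrm{Rm}}^{-1}$ and the integral bound on $r_{\mathrm{Rm}}^{-p}$ for $p < 4$; this is the local integral Ricci bound that feeds into hypothesis (i) of Lemma 8.1.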
Do intrauterine or genetic influences explain the foetal origins of chronic disease? A novel experimental method for disentangling effects

Background
There is much evidence to suggest that risk for common clinical disorders begins in foetal life. Exposure to environmental risk factors, however, is often not random. Many commonly used indices of prenatal adversity (e.g. maternal gestational stress, gestational diabetes, smoking in pregnancy) are influenced by maternal genes and by genetically influenced maternal behaviour. As the mother provides the baby with both genes and prenatal environment, associations between prenatal risk factors and offspring disease may be attributable to true prenatal risk effects or to the "confounding" effects of genetic liability shared by mother and offspring. Cross-fostering designs, including those that involve embryo transfer, have proved useful in animal studies. However, disentangling these effects in humans poses significant problems for traditional genetic epidemiological research designs.

Methods
We present a novel research strategy aimed at disentangling maternally provided prenatal environmental and inherited genetic effects. Families of children aged 5 to 9 years born by assisted reproductive technologies, specifically homologous IVF, sperm donation, egg donation, embryo donation and gestational surrogacy, were contacted through fertility clinics and mailed a package of questionnaires on health and mental health related risk factors and outcomes. Further data were obtained from antenatal records.

Results
To date, 741 families from 18 fertility clinics have participated. The degree of association between a maternally provided prenatal risk factor and child outcome in the group of families where the woman undergoing the pregnancy and the offspring are genetically related (homologous IVF, sperm donation) is compared to the association in the group where offspring are genetically unrelated to the woman who undergoes the pregnancy (egg donation, embryo donation, surrogacy). These comparisons can then be examined to infer the extent to which prenatal effects are genetically and environmentally mediated.

Conclusion
A study based on children born by IVF treatment who differ in genetic relatedness to the woman undergoing the pregnancy is feasible. The present report outlines a novel experimental method that permits disaggregation of maternally provided inherited genetic and post-implantation prenatal effects.

Background
The causal risk factors and pathways leading to common clinical problems, such as cardiovascular disease, asthma, schizophrenia and depression, remain largely unknown. There is consistent evidence demonstrating that inherited, genetic factors play an important role in such disorders [1,2]. Although genetic factors are of major importance, epidemiological studies show that the rates of many disorders such as cardiovascular disease, diabetes, obesity and depression have changed over time and vary geographically to an extent that is incompatible with the effects of genetic differences alone [3][4][5]. This indicates an important contribution of environmental factors. More recently, there has been growing awareness that genes and environment work together in complex ways [6,7]. One important example of this complexity is the growing evidence that exposure to many important environmental risk factors for common disorders is not random.
Specifically environmental risk factors such as exposure to early adversity are not independent of an individual's genetically influenced characteristics and behaviour or those of their parents [3]. Thus, association between an environmental risk factor and a disorder could be attributable to shared inherited genetic liability that influences both the index of environmental risk and the manifestation of disorder as well as because of true environmentally mediated risk effects. As a result of this growing awareness, there has been increasing interest in using suitable research designs to investigate this issue [3]. This is important as identifying which environmental factors exert true causal environmentally mediated risk effects on complex phenotypes is an important goal for the purposes of designing prevention, risk reduction and intervention strategies. Examples of such designs include early adoption studies of animals and humans where the postnatal environment is provided by genetically unrelated parents. These studies show that regardless of genetic liability, postnatal environmental factors have important effects on many different outcomes; for example stress susceptibility [8], renal renin-angiotensin system sensitivity [9], cognitive ability [10] and antisocial behaviour [11] Prenatal environmental risk factors and complex disorders The leading causes of global disease burden are complex disorders such as cardiovascular disease and depression. There has been increasing evidence over the last twenty years that many of these disorders and health-related problems have their origins in foetal life, with early intrauterine factors hypothesised to have long term effects on health and behaviour [12,3]. This hypothesis has been supported by evidence from animal studies [13]. In utero programming, whereby a stimulus or insult at a sensitive period of development has lasting effects, is thought to represent a key risk pathway by bringing about long-lasting changes to the structure and metabolism of the organism [12]. Replicated links between prenatal environmental factors and chronic disease have been demonstrated: between lower birth weight and cardiovascular disease [12], diabetes [12], depression [14] and early neurocognitive problems [15]; between poor maternal nutrition during pregnancy and schizophrenia [16]; between gestational stress and anxiety/depression [17]; and between maternal smoking in pregnancy and Attention Deficit Hyperactivity Disorder (ADHD) [18]. Most of the studies testing for these associations have used cohort or case-control designs. Although longitudinal studies are an important method for identifying causal risk factors for disease and behaviour, it is often difficult to rule out the contribution of unmeasured confounders [3]. Natural experiment designs and randomised control trials that take advantage of change in exposure to a specific environmental variable are thus attractive [3]. Only a few studies have been able to test environmental risk hypotheses by using experimental interventions or natural and sometimes unfortunate change imposed on populations. Examples of such 'experiments in nature' have been those demonstrating links between poor prenatal nutrition during the Dutch [19] and Chinese famines [20] with later mental disorders, notably schizophrenia, decreased glucose tolerance [21] and coronary heart disease [22]. 
Testing whether the associations between prenatal risk and disorder are the result of environmentally mediated effects or inherited genetic influences Exposure to the maternally provided prenatal environment is not random. Many important prenatal environmental risk factors for disorder where exposure occurs in utero, such as gestational stress and cigarette smoking in pregnancy, are also influenced by maternal characteristics, including those that are influenced by maternal genotype [23,24]. Given this, associations between putative prenatal risk factors or indices of environmental adversity in utero and disease outcomes could arise through maternally provided genetic factors and/or a 'true' environmentally mediated effect (see Figure 1). One example is the link between gestational stress and subsequent anxiety in offspring [17], where the effects are thought to be mediated by exposure to glucocorticoids in utero, but for which the association might be accounted for by genetic pathways, that influence both maternal predisposition to experiencing stress and offspring anxiety. Another example, is the link between pregnancy complications (such as gestational diabetes and intrauterine growth restriction) and cardiovascular disease (CVD), which may share common antecedents [25][26][27]. Mothers who show such complications not only have offspring who are at increased risk of showing CVD but such complications also appear to index an increased subsequent risk of vascular disease in the mothers themselves. The mechanisms for these links are not known but clearly genetic susceptibility is one potentially important contributor (see Figure 1). That is, the increased risk of cardiovascular disease in offspring may not necessarily be entirely mediated by prenatal risk effects but simply index an underlying inherited predisposition to cardiovascular disease passed on from mother to child. Prenatal cross-fostering of animals allows disentanglement of these mechanisms and has been used in some instances to test the relative contributions of the prenatal and postnatal environment and genetic factors to different phenotypes in animals. For example, one such study demonstrated the contribution of maternally transmitted autoantibodies (i.e. prenatal environmental mediation) to diabetes in offspring. This was achieved, in part, by prenatal cross fostering non-obese diabetic mice embryos to mothers of a non-autoimmune strain [28]. The genetically susceptible mice were protected from developing diabetes by changing the maternally provided prenatal environment. Another prenatal cross fostering study found that the prenatal and postnatal maternally provided environment contributed to behavioural differences in mice [29]. More recent animal work has shown that maternally provided prenatal and postnatal environment effects on offspring may be mediated by non-inherited epigenetic mechanisms [30,31]. It should be possible to distinguish between whether maternally provided prenatal risk effects are mediated environmentally or genetically in humans by studying offspring whose intrauterine environment is provided by a genetically unrelated mother; essentially an adoption study "in utero". In vitro fertilisation (IVF) is becoming an increasingly common means of conception. Current estimates suggest 1.3%-3.6% of European births are now due to IVF [32]; a proportion of these births involve donated gametes and surrogacy. 
Children conceived via these methods may be genetically related to both parents (homologous IVF), to the mother only (sperm donation), to the father only (egg donation), or to neither parent (embryo donation). With gestational surrogacy, both parents are genetically related to the child, but the intrauterine environment is provided by a genetically unrelated surrogate. With both egg donation and embryo donation, the mother provides the intrauterine environment but is not genetically related to the child. Such a sample would enable maternally provided genetic and environmental effects to be separated. It could also be used to examine the contribution of genetic and environmental influences to offspring phenotypic characteristics, complementing the designs already used for this purpose (twin and adoption studies). In an IVF sample, this would involve examining parent/offspring phenotype resemblance, for example by calculating correlation coefficients for continuously distributed characteristics separately for the different conception groups, with genetic and shared environmental variance then estimated using the types of statistical modelling techniques used in twin studies [33]. However, given the sensitivity of artificial methods of conception, the question arises as to whether such a design is feasible and acceptable.

Methods
Our aims were to 1) test the feasibility of identifying and recruiting a sample of children born by IVF from the 5 treatment groups, 2) establish a sample of children aged 5 to 9 years born by homologous IVF, sperm donation, egg donation, embryo donation and gestational surrogacy, 3) obtain questionnaire measures on a range of potential health and mental health related risk factors and outcomes, 4) obtain data from antenatal records for each child and 5) identify the extent to which families would be willing to engage in future research. The questionnaire measures and antenatal data obtained were aimed at testing a range of hypotheses, specifically that associations between a) specific antenatal events (e.g. pre-eclampsia, high blood sugar), b) maternally perceived gestational stress, c) markers of prenatal growth (e.g. birth weight, head circumference, ponderal index) and child behaviour and mental health outcomes are attributable to maternally provided environmentally mediated effects in utero as well as to genetic factors. The research protocol was approved by the Wales Multicentre Research Ethics Committee.

Figure 1. Genetic and environmental pathways between a prenatal risk factor and child outcome, using the example of exposure to gestational stress and childhood anxiety.

Feasibility
Over the past 3 years, in an ongoing study, 18 fertility centres have participated. Families with school-aged children (aged 5 to 9 years) born following IVF treatment have been recruited. To date, 741 families have participated (13 father only, 231 mother only, 497 both parents), although data collection is ongoing and the conception groups have been recruited in a sequential manner. The expected and current numbers in each conception group are shown in Table 1. Greater than expected numbers of homologous IVF families have been recruited.
Expected numbers are based on initial power calculations and the numbers of types of IVF treatments in the UK for the age range of children. Power estimates at the start of the study using pilot data showed that power based on the expected sample sizes is sufficient to detect most effects, except those of small (defined as 0.10) or very small effect size (<0.10) [34]. Among those who participated in this project, clinic staff and families reported a positive research experience, with the vast majority of families (655/741 = 88%) agreeing to receive information about future research studies. 80% (573/712) of mothers (excluding the 16 mothers from the gestational surrogacy group) agreed for researchers to access antenatal records to obtain detailed information about the pregnancy. The level of agreement between maternal report of pre-/peri-natal factors and information obtained from antenatal notes was excellent for a range of variables (e.g. birth weight (r = .991), smoking during pregnancy (kappa = .806), high blood pressure during pregnancy kappa = .716) [35]. The only exception was length of labour (kappa = .257) which was not well recalled by mothers [35]. This means that for where the focus is on prenatal variables, maternal reports alone are satisfactory in most instances. Thus the eligible sample here will include all those families where the mother has returned a questionnaire. Table 2 shows the genetic relationship between the woman undergoing the pregnancy and offspring for each of the conception groups. Where an association between a maternally provided environmental risk factor (e.g. gestational diabetes) and outcome (e.g. birth weight) is environmentally and not genetically mediated, association would be observed in families where the woman undergoing pregnancy and offspring are genetically related (homologous IVF, sperm donation) and also found to be significant in the offspring who are genetically unrelated to the woman who undergoes the pregnancy (egg donation, embryo donation, surrogacy). Where an association between the risk factor and outcome is entirely genetically mediated, we would expect to observe an association in families where the woman undergoing pregnancy and offspring are genetically related but not in those who are unrelated. Where genetic and environmental mediation both contribute, association will be observed in genetically related and unrelated dyads. The expected pattern of results for each of these scenarios is shown in table 2. Thus, for example, if prenatal stress effects on offspring anxiety symptoms were entirely genetically mediated, we would expect association between these variables in the homologous IVF and sperm donation groups but not in the other groups. Associations for continuously distributed variables will be examined using regression analysis. Relevant covariates such as being a twin will need to be included but by necessity will differ depending on the outcome of interest. Differences in the degree of association between risk factor and outcome for the groups where the dyads are genetically related (homologous IVF, sperm donation) vs. those who are genetically unrelated will be assessed. Table 3 shows rates of a number of maternally reported antenatal and peri-natal complications and intrauterine risk exposures in this sample. Table 4 shows maternal and paternal age at birth of the child for the sample so far. The mean age of the children was 6.76 years. There were 380 (51.3%) boys and 361 (48.7%) girls. 
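To make the analytic strategy set out above concrete (regression of a child outcome on a prenatal risk factor and relevant covariates, comparing genetically related and unrelated mother-child dyads), the following is a minimal illustrative sketch in Python. The file name, column names (conception_group, prenatal_risk, child_outcome, is_twin, maternal_age) and covariates are placeholders for illustration, not the study's actual variables or analysis code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format dataset: one row per mother-child pair.
df = pd.read_csv("ivf_families.csv")

# 1 = the woman who carried the pregnancy is genetically related to the child
df["related"] = df["conception_group"].isin(
    ["homologous_ivf", "sperm_donation"]
).astype(int)

# Outcome regressed on the prenatal risk factor, relatedness, their interaction,
# and covariates such as twin status. A substantial risk x relatedness
# interaction indicates that the strength of association differs between
# related and unrelated dyads, consistent with partial genetic mediation;
# a similar association in both groups points to in utero environmental mediation.
model = smf.ols(
    "child_outcome ~ prenatal_risk * related + is_twin + maternal_age",
    data=df,
).fit()
print(model.summary())
print("risk x relatedness interaction:", model.params["prenatal_risk:related"])

The same comparison can be run separately for each outcome of interest, with covariates chosen to match that outcome, as noted above.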
The vast majority of children lived with their mother and father (679; 91.6%), 45 (6.0%) lived with their mother only, 8 (1.1% lived with their mother and step-father) and 9 children had other living arrangements such as shared residency, lived with father only or with lesbian parents. As expected, there is evidence of elevated rates of certain types of complications during pregnancy. For instance, hypertensive disorders are estimated to occur in up to 10% of all pregnancies [36] thus the rate of nearly 15% in the present sample is above expected levels although this may be attributable to older maternal age. Approximately 8% of infants in the UK are born weighing less than 2500 grams [39]. The elevated rate in the present sample seems to be due to the high proportion of multiple births (22.7% of the sample) given that the prevalence of low birth weight is 8.6% when multiple births are excluded. Conversely, rates of maternal smoking during pregnancy are lower than general U.K. population estimates [18]. This pattern of results is consistent with other studies across the world which show that the rate of certain, but not all, antenatal and peri-natal complications is increased in IVF versus naturally conceived pregnancies but that this is mainly attributable to high rates of multiple births and increased maternal age and that outcomes are more favourable in single embryo transfers [38]. Discussion These data show that this novel design is feasible and can be successfully employed in the U.K. to test the effects of maternally provided prenatal environment on offspring. As with all other research designs there are strengths and weakness. Issues that will need consideration are the effects of programming at the pre-implantation stage, the impact of differences in IVF methods and statistical power in relation to group and sub-group comparisons. Potential limitations include under-representation of some risk factors as well as health problems related to older maternal age and IVF. The objective of this design is not to obtain an epidemiologically representative sample but rather to test differences in the degree of association between risk factors and outcome across different conception groups. Thus it is important to consider whether specific associations differ in strength between the homologous IVF group and the general population. Given the current numbers of children born by IVF in the rare groups, this design will be most useful for examining trait measures or common conditions and risk factors such as emotional and behavioural symptoms, cognitive ability, asthma, blood pressure and BMI. For many outcomes, both clinical conditions and traits that appear later in life, the children will need to be followed up into adult life. Retrospectively obtaining information on pregnancy complications for the surrogacy group was difficult because commissioning parents were often not aware of details of the pregnancy and families were either no longer in touch with the surrogate or did not wish her to be contacted. Thus information was difficult to obtain from the surrogate who experienced the pregnancy. However, this is not crucial for the design as the egg donation and embryo donation groups also allows the researcher to separate the effects of genetic transmission and prenatal environmental exposure and here, pregnancy data are easily available. In summary there is evidence that many health problems have their origins in foetal life and are also genetically influenced. 
Associations between prenatal adversity and many health outcomes arise because of true environmentally mediated effects, inherited genetic factors, or both. There is a need for experimental research designs that enable disentanglement of inherited genetic from environmentally mediated risk effects. To date, twin and adoption study designs of clinical disorders and behaviours have been used to separate the effects of postnatal and later environmental influences from inherited factors [3]. However, such traditional designs cannot separate maternally provided prenatal environmental risk effects from inherited genetic influences on outcomes. Thus new, genetically sensitive experimental designs are needed. In addition to the novel design we propose, it is also possible to use a design in which the offspring of adult twins are examined. Here, the offspring of monozygotic ("identical") twin pairs are social cousins but biological half siblings (sharing on average 25% of their inherited genes). This design has different strengths and weaknesses from the new method proposed here [37]. For instance, for genetically influenced maternal behaviours, levels of discordant exposure to prenatal adversity are expected to be low for the offspring of adult MZ twins. However, the offspring-of-adult-twins design has been used to show that the association between maternal smoking during pregnancy and low birth weight is environmentally rather than genetically mediated [37], consistent with an intrauterine effect. Both research designs provide complementary approaches to disentangling the pathways involved in the aetiology of disease.

Conclusion
In conclusion, there is increasing evidence that prenatal risk factors may contribute to the aetiology of different disorders and traits. The observed associations between prenatal risk factors and specific outcomes could, however, arise through genetic pathways. We present a novel method that appears to be feasible for testing whether associations between putative prenatal risk factors and different outcomes are attributable to an environmentally mediated effect.
Notch signalling maintains Hedgehog responsiveness via a Gli-dependent mechanism during spinal cord patterning in zebrafish Spinal cord patterning is orchestrated by multiple cell signalling pathways. Neural progenitors are maintained by Notch signalling, whereas ventral neural fates are specified by Hedgehog (Hh) signalling. However, how dynamic interactions between Notch and Hh signalling drive the precise pattern formation is still unknown. We applied the PHRESH (PHotoconvertible REporter of Signalling History) technique to analyse cell signalling dynamics in vivo during zebrafish spinal cord development. This approach reveals that Notch and Hh signalling display similar spatiotemporal kinetics throughout spinal cord patterning. Notch signalling functions upstream to control Hh response of neural progenitor cells. Using gain- and loss-of-function tools, we demonstrate that this regulation occurs not at the level of upstream regulators or primary cilia, but rather at the level of Gli transcription factors. Our results indicate that Notch signalling maintains Hh responsiveness of neural progenitors via a Gli-dependent mechanism in the spinal cord. INTRODUCTION Patterning of the spinal cord relies on the action of multiple cell signalling pathways with precise spatial and temporal dynamics (Briscoe and Novitch, 2008). Neural progenitors in the spinal cord are organised into discrete dorsoventral (DV) domains that can be identified by the combinatorial expression of conserved transcription factors (Alaynick et al., 2011;Dessaud et al., 2008;Jessell, 2000). Differentiated post-mitotic neurons migrate from the medial neural progenitor domain to occupy more lateral regions of the spinal cord. To achieve precise patterning, the developing spinal cord employs anti-parallel signalling gradients of Bone Morphogenic Protein (BMP) and Hedgehog (Hh) to specify dorsal and ventral cell fates, respectively (Le Dréau and Martí, 2012). Cells acquire their fates via sensing both graded inputs. This dual signal interpretation mechanism provides more refined positional information than separate signal interpretation (Zagorski et al., 2017). The action of Sonic Hedgehog (Shh) in the ventral spinal cord is one of the most well studied examples of graded morphogen signalling (Briscoe and Therond, 2013;Cohen et al., 2013). In vertebrates, Hh signalling requires the integrity of primary cilia, microtubule-based organelles present on the surface of most cells (Eggenschwiler and Anderson, 2007). In the absence of the Shh ligand, the transmembrane receptor Patched (Corbit et al., 2005;Rohatgi et al., 2007). This leads to the activation of the Gli family of transcription factors, resulting in expression of downstream target genes such as ptc. Shh thus controls the balance between full-length Gli activators and proteolytically processed Gli repressors (Huangfu and Anderson, 2006;Humke et al., 2010). In mouse, Gli2 is the main activator and its expression does not require active Hh signalling (Bai et al., 2002;Bai and Joyner, 2001). In zebrafish, Gli1 is the main activator (Karlstrom et al., 2003). Although gli1 is a direct target of Hh signalling, low-level gli1 expression is maintained in the absence of Hh signalling via an unknown mechanism (Karlstrom et al., 2003). It is thought that Hhindependent gli expression allows cells to respond to Hh signals. 
In the ventral spinal cord, it has been shown that both the level and duration of Hh signalling is critical to the correct formation of the discrete neural progenitor domains along the dorsoventral axis 4 (Dessaud et al., , 2007. However, the temporal dynamics of Hh signalling has been challenging to visualize in vivo due to the lack of appropriate tools. In addition to BMP and Hh signalling, Notch signalling has also been implicated in spinal cord development (Louvi and Artavanis-Tsakonas, 2006;Pierfelice et al., 2011). In contrast to long-range Hh signalling, the Notch signalling pathway requires direct cell-cell interaction, as both receptor and ligand are membrane bound proteins (Kopan and Ilagan, 2009). The Notch receptor, present at the "receiving" cell membrane, is activated by the Delta and Jagged/Serrate family of ligands, present at the membrane of the neighbouring "sending" cell. This leads to multiple cleavage events of Notch, the last of which is mediated by a γ-secretase complex that releases the Notch intracellular domain (NICD). NICD then translocates to the nucleus and forms a ternary transcription activation complex with the mastermind-like (MAML) coactivator and the DNA binding protein RBPJ. This activation complex is essential for the transcription of downstream targets, such as the Hes/Hey family of transcription factors (Artavanis-Tsakonas and Simpson, 1991;Pierfelice et al., 2011). Two major roles of Notch signalling in neural development are to generate binary cell fate decisions through lateral inhibition and to maintain neural progenitor state (Formosa-Jordan et al., 2013;Kageyama et al., 2008). However, how Notch signalling interacts with Hh signalling during spinal cord patterning is not clear. Using zebrafish lateral floor plate (LFP) development as a model, we previously demonstrated that Notch signalling maintains Hh responsiveness in LFP progenitor cells, while Hh signalling functions to induce cell fate identity (Huang et al., 2012). Thus, differentiation of Kolmer-Agduhr" (KA") interneurons from LFP progenitors requires the downregulation of both Notch and Hh signalling. Recent reports provide additional support for cross-talk between these pathways during spinal cord patterning in both chick and mouse (Kong et al., 2015;Stasiulewicz et al., 2015). Notch activation causes the Shhindependent accumulation of Smo to the primary cilia, whereas Notch inhibition results in ciliary enrichment of Ptc1. Accordingly, activation of Notch signalling enhances the response of neural progenitor cells to Shh, while inactivation of Notch signalling compromises Hh-dependent ventral fate specification. These results suggest that Notch signalling regulates Hh response by modulating the localisation of key Hh pathway components at primary cilia. Here, we determine the interaction between Notch and Hh signalling during spinal cord patterning in zebrafish. Using the photoconversion based PHRESH technique, we show 5 that Notch and Hh response display similar spatiotemporal kinetics. Gain-and loss-of function experiments confirm that Notch signalling is required to maintain Hh response in neural progenitors. Surprisingly, Notch signalling doesn't regulate the Hh pathway at the level of Smo or primary cilia, but rather at the level of Gli transcription factors. Together, our data reveal that Notch signalling functions to control the Hh responsiveness of neural progenitors in a primary cilium-independent mechanism. 
Generation of a Notch signalling reporter Spinal cord patterning is a dynamic process with complex interactions of cell signalling pathways in both space and time. To visualise the signalling events in a spatiotemporal manner, we have previously developed the PHRESH (PHotoconvertible REporter of Signalling History) technique (Huang et al., 2012). This analysis takes advantage of the photoconvertible properties of the Kaede fluorescent protein to visualise the dynamics of cell signalling response at high temporal and spatial resolution. We have utilised the PHRESH technique to visualise Hh signalling dynamics during spinal cord patterning (Huang et al., 2012). To apply the same technique to Notch signalling, we generated a reporter line for her12, a target gene of Notch signalling (Bae et al., 2005). This target was chosen because among other Notch target genes co-expressed with her12, such as her2, her4, and hes5, her12 had the highest level of expression throughout the spinal cord ( Figure S1). By BAC (bacteria artificial chromosome) recombineering, we generated a her12:Kaede reporter by replacing the first coding exon of her12 with the coding sequence for the photoconvertible fluorescent protein Kaede ( Figure 1A). The resulting her12:Kaede BAC contains 135 kb upstream and 63 kb downstream regulatory sequences. The her12:Kaede reporter line faithfully recapitulated endogenous her12 expression ( Figure 1B). This reporter also responded to different Notch pathway manipulations ( Figure 1C-E). The zebrafish mindbomb mutant is unable to activate Notch signalling due to an inability to endocytose the Delta ligand (Itoh et al., 2003). As expected, the expression of her12:Kaede was completely absent in the spinal cord of mindbomb mutants ( Figure 1C). Similarly, inhibition of Notch signalling with the small molecule γ-secretase inhibitor LY-411575 (Fauq et al., 2007) completely abolished her12 expression within 4 hours ( Figure 1D and Figure S2). By contrast, ectopic expression of NICD (Notch intracellular domain) using the hsp:Gal4; UAS:NICD line (Scheer and Campos-Ortega, 1999) resulted in upregulation and expansion of the her12:Kaede expression domain ( Figure 1E). These results demonstrate that her12:Kaede is a sensitive reporter for Notch pathway activity in the spinal cord. The combination of small molecule inhibitors and the her12:Kaede reporter allows us to manipulate and monitor Notch signalling dynamics in a tightly controlled temporal manner. 7 Notch and Hh signalling display similar dynamics during spinal cord patterning Using the her12:Kaede reporter of Notch response ( Figure 1) in parallel with the previously described ptc2:Kaede reporter of Hh response (Huang et al, 2012), we can observe the timing and duration of both pathway activities in vivo ( Figure 2A). All responding cells are initially labelled by green-fluorescent Kaede (Kaede green ), which can be photoconverted to red-fluorescent Kaede (Kaede red ) at any specific time (t 0 ). If the cell has finished its signalling response prior to t 0 , only perduring Kaede red will be detected. Conversely, if the cell begins its response after t 0 , only newly synthesised, unconverted Kaede green will be present. Finally, if the cell continuously responds to the signalling both before and after t 0 , a combination of newly-synthesised Kaede green and perduring Kaede red can be observed and the cell will appear yellow. 
Thus, Kaede red represents "past response" before t 0 , Kaede green indicates "new response" after t 0 , whereas Kaede red+green corresponds to "continued response" through t 0 ( Based on spatiotemporal maps of Notch and Hh response, we divided the signalling dynamics of spinal cord development into three general phases: "signalling activation" phase from 24 to 42 hpf, "signalling consolidation" phase from 42 to 66 hpf, and "signalling termination" phase from 66 to 78 hpf. In the first "signalling activation" phase, active Notch response occurred along the entire dorsoventral axis of the spinal cord ( Figure 2B, right), 8 while active Hh response constituted roughly the ventral 75% of the spinal cord in a graded manner ( Figure 2B, left). This pattern is consistent with the model that Notch signalling maintains neural progenitor domains, whereas Hh signalling patterns the ventral spinal cord. Interestingly, we found that the signalling response was not entirely homogeneous. In her12:Kaede embryos, the majority of cells showed continued Notch response throughout the "signalling activation" phase, but there were some isolated cells in which Kaede expression was completely absent. In ptc2:Kaede embryos, some cells had terminated their Hh response (marked by Kaede red ), while the majority of cells with the same dorsoventral positioning had continued Hh response. The differential Hh response at the same dorsoventral axis is reminiscent of the differentiation of the lateral floor plate domain (Huang et al., 2012). In the "signalling consolidation" phase ( Figure 2C), we observed a dramatic remodelling of the response profiles of both pathways. First, there was an extensive increase in Kaede red domains for both reporters, indicating the termination of signalling response in these cells. This loss of response was localised to the ventral and lateral regions for Hh signalling ( Figure 2C, left) and the ventral, lateral and dorsal regions for Notch signalling ( Figure 2C, right). Second, the number of Kaede red cells increased as the "signalling consolidation" phase progressed. Finally, the active signalling domain consolidated into a tight medial region, which sharpened further to encompass 1 -2 cell tiers directly dorsal to the spinal canal ( Figure 2C, 54hpf + 6h). Finally, during the "signalling termination" phase ( Figure 2D), both active Notch and active Hh responses (Kaede green ) slowly reduced to the basal level, and most of the spinal cord was marked by Kaede red . The active Notch response was notably weaker and restricted to a small medial domain above the spinal canal by 66 hpf before returning to a basal level by 72 hpf. Similarly, active Hh response marked a small medial domain at 66 hpf and 72 hpf, and reduced to a basal level by 78 hpf. After the end of the "signalling termination" phase, both pathways remained at the basal level as spinal cord development progressed ( Figure S3). Comparison of spatiotemporal signalling profiles reveals that Hh and Notch signalling share similar responsive domains. To examine this directly in the same embryo, we performed double fluorescent in situ hybridisation to visualise her12 and ptc2 expression together during all three signalling phases of spinal cord development ( Figure S4A). 
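Before describing the double in situ comparison in detail, we note how the PHRESH readout described above can be made quantitative. The following minimal sketch (Python) assigns each cell a signalling-history label from its Kaede green and Kaede red intensities; the fixed threshold and the assumption of per-cell mean intensities from an upstream segmentation step are illustrative choices, not part of the analysis pipeline used here.

import numpy as np

def classify_phresh(green, red, thresh=0.2):
    """
    Classify per-cell Kaede signal relative to the photoconversion time t0.

    green, red : per-cell mean intensities (normalised 0-1), assumed to come
                 from an upstream segmentation step.
    Returns an array of labels:
      'past'      - only converted Kaede(red): response ended before t0
      'new'       - only unconverted Kaede(green): response began after t0
      'continued' - both channels: response spans t0
      'none'      - neither channel above threshold
    """
    green = np.asarray(green) > thresh
    red = np.asarray(red) > thresh
    labels = np.full(green.shape, "none", dtype=object)
    labels[red & ~green] = "past"
    labels[~red & green] = "new"
    labels[red & green] = "continued"
    return labels

# Example: three cells with (green, red) intensities
print(classify_phresh([0.05, 0.6, 0.5], [0.7, 0.1, 0.6]))
# -> ['past' 'new' 'continued']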
During the signalling activation phase ( Figure S4A, 24 hpf), ptc2 expression constituted 9 the ventral portion of the her12 expression domain, while during the signalling consolidation and termination phases ( Figure S4A, 48 hpf and 72 hpf, respectively), ptc2 and her12 expression was present within the same restricted medial domain. These results confirm that Notch and Hh response is active within the same cells of the spinal cord. Indeed, double labelling with neural progenitor cell marker sox2 showed that the medial domain with continued Notch and Hh response corresponded to the sox2 + neural progenitor domain ( Figure S4B-C). Together, our analysis reveals that Hh signalling response follows similar spatiotemporal kinetics as Notch signalling response during spinal cord patterning, suggesting that Notch signalling plays a role in maintaining Hh response in neural progenitor cells. Notch signalling maintains Hh response To explore the mechanism of interaction between the Notch and Hh signalling pathways, we first performed loss-of-function experiments combining small molecule inhibitors with PHRESH analysis ( Figure 3A). We used the Smoothened antagonist cyclopamine (Chen et al., 2002) Figure 3A). These results suggest that Notch signalling is required for maintaining Hh response, but not vice versa. Interestingly, despite the inhibition of Notch signalling, cells outside of the spinal cord in ptc2:Kaede embryos maintained their normal Hh response, indicated by Kaede green expression (Arrowheads in Figure 3A). This result suggests that regulation of Hh response by Notch signalling is tissue specific. To confirm these observations from our PHRESH analysis, we performed RNA in situ hybridisation following small molecule inhibition ( Figure 3B). Wild-type embryos were treated with cyclopamine or LY-411575 from 20 to 30 hpf. When Hh signalling was inhibited by cyclopamine, ptc2 expression in the spinal cord was significantly reduced but not abolished, while her12 expression in the spinal cord remained unchanged. By contrast, blocking Notch signalling by LY-411575 resulted in complete loss of both her12 and ptc2 expression in the spinal cord. As seen in the PHRESH analysis, ptc2 expression in cells outside of the spinal cord, such as the somites, was largely intact even after Notch inhibition. Similar results were observed in mindbomb mutants, where ptc2 expression was completely abolished in the spinal cord but not in somites ( Figure S5), confirming the effect of LY-411575 treatment. Together, these results are consistent with our model that Notch signalling regulates Hh response specifically in the spinal cord. It is also interesting to note that cyclopamine treated embryos showed residual levels of ptc2 expression in the spinal cord, whereas LY-411575 treatment completely eliminated ptc2 expression ( Figure 3B). It has been shown that zebrafish smoothened mutants maintain low-level gli1 expression in the spinal cord independent of Hh signalling, similar to cyclopamine treated embryos (Karlstrom et al., 2003). The complete loss of ptc2 expression after Notch inhibition suggests that in contrast to cyclopamine, Notch signalling controls Hh response via a different mechanism, likely downstream of Smo. In converse experiments, we utilised gain-of-function tools to test the interactions between Notch and Hh signalling ( Figure 4B). 
Combining our results from the loss-and gain-of-function experiments, we conclude that Notch signalling is required to maintain Hh response, specifically in the spinal cord. Notch signalling regulates Hh response downstream of Smo Our results suggest that Notch signalling regulates Hh response during spinal cord patterning. To explore the molecular mechanisms by which Notch controls Hh response, Interestingly, induction of rSmoM2 did cause an expansion of ptc2 expression in the surrounding somites despite Notch inhibition ( Figure 5C). This observation is consistent with our previous experiments and suggests that this Smo independent mechanism of control is specific to the spinal cord. Notch signalling regulates Hh response independent of primary cilia Vertebrate canonical Hh signalling requires the integrity of primary cilia (Eggenschwiler and Anderson, 2007). To test whether Notch signalling feeds into the Hh pathway via primary cilia, we utilised the iguana mutant which lacks primary cilia due to a mutation in the centrosomal gene dzip1 (Glazer et al., 2010;Huang and Schier, 2009;Kim et al., 12 2010;Sekimizu et al., 2004;Tay et al., 2010;Wolff et al., 2004). In zebrafish, the complete loss of primary cilia, such as in iguana mutants, results in reduction of high-level Hh response concomitant with expansion of low-level Hh pathway activity (Ben et al., 2011;Huang and Schier, 2009). This expanded Hh pathway activation is dependent on low level activation of endogenous Gli1, but does not require upstream regulators of the Hh pathway, such as Shh, Ptc and Smo (Huang and Schier, 2009) (Figure 6A). Thus, the iguana mutant also allows us to determine whether low level activation of the endogenous Gli1 transcription factor is able to restore Hh response in Notch off spinal cords. Ectopic expression of Gli1 partially rescues Hh response in Notch off spinal cords Since low level activation of endogenous Gli1 in iguana mutants is not sufficient to restore Hh response in Notch off spinal cords, we hypothesised that Notch signalling regulates Hh response by maintaining gli1 expression. To test this possibility, we treated wild-type embryos with DMSO, cyclopamine, or LY-411575 at 20 hpf for 10 hours, then assayed for gli1 gene expression at 30 hpf ( Figure 7A). In DMSO treated controls, gli1 expression was present throughout the ventral spinal cord and in the somites. In 13 cyclopamine treated embryos, gli1 expression was dramatically reduced, but a low level remained in the spinal cord, corresponding to Hh-independent gli1 transcription (Karlstrom et al., 2003). domain was also larger ( Figure 7D), a similar phenotype to rSmoM2 induction in DMSO treated embryos. Strikingly, when EGFP-Gli1 induction was followed by LY-411575 treatment, we observed significant ptc2 expression in the spinal cord, although at a slightly lower level compared to DMSO treated hsp:EGFP-Gli1 embryos ( Figure 7D). Critically, the restored Hh pathway activation in LY-411575 treated hsp:EGFP-Gli1 embryos was able to activate olig2 expression, although substantially weaker than the wild-type level ( Figure 7D). Together, these results suggest that, in the spinal cord, Notch signalling regulates Hh response by modulating the Gli1 transcription factor, as ectopic Gli1 can partially rescue the Hh response in Notch off spinal cords. This regulation is partly through transcriptional control of gli1 expression. 
However, since the ectopic EGFP-Gli1 was unable to rescue the highest level of Hh response and cannot fully restore olig2 expression in Notch off spinal cords, it is likely that Notch signalling also plays additional roles in regulating Gli1 protein -such as its post-translational modification or stability. DISCUSSION We provide in vivo evidence for cross-talk between two conserved developmental signalling pathways, Notch and Hh signalling, in the zebrafish spinal cord. Through the PHRESH technique, we observe shared spatiotemporal dynamics of pathway activity throughout spinal cord patterning, highlighting a role for Notch and Hh interaction in neural progenitor maintenance and specification. Using both gain-and loss-of function techniques, we establish a primary cilia-independent mechanism by which Notch signalling permits neural progenitors to respond to Hh signalling via gli maintenance. Studying cell signalling dynamics using PHRESH We have previously developed the PHRESH technique to study the dynamics of Hh signalling in vivo (Huang et al., 2012). In this study, we demonstrate the versatility of the PHRESH method by correlating the dynamics of Hh and Notch signalling in vivo using the ptc2:Kaede reporter and a new her12:Kaede reporter. Traditional transcriptional GFP reporters fail to provide temporal information due to GFP perdurance, whereas destabilised fluorescent protein reporters can only provide current activity at the expense of signalling history. By contrast, the PHRESH technique utilises Kaede photoconversion to delineate the cell signalling history in any given time window by comparing newly synthesised Kaede green (new signalling) with photoconverted Kaede red (past signalling). We envision that PHRESH analysis could be combined with cell transplantation and timelapse imaging to simultaneously analyse cell lineage and signalling dynamics at single cell resolution. Similar approaches can easily be adapted to study other dynamic events by using photoconvertible fluorescent reporters. Spatiotemporal dynamics of Hh and Notch signalling Using the PHRESH technique, we created a spatiotemporal map of signalling dynamics for the Hh and Notch pathways during spinal cord patterning. Strikingly, Notch and Hh signalling display similar activity profiles. We have characterised these profiles into three general phases: "signalling activation", "signalling consolidation", and "signalling termination". In the early "signalling activation" phase, Notch signalling is active throughout the spinal cord, while active Hh response occurs in the ventral ~75% of the spinal cord. During "signal consolidation", the responsive domain of both pathways sharpens into a small medial domain dorsal to the spinal canal; in "signalling termination" the response to both pathways returns to a basal level. Our detailed time course reveals three key features of Notch and Hh signalling dynamics. First, early active Hh signalling shows a graded response with the highest level in the ventral domain, as predicted by the classical morphogen model (Briscoe and Small, 2015). By contrast, active Notch response does not appear to be graded along the ventral-dorsal axis. Second, despite showing the highest level of Hh response early, the ventral spinal cord terminates Hh response earlier than the more dorsal domains. Therefore, the ventral domain shows higher level Hh response for a shorter duration, whereas the dorsal domain shows lower level response for a longer duration. 
Our observation is reminiscent of the floor plate induction in chick and mouse embryos, where the specification of the floor plate requires an early high level of Hh signalling and subsequent termination of Hh response . Our result suggests that Hh signalling dynamics is also evolutionarily conserved. Third, lateral regions of the spinal cord lose both Notch and Hh response before the medial domains. As the active signalling response consolidates into the medial domain, so does the expression of sox2, a neural progenitor marker, suggesting that neural differentiation is accompanied by the attenuation of Notch and Hh response. Our observation is consistent with the notion that neural progenitors occupy the medial domain of the spinal cord and that as they differentiate they move laterally. Notch signalling regulates Hh response The loss of Hh response is a necessary step for fate specification, as shown in the chick studying floor plate induction and post-mitotic motor neuron precursors (Ericson et al., 1996). We have previously shown that the time at which cells attenuate their Hh response is crucial for fate specification in the zebrafish ventral spinal cord (Huang et al., 2012). How do neural progenitor cells in the spinal cord maintain their Hh responsiveness until the correct time in order to achieve their specific fates? Three (Kong et al., 2015;Stasiulewicz et al., 2015), thereby modulating cellular responsiveness to Hh signals. By contrast, we show that in the absence of primary cilia in iguana mutants, the low level Hh response remaining due to constitutive Gli1 activation can be completely blocked by Notch inhibition. This result suggests that Notch signalling can regulate Hh response in a primary cilium independent manner. In zebrafish, Gli1 functions as the main activator downstream of Hh signalling, although Gli2a, Gli2b and Gli3 also contribute to the activator function (Karlstrom et al., 2003;Ke et al., 2008;Tyurina et al., 2005;Vanderlaan et al., 2005;Wang et al., 2013). Indeed, inhibition of Notch signalling abolishes both Hh-dependent and Hh-independent gli1 expression in the spinal cord. Similarly, gli2a, gli2b and gli3 expression in the spinal cord is largely eliminated in Notch off spinal cords. These results demonstrate that Notch signalling controls the transcription or mRNA stability of all members of the Gli family in the spinal cord. It is possible that gli genes are direct targets of Notch signalling, as shown in mouse cortical neural stem cells where N1ICD/RBPJ-κ binding regulates Gli2 and Gli3 expression (Li et al., 2012). Importantly, while ectopic expression of Gli1 from the hsp:EGFP-Gli1 transgene can re-establish Hh response as indicated by ptc2 expression in the Notch off spinal cord, it is unable to fully restore the olig2 motor neuron precursor domain. This finding argues that Notch signalling likely plays additional roles in regulating Gli1 protein level or activity. A similar mechanism has been suggested in Müller glia of the mouse retina, where Notch signalling controls Gli2 protein levels and therefore Hh response (Ringuette et al., 2016). Interestingly, the study by In summary, we demonstrate that Notch and Hh signalling share similar spatiotemporal kinetics during spinal cord patterning and that this dynamic interaction is likely required to maintain the neural progenitor zone. We also provide evidence for a primary ciliumindependent and Gli-dependent mechanism in which Notch signalling permits these neural progenitors to respond to Hh signalling. 
Generation of the her12:Kaede BAC transgenic line To generate the her12:Kaede transgenic line, BAC clone zK5I17 from the DanioKey library that contains the her12 locus and surrounding regulatory elements was selected for bacteria-mediated homologous recombination following standard protocol (Bussmann and Schulte-Merker, 2011). zK5I17 contains 135 kb upstream and 63 kb downstream regulatory sequences of her12. First, an iTol2-amp cassette containing two Tol2 arms in opposite directions flanking an ampicillin resistance gene was recombined into the vector backbone of zK5I17. Next, a cassette containing the Kaede open reading frame and the kanamycin resistance gene was recombined into the zK5I17-iTol2-amp to replace the first exon of the her12 gene. Successful recombinants were confirmed by PCR analysis. The resulting her12:Kaede BAC was co-injected with tol2 transposase mRNA into wild-type embryos and stable transgenic lines were established through screening for Kaede expression. Photoconversion for PHRESH analysis All fluorescent imaging was carried out using an Olympus FV1200 confocal microscope and Fluoview software. Photoconversion was carried out using the 405nm laser with a 20x objective. ptc2:Kaede and her12:Kaede embryos at the appropriate stages were anaesthetised with 0.4% tricaine and then embedded in 0.8% low melting agarose. To achieve complete conversion over a large area, a rectangular area of 1000 by 300 pixels was converted by scanning the area twice with 50% 405nm laser at 200 μs per pixel. Following confirmation of Kaede red expression, embryos were recovered in E3 water with phenylthiourea for 6 hours post-conversion before imaging. Appropriate imaging parameters were established using the unconverted region as a reference to avoid over or under exposure of the Kaede green signal. Cross-sections were generated using Fiji-ImageJ software (Schindelin et al., 2012) to create a 3D reconstruction of the image, then "resliced" to yield transverse views of the spinal cord. Drug treatment Embryos at the appropriate stage were treated with cyclopamine (Toronto Chemical, 100 μM), LY-411575 (Sigma, 50 μM), or DMSO control in E3 fish water. For PHRESH analysis, embryos were treated from 4 hours prior to the point of conversion until 6 hours post-conversion. To match this, all other drug treatments took place between 20 hpf and 30 hpf. Heat shock experiments To induce expression from the heat shock promoter, embryos at the relevant stage were placed in a 2 ml micro-centrifuge tube in a heat block set to 37°C for 30 minutes. After heat shock, embryos were transferred back into E3 water in a petri dish and recovered at 28.5°C. For drug treatment after heat shock, embryos were transferred directly from the heat shock to E3 water containing the appropriate drug. Cryosectioning To obtain transverse sections after whole-mount in situ hybridisation, embryos were cryoprotected with 30% sucrose at 4°C before being embedded in OCT compound (VWR) and frozen in the -80°C freezer. Sections were cut between 10-16μm using a Leica cryostat. Sections were taken from the region of the trunk dorsal to the yolk extension. her12:Kaede embryos were photoconverted at 48 hpf and imaged 6 hours after. Individual transverse sections generated by 3D reconstruction were prepared into a video. The first frame is the most anterior slice and each subsequent frame moves further posterior through the embryo. The merge (left) and Kaede green (right) channels are shown. 
The spinal cord is denoted by solid lines and the active signalling domain (Kaede green) above the spinal canal is indicated by an arrowhead. Note that Figure 2C shows one single slice in the middle of the converted region. Scale bar: 20 μm.
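The reslicing step described above under "Photoconversion for PHRESH analysis" (3D reconstruction of the confocal stack in Fiji-ImageJ followed by reslicing into transverse views) can be sketched in a few lines; the axis layout and file names below are assumptions for illustration, not part of the original protocol.

```python
# Minimal sketch of the reslice operation used to obtain transverse views.
# Assumes the confocal stack is saved as a TIFF with axes ordered (z, y, x),
# where x runs along the anterior-posterior axis; file names are hypothetical.
import numpy as np
import tifffile

stack = tifffile.imread("ptc2_kaede_green_stack.tif")   # shape: (z, y, x)

# Reslicing changes the viewing axis: each transverse section corresponds to one
# x position and is displayed as a (z, y) image, mirroring Fiji's Reslice command.
transverse = np.transpose(stack, (2, 0, 1))              # shape: (x, z, y)

# Each frame of the output moves one step further posterior along the embryo.
tifffile.imwrite("ptc2_kaede_green_transverse.tif", transverse)
```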
Evidence That Prestin Has at Least Two Voltage-dependent Steps*

Prestin is a voltage-dependent membrane-spanning motor protein that confers electromotility on mammalian cochlear outer hair cells, which is essential for normal hearing of mammals. Voltage-induced charge movement in the prestin molecule is converted into mechanical work; however, little is known about the molecular mechanism of this process. For understanding the electromechanical coupling mechanism of prestin, we simultaneously measured voltage-dependent charge movement and electromotility under conditions in which the magnitudes of both charge movement and electromotility are gradually manipulated by the prestin inhibitor, salicylate. We show that the observed relationships of the charge movement and the physical displacement (q-d relations) are well represented by a three-state Boltzmann model but not by a two-state model or its previously proposed variant. Here, we suggest a molecular mechanism of prestin with at least two voltage-dependent conformational transition steps having distinct electromechanical coupling efficiencies.

Electromotility (1) of mammalian cochlear outer hair cells (OHCs) 2 is a rapid voltage-induced force-generating cell length change that is indispensable for the frequency selectivity and sensitivity of mammalian hearing (2). The membrane-based motor protein, prestin, which is a member of the solute carrier 26 anion transporter family (3), is known to be responsible for generating electromotility (4). Accompanying OHC electromotility, charge movement in the lateral membrane of the cell is observed. This charge movement is manifested in the bell-shaped voltage-dependent cell membrane capacitance, which is often referred to as nonlinear capacitance (NLC) (5,6). The question of how prestin functions as a membrane-based molecular motor has received a great deal of attention; however, even some fundamental issues are still obscure. To understand the mode of operation of the molecule, the relationship between charge movement, which initiates conformational change, and motor function needs to be quantified. However, only minimal attempts have been made to this effect (6,7). Under normal operating circumstances NLC data are usually well explained by the simple two-state Boltzmann model with an apparent valency of charge of less than unity. Interesting theoretical work suggested that three- or higher-state models might provide a superior description of the process (8,9).
In some work three-state fits were dictated by the data (10). To explain some properties of NLC at extreme membrane voltages, a modified two-state model has also been proposed (11). However, those models have not been subjected to rigorous examination as to their generality under various experimental conditions. It is one purpose of the present work to provide such an examination. We find that a Boltzmann model with at least two voltage-dependent steps is required for explaining prestin function. Since the discovery of prestin, assessments of effects of mutations and drugs on its function have become readily feasible using heterologous expression systems in which NLC measurement is easily performed, while motility measurements are difficult or impossible. NLC measurement has generally been accepted as a proper and sufficient substitute for directly measuring electromotility since its discovery and initial description (5,6). This substitution has been made for the past 20 years even though sufficient quantitative proof for its validity has not been provided. Only very recently was it quantitatively and statistically demonstrated that the prestin-dependent charge movement and the resulting electromotility are indeed fully coupled under normal operating conditions (12). Because the observed charge movement should intimately relate to conformational change of the prestin molecule for generating motility, further detailed quantitative knowledge of the relation between charge movement and physical displacement is essential for better understanding the molecular mechanism of prestin. However, this fundamental knowledge is still largely missing. In this study we used isolated murine OHCs for examining the electromechanical coupling of prestin. Isolated OHCs are the appropriate system for studying the relation between the charge movement and the motor activity of prestin for various reasons. First, because of the OHC quite regular cylindrical diameter, with restricted expression of prestin motor only in the lateral membrane, the overall motor activity of prestin molecules is effectively translated into longitudinal length-change of the cell. This permits the quantification of the relation between charge movement and mechanical displacement. Furthermore, because OHCs are the natural hosts of prestin, being the only mam-malian cell that expresses the functional protein, any conclusion derived from the current study has potential physiological significance. EXPERIMENTAL PROCEDURES Adult mice were euthanized with euthasol, and OHCs were isolated in the same way as described before (13). Whole-cell recordings were performed at room temperature with holding potentials at 0 mV using the Axopatch 200B amplifier (Molecular Devices, Sunnyvale, CA). Recording pipettes were pulled from borosilicate glass to achieve initial bath resistances averaging 3-4 megaohms and were filled with an intracellular solution containing 140 mM CsCl, 2 mM MgCl 2 , 10 mM EGTA, and 10 mM HEPES (pH 7.3). Cells were bathed during whole-cell recordings in an extracellular solution containing 120 mM NaCl, 20 mM triethylammonium chloride, 2 mM CoCl 2 , 2 mM MgCl 2 , 10 mM HEPES (pH 7.3). Osmolarity was adjusted to 310 mosmol liter Ϫ1 with glucose. Sinusoidal voltage stimulus (1-Hz, 120-mV amplitude or 2 Hz, 100-mV amplitude) superimposed with two higher frequency stimuli (390.6 and 781.2 Hz, 5-or 10-mV amplitude) was used for measuring NLC and for simultaneously measuring NLC and OHC motility. 
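As an illustration of the composite stimulus just described, the sketch below builds the 2-Hz, 100-mV command with the two superimposed high-frequency probe tones; the sampling rate is an assumption, and the capacitance itself would then be obtained from the admittance at the two probe frequencies (jClamp's FFT-based analysis), which is not reproduced here.

```python
# Sketch of the composite voltage stimulus (frequency and amplitude values from
# the text; the sampling rate is assumed for illustration only).
import numpy as np

fs = 10_000                          # Hz, assumed sampling rate
t = np.arange(0, 0.5, 1 / fs)        # one cycle of the 2-Hz component
v_command = (100 * np.sin(2 * np.pi * 2.0 * t)       # low-frequency sweep, mV
             + 10 * np.sin(2 * np.pi * 390.6 * t)    # probe tone 1
             + 10 * np.sin(2 * np.pi * 781.2 * t))   # probe tone 2
# v_command is applied about the 0-mV holding potential.
```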
The intracellular pressure was kept at 0 mm Hg. Current data were collected by jClamp (SciSoft Co., New Haven, CT) for the fast Fourier transform-based admittance analysis for determining NLC (14). The NLC data were analyzed by Equations 3, 4, and 5 shown under "Results." OHC electromotility was captured by a WV-CD22 digital camera (Panasonic), and the sequential images were analyzed using ImageJ. For measuring the OHC displacement, we analyzed the image density in an observation pixel window (5 × 5 pixels, 8-bit) positioned at the apical region of OHCs (supplemental Fig. S1). The motility data were analyzed by Equations 6 and 7 shown under "Results." PRISM (GraphPad software) and Igor (WaveMetrics, Inc.) were used for the curve-fitting analysis of both motility and NLC. To test the identity of the describing parameters (α and Vpk) derived from the NLC and motility measurements, Deming linear regression analysis was employed (15). The average S.E. of curve fitting derived from the NLC measurement and the motility measurement was used to determine the ratio of uncertainties associated with the two methods. This error ratio is required for calculating the sum of squared distances to be minimized in the Deming regression analysis (15). PRISM (GraphPad software) was used for the Deming regression analysis. The null hypothesis of identity was tested by independent t tests, t = |a − 0|/S.E.a and t = |b − 1|/S.E.b, where a is the y intercept and b is the slope of the regression line (y = a + bx) estimated by the Deming linear regression analysis. S.E.a and S.E.b are the S.E. of a and b, respectively. The p values were calculated from Student's t distribution (two-tailed) by using the t values defined above. Values smaller than 0.05 indicated rejection of the null hypothesis of identity. Akaike's information criterion was used for comparing the Boltzmann models (16). Two-state and Three-state Boltzmann Models for Describing the Prestin Electromechanical Coupling Process-The voltage-dependent charge displacement of prestin is adequately explained by the simple two-state Boltzmann model in most cases (5,6); however, deviation of NLC data from this model sometimes becomes evident, especially at very large positive or negative membrane potentials (±200 mV) (11). The deviation may indicate that there is more than one voltage-dependent process associated with prestin. Thus, understanding the deviation would be a key for unraveling the relevant molecular process. In this study we describe the voltage-dependent charge movement and the resulting electromotility of prestin by using the following two-state and three-state Boltzmann models. The models are based on the following scheme with distinct conformational states (Ci): C1 ⇌ C2 ⇌ C3. zi is the apparent valence of charge movement at each step, and K12 and K23 are equilibrium constants expressed as exp[αi(Vm − Vpki)], where αi = zie/kBT is the slope factor of the voltage dependence of charge transfer, e is the electron charge, kB is the Boltzmann constant, and T is absolute temperature. Vm is the membrane potential. Vpki is the voltage at which the maximum charge movement or motility response per voltage for each step is attained. If the C3 state does not exist, then the scheme becomes a two-state Boltzmann model. Charge movement (q) is described as follows for the two-state (Equation 1) and three-state (Equation 2) models. N is the total number of functional prestin molecules.
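Equations 1-4 are referred to above but were not carried over into this text. Under the scheme C1 ⇌ C2 ⇌ C3 with K12 = exp[α1(Vm − Vpk1)] and K23 = exp[α2(Vm − Vpk2)], the standard Boltzmann expressions consistent with those definitions take the following form; this is a reconstruction from the stated definitions, not a verbatim copy of the original equations.

```latex
% Charge moved, two-state model (cf. Equation 1):
q(V_m) = \frac{N z_1 e}{1 + \exp[-\alpha_1 (V_m - V_{pk1})]}

% Charge moved, three-state model (cf. Equation 2):
q(V_m) = \frac{N e \left[ z_1 K_{12} (1 + K_{23}) + z_2 K_{12} K_{23} \right]}
              {1 + K_{12} + K_{12} K_{23}}

% The nonlinear capacitance (cf. Equations 3 and 4) follows by differentiation,
% with the linear term added:  C_m(V_m) = dq/dV_m + C_{lin}.
```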
The Electromechanical Coupling of Prestin The following modified two-state Boltzmann model was proposed previously to correct the deviation of NLC data observed at very large positive or negative membrane potentials (11), which is referred to as two-state-C sa model in this study. where C 0 represents linear capacitance of a cell, and ⌬C represents the combination of prestin-dependent changes in the membrane area and in either the dielectric constant or thickness of the cell membrane. The voltage-induced prestin-dependent charge movement is presumed to trigger physical change in the molecules with resultant length change of OHCs. If prestin had multiple voltage-dependent steps, the electromechanical coupling efficiencies at each step could be different from one another. We defined electromechanical coupling efficiency as m i ϭ d i /z i e, where d i is a unitary displacement along the axial direction of an OHC induced at each voltage-dependent step. Thus, overall OHC displacement is described as follows for the two-state (Equation 6) and for the three-state (Equation 7) Boltzmann models. Using the Prestin Inhibitor, Salicylate, to Unravel the Steps in Prestin Operation-Salicylate is an inhibitor of OHC electromotility (7,17,18), which is thought to compete with anions such as chloride for the anion-binding site on prestin (19). Fig. 1 shows typical NLC recorded in the presence of salicylate. The NLC response is smaller than normal, and the voltage dependence is altered. Deviation of the NLC data from the simple two-state Boltzmann model (Equation 3) is obvious by examining the residues of the curve fitting process (Fig. 1A). Consequently, it is conceivable that there are more than two conformational states in prestin, which may not be distinguishable in a normal recording condition, e.g. in the absence of salicylate, or if extreme membrane potentials are not used. Therefore, salicylate could be a useful tool for dissecting the molecular mechanism of prestin. We tested the three-state Boltzmann model (Equation 4, Fig. 1B) together with the two-state-C sa model that was used previously for analyzing salicylate-dependent NLC data (Equation 5, Fig. 1C, Ref. 11). Compared with the simple two-state Boltzmann model, both the three-state and the two-state-C sa models drastically improved the fittings, which is obvious in examining the residual plots. For quantitatively evaluating the goodness-of-fit, we ran the Akaike's information criterion method, which is based on information theory for comparing one model to any other model (16). Use of the F-test is inappropriate because the two-state models are not nested in the three-state model. For the two-state models to be nested in the three-state model, one has to be able to choose the coefficient, ␣ 2 , such that K 23 is approximately 0 for all values of V m . A calculated Akaike's weight, which provides the likelihood of one model to be superior to an alternative model, were Ͼ10 13 times higher for the three-state and the two-state-C sa fittings than that for the simple two-state fitting, strongly suggesting that the improvement of the fits was not due to the increased number of free fitting parameters. Akaike's weights were 0.19 Ϯ 0.08 and 0.81 Ϯ 0.08 (average Ϯ S.D., n ϭ 8) for the 3-state and 2-state C sa models, respectively, suggesting that the 2-state C sa model fits the data about 4 times better than the 3-state model. 
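The model comparison described above can be made concrete with a short sketch: for least-squares fits, AIC can be computed from the residual sum of squares, and Akaike weights follow from the AIC differences. The residual values and data size below are placeholders, not numbers from this study; the parameter counts of five (two-state-Csa) and six (three-state) follow the text, and four for the simple two-state model is inferred.

```python
# Minimal sketch: compare Boltzmann fits by Akaike's information criterion.
# rss = residual sum of squares of a least-squares fit, n = number of data
# points, k = number of free fitting parameters. All numeric inputs are placeholders.
import numpy as np

def aic_least_squares(rss, n, k):
    """AIC for a least-squares fit (constant terms common to all models omitted)."""
    return n * np.log(rss / n) + 2 * k

def akaike_weights(aics):
    """Relative likelihood of each model given the set of AIC values."""
    delta = np.asarray(aics, float) - np.min(aics)
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()

# Hypothetical comparison: two-state (k=4), three-state (k=6), two-state-Csa (k=5).
n = 200
aics = [aic_least_squares(rss, n, k) for rss, k in [(8.1, 4), (1.9, 6), (1.9, 5)]]
print(akaike_weights(aics))   # at equal RSS, the model with fewer parameters is favoured
```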
This seems reasonable because the twostate-C sa model, with fewer free parameters, fits the data as well as the three-state model with more fitting parameters (five versus six). Because the three-state and the two-state-C sa models fit the salicylate-dependent NLC data similarly, both models were used in the present study for comparison, whereas below we also evaluate the likelihood of their correctness. For examining the salicylate-dependent NLC in detail, we performed repeated NLC measurements on individual OHCs during the application of 1.5 mM salicylate in the extracellular solution. The time-dependent NLC data were then analyzed by the three-state and the two-state-C sa models (n ϭ 15). Fig. 2 shows a typical example of such recordings analyzed by either the three-state (Fig. 2, A-E) or the two-state-C sa models (Fig. 2, F-J). At the beginning of the time course, before applying salicylate, the total number of functional prestin mole- , C). Solid lines show the fitting curves. Improvements of the fits are visually obvious in the residual plots by the three-state fitting (B) and the two-state-C sa fitting (C) compared with that by the simple two-state fitting (A). Indeed, Ͼ10 13 times higher Akaike's weights were obtained for the three-state and the two-state-C sa fittings over the simple two-state fitting. Akaike's weights for the three-state and two-state-C sa were 0.19 Ϯ 0.08 and 0.81 Ϯ 0.08% (n ϭ 8), respectively. cules was estimated to be ϳ5 million by both models (insets of Fig. 2, B and G). As expected, application of salicylate reduced the prestin-dependent charge movement of isolated OHCs, and the inhibition was reversible (data not shown) as reported previously (7,17,18), suggesting that the reduction of the prestin-dependent charge movement was not a consequence of some damage caused by the repetitive voltage stimulation. We also performed separate experiments for confirming that neither NLC nor electromotility is affected by repetitive voltage stimulations (supplemental Fig. S2). Salicylate is known to block prestin function by competing for the anion binding site ( Fig. 3A) (19). If we adopted the binding affinities of chloride and salicylate to prestin from the previous study (6.3 mM and 21 M, respectively) (19), almost all prestin molecules in OHCs, ϳ96% (computation, 140/(6.3 ϩ 140) ϭ 0.957), are estimated to be fully functional with 140 mM chloride at the beginning of each time course before the application of 1.5 mM salicylate. Application of 1.5 mM salicylate is expected to inhibit overall prestin activity ϳ77% (computation, 1-140/ (6.3(1 ϩ 1.5/0.021) ϩ 140) ϭ 0.765). Thus, roughly 23 ϳ 25% , F). A and F are the same NLC data set but with different data analyses. The six NLC parameters, N, C lin , V pk1 /V pk2 , and ␣ 1 /␣ 2 , for the threestate model (BϳE) and the five NLC parameters, Q max , ⌬C/C 0 , V pk , and ␣, for the two-state-C sa model (GϳJ) were determined for each NLC curve and were plotted against the recording time. The bars indicate the S.E. of fitting. The insets of B and G show the numbers of functional prestin molecules before application of salicylate, which are plotted against C lin or C 0 . Variation of the NLC parameters obtained from the three-state model before application of salicylate was large in some recordings. For this reason, 3 of 15 data sets were not included in the analysis for the three-state model (therefore, n ϭ 12 for the threestate model analyses). 
(if prestin activity at 140 mM (ϳ96% of chloride-bound prestin) were defined as 100%) prestin activity (N or Q max ) would be expected to remain after the application of 1.5 mM salicylate. The fractions of active prestin remained after the application of 1.5 mM salicylate were very similar when computed with the three-state and the two-state-C sa models and were reasonably close to the expected value of 25% (broken line in Fig. 3B). Increment in the basal linear capacitance was evident in all recordings, and both models found very similar values (C lin for the three-state model and C 0 for the two-state-C sa model; Figs. 2, C and H, and 3C and supplemental Fig. S3). This increment is prestin-dependent because the linear capacitance change was not observed in prestin-knock-out OHCs (supplemental Fig. S2). Similarly, no change in the linear capacitance of Deiters' cells was found (data not shown), which confirms previous studies (7,11). The increment in basal linear capacitance may indicate fixation of prestin in a thinner/expanded conformational state (higher capacitance) by salicylate. Curiously, a non-monotonic change in ⌬C with time (increasing salicylate concentration) was commonly derived when data fits were made with the two-state-C sa model (Fig. 2H). If ⌬C represented the combination of prestin-dependent changes in membrane surface area and in either the membrane dielectric constant or thickness as suggested in the previous study (11), gradual monotonic reduction in ⌬C should be expected upon monotonic inhibition by salicylate (Fig. 2G). To exclude the possibility that the non-monotonic change in ⌬C was falsely observed in our continuous NLC recordings during the gradual inhibition by salicylate, we measured NLC in the presence of various concentrations of salicylate under steady-state con-ditions with the same concentration of salicylate inside and outside OHCs (supplemental Fig. S3). The salicylate-dependent non-monotonic change of ⌬C was also evident in the steady-state recording condition, when the data were fit with the two-state-C sa model, suggesting that peculiar pattern is the consequence of the curve-fitting model. The three-state model found separation of V pk values (hyperpolarizing shift for V pk1 and depolarizing shift for V pk2 ), whereas the two-state-C sa model found a depolarizing V pk shift (Figs. 2, D and I, and 3D). The difference between V pk1 and V pk2 (defined as ⌬V pk ϭ (V pk2 Ϫ V pk1 ) was small before applying salicylate but became significantly larger with salicylate application (Fig. 3E). This was observed in all recordings. The small difference in V pk1 and V pk2 values before applying salicylate would explain why the simple two-state model fits NLC data reasonably well in the absence of salicylate. The three-state fitting typically produced larger S.E., suggesting that multiple ␣/V pk parameters are not essential for explaining the NLC data in the absence of salicylate (supplemental Fig. S4). The depolarizing V pk shift observed for the two-state C sa fitting seems to be consistent with the previous observation with the simple two-state model (17). ⌻he ␣ 1 and ␣ 2 values estimated by the three-state model were similar and were virtually constant throughout the time course, but there was a tendency of a slight transient decrement and recovery (Fig. 2E). On average, the ␣ 1 and ␣ 2 values were slightly smaller than the original values although the decrement is not significant (Fig. 3F). 
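The expected remaining fraction quoted above (roughly 23-25%) follows from the standard competitive-binding expression, using the affinities adopted from the previous study (19) (6.3 mM for chloride, 21 μM for salicylate) and 140 mM intracellular chloride:

```latex
% Fraction of chloride-bound (functional) prestin in the presence of salicylate:
f = \frac{[\mathrm{Cl}^-]}{K_{\mathrm{Cl}} \left( 1 + [\mathrm{Sal}]/K_{\mathrm{Sal}} \right) + [\mathrm{Cl}^-]}

% Without salicylate:
f_0 = \frac{140}{6.3 + 140} \approx 0.957 \quad (\approx 96\% \ \text{functional})

% With 1.5 mM salicylate:
f = \frac{140}{6.3\,(1 + 1.5/0.021) + 140} \approx 0.235
```

Normalizing f to f0 gives 0.235/0.957 ≈ 0.25, i.e. roughly 23-25% of the initial prestin activity is expected to remain, as stated above.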
A similar tendency was much more obvious for the ␣ values estimated by the twostate C sa model (Figs. 2J and 3F). These transient changes of ␣ values can be understood by the salicylate-induced V pk shifts. An observed V pk value is the average of V pk values of many prestin molecules. During the inhibition by salicylate, V pk values of some prestin molecules shift, whereas those of other prestin molecules do not. This would cause a transiently wider distribution of V pk ; thus, ␣ would become smaller over the same time course. The Electromechanical Coupling of Prestin-Quite recently, Wang et al. (12) measured NLC and electromotility simultaneously in OHCs and concluded that NLC and electromotility are fully coupled. This was based on their observation that the average ␣ and V pk values determined by the simple two-state Boltzmann model could not be discriminated statistically as obtained from the two different measurements (12). This is the most complete demonstration of full coupling between NLC and motility thus far available in the literature. However, some issues are unresolved. It is appreciated that NLC parameters differ from cell to cell, reflecting their different physiological conditions. Variability is especially large in V pk among cells. Therefore, a more proper and rigorous way of testing the NLC motility coupling should be to compare individual ␣ and V pk values derived from the two different measurements obtained from the same cell instead of averaging data from multiple cells, as was done by Wang et al. (12). We measured NLC and electromotility simultaneously to examine the relation between charge movement and the conformational change of prestin. In the present study we used a digital video camera (30 frames per second) for capturing low The salicylate-dependent NLC recording exemplified in Fig. 2 was performed on 15 OHCs. The NLC parameters determined by the three-state and the two-state-C sa models are summarized. All bars indicate the S.D. A, shown is a reaction scheme of chloride/salicylate binding to prestin according to the competitive inhibition model. P stands for prestin. Sal, salicylate. The fraction of functional prestin that remains after application of salicylate is calculated by the displayed equation. B, inhibition of prestin-dependent charge movement by 1.5 mM salicylate is shown. The broken line indicates the prestin activity that is expected to remain (25%, see "Results"). C, shown is a summary of the basal linear capacitance increment induced by salicylate. pF, picofarads. D and E, a summary of salicylate-induced V pk shift is shown. F, a summary of salicylate-induced change in ␣ is shown. frequency OHC electromotility. The images were analyzed by the subpixel method (20 -22) with 45 nm accuracy (supplemental Fig. S1). Inasmuch as no salicylate was used in these experiments, the two-state Boltzmann model was used to separately analyze both the NLC data (Equation 3, Fig. 4A) and the motility data (Equation 6, Fig. 4B). The fitting parameters (␣ and V pk ) that describe the voltage-dependent charge movement of prestin and the resultant motility, derived from the two different fits, were plotted against each other for comparison (Fig. 4, C and D). For both ␣ and V pk , most data points were found on or in the close vicinity of the line of identity defined as y ϭ x (slope ϭ 1 and intercept ϭ 0, the diagonal broken lines). To quantitatively evaluate the identity, the Deming linear regression analysis (15) was performed (solid lines). 
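A minimal sketch of this identity test is given below: a closed-form Deming estimator assuming a known ratio of error variances (taken, as described under "Experimental Procedures," from the average curve-fit S.E. of the two measurements), jackknife standard errors, and two-tailed t tests of intercept = 0 and slope = 1. The degrees of freedom (n − 2) and all numerical inputs are assumptions for illustration.

```python
# Sketch: Deming regression of motility-derived vs NLC-derived parameters,
# followed by t tests of the identity line (intercept 0, slope 1).
import numpy as np
from scipy import stats

def deming_fit(x, y, lam):
    """Closed-form Deming estimator; lam = (y-error variance)/(x-error variance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    b = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return y.mean() - b * x.mean(), b          # intercept a, slope b

def identity_test(x, y, lam):
    """Jackknife S.E. of (a, b); two-tailed t tests of a = 0 and b = 1 (df assumed n - 2)."""
    n = len(x)
    a, b = deming_fit(x, y, lam)
    jack = np.array([deming_fit(np.delete(x, i), np.delete(y, i), lam) for i in range(n)])
    se_a, se_b = np.sqrt((n - 1) * np.var(jack, axis=0))
    p_a = 2 * stats.t.sf(abs(a - 0) / se_a, df=n - 2)
    p_b = 2 * stats.t.sf(abs(b - 1) / se_b, df=n - 2)
    return (a, b), (p_a, p_b)                   # identity not rejected if both p > 0.05

# Hypothetical example: Vpk (mV) from NLC and motility fits of the same cells.
rng = np.random.default_rng(0)
vpk_nlc = rng.normal(-75.0, 15.0, 20)
vpk_motility = vpk_nlc + rng.normal(0.0, 4.0, 20)
print(identity_test(vpk_nlc, vpk_motility, lam=1.0))
```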
Because ␣ and V pk derived from the two methods (the NLC measurement and the motility measurement) are inde- The error bars represent S.E. of the curve-fitting analyses. Alternatively, the observed maximum OHC displacement (d) is plotted against the observed prestin-dependent charge movement (q) that is determined by the area between NLC curves and the C lin (E, inset). The Deming linear regression analysis was performed with the y intercept fixed at zero (y ϭ ax). r 2 values for Q max -D max and q-d were 0.036 and 0.41, respectively. The NLC motility data (n ϭ 36) were also analyzed by the three-state model, and the ␣ 1 and ␣ 2 values (F) and the V pk1 and V pk2 values (G) were compared against each other. Because the three-state fitting on NLC/motility data recorded in the absence of salicylate usually finds larger S.E., only data sets whose S.E. found in V pk1 /V pk2 were less than 100 mV are shown, with bold symbols error bars, whereas others are shown with smaller pale symbols without error bars. The p values determined by Deming linear regression analyses (solid lines) followed by t tests for the y intercept (ϭ0) and the slope (ϭ1) were less than 0.05 for ␣ 1 , ␣ 2 , V pk1 , and V pk2 comparisons, suggesting that ␣ 1 , ␣ 2 , V pk1 , and V pk2 values are different between NLC and motility. H, shown are computer simulations of m 1 /m 2 -dependent OHC displacements using Equation 7. The parameters used were 0.0252 mV Ϫ1 , 0.0239 mV Ϫ1 , Ϫ85.7 mV, and Ϫ52.6 mV for ␣ 1 , ␣ 2 , V pk1 , and V pk2 , all of which were derived from the NLC measurements (F and G). The generated displacement data were analyzed by Equation 7 in which m 1 /m 2 was fix at unity, and the resultant fitting parameters were plotted against m 1 /m 2 for ␣ 1 and ␣ 2 (I) and V pk1 and V pk2 (J). The broken lines indicate the ␣ and V pk values used for generating the displacement data. K and L, NLC and OHC displacement were measured simultaneously as in A and B in the presence of 0.1ϳ1 mM salicylate. The NLC motility data (n ϭ 15) were analyzed by the three-state model, and the ␣ 1 and ␣ 2 values (K) and the V pk1 and V pk2 values (L) were compared against each other. The p values determined by Deming linear regression analyses (solid lines) followed by t tests for the y intercept (ϭ0) and the slope (ϭ1) were all greater than 0.05 for ␣ 1 , ␣ 2 , V pk1 , and V pk2 comparisons, suggesting that the NLC and motility are coupled in terms of ␣ 1 -␣ 2 and V pk1 -V pk2 . pendent and experimentally determined with uncertainties, the ordinary linear regression analysis should not be used for this purpose. Using the Deming's fitting parameters with the S.E. of the fittings, the null hypothesis of identity (y ϭ x) was tested by t tests as described under "Experimental Procedures." The p values calculated for the slopes and the y intercepts were 0.62 and 0.83 for the ␣-value comparison (Fig. 4C) and 0.40 and 0.32 for the V pk -value comparison (Fig. 4D), respectively, which were significantly larger than the criterion value of p ϭ 0.05. Thus, the null hypothesis of identity should not be rejected, suggesting that charge movement and motility of mammalian prestin are fully coupled in terms of ␣ and V pk under the two-state Boltzmann assumption. The two-state Boltzmann model-based curve-fitting also gives estimates of maximum charge movement (Q max ) and maximum cell displacement (D max ). We plotted D max against Q max for determining the relation between charge movement and OHC length change (Fig. 4E). 
The data were also analyzed by the Deming linear regression. Because voltage-dependent cell displacement in prestin-ko OHCs is negligible (23), we fixed the y intercept of the regression line at zero. One might expect greater D max for larger Q max ; however, a positive correlation was not obvious (r 2 ϭ 0.036). It is possible that Q max and D max were over/under-estimated in the analysis because the accuracies of estimating Q max and D max highly depend on the accuracies of ␣ and C lin . Therefore, instead of estimating Q max , we determined the charge movement that was actually observed with the Ϯ120-mV stimulus. This charge displacement was determined from the area between the NLC curve and C lin , and we defined this as q. Very similar q values were obtained by the two-state-C sa model (data not shown). We then compared q to OHC displacement, d, that was actually observed with the Ϯ120-mV stimulus (Fig. 4E, inset). Although the estimation of q still depends on C lin , as determined by the two-state model, a positive correlation between charge movement (q) and motility (d) became clearer (r 2 ϭ 0.41), the relation of which is likely to be linear. In other words, when actually measured charge movement (q) and motility (d) are compared, the relation between them is linear. When derived quantities (Q max and D max ) are related, the linearity (if exists) is obscured, probably by propagated errors in the estimation of other parameters. We also analyzed the NLC motility data using the threestate models, Equation 4 for NLC and Equation 7 for motility (Fig. 4, F and G). Because the three-state motility model with an additional free-fitting parameter, m 1 /m 2 , made fitting analysis very sensitive to initial values and usually found very large S.E. in the NLC parameters, the electromechanical coupling efficiencies at two steps were assumed to be the same (m 1 /m 2 ϭ 1) for the motility analyses. Deviations of the data points from the line of identity shown with the broken lines are obvious, especially for ␣ 1 and V pk2 . Indeed, Deming linear regression followed by t tests found p Ͻ 0.05 for the null hypothesis of identity (slope ϭ 1, y intercept ϭ 0) for all ␣/V pk parameters, suggesting either that NLC and motility are not fully coupled or that the electromechanical coupling efficiencies at two steps are not the same. To test the latter possibility, we computer-simulated the m 1 /m 2 -dependent motility response of prestin using Equation 7 with experimentally determined average ␣ 1 /␣ 2 and V pk1 /V pk2 values derived from the NLC measurements (Fig. 4H). We analyzed the generated data using Equation 7 with m 1 /m 2 fixed at unity and obtained ␣ and V pk values, which were then plotted against m 1 /m 2 (Fig. 4, I and J). The results show that ␣ 1 /␣ 2 and V pk1 /V pk2 values are exactly the same for NLC (shown with the horizontal broken lines) and motility at m 1 /m 2 ϭ 1 and that apparently larger values are computed for motility at m 1 /m 2 Ͻ 1 with greater increments for ␣ 1 and V pk2 compared with those for ␣ 2 and V pk1 , respectively. These deviations are caused by the discrepancy between the apparent ␣ 1 (ϭ␣ 1 m 1 /m 2 ) in the numerator of Equation 7 and the ␣ 1 in the equilibrium constant, K 12 , which is independent of m 1 /m 2 . The qualitative trends of the deviations from the line of identity shown in Fig. 4, F and G can be understood if, in fact, m 1 /m 2 Ͻ 1. 
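The simulation just described can be reproduced in outline. Based on the scheme and definitions above, a plausible form of the three-state displacement (Equation 7) is d(Vm) ∝ [m1 z1 K12(1 + K23) + m2 z2 K12 K23]/(1 + K12 + K12 K23); the exact parameterization of the original Equation 7 is not reproduced here, so the sketch below rests on that assumed form, using the NLC-derived averages quoted in the legend of Fig. 4 and kBT/e ≈ 25.7 mV.

```python
# Sketch: how unequal coupling efficiencies (m1/m2 < 1) distort the apparent
# alpha/Vpk values when the displacement is re-fit with m1/m2 fixed at unity.
import numpy as np
from scipy.optimize import curve_fit

A1, A2, VPK1, VPK2 = 0.0252, 0.0239, -85.7, -52.6   # NLC-derived averages (mV^-1, mV)
KT_E = 25.7                                          # kT/e in mV at room temperature

def displacement(vm, amp, a1, a2, vpk1, vpk2, ratio):
    """Assumed three-state displacement; amp in arbitrary units, ratio = m1/m2."""
    k12 = np.exp(a1 * (vm - vpk1))
    k23 = np.exp(a2 * (vm - vpk2))
    z1, z2 = a1 * KT_E, a2 * KT_E                    # apparent valences from alpha
    return amp * (ratio * z1 * k12 * (1 + k23) + z2 * k12 * k23) / (1 + k12 + k12 * k23)

vm = np.linspace(-150, 150, 301)
for ratio in (1.0, 0.75, 0.55):
    d = displacement(vm, 1.0, A1, A2, VPK1, VPK2, ratio)
    # Re-fit assuming equal coupling efficiencies, as in the analysis of Fig. 4, F and G.
    model = lambda v, amp, a1, a2, vpk1, vpk2: displacement(v, amp, a1, a2, vpk1, vpk2, 1.0)
    popt, _ = curve_fit(model, vm, d, p0=[1.0, A1, A2, VPK1, VPK2], maxfev=10000)
    print(f"m1/m2 = {ratio:4.2f}  apparent a1 = {popt[1]:.4f}, a2 = {popt[2]:.4f}, "
          f"Vpk1 = {popt[3]:6.1f}, Vpk2 = {popt[4]:6.1f}")
```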
The ratio m 1 /m 2 is roughly estimated to be in the range of 0ϳ0.6 by interpolation, using the average ␣/V pk values derived from the motility measurement in Fig. 4, F and G. To determine m 1 /m 2 , we measured NLC and motility simultaneously under fixed concentration of salicylate (0.1ϳ1 mM) to widely separate V pk1 /V pk2 for a more accurate curvefitting analysis with the three-state model. The motility response was analyzed by Equation 7 with a free parameter for m 1 /m 2 , and the resultant ␣/V pk values were compared with those derived from the NLC measurement (Fig. 4, K and L). Deming linear regressions followed by t tests found p Ͼ 0.05 for the null hypothesis of identity (slope ϭ 1, y intercept ϭ 0) for all ␣/V pk parameters, suggesting that the charge movement and the motility response are fully coupled at each voltage-dependent step of prestin. The ratio of the coupling efficiencies, m 1 /m 2 , was determined to be 0.54 Ϯ 0.22 (average Ϯ S.D., n ϭ 15). Because the S.E. of fitting of the ␣/V pk values derived from the motility measurements were usually very large due to the introduction of the additional free-fitting parameter, m 1 /m 2 , and because some fitting analyses were very sensitive to initial values, we also analyzed the motility data using Equation 7 where ␣ 1 /␣ 2 and V pk1 /V pk2 values were fixed with the values derived from the corresponding NLC measurements (data not shown). A very similar m 1 /m 2 value, 0.55 Ϯ 0.21 (average Ϯ S.D., n ϭ 15), was obtained by this analysis. These results indicate that the electromechanical coupling efficiency at the K 12 step is ϳ55% that of the K 23 step. Determination of the Electromechanical Coupling Efficiencies at the Two Steps-Having determined the ratio of the electromechanical coupling efficiencies, we next focused on determining the coupling efficiencies themselves. It is likely that the degree of electromechanical coupling of prestin is modulated by the physiological status of OHCs (24,25), and that our q-d analysis shown in Fig. 4E was affected by such cell-to-cell physiological variability. Because it is not readily feasible to control the physiological status of isolated OHCs, we further pursued the q-d relation by using salicylate to gradually manipulate the magnitudes of both NLC and electromotility in individual OHCs. We measured salicylate-dependent NLC with the same procedure used in Fig. 2, while OHC displacement was simultaneously recorded (Fig. 5, A and B). NLC data were analyzed by the three-state and the two-state-C sa models, and as before, q was determined from the area between the NLC curve and C lin or C sa . OHC displacement, d, was measured directly. Fig. 5, C and D, shows the summary of the q-d recordings from eight different OHCs. The Deming linear regression was used to analyze the results. Strong linear q-d relationships were found in all OHCs tested (n ϭ 8) irrespective of the models used for determining q, suggesting that the individual prestin molecule (or its dimer or tetramer as possible functional units (26,27)) is independently functional. The most intriguing and surprising feature of the q-d relation is the positive d intercepts at q ϭ 0, which were statistically significant (p Ͻ 0.001) for all the q-d relations irrespective of the models used for estimating basal capacitances (C lin or C sa ) for determining q values. As shown below, the non-zero d intercepts provide an alternative proof for the two-step electromechanical coupling mechanism with distinct coupling efficiencies (m 1 Ͻ m 2 ). 
For determining the q-d relations shown in Fig. 5, we simultaneously measured NLC and motility during the application of salicylate with Ϯ100-mV stimulus. In other words, we observed only portions of the entire NLC and motility responses through the Ϯ100-mV window. Computer simulations shown in Fig. 5E, which slightly exaggerate the qualitative character- Fig. 2, NLC and OHC motility were simultaneously measured in the presence of salicylate. NLC and motility were simultaneously recorded with one cycle of 2-Hz, 100-mV amplitude sinusoidal voltage stimulus superimposed with two higher frequency stimuli (390.6 and 781.2 Hz, 10-mV amplitude) at time points indicated by the solid triangles, while the membrane capacitance at 0 mV was constantly monitored during the measurements (A, inset). pF, picofarads. Accompanied by the reduction of the prestin-dependent charge movement, OHC displacement decreased. The same colors were used to show matching simultaneous NLC motility recordings. For determining prestin-dependent charge movement (q), C lin and C sa were first determined by the three-state and two-state-Csa Boltzmann models. Subsequently the area between NLC curves and the C lin (or C sa ) was calculated. OHC displacement (d) was determined by the maximum displacement observed. The following values were used: ␣ 1 ϭ ␣ 2 ϭ 0.04 mV Ϫ1 (with and without salicylate), V pk1 ϭ Ϫ75 mV, V pk2 ϭ Ϫ70 mV (without salicylate), V pk1 ϭ Ϫ150 mV, V pk2 ϭ 0 mV (with salicylate), m 1 /m 2 ϭ 0.55. F, using the d-q data shown in C and Equations 8, 9, and 10, m 2 values were determined for each OHC (n ϭ 8). The same colors are used for matching. G, the two-step electromechanical coupling mechanism of prestin is shown. z 1 and z 2 are apparent valences calculated from ␣ 1 and ␣ 2 , determined by the three-state model (Fig. 4F). The positive signs are assigned for the rightward transitions. K 12 and K 23 are voltage-dependent equilibrium constants defined in the text, which increase with depolarizing voltage stimuli, and vice versa. istics of the salicylate-dependent NLC motility response for clarity, explain how positive d intercepts can be had. During the gradual inhibition of prestin by salicylate, the K 12 (V pk1 ) component moved outside the Ϯ100-mV observation window (hyperpolarizing shift), whereas the K 23 (V pk2 ) component moved into the observation window (depolarizing shift). Because of the negatively set initial V pk1 and V pk2 values, if m 2 were higher than m 1 , the apparent overall d/q would increase as V pk separation proceeds, which would eventually cause a positive d intercept. On the other hand, if m 1 were higher than m 2 , apparent overall d/q would decrease as V pk separation proceeds, which would eventually cause a negative d intercept. If m 1 and m 2 were the same, apparent overall d/q would not change irrespective of V pk separation, In this case, the d intercept should become zero. Therefore, the observed positive d intercepts strongly reaffirm our conclusion that prestin has at least two voltage-dependent conformational transition steps with distinct electromechanical coupling efficiencies (m 1 Ͻ m 2 ). Based on Equations 2 and 7, the observed prestin-dependent charge movement, q, and the OHC displacement, d, are described as The NLC parameters included in Equation 8 were determined by the three-state Boltzmann fitting of NLC data. Because our results suggest that ␣ 1 , ␣ 2 , V pk1 , and V pk2 are the same for NLC and motility (Fig. 
4, K and L) and because the ratio of m 1 to m 2 was already derived (m 1 /m 2 ϭ 0.55), the actual m values can be determined by Equation 9 using the fitting parameters obtained from the corresponding NLC data. The voltage values within the observation window (Ϯ100 mV) were corrected by the series resistance for actual calculations. Because prestin molecules are two-dimensionally distributed in the lateral membrane, the m values directly calculated by Equation 9 (m calc ) were corrected by m ϭ m calc ϫ N L ͱ N LD (Eq. 10) where L and D are the length and the diameter of an OHC (see supplemental text for the derivation). The m 2 values de-termined in this way using the q-d data (three-state, Fig. 5C) are summarized in Fig. 5F. Statistical t tests did not find significant differences from zero for the slopes (p Ͼ 0.05), suggesting that m 2 values are constant irrespective of the degree of inhibition by salicylate. This supports the validity of using salicylate for determining the m 1 /m 2 ratio in this study. The m 2 values were determined for each OHC by averaging each data set (2.4ϳ6.0 nm/atto coulomb). The variation of m 2 values found in individual OHCs is likely due to the modulation of the electromechanical coupling of prestin under different physiological status of the OHCs as mentioned above. We determined the m 2 value as 3.5 Ϯ 1.3 nm/atto coulomb (average Ϯ S.D., n ϭ 8). The m 1 value was then calculated as 1.9 nm/atto coulomb simply by multiplying with the coupling ratio of 0.55. Using these numbers together with the ␣ 1 /␣ 2 values determined in Fig. 4F, unitary displacements of prestin at the K 12 and the K 23 steps along the axial direction of an OHC are estimated to be 0.20 and 0.34 nm, respectively, as summarized in Fig. 5G. If the diameter of monomeric prestin in the membrane were estimated to be 5ϳ7 nm as expected from the diameter of a potentially tetrameric prestin complex observed in the lateral membrane of OHCs (10ϳ14 nm (28)), the total unitary displacement of a single prestin molecule (0.20 ϩ 0.34) nm is expected to change the membrane area 3ϳ4 nm 2 , which agrees with the estimates by others (2ϳ4 nm 2 (29,30)). DISCUSSION We found that two distinct voltage-dependent steps in prestin transition between its contracted and expanded states became discernable upon blocking with salicylate and that prestin function is well explained by the three-state Boltzmann model under such conditions. Two distinct peak voltages characterized the process, and the separation of V pk1 and V pk2 seen with salicylate could also be seen without salicylate under certain experimental conditions. Previously, three-state Boltzmann fits were found necessary in an investigation of dynamic stiffness of OHCs (10). The three-state model was also used in a recent study to better explain NLC data. 3 It was also demonstrated that with reduced chloride concentration, NLC and motility no longer fully covaried. 4 Aside from the present work, these results also imply that there are at least two voltage-dependent transition steps in the electromechanical coupling process of prestin. Theoretical work is in harmony with the recent experimental results (8,9). The fact that prestin function is described by the threestate Boltzmann model is not surprising if one considers that prestin is a member of solute carrier 26 anion transporter family and because non-mammalian orthologs of prestin are demonstrated to be electrogenic divalent/chloride anion exchangers (33). 
Kinetic models of prestin with two voltage-dependent steps have been proposed based on the transporter assumption (34). What has been unknown is how the voltagedependent charge movement is coupled to the motor activity of prestin. It is known that non-mammalian orthologs of prestin are not electromotile while retaining transporter activity (33,35,36). Thus, it is very likely that mammalian prestin has acquired a unique structural element(s) that couples the voltage-dependent charge movement to mechanical displacement. Because it appears that there are at least two voltagedependent steps, one fundamental question is which of these is responsible for generating mechanical displacement. Alternatively, if both steps generated mechanical displacement, what is the ratio of the contributions? Such information is important for understanding the molecular mechanism of prestin. Our study provides experimental evidence that prestin indeed has two voltage-dependent conformational transition steps, both of which generate mechanical displacement. The two steps have distinct electromechanical coupling efficiencies (Fig. 5G). Implication of the Three-state Model for Understanding the Molecular Mechanism of Prestin-Recently, there has been growing evidence that supports a prestin model that postulates an intrinsic voltage sensor(s) with chloride providing allosteric modulation (37)(38)(39). Systematic mutation screens have been conducted for identifying the voltage-sensing charged amino acid(s) in prestin using the two-state model for evaluating the effects of mutations (19,40,41). Although some mutations were found to significantly reduce the voltage-dependent charge movement of prestin, none was found to completely abolish NLC (41). If prestin had at least two voltage-dependent steps, as strongly suggested in the present study, a mutation(s) of the critical charged amino acid(s) could abolish charge displacement only at one of the two steps. Salicylate Action on Prestin-Salicylate is very useful for separating the two voltage-dependent electromechanical steps (characterized by V pk1 and V pk2 ), which are not readily distinguishable in a non-modified preparation. The salicylatedependent V pk change could be induced by intermolecular interaction of prestin molecules in the oligomer complex. Several factors are known to modulate V pk . Turgor pressure (17,29,32), resting potential (14), and membrane-cholesterol content (31) are all known to affect V pk . Therefore, it is very likely that freezing the structure of one prestin molecule in an oligomer complex at a certain conformational state by salicylate affects the V pk1 Ϫ V pk2 relation of other prestin molecules in the same complex. The total number of prestin molecules in individual OHCs, estimated by the three-state Boltzmann fitting as ϳ5 million, is consistent with the prestin oligomer model. We collected OHCs from the apical region of the murine cochlea, having typical dimensions of ϳ5 m in diameter and ϳ20 m in length. Thus, the density of prestin molecules in the lateral membrane of OHCs is roughly estimated as ϳ16,000/m 2 . Because prestin is not present in the apical and the basal regions of the lateral membrane, the density would be higher than this estimate. On the other hand, the density of the intramembrane particles found in the lateral membrane of OHCs is reported to be ϳ5600/m 2 (28). Thus, a prestin oligomer is expected to be composed of at least three prestin molecules and in conformity with the recent estimates, likely four (27,28). 
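The surface-density estimate above follows from treating the prestin-bearing lateral wall as a cylinder with the quoted dimensions:

```latex
A_{\text{lateral}} \approx \pi D L \approx \pi \times 5\ \mu\mathrm{m} \times 20\ \mu\mathrm{m} \approx 314\ \mu\mathrm{m}^2,
\qquad
\rho \approx \frac{5 \times 10^{6}}{314\ \mu\mathrm{m}^2} \approx 1.6 \times 10^{4}\ \mu\mathrm{m}^{-2}
```

Dividing this density by the reported intramembrane-particle density of ~5600/μm² gives ~2.9 prestin molecules per particle, consistent with the conclusion that each particle contains at least three, and likely four, prestin molecules.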
The Three-state Model Versus the Two-state-C sa Model-A model cannot be validated simply by curve-fit analyses and associated F-tests or Akaike's information criterion tests. Presenting novel experimental evidence predicted by the model is essential for its acceptance. The two-state-C sa model (Equation 5) (11) fit the voltage-dependent charge movement of prestin in the presence of salicylate ϳ4 times better than the three-state Boltzmann model because of similar quality curvefit with fewer numbers of free-fitting parameters. Even so, we champion the three-state model over the two-state-C sa model for the former provides more parsimonious explanations of experimental observations. The non-zero d intercepts of the q-d relations (Fig. 5C) can be understood by two voltage-dependent steps with distinct electromechanical coupling efficiencies (m 1 Ͻ m 2 ) in the three-state model as discussed above. We also observed non-zero d intercepts with the twostate-C sa model (Fig. 5D). The non-zero intercepts are hard to conceptualize with this model, which is based on only one voltage-dependent step. The salicylate-dependent ⌬C change in the two-state-C sa model is also difficult to understand ( Fig. 2H and supplemental Fig. S3A). The term, ⌬C, in the twostate-C sa model is considered to represent the combination of changes in membrane surface area and in either the membrane dielectric constant or its thickness, which are all prestin-dependent (11). Observed positive correlations between the number of prestin motors and the magnitude of ⌬C supported this idea (11). Based on the model, a gradual monotonic reduction of ⌬C would be expected during the gradual monotonic inhibition of prestin by salicylate. However, salicylate increased ⌬C after an initial rapid increment followed by small reduction (Fig. 2H and supplemental Fig. S3A). If ⌬C were indeed associated with the prestin physical state, it is difficult to envision how this general parameter could increase in a non-monotonic manner during the monotonic progress of inhibition of prestin by salicylate. With the threestate model, however, the non-monotonic change in ⌬C is simply understood by the V pk1 component shifting toward the hyperpolarizing direction in a limited V m observation window. These more parsimonious explanations of the results by the three-state model seem to provide a plausible physiological explanation of observations. However, this, of course, does not rule out the possibility that higher-order Boltzmann models with more steps might also describe the prestin operation (8). NLC Derived from Salicylate-bound Prestin-Anions that completely abolish prestin voltage-dependent charge movement have not been found. In other words, all anions tested so far support the prestin NLC to some extent. Salicylate is not an exception (19). The fact that salicylate does not completely eliminate the prestin NLC would affect our three-state model analyses if salicylate-supported NLC were not negligible. Thus, it is quite reasonable to ask if the multistep charge movement we claim is a consequence of the presence of an additional NLC component derived from salicylate-bound prestin. Information regarding the relative magnitude of salicylate-supported NLC to that of chloride-supported NLC is difficult to be extracted from the previous study (19) because sulfate was used for replacing chloride for comparison. Sulfate is reported to support NLC with positively shifted V pk (37,39). 
Thus, a slight shift in V pk that could be induced by salicylate would affect the magnitude of C m observed at certain V m . Even if the relative magnitude of salicylate-supported NLC was determined to be very small, it is still important to check if our NLC analyses might have been affected by the contribution of salicylate-supported NLC because the population of salicylate-bound prestin increases, whereas that of chloridebound prestin decreases during the application of salicylate. If the observed salicylate-dependent NLC data are solely explained by the sum of two independent two-state NLCs (2ϫ two-state) that is composed of chloride-supported NLC and salicylate-supported NLC, the NLC component derived from salicylate-bound prestin should continuously increase, whereas that derived from chloride-bound prestin would continuously decrease during the application of salicylate. The population of each component should be reflected in the Q max value. We performed the 2ϫ two-state analysis on our salicylate-dependent NLC data used for the three-state and the two-state-C sa analyses summarized in Fig. 3. Supplemental Fig. S5 shows an example of the 2ϫ two-state analysis. Continuous decrements of both NLC components (reflected in Q max values), but not a continuous increment of one of the two NLC components, were observed, suggesting that NLC derived from salicylate-bound prestin is too small to explain our salicylate-dependent NLC data. The contribution of salicylate-bound prestin to the observed NLCs can also be estimated to be very small in an alternative way. Although NLC is not completely abolished even with complete intracellular anion exchange with salicylate (19), salicylate is capable of completely abolishing OHC displacement (18), suggesting that the electromechanical coupling of prestin can be disconnected with salicylate. Incidentally, pentane sulfonate is also known to completely abolish OHC displacement (19), although it retains significant NLC (39). Very small (or zero) d/q is expected for salicylate-bound prestin. Therefore, if salicylate-supported NLC was a significant factor in explaining our salicylate-dependent NLC data (Figs. 2 and 3), negative d intercepts should be observed, which is opposite to what we have seen (Fig. 5).
Engulfment signals and the phagocytic machinery for apoptotic cell clearance The clearance of apoptotic cells is an essential process for tissue homeostasis. To this end, cells undergoing apoptosis must display engulfment signals, such as ‘find-me' and ‘eat-me' signals. Engulfment signals are recognized by multiple types of phagocytic machinery in phagocytes, leading to prompt clearance of apoptotic cells. In addition, apoptotic cells and phagocytes release tolerogenic signals to reduce immune responses against apoptotic cell-derived self-antigens. Here we discuss recent advances in our knowledge of engulfment signals, the phagocytic machinery and the signal transduction pathways for apoptotic cell engulfment. Several billion senescent or damaged cells in the body physiologically undergo apoptosis every day. Rapid removal of apoptotic cells from tissues is important for maintaining tissue homeostasis and preventing inappropriate inflammatory responses in multicellular organisms. During this process, apoptotic cells express engulfment signals such as 'find-me' and 'eat-me' signals that indicate they should be removed from tissues, and phagocytes engulf apoptotic cells using multiple types of phagocytic machinery. At this point, apoptotic cell phagocytosis is distinguished from other types of phagocytosis and is designated 'efferocytosis' ('effero' means 'to carry to the grave'). 1 This review focuses on several recent advances in our understanding of engulfment signals, the phagocytic machinery and signal transduction during efferocytosis. ENGULFMENT SIGNALS 'Find-me' signals Cells undergoing apoptosis secrete molecules, so-called 'find-me' signals (also referred to as 'come-to-get-me' signals), to attract phagocytes toward them. To date, four representative 'find-me' signals have been identified, including lysophosphatidylcholine (LPC), sphingosine-1-phosphate (S1P), CX3C motif chemokine ligand 1 (CX3CL1, also referred to as fractalkine), and nucleotides (ATP and UTP; Figure 1). LPC is released from apoptotic cells and binds to the G-protein-coupled receptor G2A on macrophages, facilitating the migration of macrophages to apoptotic cells. 2 In apoptotic cells, caspase-3 activation induces cleavage and activation of calcium-independent phospholipase A2 (iPLA2; also referred to as PLA2G6), which in turn processes phosphatidylcholine into LPC. 3 Recently, ATP-binding cassette transporter A1 (ABCA1) was shown to be required for the release of LPC from apoptotic cells. 4 CX3CL1 is generated as a membrane-associated protein and then released from apoptotic cells by proteolytic processing. 5 The secreted CX3CL1 binds to CX3C motif chemokine receptor 1 (CX3CR1) on microglia and macrophages, resulting in the migration of phagocytes. However, the roles of LPC and CX3CL1 as 'find-me' signals have not been clarified in an in vivo animal model. S1P is generated from sphingosine by sphingosine kinase. It is secreted by dying cells in a caspase-3dependent manner and binds to S1P receptors on macrophages, leading to the recruitment of macrophages to apoptotic cells. 6 Nucleotides, including ATP and UTP, are released from apoptotic cells in a caspase-3-dependent manner and are sensed by purinergic receptors on phagocytes, resulting in the recruitment of phagocytes to apoptotic cells. 7 The release of nucleotides from apoptotic cells is mediated by pannexin 1 channels, which are activated in apoptotic cells in a caspase-3-dependent manner. 
8 Although these molecules are defined as 'find-me' signals, many questions remain to be answered, including their reaction range, functional mode (cooperativity or redundancy), and in vivo relevance. In addition, 'find-me' signals have multiple roles in efferocytosis. CX3CL1 appears to upregulate MFG-E8 expression in microglial cells and peritoneal macrophages. 9,10 S1P released by apoptotic cells acts as an anti-apoptotic mediator and attenuates macrophage apoptosis, 11 suggesting that apoptotic cells can prevent damage to neighboring cells to maintain tissue homeostasis. Recently, S1P has been shown to trigger the activation of erythropoietin (EPO)-EPO receptor (EPOR) signaling, which increases the expression of phagocytic receptors through peroxisome proliferator-activated receptor-γ. 12 'Eat-me' signals Dying cells also express 'eat-me' signals on the cell surface to indicate that they should be engulfed by macrophages (Figure 2). Although a variety of potential 'eat-me' signals have been proposed, the best-characterized 'eat-me' signal is the expression of phosphatidylserine on the cell surface. Phosphatidylserine is a plasma membrane phospholipid that is localized on the inner membrane leaflet of the lipid bilayer in healthy cells and externalized on the cell surface in response to apoptotic stimuli. 13 The externalization of phosphatidylserine on the cell surface during apoptosis and its role in cell corpse clearance have also been identified in Caenorhabditis elegans and Drosophila. 14,15 Recently, Xk-related protein 8 (Xkr8) has been shown to mediate surface expression of phosphatidylserine in apoptotic cells in a caspase-3-dependent manner. 16 This process is also mediated by the Xkr8 ortholog CED-8 in Caenorhabditis elegans, indicating a conserved mechanism for apoptotic phosphatidylserine exposure. 17 More recently, Xkr8 has been shown to associate with basigin or neuroplastin at the plasma membrane in response to apoptotic stimuli, and this complex is required for the proper scrambling activity of Xkr8. 18 In addition, the P-type ATPase ATP11C acts as a flippase to transport aminophospholipids from the outer leaflet to the inner leaflet of the lipid bilayer to maintain membrane asymmetry. In cells undergoing apoptosis, it is inactivated by caspase-3-mediated cleavage, permitting phosphatidylserine externalization. 19 Calreticulin (CRT) is another potential 'eat-me' signal expressed on the apoptotic cell surface. In dying cells induced by endoplasmic reticulum (ER) stress, activated protein kinase RNA-like ER kinase phosphorylates eIF2α, which induces caspase-8 activation, Bap31 cleavage and Bax activation, resulting in the translocation of CRT from the ER to the Golgi and SNARE-mediated exocytosis. 20 CRT on the apoptotic cell surface is sensed by low-density lipoprotein receptor-related protein (also referred to as CD91) on phagocytes to promote engulfment. 21

Figure 1 'Find-me' signals released by apoptotic cells and extracellular vesicles. Four representative 'find-me' signals released by apoptotic cells have been identified, including S1P (sphingosine-1-phosphate), LPC (lysophosphatidylcholine), nucleotides (ATP or UTP) and CX3CL1 (CX3C motif chemokine ligand 1; fractalkine). They bind to S1PR, G2A, P2Y2 and CX3CR1, respectively, on the phagocyte surface, promoting phagocyte migration to apoptotic cells. Extracellular vesicles released by apoptotic cells and phagocytes appear to modulate functions of phagocytes during efferocytosis. Apoptotic cell-derived microparticles also attract macrophages to sites of cell death through CX3CL1 and ICAM3. Phagocyte-derived microvesicles and exosomes modulate phagocytic capacity in epithelial cells and the transfer of apoptotic cell-derived antigens to dendritic cells, respectively.
Recently, CRT has been shown to bind to phosphatidylserine via its C-terminal acidic region, leading to apoptotic cell phagocytosis. 22 Furthermore, phagocytosis of cells expressing CRT on the cell surface appears to induce immunogenic responses, 23 suggesting that recognition of CRT by specific phagocytes, especially dendritic cells, might trigger immunogenic signals rather than self-tolerance signals. However, it is unclear whether recognition of CRT is sufficient to trigger a signal that induces immunogenic responses in phagocytes (macrophages and dendritic cells). CRT is also expressed on the cell surface of macrophages through TLR and Btk signaling, stimulating cancer cell phagocytosis. 24 It is therefore possible that CRT is a PS-binding bridging molecule released from apoptotic cells and phagocytes rather than an 'eat-me' signal. In addition to the role of phosphatidylserine as an 'eat-me' signal, the recognition of phosphatidylserine by phagocytes can enhance cholesterol efflux from cells to maintain cellular homeostasis 25 and trigger the release of anti-inflammatory cytokines to induce immunogenic tolerance for apoptotic cell-derived antigens. 26 These findings suggest that phosphatidylserine exposure is not only an 'eat-me' flag used to detect apoptotic cells but also a trigger of endogenous signaling for cellular homeostasis in phagocytes. However, the molecular details of phosphatidylserine-mediated signaling remain to be clarified further. 'Don't eat-me' signals Healthy cells display 'don't eat-me' signals, such as CD47 and CD31, on the cell surface to avoid efferocytosis. CD47 (also referred to as integrin-associated protein) is a membrane protein composed of an immunoglobulin (Ig) domain, five membrane-spanning regions and a cytoplasmic region. Oldenborg et al. 27 found that CD47-deficient erythrocytes injected into mice are more rapidly removed by splenic macrophages than are CD47-positive erythrocytes. They suggested that CD47 functions as a signal for discrimination between self and non-self. In healthy cells, CD47 interacts with signal regulatory protein alpha (SIRPα; also referred to as SHPS-1 and CD172a) on macrophages. The CD47-SIRPα interaction induces tyrosine phosphorylation of the immunoreceptor tyrosine-based inhibitory motif in the SIRPα cytoplasmic tail and subsequent recruitment and activation of the inhibitory tyrosine phosphatases SHP-1 and SHP-2, resulting in the negative regulation of actin cytoskeletal rearrangement for phagocytosis. Senescent or damaged cells exhibit decreased CD47 expression or an altered pattern of CD47 distribution, thereby permitting efferocytosis. 21,28

Figure 2 'Eat-me' signals, phagocytic machinery and signaling pathways. Apoptotic cells express 'eat-me' signals, such as phosphatidylserine and calreticulin, on the cell surface in response to apoptotic stimuli. Exposed phosphatidylserine on the apoptotic cell surface is recognized directly by phosphatidylserine receptors (Tim family proteins, BAI1, Stabilin-2, CD300f and RAGE) or indirectly by bridging molecules (MFG-E8, Gas6, protein S and C1q). MFG-E8 bound to phosphatidylserine is recognized by integrin αvβ3/5 on the phagocytes, and Gas6 or protein S bound to phosphatidylserine is sensed by Mer-TK. The bridging molecule C1q is recognized by MEGF10 or SCARF1. Another 'eat-me' signal, calreticulin, is associated with phosphatidylserine or C1q on the apoptotic cell surface and recognized by CD91 (LRP1). Integrin αvβ3/5 and BAI1 transduce signals for cytoskeletal rearrangement through DOCK180/ELMO1, whereas Stabilin-2, MEGF10 and CD91 use the adaptor protein Gulp1 as an engulfment signaling pathway.
Several cancer cell types, such as circulating leukemic stem cells and acute myeloid leukemia cells, were found to highly express CD47 on their surface to evade immune cells. 29,30 Recent studies showed that a neutralizing CD47 antibody or soluble SIRPα variants promote tumor cell engulfment by macrophages and suppress tumor growth in in vivo tumor models. [31][32][33] Another candidate 'don't eat-me' signal is CD31 (also referred to as platelet and endothelial cell adhesion molecule 1). A CD31-CD31 homotypic interaction between viable neutrophils and phagocytes acts as a repulsive signal, thereby mediating detachment of viable cells from phagocytes. In contrast, apoptotic cells do not trigger this repulsive signal and are efficiently engulfed by phagocytes. 34 However, the intracellular signaling pathways for CD31-mediated repulsion remain to be clarified. Extracellular vesicles Almost all cells release membrane vesicles, which play an important role in intercellular communication. 35 Apoptotic cells can mediate the recruitment of phagocytes through the release of microparticles (Figure 1). ICAM-3 in apoptotic cell-derived microparticles induces the migration of macrophages towards apoptotic cells. 36 CX3CL1-positive microparticles have been shown to induce the recruitment of macrophages to apoptotic cells. 37 Adipocyte-derived microparticles are released in a caspase-3- and Rho-kinase-dependent manner and facilitate macrophage migration to obese adipose tissues. 38 Recently, microparticles released from apoptotic cells have been shown to induce immune responses to apoptotic cell-derived antigens in the presence of IFN-α. 39 Chromatin on the apoptotic cell surface appears to be a self-antigen that triggers immunogenic responses. 40 Thus, microparticles elicited from apoptotic cells might need to be removed to maintain tissue homeostasis and prevent aberrant inflammation. However, the clearance mechanism of apoptotic cell-derived microparticles remains to be investigated. Phagocytes also appear to emit microparticles. 41 Recent studies showed that macrophages can communicate with other professional or nonprofessional phagocytes through the release of extracellular vesicles (Figure 1). Insulin-like growth factor-1 (IGF-1) released from macrophages promotes the engulfment of macrophage-derived microvesicles by epithelial cells, leading to reduced inflammatory responses in epithelial cells. 42 Macrophages are capable of transferring dead-cell-associated antigens to dendritic cells through the release of exosomes in a ceramide-dependent manner. 43 These observations suggest that microvesicles derived from apoptotic cells or phagocytes can modulate efferocytosis. TOLEROGENIC SIGNALS In the absence of infection or inflammation, apoptotic cell clearance is immunogenically silent. At this point, apoptotic cells and phagocytes might express signals to suppress the immune response to self-antigens. Apoptotic cells release signals that inhibit the recruitment of inflammatory cells, known as 'keep out' or 'stay away' signals.
Lactoferrin is expressed in response to apoptotic stimuli and selectively inhibits the migration of granulocytes (neutrophils and eosinophils) but not monocytes and macrophages. 44,45 However, the role of lactoferrin in the negative regulation of the migration of inflammatory cells requires clarification in an in vivo animal model. Annexin A1 was originally defined as an engulfment signal for the efficient clearance of apoptotic cells. 46 Annexin A1 on the apoptotic cell surface is known to inhibit dendritic cell activation, which in turn inhibits inflammatory cytokines and T-cell activation for apoptotic cell-derived antigens. 47 Annexins A5 and A13 also suppress dendritic cell activation for apoptotic cell-derived antigens, resulting in immunogenic tolerance. 48 However, deficiency in individual annexins did not produce an obvious phenotype such as autoimmunity, suggesting that annexin proteins may have redundant functions. Thus, it remains to be defined whether annexin proteins are tolerogenic factors that suppress immune responses to apoptotic cell-derived antigens. The 12/15-lipoxygenase in resident peritoneal macrophages causes the cell surface exposure of oxidized phosphatidylethanolamine, which sequesters the MFG-E8 required by inflammatory monocytes for the clearance of apoptotic cells, suggesting that oxidized phosphatidylethanolamine on resident macrophages may be a signal to reduce immune responses. 49 Recently, the chromatin on microparticles secreted from apoptotic cells was shown to be a self-antigen that induces immunogenic responses. In this context, DNase1L3 produced by macrophages and dendritic cells digests chromatin in apoptotic cell-derived microparticles, 40 suggesting that secreted DNase1L3 is a molecular mechanism for achieving immune tolerance for apoptotic cell-associated antigens. PHAGOCYTIC MACHINERY Phagocytes can recognize phosphatidylserine on the apoptotic cell surface through two types of phosphatidylserine recognition machinery: phosphatidylserine receptors and soluble bridging molecules. Phosphatidylserine receptors on the surface of phagocytes directly bind to phosphatidylserine on apoptotic cells, whereas soluble bridging molecules recognize phosphatidylserine on the apoptotic cell surface and function as a bridge between apoptotic cells and cell surface receptors on phagocytes (Figure 2). Phosphatidylserine receptors T-cell immunoglobulin and mucin domain-containing molecule (Tim) family proteins, Tim-1 (also referred to as kidney injury molecule 1 (Kim-1)), Tim-3 and Tim-4, act as phosphatidylserine receptors to clear apoptotic cells. [50][51][52] Tim-1 and Tim-4 bind to phosphatidylserine through a metal-ion-dependent ligand-binding site in their immunoglobulin V domain. 53 Tim-1 is highly expressed in damaged kidney epithelial cells and confers phagocytic capacity on them. 54 Tim-1-mediated efferocytosis is responsible for protecting the kidney after acute injury through PI3K-dependent downregulation of NF-κB. 55 Tim-3 is expressed in peritoneal exudate cells and CD8-positive dendritic cells and contributes to the clearance of apoptotic cells and cross-presentation of apoptotic cell-associated antigens. 52 Tim-4 is expressed by professional phagocytes (macrophages and dendritic cells) and controls phosphatidylserine-dependent efferocytosis and adaptive immunity.
50,56 However, Tim-4 does not seem to transduce a signal for engulfment, which suggests that Tim-4 functions as a tethering receptor to recognize phosphatidylserine on the apoptotic cell surface and may be required for other proteins to trigger internalization of apoptotic cells. 57 Indeed, recent studies identified that Mer-TK and integrin β1 act as partners to transduce signals after Tim-4-mediated phosphatidylserine recognition. 58,59 Brain-specific angiogenesis inhibitor 1 (BAI1) is a member of the G-protein-coupled receptor family; it has seven transmembrane regions and binds to phosphatidylserine through its thrombospondin type 1 repeats. 60 BAI1 interacts with the DOCK180/ELMO1 complex through an α-helical region in its cytoplasmic tail, thereby providing the signal for Rac1 activation. However, BAI1 is predominantly expressed in neuronal cells of the cerebral cortex, 61 suggesting that its role may be tissue-specific. Recently, BAI1 is known to contribute to phagosome formation and transport during the phagocytosis of apoptotic neurons by microglial cells. 62 In skeletal muscle, BAI1 and its homologous protein BAI3 bind to apoptotic myoblasts and transduce signals to fuse myoblasts. 63,64 Stabilin-2 (also referred to as hyaluronic acid receptor for endocytosis (HARE) and FEEL-2) is a large membrane protein that is composed of seven FAS1 domains, eight atypical epidermal growth factor (EGF)-like domains, fifteen EGF-like domains, a Link domain, a transmembrane region and a cytoplasmic domain. 65 Stabilin-2 binds to phosphatidylserine via its EGF-like domain repeats, promoting apoptotic cell engulfment. 66 The histidine residue in the PS-binding loops is conserved in four EGF-like-domain repeats and plays an important role in pH-dependent phagocytic activity. 67 Stabilin-1 (also referred to as CLEVER-1 and FEEL-1), a homologous protein of stabilin-2, mediates apoptotic cell engulfment through phosphatidylserine recognition. 68 Stabilin-1 and -2 are expressed in sinusoidal endothelial cells and macrophages. 65,[69][70][71] In hepatic endothelial cells, they act as tethering receptors for the capture of phosphatidylserineexposed damaged erythrocytes through phosphatidylserine recognition. 72 However, the functions of stabilin-1 and -2 on efferocytosis require clarification in a knockout mouse model. CD300 family proteins, including CD300b and CD300f, have recently been shown to act as phosphatidylserine recognition receptors to clear apoptotic cells. 73,74 CD300f regulates the engulfment of apoptotic cells via the PI3K pathway, leading to the activation of Rac1/Cdc42 GTPases to regulate F-actin. 75 CD300b is associated with DAP12 through its ITAM motif and activates the PI3K/Akt pathway. 73 In contrast, another CD300 family protein, CD300a, inhibits the uptake of apoptotic cells through binding to phosphatidylserine and phosphatidylethanolamine. 76 How the recognition of phosphatidylserine by CD300 family proteins induces stimulatory and inhibitory signals for apoptotic cell removal remains to be investigated. Receptor for advanced glycation end products (RAGE) also binds to phosphatidylserine and has a role in the clearance of apoptotic cells. 77 RAGE is a type I membrane protein that belongs to the immunoglobulin protein family and specifically binds to phosphatidylserine. However, various soluble forms of RAGE also bind to phosphatidylserine on the apoptotic cell surface, thereby preventing apoptotic cell engulfment by phagocytic receptors. 
The physiological role of soluble RAGE proteins remains to be studied. In addition, several scavenger receptors have been proposed as receptors for apoptotic cell clearance, including CD36 and CD14. CD36 associates with integrin to engulf apoptotic cells in a thrombospondin-dependent manner and directly binds to oxidized phosphatidylserine. 78,79 CD14 has been proposed to be a receptor for apoptotic cell engulfment. 80 Studies of CD14-deficient mice showed that CD14 is a tethering receptor but not an engulfment receptor for apoptotic cells. 81 Bridging molecules that recognize phosphatidylserine Several soluble proteins have been identified as bridging molecules that recognize the 'eat-me' signals on the surface of apoptotic cells, including milk fat globule EGF factor 8 (MFG-E8, also referred to as lactadherin), growth arrestspecific 6 (Gas6), protein S and C1q. They bind to both phosphatidylserine on the apoptotic cell surface and phagocytic receptors on phagocytes, providing a link between apoptotic cells and phagocytes. MFG-E8 secreted by macrophages and immature dendritic cells binds to phosphatidylserine on apoptotic cells through its C1 and C2 domains and interacts with αvβ3 or αvβ5 integrin on phagocytes through the RGD (Arg-Gly-Asp) motif in its EGF domain, resulting in the promotion of apoptotic cell phagocytosis. 82,83 Gas6 and protein S share a similar domain structure and bind to phosphatidylserine on apoptotic cells to promote efferocytosis. 84,85 They are composed of a Gla domain at the N terminus, four EGF-like domains, and two laminin G-like domains at the C terminus. They bind to phosphatidylserine in a calcium-dependent manner via their Gla domain and associate with Tyro3-Axl-Mer (TAM) family tyrosine-kinase receptors on phagocytes through laminin G-like domains. 86,87 Mer tyrosine kinase (Mer-TK) is the best-characterized TAM receptor and is known to transduce an important signal for apoptotic cell engulfment. 88 Mer-TK signaling is functionally associated with multiple engulfment systems for the efficient removal of apoptotic cells. Signaling from Mer-TK is induced by binding to Gas6 and functionally associated with αvβ5 integrin-mediated signaling. 89 Scavenger receptor A (SR-A) associates with Mer-TK to transduce signals during apoptotic cell engulfment. 90 Galectin-3 was found to be a new ligand of Mer-TK for apoptotic cell clearance. 91 Mer-TK mediates signal transduction from Tim-4-mediated efferocytosis. 58 Considering that Mer-TK plays a crucial role in inhibiting dendritic cell activation for apoptotic cellassociated antigens, 92 it is possible that Mer-TK has a common Engulfment signals and the phagocytic machinery S-Y Park and I-S Kim role in multiple engulfment systems to regulate immune responses. Recently, the TAM receptor tyrosine kinases Mer-TK and Axl were shown to function as phagocytic receptors under different environments. Mer-TK is primarily expressed at steady-state or under immune suppressive conditions and maintains immune tolerance, whereas Axl is expressed in response to proinflammatory stimuli and suppresses immune responses. 93 Another potential bridging molecule is C1q, the first component of complement, which binds to phosphatidylserine on the apoptotic cell surface. 94 C1q binds to apoptotic cells likely via its globular head and interacts with calreticulin-CD91 on phagocytes to promote apoptotic cell engulfment. 
95 SCARF1 (also referred to as scavenger receptor expressed by endothelial cell 1) acts as a receptor that recognizes C1q bound to apoptotic cells. 96 Recently, MEGF10 has also been shown to mediate apoptotic neuron clearance by astrocytes through bridging molecule C1q. 97 Furthermore, the activation of macrophages by C1q regulates Mer-TK and Gas6 expression. 98 Possible link between many engulfment signals and the phagocytic machinery Why are multiple phagocytic receptors necessary for efficient efferocytosis? Such multiple apoptotic cell recognition systems may be useful at several levels. First, discriminating dying cells from live cells during efferocytosis is important for proper cellular turnover. Although several 'eat-me' signals are present, there are many cases in which phosphatidylserine is substantially expressed on the cell surface of live cells, including cell-cell fusion, T-cell activation and platelet activation. 99 It is possible that specificity for apoptotic cell recognition can be improved if multiple phagocytic receptors bind to specific 'eat-me' flags on the apoptotic cell surface, or that sufficient mechanical force for efferocytosis can be provided by multiple phosphatidylserine recognition systems. Second, the eating of apoptotic cells by macrophages requires various cellular events, including the tethering of apoptotic cells on phagocytes, cytoskeletal rearrangement for internalization, suppression of immune responses, and disposal of metabolic burden. Thus, multiple phagocytic receptors may be required to perform various cellular processes during efferocytosis. Recently, several receptors have been proposed to act cooperatively for efferocytosis. Stabilin-2 associates with integrin αvβ5 through its FAS1 domain and functions cooperatively for apoptotic cell engulfment. 100 In peritoneal macrophages, Tim-4 functions as a tethering receptor for adhesion between apoptotic cells and macrophages, and Mer-TK acts as a tickling receptor to transduce signals for cytoskeletal rearrangement. 58 During the engulfment of apoptotic neurons, BAI1 is involved in the formation and transport of phagosomes, whereas Tim-4 contributes to phagosome stabilization. 62 Third, particular types of phagocytic machinery for phosphatidylserine recognition may be required for the efficient efferocytosis of specific phagocytes or under specific conditions. For example, Tim-4 is indispensable for tissue homeostasis in resident peritoneal macrophages, whereas MFG-E8 is essential for apoptotic cell clearance in inflammatory macrophages. 101 Mer-TK is activated by Gas6 or protein S and acts as a receptor for the maintenance of self-tolerance in resting macrophages. In contrast, Axl is activated by only Gas6 under inflammatory conditions and acts as a receptor for immune suppression. 93 SIGNALING FOR APOPTOTIC CELL ENGULFMENT Signaling pathways for cytoskeletal rearrangement Genetic analyses in Caenorhabditis elegans identified three signaling pathways that mediate apoptotic cell clearance: (1) the CED-1, 6 and 7 pathway; (2) the CED-2, 5, and 12 pathway; and (3) the ABI-1 and ABL-1 pathway. [102][103][104][105] In the first pathway, multiple EGF-like domains 10 (MEGF10) and Jedi (also referred to as MEGF12), mammalian homologs of CED-1, act as phagocytic receptors for apoptotic cell clearance. 
106,107 MEGF10 indirectly recognizes phosphatidylserine on the apoptotic cell surface through the bridging molecule Clq, 97 whereas the molecular mechanism by which apoptotic cells are recognized by Jedi remains to be studied. MEGF10 and Jedi bind to Gulp1 (phosphotyrosine-binding domain-containing engulfment adaptor protein 1), a mammalian ortholog of CED-6, through an NPxY motif in their cytoplasmic region, leading to the transduction of a signal for cytoskeletal rearrangement. 108,109 ABCA1 and ABCA7, mammalian orthologs of CED-7, are members of the ATP-binding cassette containing transporter family that transport a variety of substances across the plasma membrane. They are involved in apoptotic cell clearance through unknown mechanisms. 110,111 ABCA1 also has multiple functions in apoptotic cells and phagocytes during efferocytosis, including the release of 'find-me' signals, protection from oxidative stress-induced apoptosis, and enhancement of cholesterol efflux. 4,25,108,112 The Gulp1 signaling pathway converges on Rac1 (an ortholog of Caenorhabditis elegans CED-10) 113 and is the downstream signaling pathway through which several phagocytic receptors, such as low-density lipoprotein receptor-related protein-1, Stabilin-1 and Stabilin-2, regulate apoptotic cell engulfment. [114][115][116] Stabilin-2 is known to coordinate the activities of the two phagocytic pathways (the Gulp1 pathway and ELMO1/DOCK180 pathway) through a direct interaction with integrin αvβ5. 100 However, the intermediates between Gulp1 and Rac1 in this pathway are largely unknown. In the second pathway, the mammalian homologs of CED-2, 5 and 12 are CrkII, DOCK180 and ELMO1, respectively. CrkII associates with DOCK180, a guanine-nucleotide exchange factor, which in turn triggers Rac1 activation. 117 ELMO1 associates with DOCK180 and acts as a positive regulator of Rac1 activation in Caenorhabditis elegans and mammalian cells. 118,119 In addition, TRIO/UNC-73 and RhoG/MIG-2 signaling also contribute to DOCK180-mediated Rac1 activation for proper phagocytosis. 120 This pathway is downstream of the PS-receptor BAI1 as well as integrin αvβ5. 60,117 In the third signaling pathway, ABI-1 (Abi) promotes apoptotic cell clearance through regulation of Rac-1 activity or an independent pathway. ABL-1 interacts with ABI-1 and negatively regulates engulfment by inhibiting ABI-1. 121 However, the role Engulfment signals and the phagocytic machinery S-Y Park and I-S Kim of the mammalian counterparts of the genes involved in this pathway remains to be defined. Other signaling pathway during apoptotic cell engulfment Signaling of the tumor suppressor p53 is shown to regulate apoptotic cell engulfment. p53 controls phagocytosis of apoptotic cells by regulating the expression of death domain 1α. 122 Death domain 1α is an immunoglobulin superfamily receptor that mediates homophilic interactions between apoptotic cells and phagocytes, leading to the removal of apoptotic cells. The phosphatidylserine receptor BAI1 is also a specific target of p53 in the brain. 61 However, the molecular mechanism by which p53 signaling is activated in phagocytes remains to be defined. Several factors that can regulate cellular metabolic processes have been proposed as modulators for efferocytosis. Uncoupled protein 2, which reduces mitochondrial membrane potential in cells through uncoupling oxidative phosphorylation from ATP generation, has been shown to positively regulate the engulfment capacity of phagocytes. 
123 Peroxisome proliferator-activated receptors and liver X receptors are activated by the engagement of apoptotic cell and regulate apoptotic cell engulfment, likely increasing the expression of phagocytic receptors or bridging molecules. [124][125][126] The nuclear receptor Nr4a1 contributes to anti-inflammatory effects during apoptotic cell phagocytosis. 127 These findings suggest that apoptotic cell clearance is associated with metabolic processes. However, it remains to be seen how the recognition of apoptotic cells activates nuclear receptors. CONCLUSIONS Apoptosis and efferocytosis are processes for homeostatic cell turnover in multicellular organisms, and proper corpse clearance is important to prevent inappropriate inflammatory responses such as autoimmunity. Over the past two decades, numerous studies have been performed to unveil the molecular mechanisms of apoptotic cell clearance, leading to a significant increase in our knowledge of this area. However, multiple unanswered questions concerning clearance mechanisms remain. Why are multiple phagocytic components necessary for efferocytosis? What is the signaling cascade mediated by receptors for engulfment signals? Can particular engulfment signals or phagocytic mechanisms determine immunogenic or tolerogenic clearance of apoptotic cells? To answer these questions, further understanding of engulfment signals and the phagocytic machinery is required. Furthermore, defective clearance of apoptotic cells in tissues is associated with the pathogenesis of various diseases, including autoimmune diseases, chronic obstructive pulmonary disease, atherosclerosis, Alzheimer's disease and cancer. 128 Thus, an understanding of the precise mechanism of apoptotic cell engulfment could be useful for the development of therapeutic strategies for controlling diseases associated with defective efferocytosis.
Altered macronutrient composition and genetics influence the complex transcriptional network associated with adiposity in the Collaborative Cross Background Obesity is a serious disease with a complex etiology characterized by overaccumulation of adiposity resulting in detrimental health outcomes. Given the liver’s critical role in the biological processes that attenuate adiposity accumulation, elucidating the influence of genetics and dietary patterns on hepatic gene expression is fundamental for improving methods of obesity prevention and treatment. To determine how genetics and diet impact obesity development, mice from 22 strains of the genetically diverse recombinant inbred Collaborative Cross (CC) mouse panel were challenged to either a high-protein or high-fat high-sucrose diet, followed by extensive phenotyping and analysis of hepatic gene expression. Results Over 1000 genes differentially expressed by perturbed dietary macronutrient composition were enriched for biological processes related to metabolic pathways. Additionally, over 9000 genes were differentially expressed by strain and enriched for biological process involved in cell adhesion and signaling. Weighted gene co-expression network analysis identified multiple gene clusters (modules) associated with body fat % whose average expression levels were influenced by both dietary macronutrient composition and genetics. Each module was enriched for distinct types of biological functions. Conclusions Genetic background affected hepatic gene expression in the CC overall, but diet macronutrient differences also altered expression of a specific subset of genes. Changes in macronutrient composition altered gene expression related to metabolic processes, while genetic background heavily influenced a broad range of cellular functions and processes irrespective of adiposity. Understanding the individual role of macronutrient composition, genetics, and their interaction is critical to developing therapeutic strategies and policy recommendations for precision nutrition. Supplementary Information The online version contains supplementary material available at 10.1186/s12263-022-00714-x. metabolic syndrome, type 2 diabetes, and certain types of cancer [97]. The simplest definition of obesity is excessive adiposity resulting from the chronic imbalance between energy intake and expenditure. The underlying mechanisms involved in maintaining energy balance are complex and regulated by numerous factors such as genetic background [3,52,82], metabolism [19,84,90], gut microbiome [36,57,91], and environmental factors such as diet in the context of overfeeding [11-13, 76, 81]. Additionally, the specific interaction of dietary macronutrients and the endocrine system, in particular insulin response and signaling, has a critical role in the etiology of obesity [53]. Differences in dietary macronutrient composition can influence substrate utilization; specifically, rapidly digestible carbohydrates may interact with insulin and other hormones to increase fat accumulation relative to other macronutrients. In addition to the complex interactions between adipose tissue, the central nervous system, nutrients, and hormones that regulate energy balance [3,25], the liver also influences the susceptibility to obesity, given its major role in the metabolism and processing of macronutrients including glycogenolysis, production of triglycerides, lipogenesis, and the synthesis of amino acids, cholesterol, and lipoproteins [75,93]. 
Obesity in turn can induce the pathological response of insulin resistance in the liver, which results in an impaired ability of insulin to decrease glucose output from the liver while continuing to stimulate lipogenesis; this disruption of appropriate carbohydrate and lipid metabolism is thought to contribute to some of the health complications associated with obesity like metabolic syndrome and cardiovascular disease. Adipokines such as adiponectin, adipocyte dysfunction, metabolism, and circulating metabolite levels affect hepatic gene expression [21,56], which regulates the mechanisms involved in lipid processing, determination of metabolic rate, and other physiological processes associated with energy imbalance [46,93]. Furthermore, an individual's inherent genetic architecture and specific environmental exposures such as diet also shape hepatic gene expression [31,41,80]. Given that the liver regulates so many biological processes related to obesity development, elucidating the effects of genetic architecture and diet on hepatic gene expression is necessary to understand the mechanisms underlying susceptibility to obesity and development of effective prevention and treatment regimes. Modern molecular biology techniques have revolutionized our ability to detect changes in gene expression [50,74], which allows one to infer potential candidate genes and pathways underlying metabolic dysfunction [16,33]. Identification of genes and pathways that determine susceptibility to obesity facilitates the understanding of the underlying mechanisms behind the development of obesity, which is instrumental to determining effective methods of prevention and treatment. Simultaneous to the advances in high-throughput assessment of gene expression, a novel population of mice has been developed. Derived from elaborate intercrosses of eight founder mouse strains [7,35,89], the CC is a large recombinant inbred mouse population with tremendous genetic diversity and genetic contribution from five classically inbred strains, A/J, C57BL/6J (B6), 129S1/SvImJ (129), NOD/ ShiLtJ (NOD), and NZO/HILtJ (NZO), and three wildderived strains, CAST/EiJ (CAST), PWK/PhJ (PWK), and WSB/EiJ (WSB) [9,64,79,85]. The genetic and phenotypic diversity of the CC is of similar scale to the human population [86] and provides an opportunity to address the complex interactions between genetics and dietary macronutrient composition that affect hepatic gene expression. The ability to utilize multiple replicates of individual CC strains allows for more precise delineation between confounding environmental influences and dietary effects within the context of a known genetic architecture. Previously, we examined the effects of diet and genetic background on adiposity and other obesityrelated traits [101]. In the current study, we focus on the effects of macronutrient composition and strain (genetic background) on hepatic gene expression and relate these to phenotypic traits and biological functions. To find potential candidate genes or functional pathways underlying metabolic dysfunction regulated by diet in a genetically diverse population, we administered a challenge of either high-protein (HP) or highfat high-sucrose (HS) diet to 22 strains of mice from the Collaborative Cross (CC) mouse panel for 8 weeks and performed microarray gene expression analysis of 11,542 genes using high-quality RNA from liver tissue, in addition to extensive phenotyping. 
To ascertain the expression of genes (mRNA) associated with adiposity, determine which genes were differentially expressed by dietary macronutrients and genetic strain, and identify groups of related genes affected by genetic background and/or diet in the liver, we examined hepatic gene expression levels and related them to phenotypes using one analyses pipeline centered around linear models for microarray (limma) and a separate analyses pipeline focused on weighted gene co-expression network analysis (WGCNA) (see Supplementary Fig. 1, Additional file 1), which facilitated exploration of gene expression from two perspectives: for individual genes using the limma approach and for groups of genes using the network approach. Differential gene expression analysis identified 1344 genes responsive to differences in dietary macronutrient composition Both genetics and environmental factors such as diet are critical determinants of obesity. Although genetics have a stronger effect on susceptibility to developing obesity than diet alone [10,29], the role of diet as an environmental factor that influences gene expression is still important, since changes in dietary patterns can help mitigate the degree of obesity that develops by altering gene expression levels. To assess which genes' expression levels are affected by diet, differential gene expression analysis was performed using the R package limma (linear models for microarray) on liver gene expression data. Comparing the HS diet to the HP diet revealed 1344 genes that were differentially expressed by diet (p adj < 0.05, Supplementary Table 3, Additional file 2) with the top 20 most significant hits showing patterns of expression clustering by diet ( Fig. 2A), where 16 genes showed increased expression and 4 genes showing decreased expression in mice fed the HP diet relative to the HS diet, though expression patterns exhibited some degree of inter-strain variation depending on the gene and strain. The opposite patterns of expression for these genes were shown in mice fed the HS diet, i.e., genes that showed increased expression in mice fed the HP diet had decreased levels of expression in mice fed the HS diet ( Fig. 2A). The expression levels of 389 differentially expressed genes (DEGs) by diet were significantly correlated with body fat % (p < 0.05), including Irs2 and Pik3r1. The Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway and gene ontology (GO) enrichment analyses identified 20 significantly overrepresented KEGG pathways and 187 significantly overrepresented GO terms for DEGs by diet (Fig. 2B-E; see Supplementary Table 4, Additional file 2), with varying degrees of gene richness defined by the number of up-or downregulated DEGs found belonging to each KEGG pathway or GO term out of the total number of genes that comprise each KEGG pathway or GO term. The most significantly overrepresented KEGG pathways identified were metabolic pathways, oxidative phosphorylation, and biosynthesis of amino acids (p adj ≤ 5.05 × 10 −8 ). In terms of each GO term category, 105 GO biological processes, 45 GO cellular components, and 37 GO molecular functions were significantly overrepresented (p adj < 0.05), with the top 10 most significantly overrepresented GO terms in DEGs by diet shown in Fig. 2C-E. The majority of enrichment terms were related to metabolism of a wide variety of substrates with numerous enrichments of mitochondrial cellular components (see Supplementary Table 4, Additional file 2). 
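As a rough illustration of the per-gene testing summarized above: the authors used the R package limma, but the core idea, fitting a linear model with diet and strain terms to each gene's expression and applying Benjamini-Hochberg adjustment across genes, can be sketched as follows. This is an approximation only (ordinary least squares without limma's empirical-Bayes variance moderation), and the column names, formula, and synthetic data are placeholders, not the authors' design.

```python
# Illustrative per-gene differential-expression test with BH adjustment.
# This approximates, but is not, the limma workflow used in the study:
# ordinary least squares per gene (diet effect adjusted for strain), with no
# empirical-Bayes variance moderation. Column names and data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def diet_de_test(expr, meta):
    """expr: genes x samples matrix (log scale); meta: per-sample diet/strain."""
    pvals, effects = [], []
    for gene_values in expr:                      # iterate over gene rows
        df = meta.assign(expr=gene_values)
        fit = smf.ols("expr ~ C(diet) + C(strain)", data=df).fit()
        effects.append(fit.params["C(diet)[T.HS]"])   # HS vs HP difference
        pvals.append(fit.pvalues["C(diet)[T.HS]"])
    _, padj, _, _ = multipletests(pvals, method="fdr_bh")
    return pd.DataFrame({"diff_HS_vs_HP": effects, "p": pvals, "p_adj": padj})

# Toy example: 50 genes, 12 samples, 4 strains x 2 diets (synthetic data).
rng = np.random.default_rng(0)
meta = pd.DataFrame({"diet": ["HP", "HS"] * 6,
                     "strain": np.repeat(["CC001", "CC002", "CC003", "CC004"], 3)})
expr = rng.normal(8, 1, size=(50, len(meta)))
print(diet_de_test(expr, meta).head())
```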
Genetic architecture perturbed global hepatic gene expression to a greater extent than macronutrient composition Genetics is clearly an important factor affecting susceptibility to metabolic dysfunction. We tested the role of genetics in regulating gene expression by performing differential gene expression analysis by CC strain (Fig. 3A).

Fig. 1 Top 30 genes with expression levels most significantly correlated with body fat %. Multiple biweight midcorrelations (bicor) and their corresponding Student correlation p-values were calculated between phenotypic data and microarray liver gene expression data to properly take into account the actual number of observations when determining which genes' expression levels were correlated with post-diet phenotypes of interest. The top 15 genes whose expression is most significantly positively correlated with body fat % (bicor ≥ 0.410, p ≤ 2.53 × 10−6) and the top 15 genes whose expression is most significantly negatively correlated with body fat % (bicor ≤ −0.466, p ≤ 5.42 × 10−8) are shown. With the exception of insulin and the glucose/insulin ratio, most of the top 30 genes whose expression was most significantly correlated with body fat % were not significantly correlated with circulating analytes but were significantly correlated with metabolic (energy regulation) traits. Genes are ordered on the y-axis in descending order of bicor, with the strongest positive correlation at the top and the strongest negative correlation at the bottom. The scale indicates the bicor value, with color darkness as an indicator of correlation strength. †Indicates genes that are also differentially expressed by diet; all 30 genes were found to be differentially expressed by strain. *Indicates genes found to be associated with at least one obesity-related trait in humans according to the GWAS catalog. Annotation for all genes with expression significantly correlated with body fat % is shown in the Supplementary material.

Unlike the inter-strain variation of expression patterns for diet DEGs, expression patterns were consistent across diets for strain DEGs. DEGs by CC strain showed similar levels of expression within each CC strain regardless of the diet fed. One thousand one hundred thirty-one DEGs by CC strain were also differentially expressed by diet (such as Irs2 and Pik3r1), and 2367 of the DEGs by CC strain were correlated with body fat % (nominal p < 0.05), including Ide, Insig1, Irs2, Jak1, and Pik3r1. Interestingly, additional genes encoding proteins crucial to insulin signaling [2,5,6,30,60,100] were differentially expressed by strain but not diet, specifically high mobility group AT-hook 1 (Hmga1), insulin-induced gene 2 (Insig2), and insulin receptor substrate 1 (Irs1) (Supplementary Table 5, Additional file 2). KEGG pathway and GO enrichment analyses identified fewer overrepresented KEGG pathways and GO terms for genes differentially expressed by CC strain than by diet. For strain DEGs, 13 significantly overrepresented KEGG pathways and 163 significantly overrepresented GO terms were identified (p adj < 0.05, Fig. 3B-E; see Supplementary Table 6, Additional file 2), with varying degrees of gene richness. The most significantly overrepresented KEGG pathways identified were cell adhesion molecules (CAMs), ECM-receptor interaction, and focal adhesion (p adj ≤ 2.6 × 10−3), which are pathways important to cell signaling and structural binding between cells.
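The KEGG pathway and GO term overrepresentation results reported here and for the diet DEGs are, at their core, hypergeometric tests of a DEG list against gene sets drawn from a background universe, followed by multiple-testing correction. The sketch below shows that calculation in outline; the enrichment tool, background definition, pathway sizes, and overlap counts used by the authors are not specified here, so those values are illustrative.

```python
# Minimal sketch of gene-set overrepresentation (hypergeometric) testing with
# BH correction. The pathway sizes and overlap counts are illustrative only;
# the authors' enrichment software and background universe are not given here.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def enrichment_pvalue(n_universe, n_pathway, n_degs, n_overlap):
    """P(X >= n_overlap) when drawing n_degs genes from a universe of
    n_universe genes, of which n_pathway belong to the pathway."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_pathway, n_degs)

# Illustrative: 11,542 expressed genes as the universe, 1344 diet DEGs,
# and hypothetical pathway sizes/overlaps for pathways named in the text.
universe, degs = 11542, 1344
pathways = {"metabolic pathways": (1200, 220),
            "oxidative phosphorylation": (130, 40),
            "biosynthesis of amino acids": (75, 25)}

pvals = [enrichment_pvalue(universe, size, overlap if False else degs, overlap)
         if False else enrichment_pvalue(universe, size, degs, overlap)
         for size, overlap in pathways.values()]
reject, padj, _, _ = multipletests(pvals, method="fdr_bh")
for (name, _), p, q in zip(pathways.items(), pvals, padj):
    print(f"{name}: p = {p:.2e}, BH-adjusted p = {q:.2e}")
```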
For each GO term category, 95 GO biological processes, 44 GO cellular components, and 24 GO molecular functions were significantly overrepresented in strain DEGs (p adj < 0.05), with the top 10 most significantly overrepresented GO terms in DEGs by strain shown in Fig. 3C-E. In contrast to the enrichments for diet DEGs, very few of the enrichments for strain DEGs were related to metabolism. Instead, most enrichment terms were related to basal biological functions such as cell or tissue motility, cell division, tissue development, and substrate binding; most cellular compartment enrichments were derivatives of the cell membrane as opposed to the mitochondria (Supplementary Table 6, Additional file 2). A query of the GWAS catalog identified DEGs in the CC that were associated with obesity-related traits in humans We were next interested in identifying clinically important genes suspected of contributing to complex traits in humans to provide context for our findings relative to human obesity. Using the GWAS catalog to guide our search, we found that 15.8% of the genes expressed in the liver in this study (1819/11,542) have previously been found to be associated with obesity traits in humans [4]. Of these 1819 genes expressed in the livers of the CC mice that were also found to be associated with obesity traits in humans, greater than 85% (1570/1819) were diet DEGs, strain DEGs, or significantly correlated with body fat % in this CC study. Using the CC as a model for obesity, we thus identified over 1500 genes expressed in the liver whose expression levels were either under genetic regulation, influenced by diet, or correlated with body fat %, and which were also clinically important in humans. Of the 1344 genes differentially expressed by diet, 214 genes were found to be associated with obesity traits in humans according to the GWAS database; 65 of these 214 genes were also significantly correlated with body fat % in the CC (Fig. 4A; see Supplementary Table 7, Additional file 2). Out of 9436 genes differentially expressed by CC strain, 1516 genes were found to be associated with obesity traits in humans according to the GWAS database, including Hmga1 and Irs1; 431 of these 1516 genes were also significantly correlated with body fat % in the CC (Fig. 4B; see Supplementary Table 8, Additional file 2). By intersecting our lists of genes across multiple analyses, we found 434 differentially expressed genes with expression levels correlated with body fat % in the CC that were associated with obesity traits in humans (Fig. 4C; see Supplementary Tables 7 and 8, Additional file 2), with three genes exclusively differentially expressed by diet, 369 genes exclusively differentially expressed by strain (e.g., Ide), and 62 genes differentially expressed by both diet and strain (e.g., Pik3r1).

(See figure on next page.) Fig. 2 Expression patterns and enrichment of diet DEGs. A The top 20 most significant (BH-adjusted p ≤ 2.37 × 10−8) diet DE genes' average Z scores of median robust multi-array average (RMA) normalized gene expression for each CC strain on either the high-protein (HP) or high-fat high-sucrose (HS) diet, shown ordered from top to bottom by level of gene expression on the HP diet (highest to lowest). The genes' average Z scores for each CC strain and diet are clustered by Euclidean distance on the x-axis. ‡Denotes genes also differentially expressed by strain. *Indicates genes with human homologs found in the GWAS catalog to be associated with at least one obesity-related trait. Annotation and limma results are shown for all diet DEGs in Supplementary Table 3, Additional file 2. Limma analysis of microarray data revealed genes differentially expressed by diet showing significant enrichment (p adj < 0.05) for B KEGG pathways (20 total), C GO biological processes (105 total), D GO cellular components (45 total), and E GO molecular functions (37 total). Pathways are ordered from top to bottom by significance (highest to lowest) and colored by gene richness. The top 10 enrichments for each ontology category were all upregulated on the HP diet, except for the GO cellular component 'integral component of membrane,' which was downregulated. All significant enrichment terms and enrichment analysis results are shown in the Supplementary material.
Differences in diet macronutrient composition had mild effects on broad sense heritability (H²) estimates for gene expression levels To quantify the degree to which genetic variation influences variation in gene expression levels, we calculated broad sense heritability (H²), which estimates the proportion of phenotypic variation attributable to genetic variation [20], for the 11,542 genes used for differential gene expression analysis. Using hepatic gene expression as the observed "phenotype" in this study, H² was estimated by calculating the intraclass correlation (rI) and coefficient of genetic determination (g²) from the between- and within-strain mean square values (MSB and MSW, respectively) derived from linear models. The proportion of variation accounted for by differences between strains can in general be approximated by rI, while the calculation of g² takes into consideration the additive genetic variance that doubles during inbreeding [18,20,47]. Estimates of H² based on g², calculated using MSB and MSW derived from the "full" additive linear models for the 11,542 genes expressed in the liver used for differential gene expression analysis, ranged from −0.056 to 0.983 with a median g² of 0.173. To assess whether differences in macronutrient composition ("diet environment") influenced H² by DEG status, rI and g² summary statistics were calculated for all expressed genes, diet DEGs, and strain DEGs (Table 1); g² for diet DEGs ranged from −0.044 to 0.735 with a median of 0.195, while g² for strain DEGs ranged from 0.045 to 0.983 with a median of 0.211. For diet-specific g², the minimum g² values were slightly less than 0, implying that the variation in expression levels for these genes was greater within strains than between strains, but maximum and median g² values were similar across diets and DEG status. Overall, the distributions of g² for the HP and HS diets did not differ significantly by either the Mann-Whitney test (W = 67,447,080, p = 0.098) or the Kolmogorov-Smirnov test (D = 0.017, p = 0.074), demonstrating that the proportion of variation in gene expression levels attributed to genetic variation stays relatively constant despite differences in macronutrient composition.
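The rI and g² estimates described above come from between- and within-strain mean squares; for a single gene they can be reproduced from a one-way ANOVA of expression on strain, as sketched below. Treating the design as balanced, ignoring the diet and week covariates of the "full" models, and using the (2n − 1) weighting commonly cited for inbred strains are all simplifying assumptions of this sketch rather than the authors' exact procedure.

```python
# Minimal sketch: intraclass correlation (rI) and coefficient of genetic
# determination (g2) for one gene from a one-way ANOVA of strain.
# Assumptions: approximately balanced design (equal mice per strain), no
# diet/week covariates, and the (2n - 1) weighting as a commonly used
# inbred-strain form consistent with the doubled additive variance in the text.
import numpy as np
import pandas as pd

def strain_heritability(df, value="expr", group="strain"):
    groups = [g[value].to_numpy() for _, g in df.groupby(group)]
    k = len(groups)                           # number of strains
    n = np.mean([len(g) for g in groups])     # mice per strain (balanced approx.)
    grand = df[value].mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msb = ss_between / (k - 1)
    msw = ss_within / (len(df) - k)
    r_i = (msb - msw) / (msb + (n - 1) * msw)      # intraclass correlation
    g2 = (msb - msw) / (msb + (2 * n - 1) * msw)   # genetic determination
    return r_i, g2

# Illustrative toy data: 4 strains x 3 mice (strain names and values made up).
toy = pd.DataFrame({
    "strain": np.repeat(["CC001", "CC002", "CC003", "CC004"], 3),
    "expr":   [5.1, 5.3, 5.0, 6.2, 6.0, 6.4, 4.8, 4.9, 5.0, 5.9, 6.1, 6.0],
})
print(strain_heritability(toy))
```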
To quantify the proportion of the total gene expression variation accounted for by differences between diets, we next calculated the diet intraclass correlation (ICC) using the diet MSB and MSW values derived from the "full" additive linear models and then calculated summary statistics by DEG status group, i.e., all expressed genes, strain DEGs, and diet DEGs (Table 1). Diet ICC for all expressed genes ranged from −0.017 to 0.799 with a median diet ICC of 0.015. Similarly, diet ICC for strain DEGs ranged from −0.017 to 0.787 with a median of 0.019. Though the maximum diet ICC for diet DEGs (diet ICC = 0.799) was similar to the diet ICC maximum values for all expressed genes and strain DEGs (Table 1), the diet DEGs' minimum (diet ICC = 0.099) and median (diet ICC = 0.235) estimates were slightly higher, confirming that the proportion of gene expression variation explained by diet differences was modestly increased for diet DEGs. To investigate the degree to which gene × environment (diet) effects mediate variation in gene expression relative to genetics and environment, additional linear mixed model analyses with strain, diet, and strain × diet interactions all as random effects were performed for each gene to estimate the relative heritable variation that can be attributed to strain, diet, and strain × diet effects. From the results of these models, we calculated the variance for each of these terms and found that the proportion of heritable variation for gene expression attributed to strain × diet interactions was on average small (2.6%) and remained the same regardless of DEG status (Table 2). For all genes used in differential expression analysis, the largest proportion of heritable variation for gene expression can be attributed to genetic background (strain) on average (30.3%), while the proportions of heritable variation for gene expression attributed to diet (3.9%) and strain × diet interactions (2.6%) were much smaller.

Fig. 3 Expression patterns and enrichment of transcripts differentially expressed by CC strain. A The top 20 most significant (BH-adjusted p ≤ 2.631 × 10−56) strain DE genes' average Z scores of median robust multi-array average (RMA) normalized gene expression for each CC strain on either the high-protein (HP) or high-fat high-sucrose (HS) diet are shown. Gene average RMA Z scores for each CC strain and diet are clustered according to Euclidean distance by CC strain and diet on the x-axis and by gene on the y-axis. The human homolog of Gdpd3 was found in the GWAS catalog to be associated with at least one obesity-related trait. Annotation and limma results are shown for all strain DEGs in Supplementary Table 5, Additional file 2. Limma analysis of microarray data revealed genes differentially expressed by strain showing significant enrichment (p adj < 0.05) for B KEGG pathways (13 total), C GO biological processes (95 total), D GO cellular components (44 total), and E GO molecular functions (24 total). Pathways are ordered from top to bottom by significance (highest to lowest) and colored by gene richness. The top 10 enrichments for each ontology category were all upregulated on the HP diet, except for the linoleic acid metabolism KEGG pathway and the GO molecular functions "monooxygenase activity" and "oxidoreductase activity, acting on paired donors…," which were downregulated. All significant enrichment terms and enrichment analysis results are shown in Supplementary Table 6.
As expected, the proportion of heritable variation for gene expression attributed to diet was increased in diet DEGs (18.7%), and the proportion of heritable variation for gene expression attributed to strain was increased in strain DEGs (36.0%).

Table 1 Heritability estimate and diet intraclass correlation summary statistics for all expressed genes and DEGs. Post-diet heritability estimates were calculated from linear models including strain, diet, and week as covariates (r_I or g² "full") for gene expression of the 11,542 expressed genes used in limma differential gene expression analysis. Diet-specific estimates of broad sense heritability for gene expression levels, represented by intraclass correlations (r_I) and coefficients of genetic determination (g²), were also calculated accordingly for each trait using the MSB and MSW for strain derived from linear models with strain and week as covariates, using only data from each experimental diet per model as indicated, to assess how different diet "environments" affect heritability. The intraclass correlation for diet (Diet ICC), which is the proportion of the total phenotypic variation that is accounted for by differences between diets, was calculated to compare the proportion of variation in gene expression attributed to diet in general versus genetics. Summary statistics were calculated for each group of genes after heritability estimates and diet ICC were obtained. g² accounts for the additive genetic variance that doubles during inbreeding and may be a more appropriate estimate of broad sense heritability in this study; however, both r_I and g² values are presented to facilitate comparisons with other findings in the literature.

Transcriptional co-expression network analysis identified key modules associated with adiposity

Because polygenic obesity is a complex physiological trait, we used a gene co-expression network approach, specifically weighted gene co-expression network analysis (WGCNA), to characterize the effects of strain and diet on the expression of groups of related genes in addition to assessing genes individually. WGCNA determines which genes have similar expression profiles using a clustering method based on correlations of gene expression, which identifies the network modules (groups of related genes); measures derived from gene expression correlations influence the strength of connections between genes within the network, where the highly interconnected genes that form modules may be components of biological pathways, helping to bridge the effects of individual genes and resulting phenotypes [45,106,107].

(Figure legend) DEGs in the CC were associated with obesity-related traits in humans. Comparisons of differentially expressed genes, genes with expression levels significantly correlated with body fat % (BF%), and genes previously found to be associated with obesity-related traits in the GWAS catalog revealed A the number of genes differentially expressed by diet that also had expression levels significantly correlated with body fat % and associated with obesity traits in humans (65), B the number of genes differentially expressed by CC strain that also had expression levels significantly correlated with body fat % and associated with obesity traits in humans (431), and C the number of genes that fall under all four categories (62). Gene annotation, body fat % correlations, limma statistics, and a subset of related GWAS annotation are shown for the 65 diet DEGs in Supplementary

Taking a global approach to elucidate the relationship between gene expression and emergent phenotypes, WGCNA was performed using the 11,542 genes expressed in the liver and identified 13 clusters of genes (modules), each assigned an arbitrary color, where the number of genes contained in each module ranged from 42 to 3319 (Fig. 5A, Table 3; see Supplementary Table 9, Additional file 2 and Supplementary Fig. 2, Additional file 1), with varying degrees of connectivity between genes (see Supplementary Fig. 3, Additional file 1 for an example). The percentage of genes significantly correlated with body fat % (15.1–69.0%) and the percentage of DEGs by diet (0–49.5%) varied widely across modules, but the percentage of DEGs by CC strain remained consistently high (> 69%) for all modules (Table 3, Fig. 5B); the consistently high presence of strain DEGs in all modules compared to the lower percentage and variation of diet DEGs between modules suggests a stronger effect of CC strain than diet on gene expression. Of the DEGs with expression levels correlated with body fat % and associated with obesity-related traits in humans, the three diet DEGs were each assigned to different modules (black, blue, and pink); the number of strain DEGs per module ranged from 1 to 106, with the turquoise module containing the highest number of strain DEGs (Table 4). Per module, the number of DEGs differentially expressed by both diet and strain, with expression levels correlated with body fat % and also associated with obesity-related traits in humans, ranged from 0 to 19, where most modules contained at least one such DEG and the yellow module contained the most (Table 4). After establishing the modules, module eigengenes (MEs) were calculated to estimate the average expression profiles of each module, and Spearman's correlations were performed between MEs and phenotype data from all mice to determine the relationships between the modules and measured phenotypic traits, revealing significant correlations between the pink, yellow, salmon, tan, red, and magenta modules and body fat % (Fig. 5C; see Supplementary Table 10, Additional file 2). Consistent with the ME × phenotype correlations, modules that were significantly correlated with body fat % had relatively higher percentages of individual genes whose expression levels were significantly correlated with body fat %. Because multiple modules were associated with clinical phenotypes (Fig. 5C), we performed enrichment analysis to determine potential mechanisms underlying these associations. Module enrichment varied widely (Table 5), from no enrichments at all (tan) to 419 total enrichments (brown). Figure 6A–D shows the top enrichments for each module where present. Of the modules that were significantly correlated with body fat % in the CC, the tan module showed no enrichments, the pink module showed enrichment for the RNA binding GO molecular function (GO:0003723) (p_adj = 0.042), the salmon module showed enrichment for the GO biological processes regulation of angiogenesis (GO:0045765) (p_adj = 0.009) and cGMP metabolic process (GO:0046068) (p_adj = 0.046), and the magenta, red, and yellow modules showed multiple enrichments for GO biological processes, GO cellular components, GO molecular functions, and KEGG pathways:
genes assigned to the magenta module were enriched for terms related to endoplasmic reticulum function (Supplementary Fig. 4, Additional file 1); genes assigned to the red module were significantly enriched for GO terms and KEGG pathways involved in steroid, cholesterol, and fatty acid biosynthesis/metabolism (Supplementary Fig. 5, Additional file 1); and genes found in the yellow module were significantly enriched for a variety of functions in terms of GO terms and KEGG pathways, such as photoperiodism, transcription regulation, insulin signaling, and more (Supplementary Fig. 6, Additional file 1).

Both diet macronutrient composition and genetic background affected expression of modules containing homologs associated with obesity in humans

The magenta, red, and yellow modules were enriched for biological pathways and correlated with body fat % (Figs. 5C and 6E–G; Supplementary Figs. 4–6, Additional file 1). To determine whether these modules contained DEGs in the CC associated with obesity in humans, the lists of genes assigned to each module were intersected with the list of genes previously found to be associated with obesity traits in humans in the GWAS catalog (Supplementary Table 9, Additional file 2), with examples for these modules shown in Table 6. By intersecting our results across different analyses, DEGs important to obesity in humans were found in biologically relevant modules associated with body fat % in the CC, where the DEG distribution across modules highlighted the larger contribution of differential expression by strain over diet. After finding that gene modules were correlated with body fat % and contained DEGs, we ascertained whether the average gene expression profile of these modules, defined by their module eigengene (ME; the first principal component, PC1), differed by diet and/or strain. Wilcoxon rank-sum tests of the PC1 between mice fed the HP and HS diets for each module (Fig. 7A–E) revealed significant differences by diet for the yellow, red, magenta, pink, and tan modules (p < 0.01), but not the salmon module (p > 0.1). Interestingly, when the Kruskal-Wallis test was performed to determine whether PC1 differed by strain for each module (Fig. 7F–H), PC1 significantly differed by strain for the yellow, red, magenta, and salmon modules (all p ≤ 8.1 × 10⁻⁴), but not the pink or tan modules. Of the modules with MEs significantly correlated with body fat %, the yellow, red, and magenta modules exhibited differences by both macronutrient composition and CC strain. Relating module MEs and body fat %, Spearman's correlations performed between MEs and body fat % for the yellow, red, and magenta modules using data from all samples revealed a significant negative correlation between body fat % and the yellow module (rho = −0.28, p = 0.0016) and significant positive correlations between body fat % and the magenta (rho = 0.19, p = 0.037) and red (rho = 0.27, p = 0.0027) modules (Fig. 6E–G). Given the many enrichments in biological pathways and the significant differences in MEs by diet and CC strain for these three modules, Spearman's correlations were performed between MEs and body fat % by diet for each module to determine whether the relationship between MEs and body fat % remained consistent across diets for enriched modules. The correlation between expression of the yellow module and body fat % was significant and negative for the HS diet only, while the correlation between expression of the magenta module and body fat % was significant and positive for the HS diet only (Supplementary Fig. 7, Additional file 1). Unlike the yellow and magenta modules, where the correlations between MEs and body fat % were only significant for the HS diet, the correlation between the red ME and body fat % remained significant and consistently positive for both diets (Supplementary Fig. 7, Additional file 1).
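The module eigengene calculations and the ME-versus-phenotype, diet, and strain comparisons described above were run in R (WGCNA plus Spearman, Wilcoxon rank-sum, and Kruskal-Wallis tests). The Python sketch below is a simplified analog of those steps, not the authors' pipeline; names such as body_fat_pct, diet_labels, and strain_labels are hypothetical placeholders aligned to the rows of the module's expression matrix.

```python
import numpy as np
from scipy import stats

def module_eigengene(expr_module: np.ndarray) -> np.ndarray:
    """First principal component ('module eigengene') of a samples x genes
    expression matrix for one module, after per-gene standardization."""
    z = (expr_module - expr_module.mean(axis=0)) / expr_module.std(axis=0, ddof=1)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    me = u[:, 0] * s[0]                       # sample scores on PC1
    # Orient the eigengene so it correlates positively with average expression
    if np.corrcoef(me, z.mean(axis=1))[0, 1] < 0:
        me = -me
    return me

def relate_module(me, body_fat_pct, diet_labels, strain_labels):
    """Spearman correlation with body fat %, Wilcoxon rank-sum test for diet,
    and Kruskal-Wallis test for strain differences of one module eigengene."""
    rho, p_rho = stats.spearmanr(me, body_fat_pct)
    hp, hs = me[diet_labels == "HP"], me[diet_labels == "HS"]
    _, p_diet = stats.mannwhitneyu(hp, hs, alternative="two-sided")
    groups = [me[strain_labels == s] for s in np.unique(strain_labels)]
    _, p_strain = stats.kruskal(*groups)
    return {"spearman_rho": rho, "p_spearman": p_rho,
            "p_diet_wilcoxon": p_diet, "p_strain_kruskal": p_strain}
```

Applying relate_module to each of the 13 modules reproduces, conceptually, the pattern of module-level diet and strain effects summarized above.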
In summary, Spearman's correlations between MEs and body fat % by diet for biologically relevant modules showed that the direction and magnitude of the association between module MEs and body fat % depended on diet for the yellow and magenta modules, whereas for the red module the association remained consistent regardless of diet, demonstrating the modules' different responses to diet.

Discussion

Obesity is a complex and heterogeneous disease whose development is influenced by numerous biological factors, particularly genetics, diet, and gene expression. Though it has long been established that obesity results from a chronic imbalance between energy intake and expenditure at a fundamental level, our understanding of exactly how diet and genetics interact to influence gene expression, and how gene expression regulates the development of obesity, remains to be fully elucidated. Because the liver regulates metabolism of macronutrients, cholesterol, and triglycerides, we measured hepatic gene expression in the CC to gain insight into how diet and genetic background impact obesity and related traits. Correlations performed between hepatic gene expression levels and post-diet phenotype data revealed 2552 genes whose expression levels were significantly correlated with body fat % in the CC, some of which were negatively correlated, such as ApoM and Fmo3, while others were positively correlated, such as Aldh1a1 and Adipor2.

Table 6 DEGs assigned to enriched modules associated with obesity traits in humans. Multiple DEGs in the CC assigned to enriched modules were associated with obesity traits in humans in the GWAS catalog. The number of DEGs for the magenta, red, and yellow modules identified by WGCNA illustrates the larger contribution of differential expression by strain over diet. Examples of genes with human homologs associated with obesity traits are shown for each module, where "a" denotes genes that are significantly correlated with body fat % in the CC.

ApoM encodes a membrane-bound apolipoprotein associated with high-density lipoproteins, low-density lipoproteins, and triglyceride-rich lipoproteins; secreted through the plasma membrane, apolipoprotein M is involved in lipid transport [99]. In the mouse, leptin (the "satiety" hormone) and the leptin receptor are essential for expression of ApoM, but excess concentrations of leptin inhibited ApoM mRNA expression in a dose-dependent manner in the human hepatoma cell line HepG2, suggesting that leptin may mediate ApoM expression [55]. Although FMO3 is better known for its role in preventing trimethylaminuria (fishy odor syndrome) in humans [92], FMO3 also functions as a drug-metabolizing enzyme that catalyzes the NADPH-dependent oxygenation of various molecules, including therapeutic drugs and dietary compounds [65]. Intriguingly, studies in the mouse have suggested additional roles for FMO3 in health and disease, such as modulating cholesterol metabolism [96] and glucose and lipid homeostasis [78], and as a target for downregulation by insulin [58]. Since adipocyte secretion of leptin and insulin occurs in proportion to the volume of adipose tissue under "normal" circumstances, this may partially explain the negative correlations between body fat % and expression of ApoM and Fmo3. In the current study, the hepatic gene expression levels of Aldh1a1 and Adipor2 were positively correlated with body fat %.
Aldh1a1 encodes the protein aldehyde dehydrogenase 1 family, member A1 (ALDH1A1), also known as retinaldehyde dehydrogenase 1 (RALDH1), which is a prominent enzyme in the oxidative pathway of alcohol metabolism. However, various studies in mice have shown that ALDH1A1 also modulates hepatic gluconeogenesis and lipid metabolism through its role in retinoid metabolism [39], and upregulation of ALDH1A1 is associated with reduced adiponectin expression in adipose tissue after high-fat diet feeding [44]. Furthermore, mice without ALDH1A1 are resistant to diet-induced obesity, and inhibition of ALDH1A1 in mice suppresses weight gain [27,28], which is consistent with our finding and illustrates the potential for ALDH1A1 as a drug target for obesity prevention or treatment. Adipor2 encodes adiponectin receptor 2 which interacts with adiponectin to mediate fatty acid oxidation and glucose uptake [103]. An agonist of adiponectin receptor 2, the adipokine adiponectin, is inversely correlated with body fat mass and visceral adiposity in humans, though the mechanisms of how adiponectin's interactions with its receptors to elicit antidiabetic, anti-atherogenic, and anti-inflammatory effects are not fully understood [63]. After confirming the relationship between expression of genes related to obesity and body fat % in the CC, we investigated the effects of genetic background (strain) and diet on hepatic gene expression levels. Similar to adiposity and the obesity-related traits examined in our previous study [101], genetic background had a far stronger effect on hepatic gene expression than diet, as shown by the overwhelmingly larger number of significant DEGs by strain (9436) compared to the number of DEGs by diet (1344). Interestingly, gene expression of 28.9% of diet DEGs was significantly correlated with adiposity (389/1344) compared to 25% of strain DEGs (2367/9436). Of the top 20 most significant diet DEGs identified in the CC, carbamoyl-phosphate synthase 1 (Cps1), isovaleryl-CoA dehydrogenase (Ivd), neuropilin 1 (Nrp1), and pyruvate kinase L/R (Pklr) were previously found to be associated with obesity traits in humans [38,51,69,72,108], but only one of the top 20 most significant strain DEGs was associated with at least one obesity trait in humans, namely glycerophosphodiester phosphodiesterase domain containing 3 (Gdpd3) [108]. Gene enrichment analysis of DEGs revealed different trends between DEGs by diet compared to strain. DEGs by diet showed enrichment for KEGG pathways and Gene Ontology (GO) biological processes related to numerous types of metabolism, amino acid synthesis, and nonalcoholic fatty liver disease, whereas DEGs by strain showed enrichment for cell function pathways, type 1 diabetes, and fatty acid metabolism. Like KEGG pathway enrichment, GO term enrichment for cellular components and molecular functions also showed distinct differences between DEGs by diet compared to strain; DEGs by diet showed enrichment for multiple cellular components related to the mitochondrion, endoplasmic reticulum, and cell membrane, while DEGs by strain showed enrichment for cellular components related to the cell membrane, extracellular components, and cell surface. 
In terms of molecular functions, DEGs by diet showed enrichment for metabolism and binding for nutrients and small molecules such as cofactor binding, vitamin B6 binding, catalytic activity, and electron transfer activity, while DEGs by strain showed enrichment for binding related to general cell and tissue functions, such as extracellular matrix, collagen, signaling receptor, and fibronectin binding. The culmination of our results suggests that generally, diet alters gene expression for "acute" metabolic processes sensitive to environmental changes, but genetic background more heavily influences overall "essential" cellular function. Having identified genes with expression strongly influenced by diet or strain, we used the GWAS catalog as a guide to highlight clinically important genes found in our study by determining which DEGs may be most relevant to obesity-related traits in humans. The comparison between DEGs in the CC and genes in the GWAS catalog revealed that 65 diet DEGs and 431 strain DEGs correlated with body fat % in the CC have previously been identified as associated with obesity-related traits such as body fat distribution, BMI, waist-hip ratio, weight, and fat body mass in humans. One caveat regarding the number of DEGs in the CC found to be associated with obesity-related traits in humans is that our study focused only on gene expression in the liver of CC mice, while the genes listed in the GWAS catalog associated with obesity traits include candidates found in multiple tissue types; thus, including gene expression from additional tissue type such as brain or adipose tissue could yield additional candidate genes. Nonetheless, we identified genes expressed in the liver whose expression levels were either under genetic regulation, influenced by diet, or correlated with body fat %, which were also clinically important in humans using the CC panel as a model for obesity, which enabled the use of genetic "replicates" with high genetic diversity so that the results from this study are additive in scope. Using the between-and within-strain mean square values derived from linear models, we calculated H 2 estimates to quantify the degree to which genetic variation affects hepatic gene expression level variation. For the 11,542 genes included in our analysis, the range of coefficient of genetic determination (g 2 ) was broad as expected (g 2 = −0.056-0.983), but the median was lower than anticipated (g 2 = 0.173) given the strong effect of strain on the expression of most genes. Median H 2 estimates by DEG status increased slightly but not drastically (diet DEG g 2 = 0.195, strain DEG g 2 = 0.211), while H 2 estimates remained similar, suggesting that differences in macronutrient composition did not have a large impact on hepatic gene expression in this study. Upon examination of the relative heritable variation that can be attributed to strain, diet, and strain × diet effects for all genes, the largest proportion of heritable variation for gene expression can be attributed to genetic background (strain) on average (30.3%), while the proportions of heritable variation for gene expression attributed to diet (3.9%) and strain × diet interactions (2.6%) were much smaller, which reaffirms the strong effect of strain on gene expression relative to diet and strain × diet effects. 
However, one caveat of these approximations is that increasing the sample size would provide a better estimation of the relative heritable variation since the number of mice per strain per diet is relatively low, so the estimation of strain × diet effect may not be precise. Since obesity is a complex trait regulated by multiple genes, we used a gene co-expression network approach including the 11,542 expressed genes to find groups of genes that are similarly regulated by diet or strain and identified 13 gene modules comprised of a wide number of genes from 42 to 3319. Consistent with our DEG analyses, all modules were comprised largely of genes that were strain DEGs (> 69%), while the proportion of diet DEGs (0-49.5%) and genes with expression significantly correlated with body fat % (15.1-69.0%) varied much more widely, illustrating the variable effect of diet on gene expression compared to genetic background. Spearman's correlation of the MEs for identified modules with phenotypic data revealed six modules related to body fat %: tan, pink, salmon, magenta, red, and yellow. The MEs for all of these modules differed significantly by diet, except for the salmon module, suggesting that differences in diet macronutrient composition induce changes in gene expression for entire groups of genes. Similar to diet, the MEs for most of the modules also differed significantly by strain, except for the pink and tan modules. However, it is important to note that the ME variation within each strain appeared much higher for these two modules than the magenta, red, and salmon modules, an observation shown through the ability of utilizing genetic "replicates" with high genotypic and phenotypic diversity that is inherent to the CC; in fact, increasing the number of "replicates" would enhance the ability to find significant strain-by-diet differences. Thus, we show that both diet and strain may strongly affect hepatic gene expression, and that the CC can be used to interrogate the sources of inter-individual variation that underlies the variable response to diet observed in humans and mice. Enrichment analysis performed using the lists of genes assigned to each module allowed us to assess which modules identified in the CC may be most biologically relevant to obesity and human health. Of the six modules whose MEs were significantly correlated with body fat %, the number of enrichment terms were few to none for the salmon, pink, and tan modules, but the magenta, red, and yellow modules were significantly enriched for numerous functional pathways, biological processes, and/or diseases. For example, the magenta module was enriched for pathways related to endoplasmic reticulum (ER) function and contained 163 genes total, with 16 strain DEGs and five DEGs by both diet and strain associated with at least one obesity trait in humans. Two DEGs associated with obesity in humans from the magenta module that merit further study are stress-associated endoplasmic reticulum protein 1 (Serp1) and UDP-glucose glycoprotein glucosyltransferase 1 (Uggt1). Serp1 participates in the metabolism of proteins in the ER by protecting target proteins against degradation [102] and was differentially expressed by strain in the CC. 
Similarly, Uggt1 encodes the enzyme UDP-glucose:glycoprotein glucosyltransferase (UGT), which is also located in the lumen of the ER and provides quality control for protein transport by selectively enabling misfolded glycoproteins to rebind calnexin, resulting in either the proper folding of the glycoprotein or exposure to degradation enzymes if proper folding fails to occur [15]; Uggt1 was differentially expressed by both diet and strain in the CC. Studies have demonstrated that hepatic ER stress induced by obesity can lead to the development of hepatic insulin resistance and gluconeogenesis, likely through the activation of the JNK pathway [40,62,105]. Our findings reaffirm the association between obesity and alterations in hepatic gene expression related to ER function, suggest potential candidate genes for future study in relation to patient screening for diabetes risk, and provide a link between diet, five hepatic ER genes, obesity, and insulin resistance. Focusing on nine major genes pivotal to insulin signaling expressed in the liver of CC mice, the expression levels of six genes were significantly correlated with body fat % (Ide, Insig1, Insr, Irs2, Jak1, and Pik3r1), while six genes were only differentially expressed by strain (Hmga1, Ide, Insig1, Insig2, Irs1, and Jak1) and two genes were differentially expressed by both strain and diet (Irs2 and Pik3r1). Although all nine genes except Jak1 were assigned to a module in our network analysis, only Insig1, Insig2, and Irs2 were found in the enriched modules correlated with body fat % (magenta, red, or yellow). Assigned to the red module, Insig1 (insulin-induced gene 1) illustrates one pathway that insulin signaling regulates to alter lipid metabolism in both mice and humans [61]. In the livers of transgenic mice, overexpression of the INSIG1 protein reduces insulin-stimulated lipogenesis by inhibiting processing of sterol regulatory element-binding proteins (SREBPs) in the ER, membrane-bound transcription factors that activate lipid synthesis [17]. In humans, INSIG1 variants have been shown to influence obesity-related hypertriglyceridemia [83]. Two genes crucial to insulin signaling that were assigned to the yellow module were Insig2 (insulin-induced gene 2) and Irs2 (insulin receptor substrate 2). Similar to Insig1, Insig2 obstructs processing of SREBPs by binding to SREBP cleavage-activating protein in the ER, which results in blockage of cholesterol synthesis [100]. Genetic variants in INSIG2 (rs75666605) have been associated with severe obesity in a North Indian human population [68] and increased blood pressure and triglyceride levels in Brazilian obese patients [60]. Differentially expressed by both strain and diet, IRS2 is a vital mediator of insulin signaling since it acts as an immediate downstream substrate of insulin receptors and activates a cascade of serine-protein kinases to modulate numerous metabolic processes [2,14]. In mice, conditional knockout of Irs2 led to increased appetite and insulin resistance that progressed to diabetes [48] and lower levels of thyroid hormones [34]. In summary, our findings help explain the influences of genetic background and dietary macronutrient composition on clinically significant genes involved in insulin response relative to obesity development. 
For future studies, investigating the transcriptome and epigenome of both adipose tissue and hepatic tissue together would further clarify the genetic and dietary mechanisms that drive the cross talk between tissue types to modulate energy balance and insulin response in the context of obesity development. If possible, integrating microbiome data would provide yet another "piece of the puzzle" for the elucidation of how genetic and environmental factors interact in the development of obesity. Nonetheless, our findings show that both variation in genetic background and diet can strongly influence hepatic gene expression of both individual genes and groups of related genes relevant to obesity.

Conclusions

This study examined the effects of genetic background and dietary macronutrient composition on hepatic gene expression in relation to obesity. To relate adiposity and obesity-related traits to hepatic gene expression, correlations were performed using phenotype data and microarray data, revealing 2552 genes whose expression levels were significantly correlated with adiposity. In general, the effect of strain on hepatic gene expression was much stronger than that of diet, as demonstrated by differential gene expression analysis, which found over 9000 genes differentially expressed by strain compared to 1344 genes differentially expressed by diet. Interestingly, diet differentially expressed genes (DEGs) were enriched for many biological pathways associated with substrate metabolism, whereas strain DEGs were enriched for pathways less sensitive to environmental perturbations. Because common obesity is caused by multiple genes, weighted gene co-expression network analysis (WGCNA) was performed to identify clusters of related genes grouped into "modules." Multiple gene modules were found that differed in average expression by both diet and strain, and three of these modules were correlated with adiposity and enriched for biological pathways related to obesity development. By combining all the analyses above and searching the genome-wide association studies (GWAS) catalog, the list of obesity candidate genes found via GWAS in humans can be narrowed down to increase the success of future functional validation studies. Furthermore, we demonstrated that both strain and diet influence the expression of individual genes as well as of groups of related genes. By integrating phenotype data into our analysis, we found both individual genes and gene modules expressed in the liver that were related to adiposity and other clinical traits. This work sheds light on one way that genetic background and diet influence adiposity: the identification of liver-expressed genes related to adiposity provides concrete preliminary suggestions of specific "intermediary" mechanisms, such as insulin signaling, that bridge genetics and diet with obesity and that may be validated in future studies, contributing to the field of precision nutrition.

Animals, husbandry, diets, and phenotyping

Details on the origin, housing, husbandry, treatment of the CC mice, diet compositions, and phenotyping have been described previously [101]. Briefly, female mice from 22 CC strains (total n = 204) were obtained from the UNC Systems Genetics Core Facility in 2016 and placed on either a high-protein (n = 102) or high-fat high-sucrose (n = 102) diet for 8 weeks, followed by analysis of body composition, metabolic rate, and physical activity.
After 8 weeks on experimental diets, mice were euthanized following a 4-h fast for the collection of blood and liver tissue. Subsequently, circulating cholesterol, triglyceride (TG), glucose, albumin, creatinine, urea/BUN, aspartate transaminase (AST), and alanine transaminase (ALT) levels were quantified using the Cobas Integra 400 Plus (Roche Diagnostics, Indianapolis, IN), according to the manufacturer's instructions. Circulating insulin was measured using an ultrasensitive mouse insulin ELISA (ALPCO Diagnostics, Salem, NH) per the manufacturer's instructions. Trimethylamine N-oxide (TMAO), choline, betaine, and carnitine were quantified using liquid chromatography-mass spectrometry (LC-MS) methods as described, with modifications [95]. Metabolic health scores were calculated using measurements of several metabolic risk factors (circulating glucose, insulin, glucose/insulin ratio, cholesterol, triglycerides, and body fat %) to approximate overall metabolic health [101].

Microarray analysis for identification of gene expression levels associated with post-diet traits and differentially expressed genes in liver tissue

Methods of RNA extraction from livers and evaluation of RNA integrity were performed as previously described [8]. Three mice per strain per diet were randomly selected for microarray analysis; high-quality RNA was available from the livers of 127 of the 204 CC mice and was hybridized to the Affymetrix Mouse Gene 2.1 ST 96-Array Plate using the GeneTitan Affymetrix instrument (Affymetrix, Inc., Santa Clara, CA) according to the standard manufacturer's protocol. The robust multiarray average (RMA) method was used to estimate normalized expression levels of transcripts (median polish and sketch-quantile normalization) using the affy R package [23]. The quality of sample arrays was then assessed using the R package arrayQualityMetrics [37] for outlier detection using three methods: distances between arrays/principal component analysis, computation of the Kolmogorov-Smirnov statistic K_a between each array's intensity distribution and the intensity distribution of the pooled data to compare individual array intensity to the intensity of all arrays, and computation of Hoeffding's statistic D_a to check individual array quality. Sample arrays flagged as outliers by all three methods were removed, leaving 123 of the 127 arrays for analysis (Supplementary Table 12, Additional file 2). Probes and transcript cluster IDs (TC IDs) were first filtered as described [70], resulting in 24,004 unique probes post-filter corresponding to 23,626 genes. Next, TC IDs were kept for analysis if their median expression was above the mean of all TC ID medians, or if their expression was above the mean of all TC ID medians in over 12.5% of samples, based on the assumption that, by chance, one of the 8 founders may contribute low/no-expression alleles. For TC IDs associated with the same gene, the TC ID with the highest expression was selected to represent that gene, so that each gene was represented by a unique TC ID for analysis, resulting in 11,542 TC IDs (genes) used for differential gene expression analysis and correlations between gene expression levels and phenotype data.
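A compact Python/pandas re-implementation of the TC ID filtering and per-gene representative selection just described might look like the sketch below. The data-frame layout and the use of mean expression to pick the "highest expressed" TC ID per gene are assumptions made for illustration.

```python
import pandas as pd

def filter_tc_ids(expr: pd.DataFrame, tc_to_gene: pd.Series) -> pd.DataFrame:
    """Filter transcript clusters (TC IDs) roughly as described in the text.

    expr       : samples x TC IDs matrix of RMA-normalized expression
    tc_to_gene : maps each TC ID (column of expr) to a gene symbol
    Keeps a TC ID if its median expression exceeds the mean of all TC ID
    medians, or if it exceeds that mean in more than 12.5% of samples;
    then keeps one TC ID (the most highly expressed) per gene.
    """
    medians = expr.median(axis=0)                 # per-TC ID median
    threshold = medians.mean()                    # mean of all medians
    frac_above = (expr > threshold).mean(axis=0)  # fraction of samples above
    keep = (medians > threshold) | (frac_above > 0.125)
    kept = expr.loc[:, keep]

    # One representative TC ID per gene: highest mean expression (assumption)
    rep = (pd.DataFrame({"gene": tc_to_gene.loc[kept.columns],
                         "mean": kept.mean(axis=0)})
           .sort_values("mean", ascending=False)
           .drop_duplicates("gene"))
    return kept.loc[:, rep.index]
```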
After filtering TC IDs and arrays for quality, calculations of multiple biweight midcorrelations (bicor) and their corresponding Student correlation p-values were performed for the unique TC IDs corresponding to 11,542 genes using the bicorAndPvalue function from the weighted gene co-expression network analysis (WGCNA) R package [45] to ascertain which genes' expression in the liver was correlated with post-diet traits. Next, differential gene expression analysis was performed using the linear models for microarray analysis (limma) R package version 3.6.1 [73] and methods described [66] to find genes that were significantly differentially expressed by diet or CC strain. Genes with a Benjamini-Hochberg (BH)-adjusted p-value < 0.05 were designated as differentially expressed (DE). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway and gene ontology (GO) enrichment analyses were performed using the kegga and goana functions in limma for differentially expressed genes, with the false discovery rate (FDR) cutoff set to 0.05.

Broad-sense heritability estimates and diet intraclass correlations of hepatic gene expression levels

Broad-sense heritability (H²) estimates and the intraclass correlations (ICC) for diet were calculated as described previously [101] for the 11,542 genes used in limma analysis to assess the degree of influence on gene expression variation from genetics (strain) and diet, respectively. H² was estimated by calculating the intraclass correlation (r_I) and the coefficient of genetic determination (g²) using mean square between (MSB) strains and mean square within (MSW) strains values derived from linear regression analysis [20]. The following linear models were fit using the lm function, implementing Satterthwaite approximations on the output of lm as described [54], to obtain MSB and MSW values for the r_I and g² calculations: (1) a "full" additive model with strain, diet, and week (mouse "batch") as variables, fitted with gene expression data from both experimental diets; (2) an "HP" additive model including strain and week as variables, fitted with gene expression data from only mice fed the HP diet; and (3) an "HS" additive model including strain and week as variables, fitted with gene expression data from only mice fed the HS diet. H² estimates derived from models fitted with data from all mice post-diet compare the contribution of genetics (strain) and diet overall to heritable gene expression level variance, while diet-specific H² estimates were calculated to discern differences in heritability affected by differences in macronutrient composition. The diet ICCs were calculated using the mean square between (MSB) diets and mean square within (MSW) diets derived from the "full" additive linear model described above. Additional linear mixed model analyses with strain, diet, and strain × diet interactions all as random effects were performed for each gene to estimate the relative heritable variation in gene expression that can be attributed to strain, diet, and strain × diet effects.

Weighted gene co-expression network analysis (WGCNA)

The WGCNA R package was used to identify modules for the 11,542 expressed genes used in the microarray analysis of differentially expressed genes, since complex traits often result from changes in expression of multiple genes. Expression data from the 123 non-outlier sample arrays were used to detect modules, which are groups of highly correlated genes with similar connection strengths [24,106].
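The per-gene differential expression testing above relies on limma's moderated linear-model framework in R. The Python sketch below is only a conceptual stand-in, using ordinary least squares per gene plus Benjamini-Hochberg adjustment and no empirical-Bayes moderation; input objects and the simple design formula are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def simple_diet_de(expr: pd.DataFrame, meta: pd.DataFrame) -> pd.DataFrame:
    """Per-gene test of a diet effect adjusting for strain, followed by
    Benjamini-Hochberg correction. expr is samples x genes; meta has
    'diet' and 'strain' columns aligned with expr's rows."""
    pvals, effects = [], []
    for gene in expr.columns:
        df = meta.assign(y=expr[gene].to_numpy())
        fit = smf.ols("y ~ C(diet) + C(strain)", data=df).fit()
        # p-value and coefficient of the diet contrast (HP vs. HS)
        term = [t for t in fit.params.index if t.startswith("C(diet)")][0]
        pvals.append(fit.pvalues[term])
        effects.append(fit.params[term])
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return pd.DataFrame({"gene": expr.columns, "effect": effects,
                         "p": pvals, "p_adj": p_adj, "significant": reject})
```

Genes with p_adj < 0.05 would be flagged as DE, mirroring the BH cutoff used in the limma analysis.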
The soft threshold was chosen by running the pickSoftThreshold function to determine the best fit to a scale-free topology, and beta was set to 5 because it was the lowest power value at which the R² value crossed the 0.9 threshold for approximate scale-free topology, while the connectivity measures indicated the possibility of finding highly connected genes. The blockwiseModules function was run to construct the unsigned network in one block, calculate an adjacency matrix with Pearson correlations, calculate the topological overlap matrix (TOM) using the signed method, cluster genes using the default average linkage hierarchical clustering, and establish modules by the dynamic hybrid tree cut method [45]. Next, the mergeCloseModules function was used to merge closely related and highly correlated modules. Module eigengenes were calculated, and Spearman's correlations were performed between module eigengenes and measured phenotypes. KEGG pathway enrichment and gene ontology analyses were performed on genes within each module using Enrichr as described [70] to see which modules contained genes associated with biological functions or diseases. Cytoscape [77] was used to generate a visualization of the relationships between genes within a module, using the magenta module as an example.

Human GWAS catalog analysis

Entries in the EMBL-EBI Human GWAS catalog v1.0.2 accessed in 2021 were indexed to matching mouse genes [4] to compare the DEGs found in the CC with homologous genes in humans. Human gene symbols from the "MAPPED_GENE" catalog column (described here: https://www.ebi.ac.uk/gwas/docs/methods/curation) were matched against mouse gene symbols after case normalization, white space removal, and, in the case of multiple mapped genes, delimiter separation.

Additional statistical analyses

All statistical analyses were performed in R (v.3.6.1) [71]. Diet or strain effects on module eigengenes were assessed using the two-group Mann-Whitney U (Wilcoxon rank) test or the Kruskal-Wallis test, respectively. The Mann-Whitney U (Wilcoxon rank) test and the Kolmogorov-Smirnov test were performed to test whether the distributions of diet-specific H² estimates (g²) differed significantly. In general, p-values were adjusted using the Benjamini-Hochberg (BH) method where indicated.

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1186/s12263-022-00714-x.
Supplementary Table 2. Significant correlations between phenotypes and the top 30 genes with expression levels most strongly correlated with post-diet BF%.
Supplementary Table 3. All genes with significant differential expression by diet (1,344
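The gene-symbol matching against the GWAS catalog described in the methods above (case normalization, whitespace removal, and splitting of multi-gene entries) can be sketched in Python as follows; the exact delimiters and column names are assumptions based on the description, not a guaranteed match to the authors' script.

```python
import pandas as pd

def match_gwas_genes(gwas: pd.DataFrame, mouse_genes: set) -> pd.DataFrame:
    """Match human 'MAPPED_GENE' symbols from the GWAS catalog against mouse
    gene symbols: case normalization, whitespace removal, and splitting of
    entries that list multiple genes. gwas is assumed to contain the
    'MAPPED_GENE' and 'DISEASE/TRAIT' columns of the catalog export."""
    mouse_upper = {g.upper(): g for g in mouse_genes}
    rows = []
    for _, rec in gwas.iterrows():
        mapped = str(rec["MAPPED_GENE"])
        # Multiple mapped genes are assumed separated by ';', ' - ', or ','
        for sym in (s.strip() for part in mapped.split(";")
                    for piece in part.split(" - ")
                    for s in piece.split(",")):
            key = sym.replace(" ", "").upper()
            if key in mouse_upper:
                rows.append({"mouse_gene": mouse_upper[key],
                             "human_symbol": sym,
                             "trait": rec["DISEASE/TRAIT"]})
    return pd.DataFrame(rows).drop_duplicates()
```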
Spinning particle orbits around a black hole in an expanding background

We investigate analytically and numerically the orbits of spinning particles around black holes in the post Newtonian limit and in the presence of cosmic expansion. We show that orbits that are circular in the absence of spin get deformed when the orbiting particle has spin. We show that the origin of this deformation is twofold: (a) the background expansion rate, which induces an attractive (repulsive) interaction due to the cosmic background fluid when the expansion is decelerating (accelerating), and (b) a spin-orbit interaction, which can be attractive or repulsive depending on the relative orientation between spin and orbital angular momentum and on the expansion rate.

I. INTRODUCTION

Even though most astrophysical bodies have spins and evolve in an expanding cosmological background, their motion is described well by ignoring the cosmic expansion and under the nonspinning test particle approximation, for large distances from a central massive body and for relatively low spin values [1]. These approximations however become less accurate for large values of the spin and/or when the mass of the cosmic fluid inside the particle orbit becomes comparable to the mass of the central massive object. For such systems new types of interactions appear which are proportional to the time derivatives of the cosmic scale factor and to the spin of the orbiting particle. For example, phantom dark energy models can lead to dissociation of all bound systems in the context of a Big-Rip future singularity [2][3][4][5]. Also, the spin-curvature interaction [6] can modify the motion of test particles in black hole spacetimes [7][8][9][10][11] due to spin-spin or spin-orbit couplings [12][13][14], or make the motion chaotic [15][16][17], thus modifying significantly the orbit of the test body and leading to the emission of characteristic forms of gravitational waves [18][19][20][21]. Such interactions have been investigated previously for nonspinning test particles in an expanding background around a massive body (McVittie background [22]), and it was shown that accelerating cosmic expansion can lead to dissociation of bound systems in the presence of phantom dark energy with equation of state parameter w < −1 [2][3][4]. In the absence of expansion but in the presence of spin of the test particles, it has been shown that spin-orbit and spin-spin interactions in a Kerr spacetime can lead to deformations of circular orbits for large spin values [14]. In view of these facts, the following interesting questions emerge:
1. Are there circular orbit deformations for spinning test particles in the post Newtonian limit of the McVittie background (Schwarzschild metric embedded in an expanding background)? Such deformations could be anticipated due to the coupling of the particle spin with its orbital angular momentum.
2. What is the nature of such deformations and how do they depend on the orientation of the spin with respect to the angular momentum?
3. How do these deformations depend on the nature of the background expansion?
These questions are addressed in the present analysis.
The structure of this paper is the following: In the next section we briefly review the Mathisson-Papapetrou (MP) equations [23] and the common supplementary conditions, and we introduce the McVittie background corresponding to a black hole embedded in an expanding background as well as its post Newtonian limit. In section III we discuss the conserved quantities of a spinning test particle in a given spherically symmetric metric in an expanding background, we consider the post Newtonian limit of the McVittie metric, and we construct the geodesic equations of a spinning particle using the Mathisson-Papapetrou equations. We also solve these equations numerically and identify the deformation of the orbits due to the presence of test particle spin. We identify the dependence of this deformation on the relative orientation between the spin and the orbital angular momentum of the spinning test particle. Finally, in section IV we summarize, discuss the implications of our results and identify possible future extensions of our analysis.

Consider a massive spinning test particle in MP's model [23,24]. The equations of motion of a spinning particle, originally derived by Papapetrou (1951) and later reformulated by Dixon [25,26], can be extracted through the corresponding Hamiltonian [27,28] or through the extremization of the corresponding action [29], whose variation is given in [30]; there, $\upsilon^{\mu} = dx^{\mu}/d\tau$ is the four-velocity of the test particle tangent to the orbit $x^{\mu} = x^{\mu}(\tau)$, $\tau$ is the proper time along the worldline $x^{\mu}(\tau)$, $p^{\mu}$ is its four-momentum and $S^{\mu\nu}$ are the components of the antisymmetric spin tensor. Also, $\Omega^{\mu\nu} = \eta^{IJ} e^{\mu}{}_{I}\, De^{\nu}{}_{J}/d\tau$ is an antisymmetric tensor, $\eta_{IJ} = e^{\mu}{}_{I} e^{\nu}{}_{J} g_{\mu\nu}$, and $e^{\mu}{}_{I}$ is a tetrad attached to each point of the worldline. The MP equations are of the form [30][31][32][33]

$$\frac{Dp^{\mu}}{d\tau} = -\frac{1}{2}\, R^{\mu}{}_{\nu\alpha\beta}\, \upsilon^{\nu} S^{\alpha\beta}, \qquad (2.2)$$

$$\frac{DS^{\mu\nu}}{d\tau} = p^{\mu}\upsilon^{\nu} - p^{\nu}\upsilon^{\mu}. \qquad (2.3)$$

The dynamical equations imply a spin-orbit coupling, i.e., the spin couples to the velocity of the orbiting particle, and the resulting spin force deforms the geodesic. The spin tensor keeps track of the intrinsic angular momentum associated with a spinning particle. The term on the r.h.s. of Eq. (2.2) shows an interaction between the curvature of the spacetime and the spin of the particle. Due to the coupling between curvature and spin, the four-momentum is not always parallel to $\upsilon^{\mu}$. This may be seen by multiplying Eq. (2.3) with $\upsilon_{\nu}$, which leads to Eq. (2.4), where $m = -p_{\mu}\upsilon^{\mu}$ is the rest mass of the particle with respect to $\upsilon^{\mu}$. Since $\tau$ is the proper time, the condition $\upsilon_{\mu}\upsilon^{\mu} = -1$ applies. The measure of the four-momentum provides the 'total' or 'effective' [8] rest mass $\mu$ ($p^{\mu} = \mu u^{\mu}$) with respect to $p^{\mu}$, where $u^{\mu}$ is the 'dynamical four-velocity'; it is equal to $m$ only if $\upsilon^{\mu}$ coincides with the four-velocity $u^{\mu}$ ($u^{\mu} = \upsilon^{\mu}$). In the linear approximation in the spin, $p^{\mu}$ and $\upsilon^{\mu}$ are parallel. Generally, since $u^{\mu} \neq \upsilon^{\mu}$, which means that $DS^{\mu\nu}/d\tau \neq 0$ (see Eq. (2.4)), a spinning particle does not follow the geodesics of the spacetime (the r.h.s. of Eq. (2.2) is non-zero, since $S^{\mu\nu} \neq 0$). Therefore its motion takes place along a more general worldline rather than a geodesic. In the context of the MP equations the multipole moments of the particle higher than the spin dipole are ignored [34]. This is the spin-dipole approximation, because the particle is described as a mass monopole and a spin dipole [35]. The equations at quadratic order in the spin have also been derived [36].
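The step from Eq. (2.3) to Eq. (2.4) mentioned above can be made explicit. Assuming the (−,+,+,+) signature implicit in the text (so that $\upsilon_{\mu}\upsilon^{\mu} = -1$ and $m = -p_{\mu}\upsilon^{\mu}$), contracting Eq. (2.3) with $\upsilon_{\nu}$ gives, presumably, the content of Eq. (2.4):

$$\upsilon_{\nu}\frac{DS^{\mu\nu}}{d\tau}
= \upsilon_{\nu}\left(p^{\mu}\upsilon^{\nu} - p^{\nu}\upsilon^{\mu}\right)
= -p^{\mu} + m\,\upsilon^{\mu}
\;\;\Longrightarrow\;\;
p^{\mu} = m\,\upsilon^{\mu} - \upsilon_{\nu}\frac{DS^{\mu\nu}}{d\tau},$$

so the momentum deviates from $m\upsilon^{\mu}$ exactly when the spin tensor evolves along the orbit, which is the statement that the curvature-spin coupling misaligns momentum and velocity.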
The MP equations can also be generalized to describe a spinning test particle in modified theories of gravity [37]. Eqs. (2.2) and (2.3) are the equations of motion for a spinning body and reduce to the familiar geodesic equations when the spin tensor $S^{\mu\nu}$ vanishes. However, they do not form a complete set of equations and we need further equations to close the system [62]. The problem of the unclosed set of equations (2.2) and (2.3) can be physically understood through the requirement that the particle must have a finite size, which means that the choice of the reference worldline is not uniquely defined (see https://d-nb.info/1098374932/34). The additional conditions used are the spin supplementary conditions (SSC) [63]. When we choose an SSC, we define the evolution of the test body along a unique worldline $x^{\mu}(\tau)$ and we fix the center of mass (the point where the mass dipole vanishes), which is usually called the centroid. The centroid is a single reference point inside the body with respect to which the spin is measured [64]. There are several SSCs, but two of them are more commonly used:

• The P condition (Mathisson-Pirani),
$$S^{\mu\nu}\upsilon_{\nu} = 0, \qquad (2.6)$$
so that the spin four-vector is perpendicular to the four-velocity; it implies that $d\mu/d\tau = 0$ [66]. It does not provide a unique choice of representative worldline, as it depends on the observer's velocity and therewith on the initial conditions. It is often referred to as the proper centre of mass [63].

• The T condition (Tulczyjew-Dixon) [67],
$$p_{\mu}S^{\mu\nu} = 0, \qquad (2.7)$$
so that the spin four-vector is perpendicular to the four-momentum; it implies that $dm/d\tau = 0$ [62]. This condition is physically correct, since the trajectory of the extended body is determined by the position of the center of mass of the body itself [68]. This constraint is a consequence of the theory, i.e., the Tulczyjew constraint can be derived from the Lagrangian theory [69], and it restricts the spin tensor to generate rotations only.

Analytic discussions and thorough reviews of the different choices of SSC may be found in refs. [70][71][72]. Generally, different SSCs are not equivalent, since every SSC defines a different centroid for the system. The author of ref. [1] points out that the difference between the two conditions (2.6) and (2.7) is of third order in the spin, so results for physically realistic spin values are unaffected. In what follows we use the T condition, which defines the centre of mass of the particle in the rest frame of the central gravitating body.

The McVittie metric describes an expanding cosmological background with strong gravity, such as the spacetime near a black hole or a neutron star. In a (t, r, θ, φ) coordinate system McVittie [73] found the solution given by eq. (29) of ref. [73] (with G = c = 1), here Eq. (2.8). Imposing the "no-accretion" condition $G^{t}{}_{r} = 0$ on the corresponding component of the Einstein tensor (there is no flux of relativistic mass across the equatorial surface [73]), we find that $\dot a/a = -\dot m/m$, i.e. $m = m_{0}/a(t)$, where $m_{0}$ is a constant of integration identified with the mass of the central body at the origin [74]. The curvature of space is here assumed to be asymptotically zero. At any instant of time $t_{1}$ the observer's coordinate for measuring distance from the origin is $r\,a(t_{1})$. If we write $M = m(t_{1})a(t_{1})$, the metric (2.8) takes the form of Eq. (2.10). In the weak field limit we have $M/(2r) \ll 1$, i.e. Eq. (2.11), which is the Newtonian limit of Schwarzschild's spacetime.
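For completeness, the integration behind the relation just quoted is immediate:

$$\frac{\dot a}{a} = -\frac{\dot m}{m}
\;\;\Longrightarrow\;\;
\frac{d}{dt}\ln\big[m(t)\,a(t)\big] = 0
\;\;\Longrightarrow\;\;
m(t)\,a(t) = m_{0}
\;\;\Longrightarrow\;\;
m(t) = \frac{m_{0}}{a(t)},$$

so the combination $m(t)\,a(t)$ remains constant and the mass parameter $M = m(t_{1})a(t_{1})$ introduced below is indeed time-independent.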
Setting $r = a(t)\rho$ and $R_{s} = 2M$, the metric (2.11) takes the form of Eq. (2.12). For a static background (a = 1) the metric (2.12) becomes the Schwarzschild metric in isotropic coordinates (the spacelike slices are as close as possible to Euclidean), as expected [75], while for $R_{s} = 0$ it becomes the FRW metric in spherical coordinates. The 'areal' radius [76] of the metric (2.12) is equal to the square root of the modulus of the coefficient of the angular part $d\Omega^{2}$ of the metric, and the corresponding modulus of the angular momentum, which is a constant of motion for a spinless particle, is defined in Eq. (2.14).

III.1. The MP equations in an expanding Universe

We consider the case where the spinning particle orbits on the equatorial plane, which means that $\theta = \pi/2$. Also, on the equatorial plane $\upsilon^{2} \equiv \upsilon^{\theta} = 0$ and $p^{\theta} = 0$, since $p^{\mu} = (\mu^{2}/m)\,\upsilon^{\mu}$. The metric (2.12) is independent of the φ coordinate and therefore admits a φ-Killing vector, e.g. $\xi^{\mu} = (0, 0, 0, 1)$, which gives Eq. (3.2), where $J_{z}$ is the z component of the angular momentum, a conserved quantity of the motion of the spinning particle. This constant of motion exists independently of the choice of the supplementary condition and reflects the symmetry of the background spacetime. The spin tensor has six independent components, but since we demand equatorial planar motion, the particle must have angular momentum only along the z axis ($J_{z} \neq 0$). The conditions $J_{x} = 0$, $J_{y} = 0$ and $p^{\theta} = 0$ (necessary conditions for motion in the equatorial plane) require that $S^{r\theta} = 0$ and $S^{\theta\phi} = 0$. Also, the absence of acceleration perpendicular to the equatorial plane implies that $S^{t\theta} = 0$ [57]. Thus, planar motion requires alignment of the spin with the orbital angular momentum, and the motion is characterized by only three independent spin components. With these assumptions the spin tensor effectively reduces to a vector and the formulation becomes simpler. From the T condition (2.7) we derive the spin components $S^{03}$ and $S^{13}$ in terms of $S^{01}$, Eq. (3.3). In order to complete the system of Eqs. (2.2) and (2.3) we have to add two more equations, corresponding to conserved quantities in the context of the T condition. The first is the dynamical mass $\mu$ [77] with respect to the four-momentum $p^{\mu}$, which is defined through Eq. (2.5), and the second is the particle's total spin s, which is defined as the positive root of Eq. (3.4). The first derivative of $s^{2}$ with respect to τ is $\dot s^{2} = 2 p_{\mu} S^{\mu\nu}\upsilon_{\nu}$ [62], which vanishes in the context of the T condition. From (3.4) we obtain Eq. (3.5). Using Eqs. (2.5) and (3.5) we define the parameter $\Omega^{2}$ as the ratio given in Eq. (3.7), which is a constant of motion, since $\mu$ and s are conserved quantities. From Eq. (3.7) it is easy to calculate the spin component $S^{01}$, Eq. (3.8). Thus, from Eqs. (3.3) and (3.8), the non-zero spin components in our setup are those given in Eq. (3.9). Using now the post Newtonian limit of the McVittie metric (2.12), starting from the MP equation (2.2) and setting the index µ = 1, it is straightforward to derive the radial geodesic equation for the spinning particle. We replace the distance ρ by $\rho = r/a$ and use the corresponding derivatives with respect to t, $\dot\rho = d\rho/dt$ and $\ddot\rho = d^{2}\rho/dt^{2}$. Also, we ignore terms of order $(R_{s})^{2}$ (post Newtonian limit); the final result is Eq. (3.10). Similarly, from the MP equation (2.2) with the index µ = 3 = φ we obtain Eq. (3.11), which would lead to orbital angular momentum conservation in the absence of spin (Ω = 0). Indeed, the first derivative of Eq. (2.14) with respect to time must vanish, and this gives Eq. (3.11) for a spinless particle [76].
Now we introduce a rescaling through the variables $\bar t \equiv t/R_{s}$, $\bar r \equiv r/R_{s}$ and $\Omega_{s} \equiv \Omega/R_{s} = s/(\mu R_{s})$, and from now on we omit the bar. The radial equation (3.10) then leads to Eq. (3.12), while the angular equation takes the form of Eq. (3.13).

• It is clear from Eq. (3.13) that the orbital angular momentum is not conserved due to the presence of the spin angular momentum. What is actually conserved is the z component of the total angular momentum $J_{z}$, which is expressed through Eq. (3.2) in terms of the orbital and the spin angular momenta.

• The driving force term proportional to $\Omega_{s}$ and $\dot\phi$ in the radial geodesic equation (3.12) has the form of a spin-orbit coupling and changes sign when the spin angular momentum reverses its direction with respect to the orbital angular momentum, which is proportional to $\dot\phi$. This term is responsible for the deformation of the circular orbits and induces the well-known chaotic behavior [78] of spinning particle orbits in the absence of background expansion.

In what follows we solve the geodesic equations (3.12)-(3.13) for different forms of the expansion of the cosmological background (static, accelerating, decelerating and constant) and for various values of the magnitude of the spin s, and consequently of the dimensionless parameter $\Omega_{s}$. We set $\dot r(t_{i}) = 0$ ($t_{i} = 1$ is the initial time of the simulation) and choose $\dot\phi(t_{i})$ so that $\ddot r_{i} = 0$, corresponding to an initially circular orbit. We discuss this issue analytically in the Appendix. Also, we normalize the scale factor by setting a(1) = 1 and place the particle at an initial distance $r_{i} = 6$ from the black hole.

III.2. Numerical Solutions

For a static universe (a(t) = 1), Eqs. (3.12) and (3.13) reduce to their static-background forms, Eqs. (3.14) and (3.15). The effect of the spin-orbit coupling force is demonstrated in Figs. 1 and 2, where we show circular orbits disrupted by the spin-orbit coupling. For $\Omega_{s}\dot\phi > 0$ (see Fig. 1) the spin-orbit coupling force is attractive, since the term $-\frac{3\Omega_{s}\dot\phi}{2r^{2}}$ in Eq. (3.12) is negative, and the circular orbits (of a spinless particle) are deformed inward.

FIG. 1. Spinning particle orbits in a static universe. The circular orbits that would be present for a non-spinning particle get disrupted due to the spin-orbit coupling in the presence of spin. For $\Omega_{s}\dot\phi > 0$ the spin-orbit coupling force is attractive and the circular orbits are deformed inward. The left panel (where $\Omega_{s} = 0.6$) corresponds to the maximum (critical) value of $\Omega_{s}$ for which the particle remains bounded. The innermost stable circular orbit (ISCO) is $3R_{s}$. When $\Omega_{s} > 0.6$, at some time the radius of the orbit becomes less than $3R_{s}$ and the particle is captured by the black hole (right panel). For a non-spinning particle ($\Omega_{s} = 0$) the circular orbits shown in the right panel remain undisrupted.

The orbit of the particle remains bounded if the radius of the orbit is larger than $3R_{s}$. This is the well-known effect of the 'innermost stable circular orbit' (ISCO) [50,79,80]. It is defined as the smallest circular orbit in which a test particle can stably orbit a massive object [81]. Since $r_{ISCO} = 3R_{s}$ for a spinless central body in Schwarzschild spacetime, it is obvious that only black holes have an innermost radius outside their surface. This minimum allowed radius for bounded motion corresponds to a critical value of the dimensionless parameter $\Omega_{s} = 0.6$ (left panel). Generally, in the presence of spin the orbits are bounded between a minimum and a maximum radius (the initial radius $r_{i} = 6$). As the spin increases ($\Omega_{s} > 0.6$), at some time the orbit's radius becomes less than $3R_{s}$ and the particle gets captured by the black hole (right panel).
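Since the full post-Newtonian equations (3.12)-(3.13) are not reproduced in the text above, the following Python sketch integrates only a schematic stand-in: the Newtonian radial acceleration in units of $R_{s}$, the centrifugal term, and the spin-orbit term $-3\Omega_{s}\dot\phi/(2r^{2})$ quoted above, with simple angular-momentum evolution. It serves purely to illustrate how the sign of $\Omega_{s}\dot\phi$ pushes an initially circular orbit inward or outward; all function names and parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, omega_s):
    # y = [r, rdot, phi, phidot]; units of R_s, G = c = 1, static background.
    # Radial acceleration: Newtonian -1/(2 r^2), centrifugal r*phidot^2, and
    # the spin-orbit term -3*Omega_s*phidot/(2 r^2) quoted in the text.
    # The phi equation simply conserves r^2*phidot (exact only for Omega_s=0);
    # this is a schematic stand-in, NOT Eqs. (3.12)-(3.13) of the paper.
    r, rdot, phi, phidot = y
    r_acc = -0.5 / r**2 + r * phidot**2 - 1.5 * omega_s * phidot / r**2
    phi_acc = -2.0 * rdot * phidot / r
    return [rdot, r_acc, phidot, phi_acc]

def integrate_orbit(omega_s, r_i=6.0, prograde=True, t_max=3000.0):
    # Initial conditions: circular orbit of a spinless particle (rdot = 0,
    # phidot = +-1/sqrt(2 r_i^3)); switching on omega_s then deforms it.
    phidot0 = (1.0 if prograde else -1.0) / np.sqrt(2.0 * r_i**3)
    y0 = [r_i, 0.0, 0.0, phidot0]
    capture = lambda t, y, omega_s: y[0] - 3.0   # stop at the ISCO, r = 3 R_s
    capture.terminal = True
    return solve_ivp(rhs, (1.0, t_max), y0, args=(omega_s,),
                     rtol=1e-9, atol=1e-12, events=capture)

for pro in (True, False):
    sol = integrate_orbit(omega_s=0.5, prograde=pro)
    tag = "Omega_s*phidot > 0" if pro else "Omega_s*phidot < 0"
    print(tag, "-> r ranges from", round(sol.y[0].min(), 3),
          "to", round(sol.y[0].max(), 3))
```

With the attractive orientation the toy orbit oscillates below the initial radius, and with the repulsive orientation above it, mirroring the inward/outward deformation described for Figs. 1 and 2.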
For a non-spinning particle ($\Omega_s = 0$) the circular orbits shown in the right panel remain undisrupted. For $\Omega_s \dot\phi < 0$ (see Fig. 2) the spin-orbit coupling force is repulsive, since the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in Eq. (3.12) is positive, and the circular orbits (for $s = 0$) are deformed outward. The orbits of the spinning particle are in all cases bounded between a minimum radius (here $r_i = 6$) and a maximum radius.

FIG. 2. As in Fig. 1, but the spinning particle orbits in the opposite direction. The circular orbits that would be present for a non-spinning particle get disrupted due to the spin-orbit coupling in the presence of spin. For $\Omega_s \dot\phi < 0$ the spin-orbit coupling force is repulsive and the circular orbits are deformed outward. For a non-spinning particle ($\Omega_s = 0$) the circular orbits shown in both panels remain undisrupted. Notice that the $\Omega_s = 0$ circular orbit, which corresponds to the absence of spin, is an inner bound for clockwise rotation. In any case the particle remains bound.

In the presence of a decelerating expansion with $a(t) \sim t^{2/3}$, the orbits (solutions of Eqs. (3.12)-(3.13)) are shown in Fig. 3 for clockwise and counterclockwise rotation and for initial conditions that would lead to a circular orbit in the absence of spin and expansion. In this case the effects of the expansion combined with the effects of the spin lead to rapid dissociation of the system or to capture by the black hole. The result depends on the magnitude of the attractive and repulsive terms in Eq. (3.12). In the left panel of Fig. 3 the initial rotation is clockwise, since $\dot\phi(1) < 0$. In this case the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in Eq. (3.12) is repulsive, and even though the cosmological background is decelerating, for large enough values of the spin, such as $\Omega_s = 1$, the particle rapidly gets deflected to an unbounded orbit. However, for small values of the spin, such as $\Omega_s = 0.1$ or $\Omega_s = 0.5$, the decelerating background dominates and at some time the particle gets captured by the black hole. Similar results are shown in the right panel of Fig. 3, where the initial rotation of the particle is counterclockwise. In this case the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in Eq. (3.12), which describes the spin-orbit interaction, is attractive. For small values of the dimensionless parameter $\Omega_s$, such as $\Omega_s = 0.1$, the spinning particle approaches the black hole, and when the radius of the orbit becomes less than $3R_s$ the particle gets captured by the strong gravity of the central body. However, when the spin takes larger values, such as $\Omega_s = 0.5$ or $\Omega_s = 1$, the particle gets deflected to an unbounded orbit, despite the initially attractive effective force induced on the spinning particle. The expansion effects lead to dissociation of the initially bound system.

Now we consider the effects of a de Sitter background expansion of the form $a(t) = e^{Ht}$ (3.16), where $H = \sqrt{\Lambda/3}$ and $\bar\Lambda = \Lambda R_s^{2}$ is the cosmological constant in dimensionless form. We solve the system of Eqs. (3.12) and (3.13) with the same initial conditions (circular orbit in the absence of spin and expansion). We set the cosmological constant equal to $\bar\Lambda = \Lambda R_s^{2} = 3\times10^{-2}$ [82] and present the trajectories of the particle in Fig. 4. We also show the corresponding orbit of a spinless particle in a static universe, in order to observe the deviation of each orbit from the circular one.
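As a quick consistency check using only the definitions above (with $G = c = 1$), the dimensionless expansion rate implied by this choice of $\bar\Lambda$ is

```latex
\bar H \equiv H R_s = \sqrt{\frac{\Lambda R_s^{2}}{3}} = \sqrt{\frac{\bar\Lambda}{3}}
      = \sqrt{\frac{3\times10^{-2}}{3}} = 0.1 ,
\qquad a(\bar t\,) = e^{\bar H\,\bar t}
```

i.e. the background grows by one e-fold per $10\,R_s$ of rescaled coordinate time.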
Setting the mass of a typical black hole to $M = 10\,M_{\odot} \simeq 2\times10^{31}\,$kg, we conclude that the dimensionless value $\Lambda R_s^{2} = 0.03$ corresponds to $\Lambda \simeq 3\times10^{6}\,\mathrm{s}^{-2}$, or $\Lambda \simeq 1.3\times10^{-42}\,\mathrm{GeV}^{2}$, much larger than the cosmological constant that drives the cosmic acceleration, $\Lambda \simeq 10^{-82}\,\mathrm{GeV}^{2}$. Due to this normalization, the orbit disturbances shown are much larger than those corresponding to a realistic cosmological setup. In the left panel of Fig. 4 the initial rotation is clockwise, since $\dot\phi(1) < 0$. In this case the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in Eq. (3.12) is positive and induces repulsion, so the repulsive effects of the accelerating cosmic expansion are amplified by the effects of the spin. For initial counterclockwise rotation (right panel in Fig. 4) the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in the radial equation is negative and induces attraction. However, for a spinless particle or for small values of the spin, and consequently of the parameter $\Omega_s$, such as $\Omega_s = 10$, the accelerating cosmological background dominates and the particles get deflected to unbounded orbits. On the contrary, when the spin of the particle is large, such as $\Omega_s = 100$, the attractive term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in the radial equation dominates over the expansion and the spinning particle gets captured by the black hole.

FIG. 4. As in Fig. 3, but the scale factor is of the form $a(t) = e^{\sqrt{\Lambda/3}\,t}$ (de Sitter universe) with $\bar\Lambda = \Lambda R_s^{2}$. Notice the strong repulsive effects on the trajectories of the spinning/spinless particle for initial clockwise rotation (left panel) due to the accelerating background expansion: the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in the radial equation (3.12) induces repulsion (left panel). However, for initial counterclockwise rotation (right panel) and extremely large spin, the particle is captured by the black hole, since the term $-\frac{3\Omega_s \dot\phi}{2r^{2}}$ in the radial equation induces attraction and dominates.

A crucial question of our analysis is after which cosmological time intervals the effects of the expansion become apparent. The answer can easily be obtained on dimensional grounds by equating the dimensionless parameters relevant for gravitational attraction ($M/r$) and background expansion ($H_0\,\Delta t$), where $H_0$ is the Hubble parameter $\dot a/a$ at the present time and $\Delta t$ is the time interval required for the expansion effects to become observable. By equating these two parameters we find that the required time interval after which the cosmological expansion effects would become apparent on the trajectories is $\Delta t \simeq M/(H_0\,r)$, where we have set $G = 1$. In S.I. units this reads $\Delta t \simeq GM/(H_0\,r\,c^{2})$, and in Table I we give some estimates of the cosmological time intervals for a typical black hole, the solar system, a typical galaxy and a typical cluster of galaxies. The time intervals are given in years, where we have considered $1/H_0 \simeq 1.4\times10^{10}$ years (the approximate age of the Universe).

The MP equations have also been generalized to the case of modified theories of gravity, in which the matter energy-momentum tensor is not conserved. In modified gravity theories the Schwarzschild metric gets modified and so does the weak-field limit, as we can see e.g. from Eq. (32) of Ref. [83], which gives the corresponding form for $f(R)$ theories ($G = 1$). There, $Q = rV(r)$ is the charge of the black hole, $V(r)$ is the potential and $R_0$ is the curvature of the spacetime, which we consider constant. An analysis along the lines of the derivation of the McVittie metric in General Relativity (as discussed in [5]) could generalize this metric to the case of $f(R)$ theories and also lead to the derivation of its Newtonian limit (the generalization of Eq. (2.12)). Alternatively, one could directly include the scale factor $a(t)$ as a new factor along with the radial coordinate in Eq. (32) of [83] and then take the Newtonian limit, showing that it is a good approximation of the dynamical field equations of $f(R)$ gravity. This task is beyond the scope of the present analysis but should be straightforward to implement in a future extension of our analysis.
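Before turning to the conclusions, the two order-of-magnitude estimates quoted above can be reproduced numerically. The physical constants and the solar-system values (M = M_⊙, r = 1 AU) used below are illustrative assumptions of this sketch and are not taken from Table I:

```python
import numpy as np

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 2.998e8                # m s^-1
hbarc_GeV_m = 1.973e-16    # GeV * m, conversion factor for natural units

# (i) Lambda_bar = Lambda * R_s^2 = 0.03 for a 10-solar-mass black hole
M_bh = 2.0e31                        # kg, as quoted in the text
R_s  = 2.0 * G * M_bh / c**2         # Schwarzschild radius in metres (~3e4 m)
Lam_m2   = 0.03 / R_s**2             # Lambda in m^-2
Lam_s2   = Lam_m2 * c**2             # ~3e6 s^-2, matching the quoted value
Lam_GeV2 = Lam_m2 * hbarc_GeV_m**2   # ~1.3e-42 GeV^2, matching the quoted value
print(R_s, Lam_s2, Lam_GeV2)

# (ii) time scale for expansion effects: Dt ~ G M / (H0 r c^2)
yr = 3.156e7                          # seconds per year
inv_H0 = 1.4e10 * yr                  # 1/H0 as assumed in the text
M_sun, AU = 1.989e30, 1.496e11        # kg, m (illustrative solar-system values)
Dt_years = (G * M_sun / (AU * c**2)) * inv_H0 / yr
print(Dt_years)                       # ~1.4e2 yr, order-of-magnitude only
```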
IV. CONCLUSIONS

We have constructed and solved numerically the MP equations in the post-Newtonian limit of the McVittie background, thus obtaining the orbits of spinning particles close to a massive object in an expanding cosmological background. We have identified the effects of a spin-orbit coupling which can be repulsive or attractive depending on the relative orientation between the spin and the orbital angular momentum. A static universe (no expansion) was shown to lead to disrupted spinning-particle orbits which are not closed and are confined between a maximum and a minimum radius. This range increases with the value of the spin. As expected, for the spin values for which the radius of the motion of the particle becomes less than $3R_s$, the particle is captured by the black hole. This result is in agreement with previous studies that have indicated the presence of such behavior of the orbits [31]. Interesting extensions of our analysis include the construction and solution of the MP equations in the strong-field regime of the McVittie metric, or the consideration of a different SSC, such as the P condition.

APPENDIX

In the present analysis we have focused on the distortion of orbits that would be circular in the absence of expansion and spin. In order to solve the system of equations (3.12) and (3.13) we have assumed that initially the test particle has zero radial velocity ($\dot r(t_i = 1) = 0$) and zero radial acceleration ($\ddot r(t_i = 1) = 0$). The initial value of the derivative $\dot\phi(1)$ can be derived through the geodesic equation (3.12). We set $a(t_i = 1) = 1$ and place the particle at the initial position $r_i = 6$ in units of $R_s$. Assuming a static Universe with $a(t) = 1$, we compute the initial angular momentum from equation (3.12). We set all the time derivatives of the scale factor equal to zero and thus arrive at the following quadratic equation:

$r_i^{2}\,(2 r_i - 1)\,\dot\phi(1)^{2} - 3\,\Omega_s\,\dot\phi(1) - 1 = 0$   (4.1)

Setting $\Omega_s = 0$, we obtain

$\dot\phi(1) = \pm\dfrac{\sqrt{11}}{66} \simeq \pm\,5\times10^{-2}$   (4.2)
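For completeness, the spinless value (4.2) follows directly from Eq. (4.1) with $\Omega_s = 0$ and $r_i = 6$:

```latex
\dot\phi(1)^{2} = \frac{1}{r_i^{2}\,(2r_i-1)} = \frac{1}{36\cdot 11}
\quad\Longrightarrow\quad
\dot\phi(1) = \pm\frac{1}{6\sqrt{11}} = \pm\frac{\sqrt{11}}{66} \simeq \pm\,5.0\times10^{-2}
```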
The Bacterial Microbiota of Edible Insects Acheta domesticus and Gryllus assimilis Revealed by High Content Analysis In the concept of novel food, insects reared under controlled conditions are considered mini livestock. Mass-reared edible insect production is an economically and ecologically beneficial alternative to conventional meat gain. Regarding food safety, insect origin ingredients must comply with food microbial requirements. House crickets (Acheta domesticus) and Jamaican field crickets (Gryllus assimilis) are preferred insect species that are used commercially as food. In this study, we examined cricket-associated bacterial communities using amplicon-based sequencing of the 16S ribosomal RNA gene region (V3–V4). The high taxonomic richness of the bacterial populations inhabiting both tested cricket species was revealed. According to the analysis of alpha and beta diversity, house crickets and Jamaican field crickets displayed significantly different bacterial communities. Investigation of bacterial amplicon sequence variants (ASVs) diversity revealed cricket species as well as surface and entire body-associated bacterial assemblages. The efficiency of crickets processing and microbial safety were evaluated based on viable bacterial counts and identified bacterial species. Among the microorganisms inhabiting both tested cricket species, the potentially pathogenic bacteria are documented. Some bacteria representing identified genera are inhabitants of the gastrointestinal tract of animals and humans, forming a normal intestinal microflora and performing beneficial probiotic functions. The novel information on the edible insect-associated microbiota will contribute to developing strategies for cricket processing to avoid bacteria-caused risks and reap the benefits. Introduction Over the next three decades, the global human population is expected to reach almost 10 billion people, which, combined with rising welfare, will result in significant resource use and environmental issues [1]. Population and consumption growth continue to drive up food demand and put strain on food supply systems [2,3]. More wildland might be converted to agriculture, allowing more cattle or crops to be raised, hence increasing global food production. However, such a decision would have detrimental consequences for the environment, including increased carbon emissions, high water and land usage, and a rapid loss of biodiversity [4,5]. Therefore, industries are obligated to think of strategies for the sustainable intensification of food production [6]. The suitable options are to consume less meat and/or start growing alternative protein sources of less resource-needy animal origin. Insect mass rearing is one of the solutions suitable for the feed and food industries [7]. Insects do not emit a large amount of waste heat since they are poikilotherms, and as a result, they have a high feed conversion ratio. In comparison to traditional livestock, growing 1 kg of insect mass requires less feed, water, and land area [8]. Insect rearing is also recognized for its economic benefits, as their and extend the shelf life of the finished product. Different drying technologies (sun drying, smoke drying, roasting, freeze drying, oven drying, microwaving) have different levels of effectiveness and influence the quality of products (such as sensory characteristics, bioactive compounds and protein extraction efficiency, microbiological safety, and shelf life) [31][32][33]. 
Different numbers of mesophilic aerobes, Enterobacteriaceae, and lactic acid bacteria are observed on crickets depending on the processing of insects [5,33]. The viable counts of bacteria on fresh G. assimilis were reported to reach 7.3 log CFU/g [34]. After crickets were blanched and following oven drying, it was about 6.52 log CFU/g [17]. There are many more scientific data on microbial contamination of house crickets. The count of bacteria in raw A. domesticus usually varies between 6.3 and 9.24 log CFU/g, while blanching reduces it to 2.3-4.39 log CFU/g. After drying, bacterial viable counts reach 1-4.8 log CFU/g values [5,23,24,26,27,31,[33][34][35][36]. Unfortunately, culture-dependent food microbiome investigations have limited contribution to knowledge. Standard approaches cannot culture more than 99 percent of naturally occurring microorganisms [37]. Moreover, research applying non-cultural methods such as metagenomics analysis can disclose cultivable as well as unculturable bacterial diversity. Pyrosequencing analysis of the 16S rRNA gene revealed that ready-to-be-consumed house crickets possess a high diversity of bacterial operational taxonomic units (OTUs) [5]. Processed in whole and powdered crickets were dominated by three bacterial phyla ascribed to Proteobacteria, Firmicutes, and Bacteroidetes. In sum, these bacterial groups represented up to 98.8 percent of the total bacterial diversity [5]. The same dominant bacteria phyla were established in fresh house crickets using MySeq Illumina [24,38], and were confirmed again in ready-to-use crickets [39,40]. Few studies focusing on the house cricket-associated bacterial microbiota have been published thus far [5,24,[38][39][40] but no study exploring the microbial diversity of Jamaican field crickets using a metagenomic approach has been reported. In this context, the aims of this study were (i.) to provide an in-depth characterization of bacterial communities associated with the house cricket (Acheta domesticus) and Jamaican field cricket (Gryllus assimilis) by using the Next-Generation Sequencing (NGS) approach; (ii.) to perform a comparative analysis of bacterial populations associated to the surface and whole body of crickets by uncovering potentially beneficial and pathogenic microorganisms; (iii.) to assess the microbial contamination of processed house crickets and Jamaican field crickets. Scientific studies on the microbiological safety of edible crickets need to be carried out for markets and consumers; therefore, the obtained knowledge will contribute not only to the elucidation of the edible insect-associated microbiota, but also for the formulation of the most efficient raw cricket production steps and the setting of conditions to avoid bacterial risk. Crickets Experimental house crickets (Acheta domesticus) and Jamaican field crickets (Gryllus assimilis) were gained from colonies maintained in the Nature Research Centre, Vilnius, Lithuania. Climate chamber conditions were set at 27 ± 2 • C, 12:12 h of the day:night light cycle, and 40-50% relative humidity. Both species were kept separately in 130 L plastic bins covered by lids with an aluminum mesh. Egg cartons were used as space extenders for crickets. They were vertically stacked side by side to one another through the entire space of the box. Water for the insects was added into a 3 L sealed plastic container placed on top of egg cartons with wicks sticking out of it. 
Crickets were able to drink out of a wet wick and had no access to open water so as not to spoil it. Feed was placed on top of egg cartons in aluminum plates (33.3 × 23.3 cm). As cricket feed, quail compound feed was used (Dobele, Latvia). According to the manufacturer, most of the feed ingredients are corn, wheat, soybean meal, sunflower meal, and rapeseed oil. There are added vitamins and microelements. Total crude protein content was 20.25%, crude fat 5.26%, crude fiber 13.17%, calcium 3.4%, phosphorus 0.81%, sodium 0.16%, ash 13.17%, lysin 1.25%, and methionine 0.65%. In terms of composition and nutritional specification, this feed is similar to the ones specifically dedicated to the cultivation of crickets [41,42]. The feeding substrate was refiled periodically according to the need. When the crickets became adults, a box with wet coconut husks was placed in the bin. The husks were heat-treated by soaking them in boiling water and used after cooling to room temperature. This substrate was dedicated to collect eggs after crickets oviposited in it. Crickets were allowed to lay eggs for approximately 3-4 days. After this period, the box with the eggs was placed into a newly prepared growing bin. For the experiments, randomly selected 45-55-day old adult crickets were separated from the maintained population. Live insects were immobilized by squeezing their heads and used for bacteria sampling. Sampling of Microorganisms from the Surface of Crickets Freshly immobilized crickets (400 g) were mixed with 800 mL of sterile TE buffer (10 mM Tris-HCl, pH 8.0, 1 mM EDTA) and incubated at 20 • C for 45 min with orbital shaking at 120 rpm. The outwashes were filtered through a 1.5 mm wired mesh and the supernatant was centrifuged in 50 mL Falcon test tubes at 5000 rpm for 20 min. The obtained pellet was collected in Eppendorf tubes, centrifuged at 12,000 rpm for 10 min, and used for microbial DNA extraction. Sampling of Microorganisms from the Whole-Body of Crickets Thirty grams of washed crickets were aseptically homogenized for 3 min with sterile mortar and pestle in 50 mL of TE buffer. The remnants of the crickets were removed by filtering the homogenate through 1.5 mm wired mesh and following centrifugation at 800 rpm for 10 min. The supernatant was transferred into new tubes and the pellet was collected by centrifugation at 12,000 rpm for 10 min. The pellet was stored at -20 • C and used for the extraction of microbial DNA. DNA Extraction The microbial DNA was isolated from sediments obtained from whole-body homogenized crickets and surface-associated samples (about 50 mg) using the manufacturer's protocol for the Genomic DNA purification kit (Thermo Fisher Scientific Baltics, Vilnius, Lithuania). The quality and quantity parameters of the extracted DNA were assessed by optical reading at 260, 280, and 234 nm, using NanoPhotometer P330 (Implen GmbH, Munich, Germany). Amplicon Sequencing The extracted DNA was used to study bacterial diversity by targeting hypervariable regions V3 and V4 of the 16S rRNA gene for sequencing, using 341F/785R primers [43]. Targeted amplicon libraries were generated using Illumina adapters (www.illumina.com, accessed on 6 October 2021), verified on an Agilent Technologies Bioanalizer DNA 1000 and sequenced in pair-end mode (2 × 300) on an Illumina MiSeq platform (Macrogene Inc., Seoul, Korea). 
All sequences obtained during this work are available in the Sequence Read Archive (SRA) of the National Center for Biotechnology Information (NCBI) under accession number PRJNA806726.

Processing and Analysis of the Sequencing Data

Macrogen provided demultiplexed sequence data in FASTQ format, which were imported for processing into QIIME2 v2020.06 [44]. In essence, amplicon primers were first removed with Cutadapt 2.8 [45]. The DADA2 plugin was used to denoise, filter, and trim the reads (where the median quality score dropped below 30) [46]. The Greengenes v13_5 database [47] was used to classify amplicon sequence variants (ASVs) in QIIME2, with a classifier trained on the amplified region [48]. For the dataset, the majority taxonomy with seven levels was employed, i.e., this taxonomy was taken to species level, but because species-level identification was not complete, we chose to utilize genus-level classifications. Log10 read counts and the phylogeny align-to-tree-mafft-fasttree plugin (MAFFT multiple sequence alignment program) [49] in QIIME2 were used to generate a de novo phylogenetic tree utilized in downstream assessments of diversity that include phylogenetic distances. QIIME2 generated stacked bar graphs representing the relative abundance (%) of distinct taxa in the samples. Shannon's Diversity, Faith's Phylogenetic Diversity (PD), and Pielou's Evenness indexes were calculated per sample within QIIME2 using rarefied counts to determine alpha (within-sample) diversity (i.e., subsampled to the same sequencing depth across samples; rarefied to 40,000). Excel 2019 was used to construct boxplot figures for alpha diversity. Beta (between-samples) diversity was calculated using the Bray-Curtis dissimilarity statistic [50], based on compositional dissimilarity between samples taking abundance into account; unweighted UniFrac distances, which measure phylogenetic distances between taxa; or weighted UniFrac distances, which measure phylogenetic distances while additionally accounting for relative abundance. Principal coordinate (PCoA) plots were created in QIIME2 using the EMPeror graphics tools [51]. Between-group statistical differences were established using weighted and unweighted UniFrac distance metrics and permutational analysis of variance (PERMANOVA, 999 permutations). The ggplot2 package in R was used to create the heatmap [52].

Processing of Crickets

Before microbiological analysis, Jamaican field crickets and house crickets underwent several processing steps, such as rinsing, boiling, and drying. For rinsing, raw crickets were placed on a 1.5 mm wired mesh and washed with sterile deionized running water for 1 min. For thermal processing, washed crickets were placed in boiling water for 5 min. The final processing step samples were prepared by washing, boiling, and oven drying the crickets for 13 h at 75 °C. After treatment, the samples were placed in sterile glass flasks covered with aluminum foil and used soon after in the following microbial analysis experiment.

Microbial Analysis of Crickets and Feeding Substrate

Microorganisms of raw and processed crickets were sampled as described in Section 2.2.2. For the analysis of quail compound feed microbial contamination, 30 g of feeding substrate were washed with 50 mL of sterile 0.9% NaCl solution. The feeding substrate residue was removed by centrifugation of the outwashes at 800 rpm for 10 min.
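The alpha- and beta-diversity metrics named above can be illustrated outside QIIME2 with a few lines of Python. The toy ASV count table below is invented for illustration and is not the study's data; implementation details (e.g., rarefaction and the logarithm base used for Shannon's index) differ between tools:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy ASV count table: rows = samples (e.g. JS, JW, HS, HW), columns = ASVs.
# Purely illustrative numbers; the real table comes out of the DADA2 step.
counts = np.array([
    [120, 30,  0, 15],
    [100, 45,  5, 10],
    [ 60, 10, 80,  2],
    [ 55, 20, 70,  5],
], dtype=float)
samples = ["JS", "JW", "HS", "HW"]

def shannon(row):
    """Shannon diversity H' = -sum(p * ln p) over non-zero ASV proportions."""
    p = row[row > 0] / row.sum()
    return -(p * np.log(p)).sum()

alpha = {s: round(shannon(r), 3) for s, r in zip(samples, counts)}
print(alpha)

# Bray-Curtis dissimilarity on relative abundances (beta diversity)
rel = counts / counts.sum(axis=1, keepdims=True)
bray = squareform(pdist(rel, metric="braycurtis"))
print(np.round(bray, 3))
```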
The supernatant was transferred into a new tube, serially diluted, and used for the analysis of microbial loads. For the evaluation of total aerobic counts (TAC), aliquots serially diluted in sterile 0.9% NaCl solution were spread onto standard plate count agar (PCA) plates (0.5% tryptone, 0.25% yeast extract, 0.1% glucose, 1.5% agar), followed by incubation at 30 °C for 48 h. After incubation, the colonies were counted as colony-forming units (CFU); the counts were then log-transformed and the mean value of log CFU per gram of crickets or feeding substrate was calculated. Microbiological counts were carried out in triplicate. One-way analysis of variance (ANOVA) was used to compare TAC on fresh and processed crickets. A p-value of <0.05 was considered statistically significant.

Viable Bacteria Identification

Randomly selected colonies were used for molecular identification. For identification of the bacteria, the V3-V4 region of the 16S rRNA gene was amplified with primers W001 (5′-AGAGTTTGATCMTGGCTC-3′) and W002 (5′-GNTACCTTGTTACGACTT-3′). The PCR was performed directly from the bacterial suspension without DNA extraction. The reaction mixture consisted of 5 µL DreamTaq buffer, 1 µL of 2 mM dNTP mix, 1 µL of each primer (10 µmol/L), 2.5 units of DreamTaq DNA polymerase (Thermo Fisher Scientific Baltics, Vilnius, Lithuania), 1 µL of bacterial suspension in PCA medium, and sterile distilled water up to 50 µL. The following PCR conditions were used: an initial denaturation at 95 °C for 5 min, followed by 30 cycles of 94 °C for 30 s, 45 °C for 30 s, and 72 °C for 2 min. The final extension was carried out at 72 °C for 10 min. PCR products were purified using the GeneJET PCR purification kit (Thermo Fisher Scientific Baltics, Vilnius, Lithuania) and sequenced using the W001 and/or W002 primers at BaseClear (Leiden, The Netherlands). The generated sequences were compared with those found via the FASTA network service of the EMBL-EBI database (http://www.ebi.ac.uk/Tools/sss/fasta/nucleotide.html, accessed on 30 March 2022).

Diversity and Richness of Acheta domesticus and Gryllus assimilis Bacterial Communities

The bacterial community of Jamaican and house crickets raised under controlled conditions was revealed by Next-Generation Sequencing of the V3-V4 region of 16S rDNA amplified from total DNA extracted from the surface and whole body of both cricket species. The four sample types (JS, Jamaican cricket surface; JW, Jamaican cricket whole body; HS, house cricket surface; HW, house cricket whole body) had three biological replicates each. A total of 2.36 million raw paired-end reads were generated across 12 samples. The number of reads ranged from 179,088 to 224,948, with an average of 196,223 per sample (Table 1). After the pre-processing and quality filtering of reads, a total of 601,809 high-quality reads were recovered, with an average of 50,150 sequences per sample (Table 1). The number of joined paired-end reads was comparable in whole-body and surface-associated Acheta domesticus and Gryllus assimilis samples. Rarefaction plots based on Shannon's diversity index demonstrated that maximum alpha diversity is achieved at 7000 reads and confirmed equivalent alpha diversity in the range of read depths from 7000 to more than 35,000 (Figure S1). The clustering of the sequences at 97% sequence identity generated a total of 2527 amplicon sequence variants (ASVs). The total number of ASVs detected in individual samples ranged from 190 to 231.
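A minimal sketch of the plate-count bookkeeping and the one-way ANOVA described above is given below; the colony counts, dilution levels and plated volume are hypothetical, and the conversion assumes the 30 g of crickets in 50 mL of buffer sampling scheme described earlier:

```python
import numpy as np
from scipy import stats

def log_cfu_per_g(colonies, dilution, plated_ml, sample_g, diluent_ml):
    """log10(CFU/g): colonies counted on `plated_ml` of a 10^-dilution
    of a suspension of `sample_g` crickets in `diluent_ml` of buffer."""
    cfu_per_ml = colonies * 10**dilution / plated_ml
    cfu_per_g = cfu_per_ml * diluent_ml / sample_g
    return np.log10(cfu_per_g)

# Hypothetical triplicate counts for fresh vs. boiled crickets (illustration only)
fresh  = [log_cfu_per_g(n, 5, 0.1, 30, 50) for n in (40, 55, 35)]
boiled = [log_cfu_per_g(n, 1, 0.1, 30, 50) for n in (60, 80, 45)]

# One-way ANOVA on the log-transformed counts, as in the study (p < 0.05 significant)
f_stat, p_val = stats.f_oneway(fresh, boiled)
print(round(np.mean(fresh), 2), round(np.mean(boiled), 2), p_val)
```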
Based on analysis of prokaryotic sequences, the lowest number of ASVs was observed in Jamaican cricket whole-body samples 611 (208 ± 12.1, hereafter median for 3 samples ± standard deviation) and house cricket surface samples 614 (204 ± 8.02). The richness of the ASVs was slightly higher in Jamaican surface samples (636 (205 ± 12.12)) and whole-body house cricket samples (666 (223 ± 9.54)) ( Table 1). Sequencing depth was comprehensive enough to estimate microbial diversity in all single samples. Alpha diversity metrics, such as Shannon diversity index, observed ASVs index and Phylogenetic diversity index (Faith's PD), did not reveal statistically significant differences in bacterial diversity among the four testing groups (external and whole-body samples of both cricket species) (Figure 1). These estimates match the results of a pseudo-F statistical analysis of both weighted and unweighted sample groups using permutational multivariate analysis of variance (Permanova). Samples JS, JW, HS, and HW did not show statistically significant differences in beta diversity (p > 0.05) ( Table 2.). However, when external and whole-body samples were combined, bacterial diversity became apparent between cricket species themselves. Based on the alpha diversity analysis performed with the nonparametric Kruskal-Wallis test and the beta diversity analysis performed on weighted and unweighted UniFrac distance metrics, statistically significant differences between Jamaican vs. house crickets were observed (Shannon diversity, p = 0.025; UniFrac unweighted, p = 0.004 and weighted, p = 0.018) (Figure 1, Table 2). Bacterial Community Profiling of Jamaican and House Crickets The bacterial microbiota associated with Jamaican and house crickets showed slight differences at the highest taxonomic level. Both cricket species carried bacterial DNA sequences assigned to the four main phyla, collectively accounting for more than 99% of the total bacterial population (Figure 2A, Table S1). The most prevalent phylum on both crickets was Bacteroidetes (68.79% on Jamaican crickets and 62.09% on house crickets), which was mainly represented by Bacteroidia prokaryotic microorganisms at class level (Table S1, Figure S2). The phylum of Proteobacteria was dominant in house crickets compared to Jamaican (25.43% and 15.50%, respectively), while the phylum of Verrucomicrobia was more abundant in Jamaican crickets (Jamaican vs. house crickets, 5.94% and 1.20%, respectively). The majority of microorganisms in the Proteobacteria phylum at the class level were assigned to Gammaproteobacteria and they prevailed inside the crickets (Figure 2A, Table S1). The microorganisms from the Firmicutes phylum were observed at a comparable level on both cricket species (JC-9.45%, HC-10.91%), with slightly increased abundance in whole-body Jamaican cricket samples (JW-13.76%). Metagenomic analysis of Jamaican and house crickets reared under controlled conditions revealed differences in the composition of the bacterial community at a lower taxonomic level. In total, 35 families and 35 genera were differentiated during this study. The members of the Porphyromonadaceae family constituted the major bacterial group of both cricket species (41.37% for JC and 35.79% for HC), followed by Bacteroidaceae (14.92% for JC and 16.04% for HC) and Rikenellaceae (11.34% for JC and 8.79% for HC) ( Figure 2B). Porphyromonadaceae was represented by bacteria of Parabacteroides, Dysgomonas and Paludibacter genera ( Figure 2C). 
Parabacteroides was the most dominant taxon at genus level (36.17% for JC and 31.97% for HC), with higher prevalence on the surface of tested insects (42.57% for JS and 41.09% for HS) ( Figure 2C). The Bacteroidaceae family was represented by members of Bacteroides genus distributed similarly on both cricket species (14.32% for JC and 14.65% for HC) ( Figure 2C). ASVs assigned to Parabacteroides and Bacteroides genera formed the core microbiomes in HC and JC samples (Table S1). The abundance of bacteria belonging to the Pseudomonadaceae family was more than five-fold higher in house crickets vs. Jamaican field crickets (17.21% for HC and 3.05% for JC). In contrast, the Verrucomicrobiaceae family was significantly more represented in Jamaican field crickets (5.94% for JC and 1.20% for HC). Distributed in a low frequency, the Lactococcus, Candidatus Azobacteroides, and Coprococcus genera were more prevalent in Jamaican field crickets (1.95%, 1.28%, and 0.5%, respectively). Meanwhile, Enterococcus, Akkermansia, and Acinetobacter genera were more represented in the house cricket (1.16%, 1.03%, and 0.49%, respectively) ( Figure 2C). Comparison of Jamaican Field Cricket and House Cricket Bacterial Communities Principal coordinate analysis (PCoA) based on both weighted and unweighted UniFrac distances showed clear separation of Jamaican and house cricket samples, thus pointing to differences in the bacterial microbiota composition (Figure 3). The distribution of unique ASVs between sample groups is illustrated by a Venn diagram (Figure 4). A total of 450 unique ASVs identified in this study, 100 were exclusive to Jamaican field crickets and 104 to house crickets, while 237 ASVs were shared by both cricket species. By comparing surface-associated ASVs, 194 were common to both cricket species, while 78 ASVs were distributed only on JS and 94 on HS samples. The distribution of ASVs in the whole body was similar to surface samples; 200 ASVs were shared by both HW and JW, while 85 ASVs were unique to JW and 84 to HW. The heatmap depicts the distribution of the most common ASVs ( Figure 5). Based on hierarchical cluster analysis, the structure of microbial community differs in Jamaican field crickets and house crickets. Among bacterial community, ASVs of Pseudomonadaceae (ASV1, ASV69) and Akkermansia (ASV38) inhabited mainly house crickets, while those belonging to Verrucomicrobiaceae (ASV8, ASV52), Lactococcus (ASV29), and Candidatus Azobacteroides (ASV35) were more abundant on Jamaican field crickets. Among the most abundant Parabacteroides genera, a different distribution of closely related microorganisms was observed in both crickets: ASV3 and ASV6 were present mainly in house cricket samples, while ASV4, ASV21, and ASV62 dominated on Jamaican crickets. A similar distribution pattern was observed with Bacteroides genera: ASV5 was more abundant in house crickets, while ASV7 and ASV22 were present in Jamaican field cricket samples. Looking at the surface and whole-body samples, differences in ASVs distribution were also visible. A higher abundance of ASVs matching to Parabacteroides (ASV3, ASV4, ASV21, and ASV62) and Porphyromonadaceae (ASV2) was documented on the surface of Jamaican or house crickets as compared to whole-body samples. In contrast, some ASVs, such as Akkermansia (ASV38) and Lactococcus (ASV29), dominated in the interior of crickets. 
Microbial Analysis of Crickets and Feeding Substrate The microbiological safety aspects of house crickets and Jamaican field crickets were also analyzed based on bacterial loads of freshly collected and processed crickets. The mean of TAC determined in raw material of A. domesticus and G. assimilis was comparable-7.65 and 7.90 log CFU/g, respectively ( Figure 6). The application of the rinsing step only slightly reduced microbial loads. The level of total viable counts for rinsed house crickets was 7.50 ± 0.44 log CFU/g and 7.51 ± 0.24 log CFU/g for Jamaican field crickets. A statistically significant reduction in microbial counts was observed after thermal processing of both cricket species. A reduction of about 4.85 log CFU/g (p = 0.00008) was detected after boiling A. domesticus in a kettle of water for 5 min. The introduction of a drying step decreased total bacterial counts to 1.63 ± 0.18 log CFU/g (p = 0.00002). Similar findings were observed in the case of G. assimilis: 3.09 ± 0.73 and 1.49 ± 0.62 log CFU/g of total aerobic counts were recovered from boiled and additionally dried crickets, respectively. The TAC levels in both processing steps decreased significantly (p < 0.0006) comparing to unprocessed G. assimilis crickets. Since cricket feeding substrate could be an important source of microbial contamination, quail compound feed was analyzed for microbiological quality. The TAC level observed in the cricket feeding substrate was low-1.24 ± 0.32 log CFU/g of material. Figure 6. Distribution of the total aerobic plate counts (TAC, log CFU/g) for differently processed A. domesticus and G. assimilis crickets and feeding substrate. HC-house cricket, JC-Jamaican field cricket. All values are the mean of 3 replicates with ± standard deviation (SD). Asterisk above the column indicates statistically significant differences (p < 0.05) between fresh and processed crickets. Based on molecular identification of isolated bacteria, representatives of Bacillus, Staphylococcus, Micrococcus, and Pseudomonas were mainly observed on tested crickets or feeding substrate (Table S2). In some samples, bacteria from Acinetobacter, Moraxella, Enterococcus, or Rhodococcus genera were identified. Greater diversity of isolated bacteria was detected on unprocessed crickets. Even though the total number of bacteria decreased during the processing of crickets, certain bacterial species, observed in raw material or even cricket feeding substrate, remained in boiled and dried crickets. Among those bacteria are Staphylococcus epidermidis, Micrococcus luteus, Staphylococcus warneri, and Bacillus subtilis species. Most likely, such heat-resistant bacteria were not eliminated completely using cricket processing. Discussion Numerous insect species from Orthoptera, Hymenoptera, Coleoptera, etc. orders are consumed worldwide at different stages of the development [5]. Living and processed edible insects through transferred microorganisms or bioactive compounds can affect the health of consumers (humans and animals). Therefore, the microbial communities associated with edible insects need to be evaluated by paying attention to potentially beneficial and pathogenic bacteria. The bacterial characterization of cricket species Acheta domesticus and Gryllus assimilis was carried out through a Next-Generation Sequencing analysis. This study allowed for an in-depth evaluation of the cricket-associated bacterial communities. 
The number of high-quality reads recovered from either the whole body or surface of Jamaican field and house crickets was comparable to those resolved by others on Illumina MiSeq platform powdered house cricket samples [39]. When 16S rRNA amplicon pyrosequencing was applied on house crickets, the efficiency of the reads obtained was more than tenfold lower [5]. This could be due to differences in the processing of insects, DNA preparation, and sequencing strategies. The richness of bacterial community on the surface and in the whole body of both cricket species was higher compared to previous studies performed by others. The number of OTUs detected previously in fresh house cricket samples was lower compared to our study (the number of OTUs ranged from 313 to 402) [24,38]. In processed house cricket samples, the variety of OTUs decreased even more to 157 and 175 [5]. Alpha and beta diversity analysis revealed statistically significant differences between entire bacterial communities associated with Jamaican field and house crickets. However, when individual samples, such as surface and whole-body, were examined, the differences became insignificant. House cricket bacterial diversity has previously been shown to differ between rearing companies and there is only a slight difference between the same company rearing production cycles [38]. All these findings are not surprising, because insect bacterial diversity can fluctuate due to variations in living space, feed, and insect species themselves [53,54]. The latter was demonstrated in the Jamaican and house cricket case in the present study. The structure of bacterial community associated to Jamaican field and house crickets showed slight differences at the highest taxonomy level. Four main bacterial phyla were observed, with a higher abundance of Bacteroidetes on both crickets tested. However, Proteobacteria dominated in house crickets, while Verrucomicrobia was more abundant in Jamaican field crickets. Previous metagenomic studies have revealed the high abundance of three bacterial phyla Bacteroidetes, Proteobacteria, and Firmicutes in fresh and processed house crickets. There was only a difference in ratio between them. Similar to our finding, Bacteroidetes dominated in almost all crickets studied [5,24,[38][39][40]. Starting from the family taxonomic level, the differences in the composition of Jamaican and house cricket bacterial community became more obvious. The core microbiomes forming Parabacteroides and Bacteroides genera dominated on both cricket species. Significant amounts of bacteria from these genera are also found in both fresh [38] and processed house crickets [5,39,40]. Parabacteroides and Bacteroides have also been observed in cricket feed, suggesting possible spreading through the food chain [55]. The abovementioned microorganisms are one of the most common bacteria in the human body, they populate the mouth, upper respiratory tract, urogenital tract, and, most notably, the intestinal tract of humans and other animals [56]. Representatives of these genera can resist intestinal inflammation, suppress the growth of pathogens, and accelerate the establishment of intestinal microbial balance [57]. Nevertheless, some species can act as opportunistic pathogens, causing infections in immunosuppressed hosts [56]. Differences in the microbial communities of Jamaican field and house crickets were mainly caused by the genera distributed at a low frequency. 
Representatives of the genera Lactococcus, Candidatus Azobacteroides, and Coprococcus were more prevalent in Jamaican field crickets, while Enterococcus, Akkermansia, and Acinetobacter more inhabited house crickets. It should be noted that some distinctions were found in the bacterial communities between the different sample groups (surface and whole-body). For example, Acinetobacter and Enterococcus were more likely to inhabit the surface of the house crickets, while bacteria from the genera Akkermansia and Paludibacter were more likely to be related to the interior. A higher abundance of Coprococcus and Lactococcus bacteria in a whole-body Jamaican cricket samples (JW) than on the surface (JS) also indicates inside distribution. Acinetobacter and Enterococcus can be transmitted by water, food, or contact and are associated with some infectious diseases. The presence of these bacteria signals poor substrate hygiene [58,59]. A high content of bacteria on the surface of the crickets could be removed by washing the raw material before further processing. As a result, the washing step should be included in the cricket processing scheme, not only to remove feed residues or frass, but also as a preliminary microbiological safety measure. Some bacteria from Lactococcus, Akkermansia, and Coprococcus genera are receiving increasing attention for their ability to regulate the gut microbiota and improve host health [60,61]. Akkermansia is a human intestinal mucin-utilizing symbiont, capable of enriching host metabolic and immune response functioning and is considered as a promising probiotic [60,62]. Butyric acid-producing Coprococcus bacteria are abundant in the human gut and their anti-inflammatory, neuroactive potential has been demonstrated [63]. Abun-dant in the intestinal tract or distributed in a wide range of fermented foods, representatives of Paludibacter and Lactococcus are responsible for polysaccharide fermentation, involved in the biochemical conversion of milk components [61,64,65]. The species of the latter genus are well known for their ability to produce lactic acid and its probiotic features [65]. Trabulsiella and Lactococcus have been listed as intestinal symbionts of phytophagous termites and are involved in the biodegradation of plant biomass, thus helping the host to digest and utilize its food [66][67][68]. Given that crickets are omnivorous and that plant-based foods are part of their natural diet, it can be assumed that orthopterans may also benefit from the symbiotic polysaccharide-fermenting bacteria that inhabit them, as is the case with termites or other animals. Lactococcus may also be valuable in the development of technologies to produce foods containing cricket ingredients. Lactococcus garvieae is known for its interaction with crickets and their growing environment [5]. Regarding that, this species was tested for abilities of spontaneous fermentation for cricket-wheat bakery production [40]. Candidatus Azobacteroides are intracellular symbionts of intestinal protists. They are nitrogen fixers and cellulose decomposers [69]. This bacterial group is associated only with insects and is mostly widespread in termites [70][71][72] but can also be found in cockroaches [73]. It is likely, crickets, as plant consumers, can benefit from these bacteria in similar ways as termites do. The suitability of insects for human consumption cannot be judged only based on the microbiological composition of the unprocessed raw insect material [19,20]. 
Insect processing and storage conditions have a significant effect on the presence of foodborne pathogens [5,[19][20][21]. Enterobacteriaceae (also detected during this study) is one of the main bacterial families related to hygiene and quality of the food [74]. Drying of insects, as single processing step, is not sufficient to inactivate most of Enterobacteriaceae and should be performed after boiling, which is more effective against them [75]. Our data are in line with others, demonstrating the effectiveness of the short boiling step in the removal of microbial contamination from analyzed crickets. It is worth noting that insufficient heat treatment without the elimination of spore-forming bacteria can lead to their rapid multiplication and the production of hazardous toxins, as there will be no other competitive bacteria left [21]. In our study, spore-forming bacteria from the Bacillus genus were isolated in different processing steps of crickets and their presence was observed in feeding substrate. Some Bacillus species, such as B. cereus, are included in the list of biological hazards of edible insects [20], while others, such as B. flexus, B. subtilis, etc. are known microbial symbionts of insects, usually non-pathogenic to humans [76,77]. Even more, Bacillus spp., due to the production of antimicrobial compounds, vitamins, carotenoids, etc. as well as lasting stability in processing chain and in the gastrointestinal environment, are gaining interest in functional food production and human health [78,79]. Numerous Staphylococcus genus bacteria (e.g., S. epidermidis, S. warneri, S. hominis) were isolated from processed A. domesticus and G. assimilis crickets. These species are not only abundant on human skin but were isolated from the gut of insects and could be related to the transfer of multidrug resistance [80]. Overall, more research is needed to formulate the most efficient raw cricket production steps and to elucidate potential microbiological risks. On the other hand, edible insect-associated microorganisms should gain much attention considering their beneficial, health-promoting features. Conclusions During this study, the bacterial community associated with Jamaican field cricket was characterized for the first time by applying the NGS approach and compared to a house cricket-inhabiting bacterial population. Analysis of the alpha and beta diversity of the bacterial communities, investigation of the distribution of microbial ASVs showed clear separation between bacteria inhabiting A. domesticus and G. assimilis. The core microbiomes forming Parabacteroides and Bacteroides dominated in both crickets tested, while the distribution pattern of unique ASVs from these genera varied in different cricket species and samples. Bacterial genera occurring at low frequency caused major differences in the structure of the microbiota. Lactococcus, Candidatus Azobacteroides, and Coprococcus prevailed in Jamaican field crickets, while Enterococcus, Akkermansia, and Acinetobacter dominated in house crickets. The efficiency of cricket processing was evaluated, and the high effectiveness of thermal treatment (boiling and oven-drying) was demonstrated in the removal of microbial contamination. Among the microorganisms, inhabiting both species of untreated crickets (as revealed by NGS analysis), as well as identified viable bacteria, surviving cricket processing, the potentially pathogenic bacteria were observed. 
These bacteria must be considered for possible biological hazards of cricket-based food. Nevertheless, among established prokaryotic microorganisms, natural inhabitants of the gastrointestinal tract of animals and humans, potentially beneficial probiotics were documented. These bacteria may be of great interest for functional food production and human health. The findings of this study will be helpful to culture both cricket species in controlled environments with proper antimicrobial feed ingredients that will increase the number of beneficial microbes and eliminate the microbial communities having detrimental properties. The obtained data will foster the development of strategies for cricket-based safe food production as well as the exploitation of beneficial properties of cricket-associated microorganisms. Supplementary Materials: The following supporting information is available online at https:// www.mdpi.com/article/10.3390/foods11081073/s1, Figure S1: Rarefaction curves for each sample, Figure S2: Relative abundance of bacterial taxonomy at the phylum (A) and family (B) levels for samples of Jamaican field (J) and house (H) crickets, Table S1: Bacterial taxonomy abundance count of Jamaican (J) and house (H) cricket samples, Table S2: Bacteria isolates detected in this study.
A Diels-Alder polymer platform for thermally enhanced drug release toward efficient local cancer chemotherapy

ABSTRACT

We report a novel thermally enhanced drug release system synthesized via a dynamic Diels-Alder (DA) reaction to develop chemotherapy for pancreatic cancer. The anticancer prodrug was designed by tethering gemcitabine (GEM) to poly(furfuryl methacrylate) (PFMA) via N-(3-maleimidopropionyloxy)succinimide as a linker by the DA reaction (PFMA-L-GEM). The conversion rate of the DA reaction was found to be approximately 60% at room temperature after 120 h. The reversible deconstruction of the DA covalent bond in the retro Diels-Alder (rDA) reaction was confirmed by proton nuclear magnetic resonance, and the reaction was significantly accelerated at 90 °C. A PFMA-L-GEM film containing magnetic nanoparticles (MNPs) was prepared for thermally enhanced release of the drug via the rDA reaction. Drug release was initiated by heating the MNPs with an alternating magnetic field. This enables local heating within the film above the rDA reaction temperature while maintaining a constant surrounding-medium temperature. The MNP/PFMA-L-GEM film decreased the viability of pancreatic cancer cells by 49% over 24 h. Our results suggest that DA/rDA-based thermally enhanced drug release systems can serve as a local drug release platform and deliver the target drug within locally heated tissue, thereby improving the therapeutic efficiency and overcoming the side effects of conventional drugs used to treat pancreatic cancer.

Introduction

Cancer is the leading cause of death in the developed world, as one in three individuals develops cancer during their lifetime. Pancreatic cancer in particular is lethal: approximately 95% of patients with pancreatic cancer die, as they experience few or no symptoms during the early stages [1]. While systemic drug delivery is commonly used for cancer chemotherapy, it is often difficult to deliver the drug to the target location because of limited extravasation from the bloodstream into the target tissue. Over time, there is often a need for higher drug dosages to maintain the requisite local concentration during the treatment period [2][3][4][5]. However, higher dosages increase the risk of toxicity and the occurrence of adverse side effects. The situation becomes even more challenging when the blood supply is minimal or has previously been destroyed due to trauma or surgery. To overcome this major limitation, it is desirable to deliver active ingredients locally, targeting the medication directly to the disease site [6]. This can be done by implanting a drug reservoir directly into the target area and releasing the drug at the desired rate over the desired period; this phenomenon is called 'local drug delivery'. Injectable hydrogels have been the most extensively researched materials for use as carriers of therapeutic agents [7,8]. One of the important advantages of injectable hydrogels is their low invasiveness and high usability. Despite these advantages, the development of injectable hydrogels may face some challenges in meeting various clinical requirements. We have been developing implantable nanofiber- or film-based platforms for localized drug delivery [9][10][11][12]. This system is essentially aimed at delivering and retaining sufficient quantities of active drug molecules within an adequate period.
For example, we have demonstrated the enhanced treatment of lung cancer, skin cancer, prostate cancer, and liver cancer using electrospun nanofiber meshes incorporating paclitaxel [13], imiquimod [8,14], Hemagglutinating Virus of Japan Envelope (HVJ-E) [15], and microRNA 145 [16], respectively. Furthermore, thermo-responsive nanofiber meshes have been used in conjunction with magnetic nanoparticles (MNPs) for cancer chemotherapy/thermotherapy [17,18]. The combination of cancer drugs and hyperthermia was shown to have a greater impact on cell apoptosis, with the advantage of controlled release using an alternating magnetic field (AMF). Recently, the effectiveness of hyperthermia has been further enhanced through the inhibition of heat shock protein activity by releasing the inhibitor 17-allylamino-17-demethoxygeldanamycin from the mesh [19]. Although these systems are highly desirable, ON/OFF drug release is not controlled and remains a challenge. Therefore, the risk of toxicity remains unclear. To improve thermally enhanced drug release, we focused on dynamic and reversible covalent chemistry, such as the Diels-Alder (DA) reaction. The DA reaction is a chemically selective [4 + 2] cyclization between a diene and a dienophile (electron-donating and electron-withdrawing groups, respectively), which increases its reactivity. This covalent bond can also reversibly return to its original form upon heating; this is called the retro Diels-Alder (rDA) reaction. One of the benefits of the DA reaction in the biomedical field is that it can be used in an aqueous medium [20]. Oluwasanmi et al. reported a thermally labile system in which a drug was conjugated to metal nanoparticles via a DA adduct [22]; the drug was released through the rDA reaction upon exposure to alternating magnetic fields. In the aforementioned studies, drugs were directly conjugated to the surface of metal particles, and limitations in terms of fabrication into other formats, such as films, fibers, and hydrogels, were noted. To overcome these limitations, in this study, the anticancer drug gemcitabine (GEM) was directly conjugated to poly(furfuryl methacrylate) (PFMA) via N-(3-maleimidopropionyloxy)succinimide as a linker by the DA reaction (PFMA-L-GEM) (Figure 1). The conversion rate of the DA/rDA reaction was determined using proton nuclear magnetic resonance (¹H NMR) spectroscopy. A magnetic nanoparticle-incorporated PFMA-L-GEM film was prepared, and the release of the drug was initiated by the heat generated by the MNPs under AMF irradiation. In vitro experiments demonstrated the cytotoxicity of the released L-GEM toward pancreatic cancer cells.

Synthesis of PFMA

As shown in Scheme S1 (Supplementary Information), the polymerization of FMA was carried out via free radical polymerization. As conventional radical polymerization can easily lead to excessive gel formation, the concentration was carefully adjusted. Briefly, FMA and AIBN (0.01 mol% of the total monomer concentration) were dissolved in 20 mL of DMF. The total amount of monomer was 50 mmol. The polymerization was carried out at 60 °C for 20 h after nitrogen bubbling. After polymerization, AIBN, unreacted monomers, impurities, and solvent were removed by dialysis against DMF and distilled dichloromethane for 3 days. The dialyzed solutions were then evaporated. The chemical structure of the obtained polymer was confirmed via ¹H NMR spectroscopy at 400 MHz (JEOL, Tokyo, Japan). All NMR samples were prepared in deuterated solvents, with all values quoted in ppm relative to TMS as an internal reference.
The average molecular weight (M n ) and polydispersity index (PDI) of the homopolymers were determined via gel permeation chromatography (GPC, JASCO International, Tokyo, Japan) using DMF with lithium bromide (LiBr, 10 mM) (Tosoh Corporation, Tokyo, Japan) as the eluent. Conjugation of maleimide-linker to PFMA (DA reaction) N-Succinimidyl-3-maleimidopropionate (linker) (0.80 g) was dissolved in dichloromethane (20 mL), followed by PFMA (0.50 g). The mixture was vigorously stirred, which resulted in the formation of a slurry. The slurry was then stirred for 120 h at room temperature. A white solid (PFMA-L) was collected by evaporation. The chemical structure was confirmed by 1 H NMR spectroscopy using chloroform-d as the solvent. Conjugation of GEM to PFMA-L GEM was conjugated to PFMA-L by nucleophilic acyl substitution between the activated ester group of PFMA-L and the amine group of gemcitabine. One gram of PFMA-L and 0.6 g of GEM were placed in a round-bottomed flask with 150 mL of HFIP under N 2 gas. Then, 2 mL of triethylamine was added to the reaction solution. The mixture was allowed to react under stirring at room temperature for 24 h. The product was completely dried under vacuum at room temperature. The structures were characterized by 1 H NMR spectroscopy with DMSO-d 6 and attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR, Thermo Fisher Scientific K.K., Tokyo, Japan). Preparation of MNP/PFMA-L-GEM films PFMA-L-GEM (1.13 g) and MNPs (0.90 g) were dissolved in HFIP at 41.5 wt%. The mixture solution was coated on 15 mm glass coverslips at 3000 rpm for 60 s by spin coating (Active Spin Coater, Axel, Osaka, Japan). The MNP/PFMA-L-GEM films were cut into 100 mg film pieces (37.7 mg L-GEM/100 mg film). Heating potential of MNP/PFMA-L-GEM films The heating profiles of the MNP/PFMA-L-GEM films were investigated. The film was immersed in 50 mL PBS at 20 °C. The sample was placed in the center of a copper coil and exposed to an AMF (166 kHz and 192 A, HOSHOT2, Alonics Co., Ltd., Tokyo, Japan). The heating profiles were obtained by taking photos using a forward-looking infrared camera (CPA-E6, FLIR Systems Japan K.K., Tokyo, Japan). In vitro drug release Drug release studies of the PFMA-L-GEM films were conducted using a dialysis membrane. Briefly, 1 mL of PBS with PFMA-L-GEM film (0.1 g) was loaded into a Spectra/Por® dialysis tubing with a molecular weight cut-off of 10 kDa (Repligen, Massachusetts, USA). The dialysis membrane was immersed in 100 mL PBS. The solutions were stirred, and the rate of drug release was measured at various temperatures (25°C, 37°C, 45°C, and 90°C). Ten milliliters of released L-GEM in PBS were collected, and 10 mL of fresh PBS was added to each sample. The released amount of L-GEM was quantified using a UV-vis spectrophotometer (V-650 spectrophotometer, Jasco, Tokyo, Japan) from a calibration curve (R 2 = 0.996). Thermal analysis of PFMA-L-GEM was also conducted using DSC as a same way of PFMA-L. In vitro cytotoxic assay All in vitro experiments in this study were carried out using the pancreatic cancer cell line MIAPaCa-II. MIAPaCa-II cells were grown in RPMI1640 supplemented with 10% FBS, 1% L-glutamine, and 1% penicillin/streptomycin. The cells were maintained at 37 °C in a humidified atmosphere of 5% CO 2 . Subculturing was performed every 2-3 days with 0.25% trypsin-EDTA until the cells were ready for use. 
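The release quantification described above (UV-vis calibration with R² = 0.996, and repeated withdrawal of 10 mL from a 100 mL bath with replacement by fresh PBS) lends itself to a small calculation sketch. The calibration standards and absorbance readings below are invented for illustration, and the cumulative-release correction is a common bookkeeping choice rather than a procedure stated in the text:

```python
import numpy as np

# --- UV-vis calibration (illustrative data; the study reports R^2 = 0.996) ----
conc_uM = np.array([0, 25, 50, 100, 200])          # standards, micromolar
absorb  = np.array([0.01, 0.12, 0.24, 0.47, 0.95]) # absorbance readings
slope, intercept = np.polyfit(conc_uM, absorb, 1)

def conc_from_abs(a):
    """Convert an absorbance reading to concentration via the linear fit."""
    return (a - intercept) / slope

# --- cumulative release with sampling correction ------------------------------
# 10 mL withdrawn from a 100 mL bath and replaced with fresh PBS at each time
# point, so drug removed in earlier withdrawals is added back.
V_bath, V_sample = 100.0, 10.0                     # mL
measured = np.array([0.05, 0.18, 0.33, 0.46])      # absorbance at each time point
c = conc_from_abs(measured)                        # bath concentration when sampled
cumulative = c + (V_sample / V_bath) * np.concatenate(([0.0], np.cumsum(c[:-1])))
print(np.round(cumulative, 1))                     # corrected cumulative release (uM)
```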
Cell cultures were produced in 96-well plates by seeding 4.0 × 10 3 MIAPaCa-II cells in their exponential growth phase and incubating them overnight at 37°C in a 5% CO 2 atmosphere in RPMI-1640 media supplemented with 1% streptomycin/penicillin and 5% fetal bovine serum. The released L-GEM from the PFMA-L-GEM films under different conditions was collected and added to each cell culture well. Cytotoxicity was confirmed via the AlamarBlue® assay after incubation at 37°C for 24 h. DA reaction PFMA was prepared by free radical polymerization with a yield exceeding 62% (Scheme S1 and Figure S1 in Supplementary Information). The molecular weight (Mn) was estimated to be 72,000 g mol-1 with a PDI of 1.30 ( Figure S2 in Supplementary Information). Immobilization of the thermally labile linker onto the PFMA molecule was achieved by the DA reaction (Scheme 1). PFMA-L is insoluble in water due to its high hydrophobicity. Dichloromethane was used as the reaction solvent for the DA reaction. The progress of the DA reaction was confirmed via ATR-FTIR and 1 H NMR spectroscopy with chloroformd (Figure 2(a)). The ATR-FTIR spectra revealed that pronounced peaks at 1013 cm −1 were observed for PFMA, which is presumed to be the furan ring. This peak disappears after the reaction with linker ( Figure S3 in Supplementary Information) [23]. The peaks of PFMA-L were observed at 6.47 (N), 5.30 (M), and 4.52 (L) ppm. Free furan and free maleimide groups were assigned at 7.43 (A), 6.38 (B, C), and 6.73 (E, F) ppm, respectively. The conversion rate was calculated based on the integral ratio of resonance at 3.70 (K-endo) and 3.38 (K-exo) ppm. The conversion rate of the DA reaction between the PFMA and linker (Figure 2(b)) increased with time and saturated at approximately 60% at 120 h. The reaction speed of the DA reaction is highly dependent on the reaction environment, such as the reaction solvent or temperature. For example, the DA reaction time in water was reported to be 30 min at 60 °C [24], whereas the DA reaction time in diethyl ether was 7 days at room temperature under a N 2 atmosphere [21]. In addition, it has been difficult to assign every peak observed for the polymeric materials because of the broad NMR signals obtained when compared to the signals obtained for low molecular weight compounds [25,26]. Therefore, careful observation of the conversion rate using 1 H NMR measurement is important ( Figure S4 in Supplementary Information). Retro DA reaction For the rDA reaction, 1, 1, 2, 2-tetrachlorethane-d 2 solution was used because it does not dissolve in aqueous media. Another reason for the use of tetrachloroethane is its high boiling temperature; therefore, the rDA reaction can be performed at higher temperatures. The conversion rate was calculated from the integrated area of the peak using 1 H NMR spectroscopy in the same manner as for the DA reaction. Figure 3(a) shows the time-dependent changes in the 1 H NMR peaks for the rDA reaction at 90 °C. Figure 3 (b) shows a comparison of the time-dependent conversion rates at different temperatures. The vertical axis indicates the conversion rate of the rDA reaction. According to this result, 100% of the rDA reaction was observed at 90 °C for approximately 30 min of heating. In contrast, no changes in the conversion rate were observed below 45 °C. Interestingly, the conversion rate exceeded 100% when the reaction was conducted at 37 and 45 °C. 
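The text states that conversion was calculated from the integral ratio of the adduct resonances at 3.70 ppm (K-endo) and 3.38 ppm (K-exo) but does not give the normalization. One plausible form, assuming per-proton-normalized integrals and a free furan or maleimide resonance as the unreacted reference, is:

\[
\mathrm{Conversion}\ (\%) = \frac{I_{3.70} + I_{3.38}}{\left(I_{3.70} + I_{3.38}\right) + I_{\mathrm{free}}} \times 100,
\qquad
\frac{\mathrm{endo}}{\mathrm{exo}} = \frac{I_{3.70}}{I_{3.38}}
\]

This is a sketch of a typical NMR-based bookkeeping, not necessarily the exact expression used by the authors.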
These results indicate that the DA reaction proceeded in opposite directions at these temperatures in tetrachloroethane. Other researchers have also reported that the threshold temperature for thermal breakdown of the DA adduct ring varies with the reaction environment, especially the temperature and the reaction solvent [24,[27][28][29]. Froidevaux et al. reported that there are three distinct regions for the DA/rDA reaction and that the reaction does not increase or decrease linearly [30]. In addition, the DA reaction usually leads to a mixture of two diastereomers, endo and exo. In our material, the endothermic peaks for the endo and exo products appear in the DSC measurements at approximately 50-110 °C and 130-142 °C in the dry state, respectively (Figure S5 in Supplementary Information). In addition, the endotherm due to the rDA reaction observed in the first heating cycle was not observed in the second heating cycle (Figures S5a and S5b in Supplementary Information). It is considered that the DA reaction was not confirmed in the cooling cycle (Figures S5c and S5d in Supplementary Information) because it requires a long time, as shown in Figure 2(b). Owing to the effects of the solvents, the observed temperatures were not consistent with those obtained in Figure 3. The heating rate was also considered to affect the reaction kinetics. Therefore, we chose 90 °C as the rDA reaction temperature for the following experiments. Heat generation To examine the thermally enhanced drug release potential of PFMA-L-GEM, MNP-incorporated films were prepared. Iron-oxide MNPs are known to generate large amounts of thermal energy under the influence of an AMF of optimal frequency and amplitude. In this experiment, we used an AMF at 166 kHz and 192 A, corresponding to a field-frequency product of 2.73 × 10⁹ A m⁻¹ s⁻¹ (specific absorption rate, SAR = 2.68 W g⁻¹). The amplitude of the AMF was 1.65 × 10⁴ A m⁻¹, which is in a range relevant for clinical use. The MNPs used in this study were Fe2O3 nanoparticles with diameters <50 nm. The saturation magnetization obtained by vibrating sample magnetometry was 57 emu g⁻¹ [17]. Figure 4(a) shows the infrared thermal images of PFMA-L-GEM films loaded with 0.9 g of MNPs. The temperature of the AMF-exposed film rose to above 100 °C within 10 min. Figure 4(b) depicts the time-dependent temperature changes of the films during AMF application. The samples showed a sharp increase in temperature during the first 5 min of AMF application before reaching a plateau. We also measured the temperature of the surrounding medium, and only small temperature increases were observed. These results demonstrated that even if the solution temperature did not increase significantly, the actual local temperature of the MNPs increased above the rDA reaction temperature (>90 °C) to release the drug. When the film was not heated, only a limited amount of L-GEM was released (Figure S6 in the Supplementary Information). In contrast, when heating was conducted, over 100 μM of L-GEM was detected in the solution as released drug, indicating that approximately 13.5 mg of L-GEM was released from the film. The higher release rate observed at elevated temperature corresponds to an increased reverse rate in the rDA reaction. As shown in Figure 4, the local temperature was found to be above 90 °C; therefore, the accelerated drug release observed is an expected result. A significant observation was that continuous (zero-order) drug release was noted without a significant initial burst release (Figure 4(b)).
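As a quick consistency check using only the field amplitude and frequency quoted above, the stated value of 2.73 × 10⁹ A m⁻¹ s⁻¹ is reproduced by the field-frequency product:

\[
H \cdot f = \left(1.65 \times 10^{4}\ \mathrm{A\,m^{-1}}\right) \times \left(166 \times 10^{3}\ \mathrm{s^{-1}}\right) \approx 2.7 \times 10^{9}\ \mathrm{A\,m^{-1}\,s^{-1}}
\]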
This result indicates that the drug release mechanism is based not only on simple diffusion from the film but also on the reversible DA/rDA reaction, because drug release becomes anomalous or non-Fickian when the drug-matrix interaction timescale greatly exceeds the diffusion timescale [31]. Therefore, the accelerated release of L-GEM from the films was most likely mediated by the rDA reaction induced by the thermal effect of the MNPs in the film, although complete on-off drug release control was not achieved. Anticancer effects Since the released drug compound L-GEM has a maleimide chain (linker) attached to it, and therefore differs in structure from native GEM, the cytotoxicity of L-GEM on MIAPaCa-II cells [32] was evaluated at different L-GEM concentrations (0.01-100 μM) for 24 h at 37 °C. The anticancer effect of L-GEM saturated at a concentration of 0.01 μM, and approximately 34% of cells survived treatment with 100 μM of L-GEM (Figure 6(a)). Figure 6(b) shows the numbers of surviving cells after treatment with L-GEM released from the PFMA-L-GEM films. In this study, heating and cell culture experiments were conducted separately to prevent the effects of heating on the cells. First, the films were exposed to AMF for 60 min, and then the released L-GEM (supernatant solution) was collected and applied to MIAPaCa-II cells. For samples wherein heating was not conducted, approximately 28% of the cells were killed. These observations are consistent with the fact that approximately 38 μM of L-GEM was released from the film at 37 °C (Figure 5). In contrast, approximately 49% of the cells were killed when they were treated with the supernatant containing L-GEM released from the film upon heating. Although the number of surviving cells was not significantly decreased by heating, an enhanced cell-killing effect was observed as a result of the thermally accelerated drug release based on the rDA reaction. Another advantage of this system is that the structure of L-GEM can avoid deamination of cytidine in the DNA chain by activation-induced (cytidine) deaminase (AID), which enhances the bioavailability and the cytotoxic effect [33]. Conclusions This study demonstrates the efficiency of a thermally accelerated drug release system against pancreatic cancer cells using dynamic and reversible covalent chemistry. Anticancer prodrug GEM-tethered polymers (PFMA-L-GEM) were prepared via the DA reaction. The tethering bond was cleaved by heating, resulting in thermally accelerated drug release. By incorporating MNPs, the PFMA-L-GEM film could be heated by an alternating magnetic field. In vitro experiments demonstrated that the film showed an enhanced cell-killing effect when pre-treated with AMF. These results suggest that this temperature-triggered drug release platform might serve to preserve the payload at body temperature and aid the rapid delivery of the drug within locally heated tissue, thereby reducing side effects. Disclosure of potential conflicts of interest No potential conflict of interest was reported by the author(s).
Does grooming facilitate the development of Stockholm syndrome? The social work practice implications INTRODUCTION : This article focuses on the problem of risk instrumentalism in social work and the way it can erode the relationship-based nature of practice and with it, the kinds of critical reflexivity required for remedial interventions to keep children safe. METHOD : By exploring the relationship between the process of grooming and the condition known as Stockholm syndrome, the article seeks to address this problem by offering some concepts to inform a critical understanding of case dynamics in the sexual abuse of children which can explain the reluctance of victim-survivors to disclose. FINDINGS : Beginning with an overview of the development of actuarial risk assessment (ARA) tools the article examines the grooming process in child sexual abuse contexts raising the question: “Is grooming a facilitator of Stockholm syndrome?” and seeks to answer it by examining the precursors and psychological responses that constitute both grooming and Stockholm syndrome. CONCLUSION : The article identifies the underlying concepts that enable an understanding of the dynamics of child sexual abuse, but also identifies the propensity of practitioners to be exposed to some of the features of Stockholm syndrome. Introduction In this article, the overview of both Stockholm syndrome and grooming is explored in the context of victim-survivors and the conspiracy of silence. It is sometimes assumed that child sexual abuse victims feel unable to report abuse because of their lack of voice, lack of power, their position in the family or their inability to frame experience as abusive. However, these are not the only reasons because if it were, then as adults, these victims would surely disclose the abuse or report it to an authority, but they do not. Victim-survivors in Jülich's (2001) study remained extraordinarily loyal and silent: a silence which persisted well into adulthood, and was so profound that victim-survivors appeared reluctant to disclose or report the sexual abuse to which they had been subjected. Their silence continued to protect the abuser long after the abuse had ceased. Jülich named this a conspiracy of silence. The reluctance to disclose and report can be attributed to attachment disorders (Bowlby, 1979) or it can be explained by Summit's (1983) child sexual abuse accommodation syndrome (CSAAS). He identified five stages of the CSAAS that THEORETICAL RESEARCH ORIGINAL ARTICLE enabled children to deal with the impact of child sexual abuse: secrecy, helplessness, entrapment and accommodation, delayed disclosure and retraction. However, though plausible in explaining the behaviour of children and young people, it does not explain why victim-survivors persist in maintaining the conspiracy of silence into adulthood. In this article we offer an explanation. We argue that grooming techniques used by those who sexually abuse children facilitates the development of Stockholm syndrome (traumatic bonding) which protects the abuser for decades. Further, we make the argument that risk instrumentalism, with its narrow definitions of risk, could inhibit the ability of professionals using ARAs to identify risk. This is exacerbated by the subtleties and complexities of the dynamics associated with the sexual abuse of children. Before discussing the rise of neoliberal risk instrumentalism, we comment on the use of terminology. The term victim-survivor denotes a victim of child sexual abuse (CSA). 
Abuser denotes a perpetrator of CSA, while bystander (Herman, 1997) is used to describe family members or close family friends subjected to the complex family dynamics in abusive situations. The term outsiders is adapted from Graham's (1994) work and refers to professionals and other people not subjected to the complex family dynamics involved in the prevention of the sexual abuse of children. The rise of neoliberal risk instrumentalism in social work The past twenty years have witnessed the growth of formalised risk assessment tools in child care social work in Australia, Canada, New Zealand, the UK and US (Oak, 2015). Such risk assessment instruments can be divided into two types: the formalised structured risk assessment instrument, characterised by standard questionnaires and regular templates that serve to assist professional judgement, such as the "Common Assessment Framework" (CWDC, 2006), and the actuarial risk assessment (ARA) tools, in which empirical research methods are deployed to identify a series of risk factors which are believed to "have a strong statistical relationship to behavioural outcome" (Shlonsky & Wagner, 2005, p. 410). The new Tuituia Assessment Framework (Child Youth and Family, 2013), launched in 2013, entails both the formalised assessment templates and ARA dimensions (Oak, 2015). Despite the popularity of ARAs with senior managers for the ways they are perceived to reduce practitioner bias and assist with professional judgement, they are criticised for ignoring the day-to-day client-social worker aspects of the case and hence the moral and ethical dimensions (Broadhurst et al., 2010), or for resulting in the erosion of rapport-building skills and the kinds of reflexivity required for remedial action to protect children (Littlechild, 2008; Munro, 2011; Oak, 2015). Littlechild (2005, 2008) commented on how practitioners fail to recognise that concepts of risk are socially constructed and dynamic entities, not easily amenable to risk-instrumental quantification. This problem is compounded by the fact that, when using ARAs, practitioners tend to use concepts such as risk of harm and actual harm interchangeably (Gillingham, 2006). Moreover, ARAs ignore the fact that social workers need to translate risk information into a range of choices regarding the most effective service interventions (Shlonsky & Wagner, 2005). The inability to define risk or to develop an operational definition will impact upon the practitioner's ability to determine effective thresholds for intervention (Oak, 2015). All these practice problems can be linked to the decline of the relationship-based nature of practice and the erosion of critical thinking skills as a result of the introduction of the ARAs (Broadhurst et al., 2010; Gillingham, 2006). The problems with the types of risk instrumentalism that underpin such risk frameworks are that they embody a specific construction of risk that is somewhat mechanistic and uniform (which belies the complex and individualised nature of casework dynamics) and also entail the assumption that risk is something that can be measured, predicted and contained (Horlick-Jones, 2005). Given this scenario, the authors' concern is to consider what conceptual frameworks can be developed to assist practitioners to develop a critical understanding of the complex relationship dynamics that exist in child protection cases.
One possible answer is to look at the relationship between Stockholm syndrome and grooming and to consider whether grooming facilitates the behaviours associated with this condition. Craven, Brown, and Gilchrist (2006) addressed the paucity of theorising on grooming in the context of child sexual abuse by highlighting the ways definitions of grooming such as those developed by Howitt (1995) and O'Connell (2003) conflate the term paedophile with sex offender. They identified the practice implications of this conflation by pointing out firstly, the term paedophile is a specific clinical diagnosis and most child sex offenders engage in sexual grooming not just paedophiles. Secondly, people who know the offender may not recognise the grooming process because the offender may not fit the stereotype of a paedophile and thirdly, the conflation of paedophile with sex offenders may prevent the offender recognising and taking responsibility for their grooming behaviours. These misconceptions, particularly regarding paedophile stereotypes such as "strangerdanger" detract from the fact that most child sexual abuse victim-survivors know their abuser (Cowburn & Dominelli, 2001). Craven et al. (2006) posited an alternative, and more holistic definition of grooming: Grooming [A] Process by which a person prepares a child, significant adults and the environment for the abuse of a child. Specific goals involve gaining access to the child, gaining the child's compliance and maintaining the child's secrecy to avoid disclosure (p. 297). Craven et al.'s (2006) literature review identified three types of sexual grooming: self-grooming, grooming the environment and significant others and grooming the child. Self-grooming involves the justification or denial of the offending behaviour as a precursor to the move from thinking about the act to being motivated to abuse (Van Dam, 2001). Self-grooming is likely to be affected by the response of both the wider community and the child and the success of the grooming process. It includes the cognitive distortions adopted in a similar fashion to those of victim-survivors to minimise the harm or to justify behaviour, for example, children are regarded as sex objects rather than human beings, or there is a sense of entitlement on the part of the abuser, or the behaviour is excused by the belief system "we live in a dangerous world" or it is excused by "uncontrollable urges". Grooming the environment begins with identifying the vulnerable child (Conte, Wolf, & Smith, 1989;Van Dam, 2001). Offenders groom the wider environment in the form of parents, carers, teachers, social workers etc. by integrating themselves into places and community networks where they are likely to have contact with children. Craven et al. (2006) commented on the ways that sex offenders exploit opportunity, in that they seek to ingratiate themselves into a community and places where they are likely to meet children and will often assume a position of trust. Van Dam (2001) reported that many descriptions of abusers amongst research respondents are that they are frequently "charming", "very helpful" and have "insider status". Another tactic is to become indispensable to the wider community. Hare and Hart (1992) suggested that abusers have a penchant for reading community needs and meeting those needs and will often willingly undertake tasks or jobs that other people will not do (Leberg, 1997). 
Some abusers groom the environment by targeting lone-parent families to gain this status, or they may target children or young THEORETICAL RESEARCH ORIGINAL ARTICLE people who have absent parents, and hence have less protection. In the absence of parental role models, it is easier to befriend a child and create opportunities to be alone with them. In intrafamilial situations, abusers often isolate the victim from the non-abusing parent and the outside world by developing exclusivity with the child. They may also exploit the parents' needs for a life outside the household by encouraging them to be more proactive in community activities and at the same time this gives them increased access to their victims. Conversely, they may isolate non-abusing parents from the outside world in order to prevent them from having people with whom to share their concerns (Leberg, 1997), in a similar way to holding the child hostage (Jülich, 2005). Some abusers achieve this by encouraging drug or alcohol dependency in lone parents, which also offsets any future disclosures made which will be likely to lack credibility (Leberg, 1997). Another strategy aimed at reducing credibility is questioning (usually) the mother's parenting competence in front of friends and other family members. The vulnerability of the community to such grooming tactics is exacerbated by the cognitive dissonance or cognitive distortions parents/carers, other family members and even professionals may experience. This dissonance/distortion manifests itself in the initial wariness and unease they have about trusting the prospective abuser which coexists with their feelings and reactions to the repeated offers of hospitality or help. Cognitive dissonance occurs when parents/carers and practitioners ignore their wariness and adopt a more appropriate response to these overtures of help and their thoughts are changed to be more consistent with behaviour (Van Dam, 2001). Grooming is a long-term strategy (Sanford, 1980) and is often undertaken so well that, even if abuse is later disclosed, the perpetrator has gained such a position of trust in the community, that the victim is unlikely to be believed. There are two aspects to grooming the child: the physical grooming which gradually reduces the child sensitivity to touching and results in the gradual sexualisation of the child (Berliner & Conte, 1990) and psychological grooming which may begin with the abuser's version of "sex education" or attempts to enter a child's bedroom when they are changing or stroking a child's head when discussing explicit sexual material. Such attempts to normalise this sexualised behaviour are assisted by the roles that abusers adopt to legitimate their actions, for example, Herman et al.'s (1990) study identified how abusing fathers adopted the role of "suitor" to the daughters they abused. Another technique is the effort to interact with the child on their "wave-length" (Van Dam, 2001) or raising the child's status to that of adult (Wilson, 1999). Chase and Statham (2005) identified a four stage continuum to the grooming process: stage 1: identify the vulnerable child, stage 2: socially isolate the child, stage 3: develop an emotional attachment, and stage 4: isolate the child from their families and develop progressive control over the child. 
The study of child sex offenders by Elliott, Browne, and Kilcoyne (1995) demonstrated how groomers looked for specific behaviours to identify vulnerability, such as the way the young person was dressed, whether they lacked confidence and self-esteem, or whether they had a problematic relationship with parents/carers. Similarly, Ward and Keenan (1999) explored the distal planning strategies of groomers and described two types: covert planning, in which the abuser/offender does not acknowledge any premeditated thought or planning but manipulates circumstances in order to enhance contact with potential victims, and explicit planning, which involves deliberately initiating contact for sexual purposes. This is similar to techniques used by hostage takers (Graham et al., 1994). The literature also identifies three typologies of groomer: the aggressive groomer, whose approach is characterised by violence, threat or force (Gupta, Raj, Decker, Reed, & Silverman, 2009); the criminal opportunist, who engages in one-off offences against strangers; and the intimate groomer, who perceives the relationship with their victims as analogous to a consenting sexual relationship between two adults (Canter, Hughes, & Kirby, 1998) and for whom intimacy is ensured through the promise of gifts, reassurance, affection, desensitisation, kissing and oral sex performed by the abuser on the victim. A fourth dimension of grooming is suggested by McAlinden (2012), who described a style of grooming known as "forbidden fruit" activities, in which groomers use items or treats that are illegal for children or young people to consume, such as alcohol, cigarettes and drugs, as well as the showing of (adult and child) pornography and the telling of lewd jokes. Forbidden fruit activities, by their deviant nature, are likely to ensure the compliance of children and to reduce the likelihood of disclosure. Just as an understanding of grooming techniques is vital to understanding the processes through which CSA occurs, Williams (2015) asserted that an understanding of pre-offence grooming is equally important, because it provides insights into the ways a perpetrator manipulates the behaviour of the victim and changes the relationship to an overtly abusive one (Berliner & Conte, 1990). Thus it is necessary to understand how victims are approached and groomed as part of their routine activities (Felson, 2008) and how the manipulation and control occurs. This power and control is further secured through the construction of the trauma bond that some victim-survivors form with their perpetrators through a process of violence counterpoised with affection and degradation (Jordan, Patel, & Rapp, 2013). This is similar to small kindnesses being amplified in a context of terror, identified in Jülich's (2001) research as a precursor to Stockholm syndrome. The relevance of Stockholm syndrome Stockholm syndrome is a useful concept as it can provide an over-arching understanding of why victim-survivors of child sexual abuse have acted and responded as they do. This phenomenon is also referred to as traumatic bonding, hostage identification syndrome, or survival identification syndrome. Stockholm syndrome is named after the robbery of Kreditbanken at Norrmalmstorg in Stockholm, Sweden in 1973. During the crime, several bank employees were held hostage in a bank vault from August 23 to 28, 1973, while their captors negotiated with police.
It has been accepted that hostages can develop Stockholm syndrome and we have many examples of this beginning with the puzzling reactions of employees in the Stockholm bank (Graham et al., 1994). During six days of captivity, the hostages (bank staff: three women, one man) developed an emotional bond to the hostage takers. This was a complex bidirectional bond that formed the basis of a survival strategy for the hostages. They believed if the hostage takers liked them, then they would not hurt them. This relationship persisted well beyond the siege and the hostages continued to view the hostage takers as their protectors, and were unable to censure them in any way. The emotional bond with the hostage-takers was so powerful they not only identified with the hostage-takers but also came to view the police as the enemy. Subsequently, the hostages attempted to protect the hostage takers from the police (Goddard & Tucci, 1991;Graham et al., 1994). The relationship between the hostages and hostage-takers did not cease at the end of siege but persisted for years after the actual incident. Moreover, the female member of staff formed an intimate relationship with one of the hostage takers (Jameson, 2010). The reactions of hostages in this event, and other similar instances, have been studied to provide the basis for what has come to be known as classic Stockholm ORIGINAL ARTICLE syndrome (Graham et al., 1994;Hacker, 1976;Kuleshnyk, 1984;Soskis & Ochberg, 1982;Strentz, 1982). Drawing on the literature related to hostages, Graham et al. (1994) extended classic Stockholm syndrome to provide an overarching theory referred to as Graham's Stockholm syndrome theory. Graham and her colleagues theorised that emotional bonding could occur between a victim and an offender and reviewed the literature relating to nine victimised groups to determine whether bonding to an offender occurred as it had in Stockholm syndrome. These groups included concentration camp prisoners, cult members, and civilians in Chinese Communist prisons, pimp-procured prostitutes, incest victims, physically and/ or emotionally abused children, battered women, prisoners of war, and hostages in general. It was found that in all nine groups, bonding between an offender and a victim occurred when the four following conditions co-existed: (a) perceived threat to survival and the belief that one's captor is willing to carry out that threat; (b) the captive's perception of some small kindness from the captor within a context of terror; (c) isolation from perspectives other than those of the captor; and (d) perceived inability to escape (Graham et al., 1994, p. 33). All these factors were identified in Jülich's (2001) research as precursors to the development of Stockholm syndrome. Jameson (2010) explored the psychology of Stockholm syndrome and described it as both a survival strategy and form of adaptive behaviour which provides hope for the victim in a hopeless situation. Seen in this context it is easy to understand how victim-survivors of child sexual abuse form strong emotional attachments to their abusers and misconstrue small acts of kindness as love. While the general public would not think of children and young people as hostages, they can be victims and they can be held captive, and in chronic abusive relationships they are particularly vulnerable to the forces of Stockholm syndrome which can be understood as a survival technique for children in this situation. 
Victims of child sexual abuse are more likely to develop Stockholm syndrome (Alvarez & Alessi, 2012). Their hostage situation exists in both material and subliminal form, manifested in their perceived threat to survival and belief that the abuser is willing to carry out that threat, the victim's perception of some small kindness from the abuser within a context of terror, fear of isolation, and the perceived inability to escape. These elements are the four precursors or conditions that Graham et al. (1994) identified as the precursors for Stockholm syndrome, and Jülich (2001) analysed her interviews of adult survivors of CSA using these precursors as a framework. Emotional abuse or the threat of harm is a threat to physical survival. Child sexual abuse (CSA) includes physical and emotional abuse which threatens a child's psychological survival and in some cases his/her physical survival. Adult victim-survivors of CSA in Jülich's (2001) study indicated they had experienced threats in many different ways: physical, sexual, the withdrawal of love, and threats that people they loved, or pets, might be harmed. A person under threat perceives kindness differently from a person who has not been threatened, as is the case, for instance, in the cessation of violence experienced by battered women. Victim-survivors spoke about physical sensations that were enjoyable; they often prefaced statements with "at least he didn't" and ended with "hurt my sister/brother/mother", etc. They often said "it wasn't that bad" or "it could have been worse" (Jülich, 2001, p. 183). Isolation is not as obvious for victims of CSA as in other hostage-taking situations. However, the emotional and psychological isolation described by adult victim-survivors of child sexual abuse in Jülich's (2001) research was profound. For some this was reinforced by the lack of action by various authorities (outsiders). Victim-survivors said they blamed themselves, they felt guilty, and were ashamed, and this alone served to isolate them from the perspectives of others (Jülich, 2001). This situation is exacerbated by the threats abusers make to children, which silence them and render them incapable of escape. The victim-survivors in Jülich's research said they tried to stop the abuse but were unable to. Other adults (bystanders) who should or could have known what was happening did nothing. All too often in those cases when reports or disclosures were made, the abuse did not stop. Some mothers were unable to protect victim-survivors because they were subjected to abuse as well. Victim-survivors interpreted this as proof that they were unable to escape (Jülich, 2001). Advocates of Stockholm syndrome theory would argue that, given these precursors, Stockholm syndrome can develop. However, we argue that grooming can also facilitate the development of Stockholm syndrome. Subliminal messages of Stockholm syndrome and grooming The subliminal messages associated with Stockholm syndrome lead victims to have narrowed perceptions: they are focused on the immediate, surviving in the here and now, and, as a result, cognitive distortions or dissonance occur. Such distortions are evident in their reframing of the situation, where they do not see themselves as abused when actually they are, or they minimise and rationalise the abuse, e.g., "it wasn't that bad", or the abuser "couldn't help him/herself".
Often they blame themselves or they see the abuser as "good "and themselves as "bad" or they switch back and forth. They frequently interpret violence as a sign of caring and love and demonstration of small kindnesses in a context of chronic abuse, become large kindnesses and enable victims to have hope for the future. In extreme cases they believe they love the abuser and they are convinced they need the abuser's love to survive. Finally they become convinced that the abuser will know if they have been disloyal and will "get them" or that they will retaliate in some way. Subliminal messages to grooming are very similar: victim-survivors feel they are to blame, they are bound to the abuser through secrecy, or they think the abuser is the only one who understands them, or they feel that the abuser treats them like a grownup, and in some cases victims want to "protect" the abuser (Jülich, 2001). Cognitive distortions can generate a sense of false or pseudo agency in victim-survivors. The pretend or pseudo-agency in this instance, refers to the ways child sex abusers lull victims into thinking they are giving informed consent and that they are engaged in a sexual relationship with an equal when in fact they are victims-survivors of CSA. Thus, the victim feels as though they are in control and making informed decisions about the relationship, not only as children but well into adulthood. They are unable to see the relationship as abusive, they might know on some level that it is wrong but they become incredibly practised at maintaining the silence. Conte et al. (1989) identified in their research with convicted child sex abusers that the development of pseudoagency was a popular grooming tactic with young victims. Implications for assessing risk and abuse The complex bidirectional relationship central to Stockholm syndrome could still be very strong according to where the victimsurvivor is on his/her journey of recovery. This relationship does break down, but it takes time. Victim-survivors of CSA when they are prepared to disclose, will appear to practitioners as ambivalent and even contradictory, they may tell their story then recant (part of the process of child sexual abuse accommodation syndrome, but also an anticipated outcome of exposure to the precursors of Stockholm syndrome). Therefore it can be frustrating to work with victims of child sexual abuse as they seem to keep changing their minds, and practitioners may start to doubt them and doubt themselves and their understanding of what is happening or has happened. Thus they need to be mindful that support THEORETICAL RESEARCH ORIGINAL ARTICLE persons (bystanders) can be subjected to the same forces the abused child was, and that they too could be subjected to the influence of Stockholm syndrome and grooming. Moreover, victims may not be confident that family members (bystanders) or professionals (outsiders) can contribute objectively. Often family members are the very people who should have been able to protect the child, but they did not for whatever reason. Herman (1997) has reminded us that bystanders traditionally have "looked the other way", and "outsiders" (professionals) too have failed to recognise the signs of chronic child sexual abuse. It is also pertinent to remind ourselves that as practitioners, social workers working in families are not immune to the development of Stockholm syndrome. 
There has been some research in Australia indicating that social workers could use the same techniques as children when dealing with a potentially violent parent (Goddard & Tucci, 1991), while in the UK, research has shown that the impact of Stockholm syndrome has led child protection practitioners to reframe violence and sexual abuse as something else (Littlechild, 2008; Munro, 2011), or to become susceptible to cognitive dissonance in relation to prospective abusers (Munro, 2011). Conclusion: Conceptual frameworks to inform social work practice? This article has explored the connection between grooming and Stockholm syndrome in order to provide an explanation for non-reporting of CSA and to identify the common relationship dynamics that develop in both scenarios so as to inform the relationship-based aspects of social work practice. In doing so, it has attempted to provide a brief overview of the psychology and behavioural characteristics of abusers and hostage-takers and to identify how certain forms of vulnerability and opportunity increase the likelihood of abuse occurring. Moreover, it provided a range of research evidence to support the development of Stockholm syndrome as a result of CSA grooming (Graham et al., 1994; Jülich, 2001). As a result, it has identified numerous ways this connection can inform social work practice. In terms of grooming, Ost (2004) suggested that developing grooming typologies informs practice because they have the potential, if not to predict, then to identify certain regular patterns of behaviour and motive so as to develop a modus operandi of an abuser who has had previous convictions for CSA. Elliott et al. (1995) identified a number of core grooming typologies for framing the complex dynamics of abusive relationships and for enabling comparisons between different, yet connected, categories of abusive behaviour. In terms of Stockholm syndrome and grooming, there are further similarities between the distal planning strategies used by hostage takers and abusers, the targeting of victims, and the ways opportunities are exploited to gain access (Gupta et al., 2009; Ward & Keenan, 1999). Moreover, Craven et al.'s (2006) tripartite model of grooming, including its reference to how abusers groom the environment, coupled with the concept of cognitive distortions, offers insights into the ways child protection practitioners could become susceptible to elements of Stockholm syndrome. Whilst none of these conceptual frameworks could be regarded as constituting a body of knowledge to generate a precise science, they do render a more critical understanding of the power plays at the centre of these abusive relationships and hence contribute to an understanding of casework dynamics in child sexual abuse contexts. These dynamics do not readily lend themselves to the kinds of risk instrumentalism underpinning ARAs like the Tuituia Assessment Framework, and hence these concepts are vital to the development of a critical practice aimed at remedial interventions to protect children. Indeed, if certain conditions and the four precursors identified by Graham et al. (1994) exist, Stockholm syndrome may be present. It is likely this will not be identified on an ARA instrument.
Mapping and quantification of ferruginous outcrop savannas in the Brazilian Amazon: A challenge for biodiversity conservation The eastern Brazilian Amazon contains many isolated ferruginous savanna ecosystem patches (locally known as 'canga vegetation') located on ironstone rocky outcrops on the top of plateaus and ridges, surrounded by tropical rainforests. In the Carajás Mineral Province (CMP), these outcrops contain large iron ore reserves that have been exploited by opencast mining since the 1980s. The canga vegetation is particularly impacted by mining, since the iron ores occur in association with this type of vegetation, and currently little is known regarding the extent of canga vegetation patches before mining activities began. This information is important for quantifying the impact of mining, in addition to helping plan conservation programmes. Here, land cover changes of the canga area in the CMP are evaluated by estimating the pre-mining area of canga patches and comparing it to the current extent of canga patches. We mapped canga vegetation using geographic object-based image analysis (GEOBIA) from 1973 Landsat-1 MSS, 1984 and 2001 Landsat-5 TM, and 2016 Landsat-8 OLI images, and found that canga vegetation originally occupied an area of 144.2 km² before mining exploitation. By 2016, 19.6% of the canga area in the CMP had been lost due to conversion to other land-use types (mining areas, pasturelands). In the Carajás National Forest (CNF), located within the CMP, the original canga vegetation covered 105.2 km² (2.55% of the CNF total area), and in 2016, canga vegetation occupied an area of 77.2 km² (1.87%). Therefore, after more than three decades of mineral exploitation, less than 20% of the total canga area was lost. Currently, 21% of the canga area in the CMP is protected by the Campos Ferruginosos National Park. By documenting the initial extent of canga vegetation in the eastern Amazon and the extent to which it has been lost due to mining operations, the results of this work are a first step towards conserving this ecosystem. Introduction Several studies have investigated conservation and threats to biodiversity and ecosystem services in tropical rainforests [1]. Deforestation rates in the Amazon, the largest remaining tropical forest in the world, have also been well studied [2]. However, little information is available regarding the unique ecosystems found on ironstone rocky outcrops on the tops of plateaus and ridges. In the Carajás Mineral Province (CMP), located in the Eastern Amazon, these ferruginous outcrop savanna ecosystems are called "canga" [3] and occur within a dense forest matrix typical of the Amazon rainforest biome [4]. Canga vegetation, also associated with the presence of iron ore, is known to exist in at least two more regions in Brazil, namely, the Quadrilátero Ferrífero, or Iron Quadrangle [5], and the lateritic banks at Corumbá [6]. There are other types of open vegetation in the Amazon (Fig 1), but they are different from canga vegetation and are determined by different soil conditions (lateritic or very poor sandy soils). [Fig 1 (caption fragment): canga localities in Brazil include Serra do Cipó in Minas Gerais (MG), Chapada Diamantina in Bahia (BA), Serra dos Carajás, Maraconaí and Maicuru in Pará (PA), Serra dos Seis Lagos in Amazonas (AM), and Morraria de Urucum in Mato Grosso do Sul (MS); N1, N4, N5, N8, S2, S11, S23, S38, and S43 are examples of geomorphic units located in the study area. Panel B shows the Shuttle Radar Topography Mission (SRTM) elevation map of the study area with canga vegetation before mining implementation; the red and black lines represent the boundaries of the Carajás National Forest (CNF) and the Campos Ferruginosos National Park (CFNP) protected areas, respectively. The digital elevation model (SRTM, 1 arc-second) was obtained from USGS Earth Explorer (https://earthexplorer.usgs.gov) and the CNF and CFNP shapefiles from ICMBIO (http://mapas.icmbio.gov.br/i3geo/datadownload.htm); all other layers and photos were produced by the authors and are copyright-free.] In 1967, geologists from United States Steel discovered these ferruginous outcrops on top of the ridges of the CMP, which is one of the most important metallogenic provinces in the world, containing large deposits of iron, as well as manganese, nickel, copper and gold [7]. Significant investments in mineral and ore exploration and exploitation have occurred over the past four decades [8]. Brazil's constitution and National Forest Code require that, in order to obtain a mining license, there be no net loss of biodiversity and only minimal environmental impacts. Licensing processes demand basic information about the biota and environmental services associated with future mining areas [9, 10]. Mining activities must be conducted so as to control their interference with the environment; hence, it is necessary to present a Degraded Area Recovery Plan (PRAD in Portuguese) when the environmental viability of the project is assessed [11]. Compliance with legal demands brought opportunities for research, which has contributed to increasing the knowledge about the flora in the canga of the CMP [4] and in other Brazilian mining sites located in Minas Gerais, Bahia, and Mato Grosso do Sul (Fig 1). However, it is clear that there are few floristic links between the Amazonian canga and the species found in the Brazilian cerrado, which are prevalent in the Amazonian savannas on lowlands, sandy soils, or the tops of plateaus, as seen between Venezuela and Suriname [12]. Two locations within our study area, the cangas of the Carajás National Forest (CNF) and the Campos Ferruginosos National Park (CFNP), contain 856 seed plant species, most of which are herbs (40%), with 24 endemic species. As for invasive plants, the same two localities contain 17 exotic invasive plant species, most of them located in the recently created CFNP [12]. However, knowledge of plant growth strategies and other factors that could affect the dynamics of recovery or rehabilitation of canga vegetation is still very limited [13]. Canga plateaus surrounded by evergreen forests are considered isolated entities, although little is known about dispersal between plateaus. Recent genetic analyses have demonstrated that two perennial morning glories (Ipomoea spp.) exhibited gene flow between these canga plateaus, and genetic diversity in these species was not influenced by the size of the plateaus [14].
Another work focusing on obligate cave dwellers revealed decreasing community similarity with increasing distance between the caves, suggesting that these organisms are indeed moving between caves and plateaus [15]. As opposed to the suppression of canga vegetation from mining in the Iron Quadrangle (Minas Gerais State, southeastern Brazil), which began in the 18 th century, the suppression of canga vegetation in the CMP began only recently, in the 1980s. Some authors have described the environmental degradation of canga vegetation in Brazil [16,17] and of a similar vegetation type in the ironstone ranges of Australia [18,19], recommending the establishment of protected areas to guarantee their conservation. In the CMP, seven protected areas are in place, pursuing a balance between mining and conservation. On one hand, protected areas safeguard mining-licensed operations from illegal activities through a green protected belt; on the other hand, the mining companies participate in the protection of natural areas, preventing fires and undesired human occupation through regular surveillance [20]. This kind of protection appears to have been achieved inside protected areas of the Carajás region, where the forests are mostly undisturbed. In contrast, the surrounding areas of Carajás (the Itacaiúnas River watershed area) have lost 70% of its natural land cover (forests) over the past 40 years due to agriculture and cattle grazing [21]. The future expansion of mining is regulated by the CNF Management Plan that was recently published, which recommends that mining could expand until reaching 14% of the total CNF area [22]. However, the CNF Management Plan does not specify the minimum extent of canga that must be preserved. Hence, the current loss of canga areas is still a challenge since the areas of loss have only been estimated by analogic aerial photographs [23] that are not sufficiently accurate, unlike the orthorectified satellite images used in this study. The accurate mapping and quantifying of canga vegetation areas within the eastern Amazon could be a first step to guide conservation strategies. Other authors have already discussed the importance of conserving the canga ecosystem and its rich biodiversity [12]. Human disturbances are a major threat and began in the 18 th century with vegetation suppression and anthropogenic burning in support of early mining activities, cattle industry, eucalyptus plantation and wood extraction [17]. Currently, the harvesting of ornamental plants (such as orchids), road construction, urbanization, and invasive species are also considered to be important threats [17]. In this study, we aim to evaluate the land-cover and land-use (LCLU) changes in the canga vegetation of the CMP (eastern Brazilian Amazon) during the cycle of mining operations in order to quantify the impact of mining on canga vegetation. The objectives of this study are (1) to present a geographical object-based Landsat image classification to quantitatively assess the extent of canga areas in the study area before mining projects were implemented in 1973; (2) to determine the extent of canga and forest areas around the beginning of mining activities (1984) and different snapshots in time afterwards, specifically in 2001 and 2016; and (3) to assess the average rate of canga vegetation suppression by land-use changes from one snapshot in time to another. 
This study is important due to the current lack of effective mapping and quantifying of changes to canga vegetation area using orthorectified satellite images during mining operations. This gap in our current knowledge is a challenge that hinders the national and local understanding of canga vegetation in a setting of open pit mining inside protected areas, as well as determining the necessary next steps to protect this vegetation. Materials and methods This project was carried out in the Carajás National Forest under permission of IBAMA (SIS-BIO 35594-2). Study area The study site is represented by the CMP ridges in the eastern Brazilian Amazon [7]. This region is recognized as a major Neoarchean tectonic province of the Amazonian Craton [25]. Geologically, this region is called the Carajás Formation, and it is composed of banded iron formations (BIFs) represented by jaspilites, with mafic rocks situated above and beneath it. Andesites, basalts, volcanoclastic materials, and gabbro are also present [26]. During the formation of these iron-rich deposits, the weathering processes of the Carajás Formation rocks occurred under humid climate conditions that allowed the formation of an extensively weathered profile on basic volcanic and BIF rocks. This alteration mantle contains iron-aluminous laterite, haematitic breccia, and ortho-and para-conglomerates [27], and acts as a surface crust on the tops of some ridges regionally represented by the Carajás Ridge [28]. The climate in the region is classified as the Aw type according to Köppen [29]. The region experiences high annual rainfall (~2,000 mm). Peak precipitation occurs during the rainy season between January and March, while the driest season occurs between June and August. Monthly temperatures vary between 25˚C and 26˚C, with the absolute minimum temperature between 16˚C and 18˚C between July and October, and the maximum temperature between 34˚C and 38˚C during all other months [30]. In the CMP, canga vegetation occurs over laterites and haematite breccia and conglomerates on top of some ridges with altitudes that range from 280 m to 904 m and average 670 m (Fig 1). In this paper, we subdivided the study site into seven geomorphic units: North (N1-N9), East (L1-L3), South (S1-S17), Tarzan (S18-S28), Bocaina (S29-S40), Cristalino (S43-S45), and Pium and São Felix (SF1-SF3) ridges. The nomenclature uses the letters N, S, L and SF to indicate North ("Norte"), South ("Sul"), East ("Leste") and São Felix ridges [31], respectively (Fig 1). Mining projects in the CMP began in 1984 with the implantation of the N4-N5 mines. Later, the East Ridge and S11D mines began operating in 2012 and 2016, respectively [32]. It is important to emphasize that the largest iron ore mines N4-N5 and S11D occur inside the CNF. Only the East ridge mine is located outside of the CNF protected area. Remote sensing dataset, digital image processing and field data collection Four Landsat images were used in this study. The 1973 Landsat-1 MMS image, with an 80 m spatial resolution, was only used to provide a visual observation of the canga areas before the first cycle of Amazon settlement. The 1984 and 2001 Landsat-5 TM and 2016 Landsat-8 OLI images were acquired in the Level 1 Terrain (L1T) format. The images were orthorectified with 30 m pixels to the Universal Transverse Mercator (UTM) 22S zone projection and datum WGS84. All 1984, 2001 and 2016 Landsat images were converted to ground reflectance in percentages. 
For each Landsat image, we derived the Normalized Difference Vegetation Index (NDVI) between vegetated areas and exposed soil [33]. Fieldwork was conducted in 2014 and 2015 to determine the LCLU classes (e.g., canga, forest, and water) using panoramic digital photographs. During the fieldwork, 166 ground control points (GCPs) were collected using a differential global positioning system (DGPS) with reliable real-time positioning through the OmniSTAR mode for decimetre-level accuracy. These GCPs were used to validate the 2016 Landsat-8 OLI image classification. Training and validation samples were defined per class based on the GCPs. These data were also complemented by Google Earth Pro online high-resolution imagery. Regardless of the up to thirtyyear difference among the images and the field data acquisition, all georeferenced field descrip- Measuring land-cover and land-use changes To estimate the canga area at the four snapshots in time, we used a multiresolution segmentation algorithm based on the homogeneity definition [34]. The three-date segmentation was conducted from all of the ground reflectance bands from the 1984 and 2001 Landsat-5 TM and 2016 Landsat-8 OLI images since they had the same spatial resolution (30 m). They were segmented using weight five for the near-infrared band and the NDVI index, and weight one for all other bands. The three-date segment was copied and used to classify the LCLU classes based on the 1973 Landsat-1 MSS, 1984 and 2001 Landsat-5 TM and 2016 Landsat-8 OLI images. This process followed geographic object-based image analysis (GEOBIA), combining the advantage of quality human interpretation and the capacities of quantitative computing [35]. For the purpose of detecting LCLU changes, we carried out a segmentation process from three separate single-date images, such as used by Desclée et al. [36] and Duveiller et al. [37]. Multi-date segmentation allowed for the comparison of three single images based on objects with the same geometry, delineating spatially and spectrally consistent segments and avoiding misclassification, allowing for the most accurate and rapid process and reducing additional processing efforts of outlining polygons [38]. All Landsat TM and OLI bands (ground reflectance values) of images (1984,2001 and 2016) were used as input layers in the segmentation process. Fig 3 illustrates the step-by-step multi-date segmentation and classification process in a small mining site in the study area. The segmentation approach was developed in two levels (Fig 3). Small objects were created in the level 1 segmentation (~1 ha), corresponding to approximately 9 Landsat TM and OLI pixels. Later, objects generated during the process of segmentation at level 1 were grouped to coarser objects in level 2, with a size of 4 ha, equivalent to 36 Landsat TM and OLI pixels. The segmentation in Level 2 aimed to reduce the number of objects and increase the size of polygons to facilitate the visual interpretation and change detection analysis. In regards to the definition of the scale parameter (h sc ), several unsupervised and supervised methods are available to define the optimal scale parameter [39]. However, the selection of appropriate scale parameters is heavily dependent on trial-and-error exploration, which is iterative and time-consuming [40], because there is no obvious mathematical relationship between scale parameters and the success of the segmentation [39]. 
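For reference, the NDVI used as a segmentation input above is the standard normalized band ratio. The sketch below (NumPy, with placeholder reflectance values rather than the actual Landsat scenes) shows only the index calculation; the multiresolution segmentation itself was performed with dedicated GEOBIA software and is not reproduced here.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index from surface-reflectance bands.
    For Landsat-8 OLI these are bands 5 (NIR) and 4 (red); for Landsat-5 TM,
    bands 4 and 3. `eps` guards against division by zero over water/shadow."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

if __name__ == "__main__":
    # Placeholder reflectance arrays standing in for a small Landsat subset.
    nir = np.array([[0.45, 0.40], [0.30, 0.25]])
    red = np.array([[0.08, 0.10], [0.12, 0.20]])
    print(ndvi(nir, red))  # dense vegetation gives values near +0.7; sparse cover gives lower values
```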
The segmentation shape (w_sp), compactness (w_cp) and scale (h_sc) parameters were set to 0.1, 0.5, and 10, respectively, for all images. The h_sc parameters for the level 1 and level 2 segmentations were 10 and 5, respectively, yielding minimum object sizes of approximately 1 ha and 4 ha. During the automated classification of each image, we adopted membership functions to describe specific properties of the objects based on all Landsat spectral bands and on elevation data obtained from the Shuttle Radar Topography Mission (SRTM). This approach allowed various features to be integrated into the class descriptions through logical operators. The selection of features was assisted by an analysis of the separability of comparable classes. Each class was classified separately in the domain of the image object level using the filter class "unclassified", in the following order: i) forest; ii) water; iii) pasturelands; iv) mines; and v) canga vegetation. It is important to emphasize that i) the 1973 Landsat-1 MSS image records the canga areas before the mining project, ii) the 1984 Landsat-5 TM image coincides with the year of the Carajás Mining Project installation, iii) the 2001 Landsat-5 TM image represents the mid-term age of the mining project, and iv) the 2016 Landsat-8 OLI image represents the current condition, when all iron ore exploitation projects (the N4-N5, S11D and Serra Leste mines) were already in operation. The LCLU changes were also analysed using the "from-to" spatiotemporal change detection approach [35,41] to recognize the trajectories of the thematic classes from 1973-1984, 1984-2001, 2001-2016, and 1973-2016. We identified five classes that did not change over the period of investigation (forest-forest, savanna-savanna, lake-lake, mine-mine, and pasturelands-pasturelands) and analysed the possible "from-to" change trajectories related to the conversions of forests to pasturelands, forests to mines, canga to mines, canga to pasturelands, pasturelands to mines, and lakes to mines. Assessing the classification accuracy of LCLU classes An object-based accuracy assessment differs from pixel-based validation in its sampling units, i.e., objects vs. pixels [42]. However, a generally accepted approach is that classified polygons can be validated by GCPs [43]. To assess the classification accuracy of the 2016 Landsat-8 OLI image, the 166 GCPs collected during fieldwork along accessible roads were used. As older GCPs and thematic maps were unavailable for the 1973 Landsat-1 MSS and the 1984 and 2001 Landsat-5 TM images, approximately 154, 159 and 137 validation points, respectively, were generated by stratified random sampling using the PCI Geomatica 2016 software. The accuracy of the Landsat image classifications was then assessed using non-normalized and normalized confusion matrices [44]. The producer and user accuracies [45], Kappa per class, Kappa index of agreement and overall accuracy were also determined [43]. Results To assess the relationship between the Landsat image interpretations and terrain features, field campaigns were conducted in the study area to improve the GEOBIA analysis, aiming to identify and map the different land cover and land use units. The multiresolution classification based on the GEOBIA analysis effectively classified the canga vegetation and the related mines.
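The accuracy measures listed above were obtained with PCI Geomatica. Purely for illustration, the sketch below shows how overall accuracy, producer and user accuracies, and the Kappa index of agreement follow from a confusion matrix; the three-class matrix is made up and assumes rows hold the reference data, not the study's actual values.

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows = reference, columns = classified)
cm = np.array([[50,  2,  1],
               [ 3, 40,  2],
               [ 0,  1, 30]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n              # fraction of correctly classified samples
producer_acc = np.diag(cm) / cm.sum(axis=1)      # 1 - omission error, per class (reference totals)
user_acc = np.diag(cm) / cm.sum(axis=0)          # 1 - commission error, per class (classified totals)

# Kappa index of agreement: observed agreement corrected for chance agreement
p_o = overall_accuracy
p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(overall_accuracy, producer_acc, user_acc, kappa)
```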
Fig 5 shows these classes distributed within the study site over the years: before mine implementation, indicating the pristine area of canga vegetation (Fig 5A), and its current extent together with the area of mining activities in 2016 (Fig 5B). Based on random samples collected from the Landsat images, the overall accuracies were approximately 98%, and the Kappa indices were correspondingly high (S1 Table). The overall accuracies indicate that the large majority of segments were correctly identified according to the reference data (random samples). Some lake pixels were classified as artificial lakes in mines, while some mine samples were classified as forests (e.g., reclaimed areas colonized by grasslands) and as canga vegetation, whose spectral responses are very similar to those of outcrops in mines. The highest omission error occurred for lakes (18.2%) in the 2001 Landsat-5 TM image, while the highest commission error occurred for the pasturelands class in the 1973 Landsat-1 MSS image (11.8%). The confusion between forests, pasturelands and mines can be explained by the regeneration of small patches of secondary forest in pasture areas and the revegetation of open pit mines with grasses. Misclassifications of segments belonging to these classes can be observed in S1 Table. The classification shows that canga vegetation in the CMP occupied an area of 144.2 km² in 1973, before the implementation of the Carajás N4-N5 mines and open pit mining exploitation (Table 1, Fig 5A). Table 1 presents the canga area before the implementation of mining projects (1973) and its extent in 1984, 2001 and currently (2016) at the different geographical sites. Canga vegetation was converted to mines on the North, South and East Ridges, but to different extents. On the North Ridge, where the N4 and N5 mines are located, 45.6% of the canga vegetation was lost between 1973 and 2016. On the South and East Ridges, where the S11D and SL mines are located, 13% and 8% of the canga vegetation were lost, respectively. On the Tarzan, Bocaina, Cristalino, Pium and São Felix Ridges, the canga areas remained unchanged (Fig 5C). Inside the CNF protected area, where the three largest iron ore mines are located, there was 105.2 km² of canga vegetation before mining implementation, representing 2.5% of the CNF area and 73% of the total area of canga vegetation in the Carajás region. The N4 (16.6 km²), S11D (16.2 km²), S11A (14.5 km²), N1 (12.1 km²) and N5 (11.8 km²) ridges contained the largest canga areas in the study site. Over the past three decades, mining activities suppressed 28.3 km² of the canga area in the CMP, especially on the N4 (13.0 km²), N5 (8.9 km²), and S11D (6.0 km²) ridges within the CNF. This area represents 1.8% of the CNF protected area as of July 2016. Outside of the CNF, there was 39 km² of canga vegetation on the Bocaina, Cristalino, East and São Felix Ridges. The majority of this area (38.6 km²) remains conserved, and the suppression of 0.4 km² of canga is associated with the implementation of the SL mine on the East Ridge (Fig 5). Based on the area of canga vegetation suppressed at each site (Table 1), the rates of suppression were calculated from the moment that mining activities began (1984, 2012 and 2014 in the N4-N5, SL and S11D mines, respectively). In the SL mine, the rate of canga suppression was approximately 0.1 km² yr⁻¹ from 2012 to 2016.
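The suppression rates quoted here and in the next paragraph are simply the suppressed area divided by the number of years each mine has been operating. A tiny worked example with the figures reported in the text:

```python
def suppression_rate(area_km2, start_year, end_year):
    """Average suppression rate (km² per year) over a mining period."""
    return area_km2 / (end_year - start_year)

# Figures reported in the text
print(round(suppression_rate(0.4, 2012, 2016), 2))  # SL mine   -> 0.1 km²/yr
print(round(suppression_rate(6.0, 2013, 2016), 1))  # S11D mine -> 2.0 km²/yr
```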
In the S11D mine, the suppression rate reached 2 km² yr⁻¹ from 2013 to 2016, while in the N4-N5 mines, the rate decreased from 0.9 km² yr⁻¹ (1984 to 2001) to 0.5 km² yr⁻¹ (2001 to 2016). S2 Table lists the LCLU changes in each period; between 1973 and 1984, the changes were small (about 1 km²). Between 1984 and 2001, the most notable LCLU changes were the conversion of canga to mines (8.8 km²) and of forests to mines (5.7 km²). From 2001 to 2016, 15.4 km² of canga and 10.6 km² of forests were converted to mines in the N4-N5 mines, while 0.4 km² and 6 km² of canga were converted to mines in SL and S11D, respectively. During the entire period investigated (1973-2016), the main land cover changes were associated with conversions from forests to mines (26.6 km²) and from canga to mines (24.3 km²). Table 2 shows all quantifications of LCLU changes between the periods of investigation. Discussion In this paper, we mapped and quantified canga vegetation as well as the changes in LCLU classes in the Carajás region, focusing mainly on the effects of mining operations. Previously, the canga area in Brazil was reported to be approximately 261.6 km², with 102 km² in the Iron Quadrangle and 103 km² in the Carajás region [23]; however, the methods used for these estimates were not fully described. Our results show that the Carajás region originally included 144.2 km² of canga vegetation, a figure 40% higher than the previous estimate [23]. As mentioned before, previous studies do not describe how the canga area was estimated; it was probably calculated from analog aerial photographs whose spatial distortion was not corrected. Mining activities suppressed 28.3 km² of canga (19.6%), while large areas of canga remain conserved in the Carajás region (115.9 km²). This area corresponds to 80.4% of the pristine canga area and represents one of the largest conserved canga ecosystem areas in Brazil. In the Iron Quadrangle (Minas Gerais State), the total area covered by canga vegetation is approximately 100 km² [46]. However, this estimate must be reviewed because of the methods used, which were based on uncorrected analog aerial photographs. According to Sonter et al. [10], 17.6 km² of canga area has been cleared there by mining activities. The percentage of canga suppression in the Iron Quadrangle (approximately 17%) is therefore similar to that in the Carajás region. The high rates of canga vegetation suppression observed in the S11D mine are associated with the early stages of mine implementation. According to the S11D project, the useful lifespan of the mine is approximately 30 years (http://www.vale.com/en/initiatives/innovation/s11d/pages/default.aspx). Hence, the canga vegetation suppression observed over the past three years (~6 km²) is expected to increase by less than 10% over the next 30 years, and the suppression rate will reach approximately 0.2 km² yr⁻¹. In the N4-N5 mines, the main ramp-up period corresponds to the changes from 1984 to 2001, when canga suppression was almost twice as high as during the open pit phase from 2001 to 2016. However, if demand for iron ore increases over the next several years, the useful lifespan of the mine will decrease, and the rates of canga vegetation suppression may increase. A similar process has been observed in Amazon rainforest losses driven by the large demands for Brazil's natural resources, including land, timber, minerals and hydroelectric potential, demands mainly driven by high commodity prices [47,48]. The results of Sonter et al.
[49] supported the hypothesis that global demand for steel drives extensive land-use changes in the Iron Quadrangle, where increased steel production was correlated with increased iron ore production and mine expansion. Consequently, this process is also responsible for increasing charcoal production and the expansion of subsistence crops. In a future study, this hypothesis will be evaluated in relation to the Carajás mining projects. The accurate mapping of areas is vital to guiding conservation strategies, especially for canga vegetation in the eastern Brazilian Amazon, where the iron ores are located. Many authors have already described the threats to canga and the challenges of conserving this vegetation [16,17,50,51]. Among these, vegetation suppression for open pit mining, fire and invasive plant species have been identified as the major threats [17]. The greatest challenge in conserving canga vegetation is to manage biological invasions and create protected areas to avoid species losses [23,51]. The canga areas in the Carajás region are partially (67%) within the CNF. This category of protected area allows for sustainable use, including sustainable mining activities. The CNF has also contributed to forest conservation, with surveillance improving the ability to inspect different anthropogenic impacts associated with fire, human settlements and gold mining. Otherwise, other economic activities such as livestock and agriculture would have threatened the natural land cover, mainly the tropical rainforests, which would have been completely converted to pasturelands or croplands, as observed in adjacent areas [21]. To improve the conservation of canga vegetation in this area, the Brazilian Institute for Biodiversity Conservation (ICMBio) created the Campos Ferruginosos National Park (CFNP) in June 2017, an integral conservation unit financed by offset strategies for mining operations. This park includes the Bocaina and Tarzan Ridges, totalling approximately 24 km² of canga vegetation that represents 21% of the canga area in the CMP, where mining activities will not be allowed. Hence, an additional step was taken towards canga vegetation conservation in the Amazon region. In addition, the recently published CNF Management Plan defines the areas to be protected and those that can be mined and divides them into categories, which include: i) preservation areas, comprising 15% of the total area of the CNF, where human activities are not allowed; ii) transition areas, covering 15% of the CNF, with mixed areas of conservation and management;
iii) mining areas, where mining activities can be conducted, covering 14% of the CNF area; iv) forest areas designated for sustainable management (50% of the CNF area); and v) special and public use areas, covering 6% and designated for infrastructure and the general use of the CNF [52]. Fig 5. Digital elevation model extracted from the Shuttle Radar Topography Mission (SRTM) showing canga areas (yellow polygons) (A) before the implementation of mining projects and (B) currently, with mining areas (blue polygons). The limits of the two protected areas are shown: the Carajás National Forest (red line) and the Campos Ferruginosos National Park (black line), created in 1998 and 2017, respectively. The São Felix Ridge is illustrated as an inset window, 160 km from the South Ridge. (C) The canga areas converted to mining structures between the 1970s and 2016 over the North, South and East Ridges, and over the total canga area. The letters N, S and SL represent the geographic locations of the North, South, and East Ridges, respectively. Other mines: a = Igarapé Bahia, b = Azul, and c = Project 118. The digital elevation model (SRTM, 1 arc-second) was obtained from USGS Earth Explorer (https://earthexplorer.usgs.gov) and the National Forest and National Park shapefiles from ICMBIO (http://mapas.icmbio.gov.br/i3geo/datadownload.htm). All other layers were produced by the authors and are copyright-free. https://doi.org/10.1371/journal.pone.0211095.g005 The areas of canga vegetation destined for mining consist of mines that are already installed (see locations in Fig 4), such as N4 and N5 on the North Ridge and Azul, Igarapé Bahia, Project 118, and S11D on the South Ridge, among others that will be installed in the near future (N1, N2 and N3 on the North Ridge). The Carajás ridges are a clear example of the challenges of reconciling conservation with the exploitation of natural resources. The growing demands of society and the quality of the Carajás iron ore have encouraged the exploitation of this resource in these areas. However, for sustainable mining, adequate conservation strategies need to be implemented properly, and scientific research is a key aspect of this process. Heavy-metal pollution in water from iron ore exploitation has not yet been detected in the Carajás region [53]. The proper designation of areas outside the Carajás region to be protected as offsets may represent an important tool for the protection of species in the region. Conclusion Remote sensing data and GIS tools provided four snapshots in time, permitting the mapping of canga areas and the quantification of changes in land use. Based on the image analysis, we observed that the canga area in the CMP is 40% larger than previous estimates. The suppression of canga vegetation was associated with the implementation of mining projects, which favours the suppression of forest areas for canga conservation. It is important to emphasize that the most substantial vegetation suppression occurred during the earlier stages of mine implementation, from 1984 to 2001. Vegetation suppression was substantially reduced during the subsequent open pit mining phase, from 2001 to 2016, in which ore is excavated from a pit open to the earth's surface. After three decades of mineral exploitation, 80.4% of the canga area in the Carajás region remains untouched. Government and the mining industry have used offsets to compensate for the unavoidable impacts of iron ore exploitation. Hence, the CFNP was created to protect 21% of the canga area in the CMP. We believe that mapping and quantifying the areas of canga vegetation that have already been lost can be considered the first step towards conserving this important rocky environment.
Social Responsibility and SDG 8 during the First Wave of the COVID-19 Pandemic: The Role of Chartered Accountants in Portugal: The fragility of the Portuguese economy, the weight of sectors that were especially vulnerable to the crisis caused by the pandemic, and the small size of enterprises meant that their economic and financial structure was not capable of withstanding the effects of the economic crisis, jeopardizing the achievement of SDG 8. This research explores the perception of chartered accountants about their role in supporting small and medium-sized enterprises during the first wave of the COVID-19 pandemic in Portugal, based on a literature review and on a questionnaire. The results show that 70% of the professionals consider that their clients evaluated their work positively during the first wave of the pandemic. However, most chartered accountants did not charge their clients for their extra work and expenses, and 30% even decreased their monthly fees. Portuguese chartered accountants, confronted with the economic and financial problems caused by the pandemic, focused on saving most of their clients from collapse and on safeguarding many jobs. This research highlights the public utility and social responsibility of chartered accountants' work in the pandemic context in Portugal, as well as their central role in the efficient application of Government economic policies to maintain economic growth and decent work (SDG 8). Introduction The global COVID-19 pandemic had repercussions for countries' economies and societies, as there were sudden and previously unthinkable drops in gross domestic product (GDP) resulting from order cancellations, a lack of demand, and increased unemployment [1]. All around the world, many businesses were forced to close, even if temporarily, creating unprecedented disruption in trade and in most sectors of activity, and there was significant volatility in financial markets [2,3]. In March 2020, the International Labour Organization (ILO) predicted that almost 25 million jobs could be lost worldwide, along with other significant consequences, such as income and revenue losses, reduced working hours, lay-offs, teleworking, and the massive and daily adoption of information and communication technology (ICT) [4]. However, the reduction in economic activity affected the various economic sectors differently [5]. In April 2020, it was already being warned that many enterprises would not survive without governmental measures (grants), "with the obvious consequences in terms of business closure and increased unemployment". The President of the Portuguese Confederation of Micro, Small, and Medium Enterprises acknowledged that, in desperation, it was to chartered accountants that those small businesses turned to obtain the governmental support needed to survive the economic crisis. Therefore, it is relevant to gain an in-depth understanding of the role of chartered accountants in the social and economic aspects of this catastrophe. Thus, the overall objective of this article is to highlight the importance of the Portuguese chartered accountant as a key player in the implementation of the social and economic public policies arising from the pandemic, contributing to reducing its negative effect on economic growth and to maintaining many jobs and their quality, the accountant's role being essential to keeping Portugal aligned with the objectives of SDG 8.
Moreover, this research explores their sense of social responsibility, based on a self-assessment by these professionals, and additionally intends to understand whether the personal and professional characteristics and ICT skills of the professionals influenced that self-assessment. The methodology of this research relies on two analyses. The first is a theoretical analysis based on a literature review of the Portuguese legal accounting [15] and taxation regimes and the framework of social responsibility [16,17]. The ISSB (International Sustainability Standards Board) [18] and IFAC (International Federation of Accountants) [19] have actively promoted the accounting information system to enhance transparency and produce better firm results. The second is an empirical analysis based on a questionnaire focusing on chartered accountants in Portugal. The statistical and econometric analysis allows an assessment of the model proposed in this paper, following the methodology suggested by Greene [20] and Hair et al. [21]. Following the introduction, the paper is structured as follows: a literature review, materials and methods, hypotheses development, results, and their discussion, including suggestions for future research and limitations. Literature Review Regarding the theoretical framework, this research is based on social responsibility (SR), specifically on Freeman's (2004) [22] stakeholder theory, which considers that, in addition to the shareholders of enterprises, there are other stakeholders (clients, suppliers, employees, and others) who must be taken into account in their governance because, without these "key pieces", enterprises will not survive, or will have great difficulty in doing so, since they will not be able to integrate truly into society. On the one hand, the concept of the stakeholder is fundamental to understanding the relationship between the enterprise and society in general [23], and the role of chartered accountants during the first wave of the COVID-19 pandemic in particular, a role that went beyond common business practice, with the main focus being the promotion of sustainable and inclusive economic growth [24]. On the other hand, stakeholders' characteristics influence the decision-making process in the enterprise [25], especially when the activities undertaken go beyond mere compliance and recognize the stakeholders' importance to the enterprise [26]. Indeed, this is the reason to focus on the definition of Freeman (2004: 229) [22], which describes a stakeholder as "any group or individual that can affect or is affected by the achievement of a corporation's purpose". It also explains why accountants draw on stakeholder feedback to continuously improve their work [27]. Moreover, the authors agree with Cornelius and Gagnon (1999) [28] when they argue that research on practices at the level of social transactions and the interactions between members (managers, employees, and other stakeholders) will help to bridge the gap between academic theory and practice [29]. It also follows, as Beaulieu and Pasquero (2002) [30] argue, that legitimacy cannot be managed without identifying the relevant stakeholders, since it depends greatly on their perceptions. Without a doubt, stakeholder theory is fundamental to justifying the purpose of accountants' work for the enterprise [31].
For a number of different reasons, corporate adherents to the stakeholder model can more readily provide value to all their stakeholders if they adhere to the requisites of good faith engagement [32]. However, legitimacy theory [33] is also relevant, alongside institutional theory [34,35], resource dependence theory [36,37] and stakeholder theory [26,38], in justifying the social responsibility inherent to chartered accountants' activities and in giving meaning to them [39]. Furthermore, studies published prior to the pandemic showed that chartered accountants and banks were among the main external sources that SME (including microenterprise) managers used to obtain economic advice [40], and chartered accountants were classified as the primary source of business advice in practically all aspects [41][42][43][44]. In this regard, Jules and Erskine [45] (p. 6) argued that chartered accountants, "including those in small and medium-sized entities (SMEs), play a fundamental role in the financial reporting supply chain and facilitate effective governance in organizations", and Blackburn and Jarvis [46] (p. 7) stated that "indeed, the evidence shows that accountants are invariably the most frequently used source of advice out of all advice providers, private and public, if not the first choice by SMEs". In the Portuguese case, the role of chartered accountants in enterprises is even more relevant due to the small size of most enterprises [47][48][49]. The Portuguese business fabric is composed essentially of SMEs [50], which transfer their tax and accounting obligations to chartered accountants and approach them for advice and technical support in the economic and financial areas [47][48][49]. In Portugal, chartered accountants also have great proximity to smaller enterprises because, in many cases, they also support and monitor the enterprises' management, mitigating the problem of some entrepreneurs' unpreparedness in financial matters [47][48][49]. With the COVID-19 pandemic, the "dependence" of most Portuguese enterprises on chartered accountants' advice and consultancy increased substantially [13,14,49], as the disease caused by the SARS-CoV-2 virus had a cataclysmic effect on the economy, with a greater impact on SMEs, especially in weaker economies [51,52]. Thus, Portuguese chartered accountants played a decisive role in managing the economic crisis caused by the first wave of the COVID-19 pandemic, saving many SMEs from financial collapse, bankruptcy and the dismissal of employees, mainly by supporting their clients in the application processes for governmental funds, by helping them develop new forms of business, and through the strict and timely control of their information [13,14,49,53]. Therefore, in Portugal and other countries, in the context of the public health, economic, and social crisis, the framework of small enterprises' "dependence" on chartered accountants was reinforced, since the latter began to play an even more active role in the survival and economic continuity of enterprises [13,14,40,49,53,54], which can also be seen as a social responsibility of this profession. Islam et al. [55] refer to the devastating power of the crisis caused by COVID-19 in the context of SMEs and the need for resilience and for measures that allow enterprises to survive.
At this point, it is important to note that SR is a phenomenon connected essentially with large enterprises and businesses due to its association with substantial resources; however, in recent decades, some authors have concluded that Corporate Social Responsibility (CSR) also makes sense in smaller enterprises, associated with simpler and less sophisticated initiatives, in which it is important to make the connection between business and society [56][57][58]. As the pandemic had a huge effect on society, on enterprises, on chartered accountants' work, and on their importance to enterprises (mostly SMEs), it is essential to gain a better understanding of their role in the pandemic context and to identify and measure the consequences of this disruptive situation for chartered accountants' work [59]. Although this is a very recent topic, some international studies about the role of chartered accountants in managing the economic effects of the pandemic and about its impact on their activity have already been published. Frumusanu et al. [60] and Jabin [61] analyzed the perceptions of accountants in Romania and Bangladesh of the effect of the pandemic on their activity and concluded that teleworking has negatively affected it, because the stress levels caused by isolation may affect accountants' judgement. However, they considered that increased digitalization and the use of digital tools are positive solutions for the future [60,61]. That research also concluded that the increase in accountants' remote work has raised concerns about cybersecurity [61]. Similarly, Heltzer and Mindak [62] asserted that the requirement for accountants to work remotely jeopardizes their productivity and ability to perform their work, as well as their capacity to maintain relationships with their clients and co-workers. There are also studies that explore the use of new technologies in accounting during the COVID-19 pandemic and conclude that their use has increased [63,64]. Mardawi et al. [65] warned of the likelihood of an increase in fraud in the accounting context in times of economic crisis, such as the one currently experienced, and highlighted the importance of ethics and of new strategies in ethics education. The involvement of professionals in the processes of applying for state support during the COVID-19 crisis also raises questions regarding professional ethics [12,65]. Ahrens and Ferry [66,67], considering the importance of governmental funds in mitigating the effects of the economic crisis caused by the pandemic, discussed the role of accountants in local governments, highlighting their importance in the planning and budgeting of these entities. Mendes [40], in empirical research based on a questionnaire applied to entrepreneurs in João Pessoa (Brazil), found that 88% of the respondents identified the pandemic as the greatest adversity that they have faced and that 97% of them stated that accountants are fundamental in supporting their business in the pandemic context. Papadopoulou and Papadopoulou [54] and Pires [49] also recognized the importance of accountants' role in supporting enterprises in mitigating the economic effects of the pandemic. As stated by Alao and Gbolagade [68] (p. 109), "in this new unprecedented reality, the authors will witness a dramatic restructuring of the economic and social order in which business and society traditionally operate". From our point of view, accountants are contributing positively to the construction of that "new reality".
It is possible to observe from the previous literature on the importance of chartered accountants' work in mitigating the economic effects of the COVID-19 pandemic that the subject requires further attention and that many aspects still need to be explored. For instance, in the existing literature about COVID-19, there is a lack of studies on the SR associated with chartered accountants' performance, although there are studies that refer to an increase in concerns about CSR during the pandemic crisis [69,70]. Thus, this research aims to contribute to a better understanding of the performance of these professionals in the context of this crisis from the SR and economic sustainability (SDG 8) perspectives. Materials and Methods This research was designed according to the process diagram shown in Figure 1 to achieve the objective stated above. Firstly, the authors carried out a literature review of the Portuguese legal regime of accounting [15], including the framework of taxation, along with the guidance of international entities such as the ISSB and IFAC [18,19], which have actively promoted the accounting information system to enhance transparency and economic sustainability and to facilitate better firm results. Having defined the theoretical framework, the authors carried out a questionnaire with a sample of the target population of this research, Portuguese chartered accountants who provide outsourced services to enterprises and professionals. These accountants' clients are mostly SMEs (including microenterprises). Regarding the target of the questionnaire, according to data provided by the OCC, in 2020 there were 68,278 members registered in the professional Order [71], spread throughout the country but mostly concentrated in the regions of Lisbon and Porto. Despite the number of registered members, only 30,735 exercise their activity as chartered accountants (data provided by the OCC). The questionnaire (see Appendix A) was intended to collect a self-assessment from chartered accountants about their role in the business context during the pandemic, namely their sense of SR and their contribution to mitigating the negative effects of the pandemic on the objectives of SDG 8. The structure of the questionnaire and the typology of questions are similar to those of other studies that applied questionnaires to Portuguese chartered accountants [72][73][74].
The set of questions was designed to address the objectives stated above. The questionnaire was released in electronic format, since this is the easiest option to reach members of the target audience throughout the country without involving complicated and costly logistics. The questionnaire was disseminated through email to chartered accountants and through two major communities of Portuguese chartered accountants on Facebook (one group with around 25 thousand members and another with approximately 19 thousand members), as well as a few other small groups on the Internet frequented by practising Portuguese chartered accountants to share knowledge and resolve doubts. However, it should be noted that this type of approach involves dealing with a convenience sample, which allows conclusions to be drawn but requires caution regarding the generalization of the research results. The questionnaire was released at the beginning of October and closed on 12 December 2020 (i.e., after the first lockdown), receiving 503 valid responses. Thus, our response rate was 1.6% of those who actually practice the profession. For research based on a questionnaire about accountants' perceptions, this is an acceptable rate, as reported, for instance, by Borrego (2015) [72], with a response rate of 4.1%, Dinis (2019) [74], with 2%, and Dâmaso (2015) [73] and McKerchar (2005) [78], with 1%. Although this research aimed to measure the perceptions and opinions of the target population, the statistical treatment of the collected data was quantitative, so this research fits into the quantitative research paradigm. The statistical and econometric analyses used in this paper are supported by the conclusions of Greene [20] and Hair et al. [21]. Hypotheses Development Mendes [40] and Papadopoulou and Papadopoulou [54] highlight the importance of accountants' assistance in the survival/continuity of enterprises in their countries in the context of the economic crisis caused by the pandemic. In OCC (2021) [14], the testimonies of Portuguese personalities, such as the President of the Republic, the President of the Court of Auditors, the Provider of Justice, several Ministers of State, the President of the Economic and Social Council, and the Presidents of Confederations of Enterprises from different sectors of activity, among others, refer to the importance of the performance of Portuguese chartered accountants in the context of the economic crisis caused by the pandemic, saving many enterprises from financial collapse and, thus, safeguarding many jobs (SDG 8). In the Portuguese context, it is also important to understand the performance of these professionals in that context from their own perspective, as well as to determine whether their personal and professional characteristics and ICT skills influenced their self-assessment, so three research hypotheses were formulated to be tested. In studies that aim to understand the decisions, perceptions and attitudes of Portuguese chartered accountants, socio-demographic and professional variables are used as explanatory variables [72][73][74].
For this purpose, the following characteristics of Portuguese chartered accountants are used: gender, age group, region of activity, professional experience, size of their portfolio of clients, and ability to use ICT. Although there is a lack of supporting literature justifying the inclusion of ICT proficiency as an explanatory variable for the self-assessment of Portuguese chartered accountants' performance in the context of the pandemic, the application forms created by the Portuguese government for enterprises, individual entrepreneurs, and self-employed workers to apply for support were all based on digital platforms. Moreover, the training that the OCC provided to chartered accountants in that context was delivered through digital media, which justified the inclusion of ICT skills as an explanatory variable in the hypothesis tests. To understand Portuguese chartered accountants' perception of their clients' evaluation of their performance in supporting enterprises during the first wave of the COVID-19 pandemic, and the impact of their professional and personal characteristics and their ICT skills on that perception, the following hypothesis is proposed: H1. The perception of how accountants' clients value the effort that they have made to support enterprises in facing the economic problems experienced due to the COVID-19 pandemic is related to accountants' personal and professional characteristics and their ICT skills. To determine whether, due to the pandemic, Portuguese chartered accountants perceived an increase in their workload and in the expenses related to their activity, as well as whether they decided to charge their clients for these potential increments by increasing the value of their monthly fees, this research also identifies the importance of chartered accountants' personal and professional characteristics and ICT skills in this context. The following hypotheses are proposed: H2. Accountants' perceptions of the variation in their workload due to their response to the economic problems caused by the COVID-19 pandemic are related to their personal and professional characteristics and their ICT skills. H3. Accountants' decision to charge their clients for the variation in their workload and expenses through an increase in their monthly fees is related to their personal and professional characteristics and their ICT skills. The tests used to assess the rejection or non-rejection of our research hypotheses were the non-parametric Mann-Whitney and Kruskal-Wallis tests; the use of non-parametric tests is justified because the aim was to measure opinions and perceptions. For the same reason, the correlation between variables was tested using Spearman's correlation (rho). Having defined the objective, the hypotheses, the data collection process, and the research methodology, the following sections present the results of the statistical analysis of the data and their discussion. Descriptive Statistics Regarding the personal characteristics of the surveyed chartered accountants, it was possible to obtain answers from professionals in all the Portuguese districts and autonomous regions (Portuguese local administrative organizations), with a greater incidence in the districts of Lisbon and Setúbal as well as in the district of Porto, a situation that is compatible with the concentration of business activities and with the distribution of chartered accountants by region [79].
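The paper does not state which statistical package was used. As a hedged sketch only, the snippet below shows how the non-parametric tests described above (Mann-Whitney by gender, Kruskal-Wallis across ICT-skill groups, and Spearman's rho) could be run with SciPy on a hypothetical extract of the survey data; the column names and values are invented for illustration.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey extract: 1-5 Likert perception score, gender, ICT skill level (1-5)
df = pd.DataFrame({
    "perception": [4, 5, 3, 4, 2, 5, 3, 4, 1, 4, 5, 2],
    "gender":     ["M", "F", "F", "M", "F", "M", "F", "M", "F", "F", "M", "F"],
    "ict_skill":  [3, 5, 2, 4, 1, 5, 3, 4, 2, 3, 5, 1],
})

# Mann-Whitney U: does perception differ between genders?
u, p_u = stats.mannwhitneyu(df.loc[df.gender == "M", "perception"],
                            df.loc[df.gender == "F", "perception"])

# Kruskal-Wallis H: does perception differ across ICT-skill groups?
groups = [g["perception"].values for _, g in df.groupby("ict_skill")]
h, p_h = stats.kruskal(*groups)

# Spearman's rho: monotonic association between ICT skill and perception
rho, p_rho = stats.spearmanr(df["ict_skill"], df["perception"])

print(u, p_u, h, p_h, rho, p_rho)
```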
Regarding the age and gender of the surveyed chartered accountants, the data show that around 63% of them are aged between 35 and 55 years and that most of them are female (around 75%), which is partially explained by the fact that the number of women enrolled as members of the OCC is currently slightly higher than the number of men [71]. Furthermore, previous studies on the perceptions of Portuguese chartered accountants have shown that women are more likely than men to participate in this kind of study. Data taken from the questionnaire also show that 55.4% of the surveyed chartered accountants have between 10 and 25 years of professional experience and 26.4% have more than 25 years of experience, which is also compatible with the characteristics of the target group. Regarding the size of the surveyed chartered accountants' portfolios of clients, the results are summarized in Table 1. The data regarding the experience of the professionals and those presented in Table 1, concerning their portfolios of clients, provide evidence that the surveyed chartered accountants are mostly experienced professionals with small and medium-sized portfolios of clients. These data are particularly relevant because they confirm that the perceptions of chartered accountants working in "big accounting enterprises", whose working realities are quite different from those experienced by the generality of Portuguese chartered accountants as outsourcing service providers, have little weight in the results obtained from the questionnaire. Table 2 presents the results relating to the surveyed accountants' level of ability to use ICT; it stands out that 36.1% considered that they have an acceptable level of ability and around half classified their ICT skills as good or excellent. These data on ICT proficiency might at first seem surprising, considering that the surveyed chartered accountants in the lower age groups are a minority; however, the increasing dematerialization of accounting, tax, and social security obligations in Portugal over the last two decades has made ICTs everyday tools for most Portuguese chartered accountants, regardless of their age. The surveyed chartered accountants were asked to indicate their opinion on how their clients perceived the impact of their work on the survival/continuity of their enterprises in the first wave of the pandemic, which provides insight into how they evaluate their own contribution in the context of alignment with the main goals of SDG 8. Table 3 presents their responses. The majority (about 70%) of the surveyed chartered accountants believed that their clients perceived their work in the context of the COVID-19 pandemic (focusing on the first wave) as having a positive or very positive impact on the survival/continuity of the clients' enterprises. In addition, the authors sought to reinforce these data with information on the effective impact of accountants' work on the survival/continuity of their clients' enterprises. Thus, the authors aimed to determine the weight of clients with suspended activity or closure (due to the pandemic) in the surveyed accountants' portfolios of clients. The data regarding this situation (at the date of the questionnaire's application) are summarized in Table 4.
The results regarding the importance of the Portuguese chartered accountants' work in supporting the survival or continuity of enterprises in the pandemic period were thus corroborated by the fact that about 41% of the chartered accountants surveyed, at the date they completed the questionnaires (October to December 2020), reported having no clients with activity suspended (without income) or closed due to the pandemic. Moreover, about 44% of the professionals reported cases of suspension or closure of clients' activity in only a small percentage of their portfolio (up to 10%). Considering the dimension of the crisis, which, by the time the questionnaire was carried out, had already taken hold of the Portuguese economy, the authors consider the results regarding the percentage of accountants' clients with suspended or closed activity to be very positive and to demonstrate an effective contribution of Portuguese chartered accountants to mitigating potential setbacks in the scope of SDG 8 due to the crisis caused by the pandemic, corroborating the testimonies of Portuguese personalities about the importance of their role in the pandemic context [14]. These positive results required dedication and hard work from the Portuguese chartered accountants, as well as the adaptation of their work model to conform to the requirements of the Portuguese Health Authority's standards. The following tables (Tables 5 and 6) present the results relating to the impacts on the professionals' work arising from the need to adapt and to support clients during the period under analysis. The data regarding the chartered accountants' perception of the change in their work volume after the first wave of the pandemic are summarized in Table 5. From the data presented in Table 5, on the surveyed chartered accountants' perception of the impact of the COVID-19 pandemic's first wave on their workload, the authors concluded that most of them (78.6%) perceived that their work volume increased or highly increased due to the additional work involved in helping their clients' businesses avoid the adverse economic consequences of the pandemic's first wave. These results are in line with the data obtained from a questionnaire applied by the OCC to its members in April 2021, regarding the first year of the pandemic, which showed that 78% of the surveyed chartered accountants considered that the pandemic had increased their workload. The data collected by our questionnaire also allowed us to conclude that the increase in the accountants' work volume due to the economic effects of the first wave of the COVID-19 pandemic was essentially caused by the time spent reading and interpreting the large amount of legislation being published, as well as by the processing of the support measures for the business sector created by the Portuguese Government in the scope of the pandemic crisis, whose preparation was carried out mainly by them. Regarding other impacts of the pandemic on the activity of the surveyed chartered accountants, in terms of investments and expenses, the results are summarized in Table 6. As can be seen from the analysis of the data in Table 6, according to most of the surveyed chartered accountants, during the first wave of the pandemic there was an increase in their spending, mainly on equipment, training, and telecommunications. Moreover, the expenses that fluctuated least during that period were those related to rent and staff, which correspond to fixed or structural costs.
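As an aside on how the percentage distributions reported in Tables 5 and 6 can be derived from the raw questionnaire answers, here is a minimal pandas sketch with invented responses, not the study's data.

```python
import pandas as pd

# Hypothetical answers to the workload question (scale used in Table 5)
answers = pd.Series(["increased", "highly increased", "no change", "increased",
                     "decreased", "increased", "highly increased", "increased"])

# Percentage distribution of each answer, as reported in the tables
distribution = answers.value_counts(normalize=True).mul(100).round(1)
print(distribution)
```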
It should be noted that the small fluctuation in labor expenditure is corroborated by the fact that most of the surveyed accountants (around 91%) did not dismiss any employees during the first wave of the pandemic and did not expect to do so until the end of 2020. It was also relevant to understand whether accountants' income increased during the first wave of the pandemic as a result of charging their clients for the extra work and expenses, either through an increase in their monthly fee or as a direct extra fee for the services provided in this context (see Table 7). Even though most of the surveyed chartered accountants experienced an increase in their volume of work and in their activity expenses, derived from the extra support provided to clients, as can be seen in Table 7, most of them (around 67%) did not charge their clients for these, and there were even situations in which the accounting fees decreased at the beginning of the pandemic period, due to accountants' solidarity with the difficult situation that some clients were experiencing and to their fear of losing clients. The analysis of the data in Table 7 reinforces the fact that most Portuguese chartered accountants bore, on their own, the extra work provided to their clients during the height of the economic crisis caused by COVID-19, evidencing their contribution from a social responsibility perspective, since most did not charge their clients an extra fee to compensate for the increase in the volume of work and expenses. The data collected by the questionnaire also revealed an increased difficulty in collecting the regular monthly fees in the first wave of the pandemic compared with the pre-pandemic period. These results regarding the variation in the surveyed chartered accountants' income during the first wave of the pandemic follow the same trend as those obtained by the OCC in the survey of its members; that is, most of the surveyed chartered accountants, despite experiencing an increase in their volume of work, did not increase their clients' monthly fees. However, the results obtained by the OCC are slightly more optimistic, with 19.4% of the surveyed chartered accountants stating that they had increased their income during that period. Figure 2 summarizes the position of most of the accountants surveyed regarding the changes in their work volume, expenses and fees during the first wave of the COVID-19 pandemic.
Figure 2. Summary of the accountants' position regarding their work and fees: an increase or large increase in work volume; a moderate increase in investments and expenditures; no increase, or a decrease, in monthly fees; and more difficulty in receiving their regular monthly fees. As can be seen in Figure 2, most chartered accountants, despite the increase in their work volume and expenses during the pandemic, assumed these charges internally, not passing them on to their clients. As a result of this evidence, the authors can state that, during the first wave of the pandemic, most chartered accountants working with smaller enterprises in Portugal played a key role in mitigating the resultant economic problems (Tables 3 and 4). This confirms the conclusions of Stocker et al. (2022) [80] (p. 175) when they defend that "new practices and policies can emerge more effectively and inclusively, with a holist perspective". During this process, most of them experienced an increase in their volume of work and expenses; however, they did not increase their fees (Figure 2). Thus, most accountants sacrificed their free time and incurred additional costs without charging them to their clients. Within this scope, Portuguese chartered accountants, by embarking on a mission to save their clients' enterprises from collapsing economically and financially, highlighted clients as the most relevant stakeholders, saved many SMEs and many jobs, simultaneously safeguarded their own client portfolios, and provided a service for the benefit of the society into which they are integrated, living up to their status as a profession of public utility (stakeholder theory).
Tests of the Hypotheses In the previous section, we found, based on a self-assessment, that most Portuguese chartered accountants played an important role in maintaining the economic sustainability of their clients at the height of the economic crisis caused by COVID-19 (contributing to mitigating the impacts of the crisis on the SDG 8 goals) and that they acted unselfishly, spending their time and bearing some additional costs (demonstrating a sense of social responsibility). In this section, we aim to understand whether the self-assessment of their performance and attitudes (SR and positive impact on SDG 8) can be explained by their personal and professional characteristics, as well as by their ICT skills. Test of H1 It is important to note that the variable "perception of how accountants' clients value the effort that they have made to support enterprises in facing the economic problems experienced due to the COVID-19 pandemic" is used in this research to capture the chartered accountants' self-assessment of the importance of their role in mitigating the negative impact of the pandemic on SDG 8 compliance. Although most of the surveyed chartered accountants believed that their clients perceived their performance in the context of the first wave of the pandemic as positive or very positive, it is also important to note that around 25% of them considered that their clients classified their work in the pandemic context as not positive, and 10.1% believed that their clients' perception was negative or very negative. It is, therefore, important to seek explanations for these very different opinions among the surveyed chartered accountants. Accordingly, the authors tested the explanatory capacity of the chartered accountants' personal and professional characteristics and their ICT skills for their perception of clients' evaluation of their performance in the pandemic context. From the set of tests performed on the independent variables, only gender and ICT proficiency showed a statistically significant influence on this perception, as shown below. "Gender": In the context analyzed, statistically significant differences in perceptions were found between genders (U (474) = 18,447.5; p < 0.05). The mean rank is higher for male respondents, so men have a more positive perception of their performance in enterprises, while women tend to have a less positive view of their performance. There is also a correlation between the two variables (p = 0.034). These results are consistent with previous studies that found that women are more pessimistic than men regarding the work environment and their career development [81]. "ICT skills": In this scope, there are statistically significant differences in perceptions according to the professionals' ICT skills (H (474) = 15.402; p < 0.05). The mean rank increases as the degree of ICT skills increases, since both variables are on an increasing scale: the greater the professionals' ICT proficiency, the greater their perception of a positive performance in their clients' enterprises, and vice versa. A positive correlation was also found between the two variables (p = 0.000). Other independent variables tested: There are no statistically significant relationships between the dependent variable and the remaining independent variables tested.
Thus, H1 is partially accepted for accountants' gender and ICT proficiency and partially rejected for their age group, region, size of the portfolio of clients, and level of experience. Male accountants and those with greater ICT proficiency consider that they played a more important role in the economic sustainability of their clients at the height of the economic crisis caused by the pandemic and, consequently, a more important role in mitigating the negative impacts of COVID-19 on SDG 8 compliance. Test to H2 Although most of the surveyed chartered accountants perceived that one of the immediate consequences of the "economic management of the pandemic" was an increase in their work volume, 20% did not perceive such an increase; some chartered accountants even considered that their work volume had decreased or highly decreased due to the pandemic. Thus, it is important to understand which factors may explain these very different perceptions among the surveyed chartered accountants in the analyzed context. Accordingly, the authors tested the explanatory capacity of the chartered accountants' personal and professional characteristics and their ICT skills for their perception of the variation in their workload in the scope of the pandemic. It was found that gender and the size of the portfolio of clients, among all the explanatory variables tested, were the only ones that showed a statistically significant influence on such perception, as presented below. "Gender": In this context, statistically significant differences in perception were found between genders (U (503) = 20,927.5; p < 0.05). The mean rank is higher for women, so women have a higher perception of an increased workload due to the pandemic, while men have a lower perception. There is also a correlation between the two variables (p = 0.038). As previously mentioned, according to earlier studies, there is a tendency for women to be more pessimistic in the work context [58], which is corroborated by these results. "Size of the portfolio of clients": In this regard, statistically significant differences in perceptions were identified depending on the size of chartered accountants' portfolios of clients (H (503) = 16,096; p < 0.05). The mean rank increases as the size of the professionals' portfolio of clients increases, since both variables are on an increasing scale: the larger the portfolio of clients of chartered accountants, the greater their perception of an increase in their workload, and vice versa. A positive correlation between the two variables was also found (p = 0.000). Other tested independent variables: there are no statistically significant relationships between the dependent variable and the remaining independent variables tested. Thus, H2 is partially accepted for chartered accountants' gender and for the size of their portfolio of clients and partially rejected for their age group, region, ICT skills, and level of experience. Female accountants and those with larger client portfolios perceived a greater increase in their workload. Test to H3 It is important to notice that the variable "accountants' decision to charge their clients for the variation in their workload and expenses through an increase in their monthly fees" is used in this research to understand the chartered accountants' self-assessment of their contribution to SDG 8 compliance and their sense of social responsibility.
Most surveyed chartered accountants did not manage to bill their clients for the increase in their work volume and the additional expenses, and one group even reduced the amount charged; another group, albeit of little significance, managed to charge their clients for the variation in their work volume. It was important to seek explanations, especially for the situations that differed from the majority. Accordingly, the authors tested the explanatory capacity of chartered accountants' personal and professional characteristics and their ICT skills for their decision to charge their clients, in monetary terms, for the additional workload and expenses. The results provide evidence that none of the tested independent variables has a statistically significant relationship with the dependent variable. H3 is totally rejected. Although, in the pandemic context, most chartered accountants contributed with their pro bono work in favor of SDG 8 and showed their sense of social responsibility, none of the tested variables statistically explains these accountants' behavior. Summary of the Test Result Analysis The results of the hypothesis tests for H1-H2 highlight the importance of gender for chartered accountants' perception of their work and its valuation by third parties. The results suggest that male chartered accountants, as well as professionals with better ICT skills, tend to evaluate the importance of their work for the survival and continuity of their clients' enterprises more positively in the context of the first wave of the pandemic; regarding the increase in chartered accountants' workload, the perception is greater and statistically significant in the case of female chartered accountants and professionals with larger client portfolios. These results corroborate the previous literature on women's lower optimism in the work context [81,82] and suggest that command of ICT was important in the performance of accountants in the pandemic context. Discussion This research explored the self-assessment of Portuguese chartered accountants about their role in supporting small and medium-sized enterprises (SMEs) during the first wave of the COVID-19 pandemic in Portugal, as well as seeking to understand how they perceived their contribution to the maintenance of the economic sustainability of SMEs in the pandemic context and to assess their sense of SR. In addition, the authors investigated whether their perceptions, performance, and decisions in the pandemic context are related to their personal and professional characteristics as well as to their ICT skills. To achieve these objectives, a survey was designed, and 503 valid answers were obtained from Portuguese chartered accountants. The chartered accountants who responded are mainly female (75%), aged between 35 and 55 (63%), and located in the Lisbon, Setúbal, and Porto districts. These chartered accountants have small or medium-sized portfolios of clients and are professionals who essentially provide enterprises with outsourcing services. In addition, it should be noted that almost half of the chartered accountants stated that they have good or excellent ICT proficiency. The results of the survey show that 70% of the Portuguese chartered accountants consider that their work in supporting their clients' enterprises at the beginning of the pandemic was perceived as positive or very positive by their clients.
Nearly 80% of the chartered accountants considered that the pandemic had increased their workload (49% considered the increase to be very significant). The results obtained also allowed us to conclude that this increase in the accountants' work was essentially due to the need to research and interpret the huge amount of legislation published at the beginning of the pandemic, as well as the need to fill in and submit their clients' support requests to the government programs created in this context, mainly aimed at mitigating enterprises' loss of income and at safeguarding jobs. Regarding the increase in expenses due to the pandemic, some of the chartered accountants affirmed that they had incurred additional expenses, mainly for the purchase of equipment, telecommunications, and training. Despite the increase in workload and expenses, most accountants decided not to charge an extra fee. It is noteworthy that the percentage of chartered accountants who billed their clients for the extra services is negligible (3.3%), and almost a third of the chartered accountants decreased their monthly fees, in most cases in solidarity with the difficult situation of some clients and, in other cases, for fear of losing some of them. It should, however, be noted that the decision not to charge for the extra services was not related to a lack of recognition by clients of the importance of the role played by the chartered accountants in the economic sustainability of the enterprises during the pandemic period, because most professionals are convinced that their clients consider their work in helping their enterprises to survive the catastrophic effects of the pandemic to be positive or very positive; public figures representing politics, justice, and the business community (among others) in Portugal corroborate this perception of Portuguese chartered accountants [14]. At this point, it is important to mention that the Portuguese chartered accountants understood the importance of a more coherent relationship between the company and society by giving priority to safeguarding jobs and enterprises, and they received recognition for this from society in general and from enterprises in particular [23,30]. In a deeper analysis of the results, through the hypothesis testing, some statistically significant relationships were identified with certain characteristics of the accountants. For example, men perceived their performance in supporting clients' enterprises more positively, while women tended to have a greater perception of the increase in their workload in this period of crisis. These findings are in line with previous studies indicating a greater tendency for women to be more pessimistic than men at work. Chartered accountants with a better ability to use ICT had a more positive perspective on the importance of their work in supporting their clients' enterprises, which is to be expected since most of the problems in this period involved digital solutions. In terms of the perception of their workload, accountants with more clients experienced a greater increase in their work volume. The test results also highlight the sense of social responsibility of Portuguese accountants to their clients and to society during the first wave of the COVID-19 pandemic in Portugal, in line with the conclusions of Barreiro-Gen et al. [69] and Zang et al. [70] about the increase in social responsibility concerns in the business context during the COVID-19 pandemic.
In summary, during the first wave of the pandemic, Portuguese chartered accountants, confronted with the serious economic and financial problems that their clients faced, knew how to adapt their work, reinvent themselves, work even more diligently, and increase the productivity levels of their employees in order to save most of their clients from economic and financial collapse and from dismissing employees, without burdening them with an increase in their monthly fees, because many of them would not have been able to afford it. Chartered accountants were profoundly affected by the pandemic, and their social responsibility strategy contributed to promoting their importance in supporting SMEs (including microenterprises) in that context. Their performance and decisions during the first wave of the pandemic demonstrate a sense of social responsibility to their clients and to society; they actively contributed to economic sustainability and employment resilience, helping to mitigate possible setbacks in the context of the SDGs, especially SDG 8, due to the economic and social crisis caused by the pandemic. This research provides a better understanding of the importance of chartered accountants in times of crisis. These results can contribute to the regulation of the activity and to a greater recognition of accountants' work among enterprises and society in general. The main limitation felt by the researchers in this study was the impossibility of using a random sample due to the limitations of the Portuguese General Data Protection Regime. Another limitation was the impossibility of directly questioning the enterprises and the self-employed. In this regard, it is important to mention that 99.9% of Portuguese enterprises are SMEs, of which 96% are micro-enterprises [50]; this implies that we are referring to a wide range of enterprises and self-employed workers of a small or very small size, which are not obliged to provide information and for which no contact database is available (due to the Portuguese General Data Protection Regime). Such facts may make studies based on a questionnaire applied directly to Portuguese enterprises unfeasible, given the representativeness of the number of answers that could be obtained. The research was limited to the defined objectives; however, much of the effect of the pandemic on accountants' activity remains to be explored. For instance, the ways in which the crisis has changed accountants' working methods and their relationship with clients are two important aspects for future research. Another aspect to explore in this context, based on the theory of legitimacy, is to understand whether society is aware of the importance of the work that these professionals undertook with enterprises in the first wave of the pandemic. Questionnaire (excerpt): ≤20 clients; +20 and up to 50 clients; +50 and up to 100 clients; +100 and up to 200 clients; +200 clients. 7. Regarding the expenses and investments of your activity, identify the variations resulting from the need to adapt your work to the pandemic context (compared to February 2020): respond on a scale of 1 (decreased a lot) to 5 (increased a lot) and use 3 in cases where there have been no changes.
10. In case you have chosen option 4 or 5 in question 9, please indicate how the following tasks have impacted the increase in your workload: respond on a scale of 1 (no impact) to 5 (very significant increase) and use 3 in cases where there have been no changes. Response options on fees: on the contrary, there was a need to decrease the value of the fees, due to fear of losing clients; on the contrary, there was a need to decrease the value of the fees, due to consideration of the clients' situation; the value of most fees remains unchanged; the value of the fees increased, on average, 10%; the value of the fees increased, on average, by +10% to 25%; the value of the fees increased, on average, 25%. 12. During the lockdown period (between March and May 2020), were there changes in your capacity to receive your "normal" fees, compared to February 2020?
Inverse problem for the mean-field monomer-dimer model with attractive interaction The inverse problem method is tested for a class of monomer-dimer statistical mechanics models that also contain an attractive potential and display a mean-field critical point at a boundary of a coexistence line. The inversion is obtained by analytically identifying the parameters in terms of the correlation functions and via the maximum-likelihood method. The precision is tested in the whole phase space and, when close to the coexistence line, the algorithm is used together with a clustering method to take care of the possible underlying ambiguity of the inversion. Introduction In the last decade a growing corpus of scientific research has been built that focuses on the attempt to infer parameters by reconstructing them from statistical observations of systems. The problem itself is known as statistical inference and traces back to the times when the mathematical-physics description of nature became fully operative thanks to the advances of mechanics and calculus, i.e. with the French mathematicians Laplace and Lagrange. In recent times this field and its most ambitious problems have become deeply connected with statistical physics [1,2,3], at least in those cases in which the structure of the problem includes the assumption of an underlying model to describe the investigated phenomena. The aforementioned connection is surely related to the ability that statistical physics has acquired to describe phase transitions. In this paper we study the inverse problem for a model of interacting monomer-dimers in the mean-field, i.e. in the complete, graph. The denomination comes from the fact that the standard calculation in statistical mechanics, i.e. the derivation of the free energy and correlations from the assignment of the parameters, is called the direct problem. Monomer-dimer models appeared in equilibrium statistical mechanics to describe the process of absorption of monoatomic or diatomic molecules in condensed matter lattices [4]. From the physical point of view monomers and dimers cannot occupy the same site of the lattice due to the hard-core interaction, i.e. the strong contact repulsion generated by the Pauli exclusion principle. Besides such interaction, though, as first noticed by Peierls [5], the attractive component of the Van der Waals potentials might influence the phase structure of the model and the thermodynamic behaviour of the material. In the mean-field setting analysed here the monomer-dimer model displays the phenomenon of phase coexistence between the two types of particles [6,7,8]. This makes the inverse problem particularly challenging, since in the presence of phase coexistence the non-uniqueness of its solution requires special attention in identifying the right set of configurations. Under mean-field theory, the monomer-dimer model can be solved for the monomer densities and the correlations between monomers and dimers: the mean-field solution is inverted to yield the parameters of the model (external field and imitation coefficient) as a function of the empirical observables. The inverse problem has also been known for a long time as Boltzmann machine learning [9]. Its renewed interest is linked to the large number of applications in many different scientific fields like biology [10,11,12,13], computer science for the matching problem [14,15,16] and also the social sciences [17,18].
In this paper we follow an approach to the inverse problem similar to the one introduced for the multi-species mean-field spin model in the work [19]. The paper is organised in the following sections and results. In the second section we recall briefly the monomer-dimer model and we review the basic properties of its solution [6,8]. In the third section we solve the inverse problem: using the monomer density and the susceptibility of the model, we compute the values of the two parameters, here called coupling constants, J and h. The first measures the preference of a vertex to be occupied by a monomer (respectively a dimer) by imitating its neighbours. Firstly we identify the analytical inverse formulas providing an explicit expression of the free parameters in terms of the mentioned macroscopic thermodynamic variables. Then we use the maximum likelihood estimation procedure in order to provide an evaluation of the macroscopic variables starting from real data. The fourth section presents and discusses a set of numerical tests for a finite number of particles and a finite number of samples. The dependence of the monomer density and the susceptibility is studied with respect to the system size. We find that both of them have a monotonic behavior which depends on the parameter values, and that they reach their limiting values with a correction that vanishes as the inverse volume. We then investigate how the experimental monomer density and susceptibility at fixed volume depend on the number of samples. The effectiveness of the inversion is tested for different values of the imitation coefficients and external fields. After observing that the error of the inversion does not vanish when the parameters are close to the coexistence phase, we investigate the effectiveness of clustering algorithms to overcome the difficulty. We find in all cases that the inverse method reconstructs, with a modest amount of samples, the values of the parameters with a precision of a few percent. The paper has two technical appendices: the first on the rigorous derivation of the exact inverse formulas; the second, which supports the first, studies the convergence of the non-homogeneous Laplace method to second order. 2 Definition of the model Let G = (V, E) be a finite simple graph with vertex set V and edge set E. Definition 2.1. A dimer configuration D on the graph G is a set of dimers (pairwise non-incident edges). The associated set of monomers (dimer-free vertices) is denoted by M(D). Given a dimer configuration D ∈ D_G, we set, for all v ∈ V and e ∈ E, the occupation variables $\alpha_v(D) = 1$ if $v \in M(D)$ and $\alpha_e(D) = 1$ if $e \in D$, and zero otherwise. Definition 2.2. Let D_G be the set of all possible dimer configurations on the graph G. The imitative monomer-dimer model on G is obtained by assigning an external field h ∈ R and an imitation coefficient J ≥ 0 which gives an attractive interaction among particles occupying neighbouring sites. The Hamiltonian of the model is defined by the function H^imd given in (1). The choice of the Hamiltonian naturally induces a Gibbs probability measure (2) on the space of configurations D_G, where the partition function is the normalizing factor. The natural logarithm of the partition function is called the pressure function and it is related to the free energy of the model. The normalized expected fraction of monomers on the graph is called the monomer density. It can also be obtained by computing the derivative of the pressure per particle with respect to h. It is easy to check that 2|D| + |M(D)| = |V|.
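As a concrete check of the combinatorics just introduced, the following short Python sketch (illustrative only, not taken from the paper) enumerates all dimer configurations of a small complete graph, verifies the identity 2|D| + |M(D)| = |V|, and compares the number of configurations with k dimers against the counting factor N!/(k!(N−2k)! 2^k) used below for the partition function.

```python
# Enumerate all dimer configurations (matchings) of K_N for a small N and check
# the identity 2|D| + |M(D)| = |V| together with the configuration count per |D|.
from itertools import combinations
from math import factorial

def dimer_configurations(n):
    """Yield every matching (set of pairwise non-incident edges) of K_n exactly once."""
    edges = list(combinations(range(n), 2))
    def extend(matching, used, start):
        yield matching
        for i in range(start, len(edges)):
            u, v = edges[i]
            if u not in used and v not in used:
                yield from extend(matching + [edges[i]], used | {u, v}, i + 1)
    yield from extend([], set(), 0)

N = 4
counts = {}
for D in dimer_configurations(N):
    monomers = set(range(N)) - {v for e in D for v in e}
    assert 2 * len(D) + len(monomers) == N          # 2|D| + |M(D)| = |V|
    counts[len(D)] = counts.get(len(D), 0) + 1

# Number of configurations with k dimers on K_N: N! / (k! (N - 2k)! 2^k)
for k, c in sorted(counts.items()):
    assert c == factorial(N) // (factorial(k) * factorial(N - 2 * k) * 2 ** k)
print(counts)   # {0: 1, 1: 6, 2: 3} for N = 4
```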
In this paper we study the imitative monomer-dimer model on the complete graph. In order to keep the pressure function of order N, it is necessary to normalize the imitation coefficient by 1/N, because the number of edges grows like N², and to subtract the term $\log N \sum_{e\in E_N} \alpha_e$ from the external field. Thus we will consider the Hamiltonian $H^{imd}_N : D_N \to \mathbb{R}$. All the thermodynamic quantities will therefore be functions of N, and we are interested in studying the large volume limits. Before studying the inverse problem, we briefly recall the main properties of the model (see [6,8]). Taking m ∈ [0, 1], a variational principle holds in which the pressure of the model in the thermodynamic limit, p^imd, is obtained by maximizing a suitable function p̃(m). The solution of the model thus reduces to identifying the value m* that maximizes p̃; it is found among the solutions of the consistency equation m = g((2m − 1)J + h), which include, besides the equilibrium value, also the unstable and metastable points. It is possible to prove that m* (which represents the monomer density) is a smooth function for all values of J and h, with the exception of the coexistence curve Γ(J, h). Such a curve is differentiable in the half-plane (J, h) and stems from the critical point (J_c, h_c). 3 The inverse problem The evaluation of the parameters of the model starting from real data is usually called the inverse problem and consists of two steps. The analytical part of the inverse problem is the computation of the values J and h starting from those of the first and second moments of the monomer (or dimer) density. The statistical part instead is the estimation of the values of the moments starting from the real data, using the maximum likelihood principle [20] or the equivalent formulations in statistical mechanics terms [21]. As far as the analytical part is concerned, using the results of Appendix A and B, it can be proved that in the thermodynamic limit the imitation coefficient and the external field can be computed from the monomer density and the susceptibility through the inverse formulas (9) and (10), respectively. We denote by m_N and χ_N the finite-size monomer density average and susceptibility $\chi_N = N(\langle m^2\rangle_N - \langle m\rangle_N^2)$, while their limiting values are denoted without the subscript N. For the statistical part we use the maximum likelihood estimation procedure. Given a sample of M independent dimer configurations D^(1), . . . , D^(M), all distributed according to the Gibbs measure (2), the maximum likelihood function L(J, h) is defined accordingly. The function L(J, h) reaches its maximum when the first and the second moments of the monomer density are calculated from the data according to equations (7). Since in real data we have a finite number of vertices and a finite number of configurations, the robustness will be studied with respect to both quantities. The data that we are going to use are extracted from a virtually exact simulation of the equilibrium distribution. In fact, the mean-field nature of the model allows one to rewrite the Hamiltonian (1) as a function of the dimer, or monomer, density (see (3) and (12)). In particular we use a representation of the partition function in which the term $c_N(D) = \frac{N!}{|D|!\,(N-2|D|)!}\,2^{-|D|}$ is the number of possible configurations with |D| dimers on the complete graph with N vertices. Using this representation of the partition function we extract large samples of dimer density values according to the equilibrium distribution. These will be used for the statistical estimation of the first two moments (7).
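The following Python sketch illustrates, under stated assumptions, the "virtually exact" sampling and moment-estimation step described above: it draws monomer densities from the finite-N equilibrium distribution written as a sum over the number of dimers, using the combinatorial weight c_N quoted in the text, and then computes the empirical m_N and χ_N = N(⟨m²⟩_N − ⟨m⟩_N²). The density form of the Hamiltonian used in the code, −N(J m² + h m), is an assumption standing in for equation (12), which is not reproduced here.

```python
# Illustrative sketch, not the authors' code. Samples dimer densities exactly at finite N
# by enumerating the possible numbers of dimers k and weighting by c_N(k) * exp(-H),
# with the ASSUMED density Hamiltonian H = -N*(J*m**2 + h*m) standing in for eq. (12).
import numpy as np
from scipy.special import gammaln

def sample_monomer_densities(N, J, h, M, rng):
    k = np.arange(N // 2 + 1)                      # possible numbers of dimers
    m = 1.0 - 2.0 * k / N                          # monomer density for k dimers
    # log c_N(k) = log N! - log k! - log (N-2k)! - k log 2
    log_c = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - 2 * k + 1) - k * np.log(2.0)
    log_w = log_c + N * (J * m ** 2 + h * m)       # log of the unnormalised Gibbs weight
    p = np.exp(log_w - log_w.max())
    p /= p.sum()
    k_samples = rng.choice(k, size=M, p=p)
    return 1.0 - 2.0 * k_samples / N               # sampled monomer densities

rng = np.random.default_rng(0)
N, M = 3000, 10000
m_samples = sample_monomer_densities(N, J=0.5, h=0.1, M=M, rng=rng)

m_N = m_samples.mean()                             # finite-size monomer density average
chi_N = N * (np.mean(m_samples ** 2) - m_N ** 2)   # finite-size susceptibility
print(m_N, chi_N)
```

These two empirical quantities are exactly the inputs that the inverse formulas (9) and (10) require; the sketch leaves the inversion itself to whatever implementation of those formulas one adopts.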
We are going to illustrate the results with some examples. Figure 1 shows the finite-size average monomer density m_N and finite-size susceptibility χ_N for the monomer-dimer model at different N's for different couples of parameters (J, h). The figure highlights the monotonic behavior of m_N and χ_N as functions of N. We point out that the different monotonic behaviors of the finite-size monomer density and susceptibility provide useful information about the phase-space region in which the system is found before applying the full inversion procedure. Figure 2 shows the power-law fits of the behavior of the finite-size corrections both for the monomer density and the susceptibility. In order to test our procedure numerically, we consider 20 M-samples for each couple (J, h) and we solve the inverse problem for each one of them independently; then we average the inferred values over the 20 M-samples. We denote by m_exp, χ_exp, J_exp and h_exp such averaged quantities. The two panels of Figure 4 show the inferred parameters as functions of J. Note that the inferred values of the parameters are in optimal agreement with the exact values. Observe that for large values of J the reconstruction gets worse, since the interaction between particles grows. In Figure 5 we represent the absolute errors in reconstructing J and h, as functions of the imitation coefficient, for the cases of Figure 4. Figure 6 shows the relative errors in reconstructing the parameters for increasing sizes of the graph. It highlights that, for large values of N and M, the inference of the parameters fails to give good results only when the couple (J, h) is close to the coexistence line. When we deal with real data, however, it may happen that we do not have a model defined over a graph with a large number of vertices or numerous configurations in the sample. In these cases, when J and h take values in the region of metastability, the inversion at finite volume and finite sample size cannot be made using the method described above and we need another procedure to solve the problem, as shown in the following section. 5 The inversion at finite volume and finite sample size with clustered phase space We now address the monomer-dimer inverse problem when the phase space does not present only one equilibrium state, i.e. when the system undergoes a phase transition. We explain how to modify the mean-field approach we have seen above. If the model is defined for parameters J and h such that the couple (J, h) ∈ Γ, the Gibbs probability density of the model presents two local maxima; we cannot then study the inversion problem in a global way, as we have done in the second section, but we have to understand what happens in a local neighborhood of each maximum. Given M independent dimer configurations D^(1), . . . , D^(M), all distributed according to the Gibbs probability measure for this model, we can understand their behavior around m_1 and m_2 by separating them into two sets before applying formulas (9) and (10); i.e., we divide the configurations of the sample into clusters using so-called clustering algorithms, which classify elements into classes with respect to their similarity (see [22,23,24,25]). The clustering algorithms we use are based on the distance between the monomer densities of the configurations: we put them in the same group if they are close enough and far from the other clusters (the concept of distance between clusters will be discussed later).
The method we use is density clustering [22], which is based on the idea that cluster centers are surrounded by nearby configurations with a lower local density and that they are relatively far from any other configuration with a high local density. For each configuration we compute two quantities: its local density ρ_i and its distance δ_i from configurations with higher density. These quantities depend on the Euclidean distance $d_{ij} = |m^{(i)} - m^{(j)}|$, where m^(i), for i = 1, . . . , M, is the monomer density of the configuration D^(i). The local density ρ_i of D^(i) is the number of configurations that are closer than d_c to the configuration D^(i), i.e. $\rho_i = \#\{j \neq i : d_{ij} < d_c\}$, where d_c is an arbitrary cutoff distance (we will discuss the choice of d_c later). Obviously the choice depends on the range where the cluster centers have to be found and on the number of configurations of which the sample is made. More generally, we have seen that for large values of M the minimum absolute error in reconstructing the parameters occurs when the cutoff distance is of order C/M, for a suitable constant C. The distance δ_i is the minimum distance between the configuration D^(i) and any other configuration with higher local density, $\delta_i = \min_{j:\,\rho_j > \rho_i} d_{ij}$, while for the configuration with the highest local density we take $\delta_{\hat\imath} = \max_j d_{\hat\imath j}$. Observe that the quantity δ_i is much larger than the typical nearest-neighbor distance only for the configurations that are local or global maxima of the density. Thus cluster centers are recognised as configurations for which δ_i is anomalously large (this situation is illustrated in Example 5.1 below). After the cluster centers have been found, each remaining configuration is assigned to the same cluster as its nearest neighbor with higher density. Remark 5.2. We tested our inversion formulas using two other clustering algorithms, which put a number of data points into K clusters starting from K random values for the centers x^(1), . . . , x^(K): the K-means clustering algorithm and the soft K-means clustering algorithm [23]; we obtained analogous results. However, the results we are going to discuss have been obtained using the density clustering algorithm: by using this algorithm we do not have to specify the number of clusters, since it finds them by itself. Remark 5.3. Once the clusters have been identified, the parameters are reconstructed by applying (9) and (10) to each cluster and averaging the inferred values as follows. We define the observables of the two classes as the per-cluster monomer density average and susceptibility, where k ∈ {1, 2}, C_k is the set of indices of the configurations belonging to the k-th cluster, and M_k = |C_k| is its cardinality. We then apply (9) separately to each group in order to obtain two different estimators $J^{(1)}_{exp}$ and $J^{(2)}_{exp}$; finally we take the weighted average of the different estimates in order to obtain the estimate for the imitation coefficient. To estimate the parameter h, we first compute the values $h^{(1)}_{exp}$ and $h^{(2)}_{exp}$ within each cluster using equation (10) and the corresponding $J^{(k)}_{exp}$; the final estimate for h is given by the weighted average over the clusters. We now focus on some cases of clustered phase space and we solve the inverse problem applying the density clustering algorithm. In order to test the inversion procedure for the monomer-dimer model numerically, we consider a sample of M = 10000 dimer configurations {D^(i)}, i = 1, . . . , M, over a complete graph with N = 3000 vertices. We denote averaged quantities by a bar; the errors are standard deviations over 20 M-samples.
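The sketch below (illustrative, not the authors' implementation) shows one way to code the density clustering step and the per-cluster inversion just described, for scalar monomer densities stored in a NumPy array; the function invert(m, chi) is a hypothetical placeholder for the inverse formulas (9)-(10), which are not reproduced in this text.

```python
# Density-peak clustering of 1-D monomer densities and weighted per-cluster inversion.
# Illustrative sketch only; `invert` is a hypothetical stand-in for formulas (9)-(10).
import numpy as np

def density_clustering(m, d_c, n_clusters=2):
    """Cluster labels for the samples in the 1-D array m, following rho_i / delta_i above."""
    M = len(m)
    d = np.abs(m[:, None] - m[None, :])            # pairwise distances d_ij
    rho = (d < d_c).sum(axis=1) - 1                # neighbours strictly closer than d_c
    rank = np.lexsort((np.arange(M), rho))         # total order: by density, ties by index
    pos = np.empty(M, dtype=int)
    pos[rank] = np.arange(M)
    delta = np.empty(M)
    nearest = np.full(M, -1)
    for i in range(M):
        higher = np.where(pos > pos[i])[0]
        if higher.size:                            # distance to the nearest denser point
            j = higher[np.argmin(d[i, higher])]
            delta[i], nearest[i] = d[i, j], j
        else:                                      # densest point: maximal distance
            delta[i] = d[i].max()
    centers = np.argsort(delta)[-n_clusters:]      # anomalously large delta
    if rank[-1] not in centers:                    # ensure the densest point is a center
        centers = np.append(centers[1:], rank[-1])
    labels = np.full(M, -1)
    labels[centers] = np.arange(len(centers))
    for i in rank[::-1]:                           # assign in decreasing density order
        if labels[i] == -1:
            labels[i] = labels[nearest[i]]
    return labels

def cluster_estimates(m, labels, N, invert):
    """Per-cluster moments and weighted averages of the estimates (cf. (16)-(17))."""
    J_list, h_list, weights = [], [], []
    for k in np.unique(labels):
        mk = m[labels == k]
        m_bar = mk.mean()
        chi_k = N * (np.mean(mk ** 2) - m_bar ** 2)
        J_k, h_k = invert(m_bar, chi_k)            # hypothetical inversion, eqs. (9)-(10)
        J_list.append(J_k); h_list.append(h_k); weights.append(len(mk))
    w = np.array(weights, dtype=float)
    w /= w.sum()
    return float(np.dot(w, J_list)), float(np.dot(w, h_list))
```

Plugging in a cutoff of the form d_c = C/M, as discussed above, reproduces the workflow of this section; choosing the largest cluster only amounts to calling invert on that cluster's moments instead of taking the weighted average.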
The Gibbs probability distribution of the monomer densities for this choice of parameters is represented in Figure 7. Given M = 10000 independent dimer configurations D^(1), . . . , D^(M), all distributed according to the Gibbs probability measure for this model, we use the density clustering algorithm in order to divide them into two sets and reconstruct the parameters. As we can see from Figures 7 and 8, the configurations are divided into two clusters C_1 and C_2, centered respectively at m_1 = 0.1507 ± 5.7·10^{-17} and m_2 = 0.9402 ± 9.9·10^{-4}; moreover, the cluster centered at m_1 contains more configurations than that centered at m_2. Let us start by observing that the reconstructed parameters are better when the problem is solved with respect to the largest cluster only. Applying equations (9) and (10) to the configurations in both C_1 and C_2 according to Remark 5.3, and combining the estimates by formulas (16) and (17), we obtain the corresponding reconstructed values. Applying instead equations (9) and (10) only to the configurations in the largest cluster C_1, we obtain the following reconstructed values of the parameters: J_exp = 2.0036 ± 0.0353 and h_exp = −0.4091 ± 0.0247. In order to justify our choice of the cutoff distance, we focus on Figure 9, which shows the Euclidean distances between J_exp and the true parameter J (blue stars) and between h_exp and the true parameter h (red circles) for each choice of d_c, which takes the values 10^{-j}, for j = 1, . . . , 6. We can see that, taking a sample of M = 10000 dimer configurations over a complete graph with N = 3000 vertices, we obtain the minimum absolute error considering d_c = 0.01. As discussed above, the choice is arbitrary and it depends on the range of values of the monomer densities and on the number of configurations in the sample: obviously, working with a larger set of dimer configurations we have more freedom in the choice of the cutoff distance. Figure 8: Density clustering algorithm. Left panel: plot of the vector ρ, whose components are computed according to (14), of the density of configurations around each configuration of the considered sample, as a function of the monomer densities. Right panel: decision graph, plot of the vector δ, whose components are computed according to (15), as a function of the vector ρ. In conclusion, we have seen that when the couple of parameters (J, h) belongs to the region of metastability and is far enough from the coexistence line, at finite volume and finite sample size there are two clusters and one of them is much larger than the other. According to Remark 5.3, the obtained results confirm that the reconstruction of the parameters is better if we apply formulas (9) and (10) only to the largest set of configurations. The goodness of the results is estimated by comparing (18) and (19): the distance between the reconstructed parameter J_exp and the true value J is smaller in the first case, while the respective reconstructions of h are equivalent. We proceed by considering ten different couples of parameters near the coexistence line Γ(J, h) described above. In order to define them, we take ten equispaced values of the imitation coefficient J in the interval [1.6, 2] together with the corresponding values of h near the coexistence line; the parameters are then reconstructed using equations (16) and (17). The obtained values are shown in Figure 10, where J_exp and h_exp are plotted as functions of J. A Monomer-dimer model. Thermodynamic limit of the susceptibility. In this appendix, using the extended Laplace method studied in Appendix B, we prove (20). Remark A.1.
According to the results in [7], the partition function of the monomer-dimer model can be written in an integral form suitable for the Laplace method. Proof. Let us start by computing the expectation of the monomer density (23) using the definition of the pressure function given in (22). The finite-size susceptibility can be written as in (24). We now use the extended Laplace method in order to evaluate the behavior of (23) and (24) in the thermodynamic limit. Observe that, since all the quantities computed above are bounded, the second-order extended Laplace method suffices to study the behavior of the finite-size susceptibility as N → ∞. As N → ∞, the numerator of (23) can be approximated as in (25). As N → ∞, the numerator of the first fraction in (24) can be approximated as in (26), and the numerator of the second fraction in (24) as in (27). As N → ∞, the integral $\int_{\mathbb{R}} e^{N F_N(x)}\,dx$ can be approximated as in (28). Putting together (25) and (28) we obtain (29). Putting together (26), (27) and (28), we obtain (30). Using (29) and (30), we find the limiting behavior of the finite-size susceptibility as N → ∞. In the thermodynamic limit, the susceptibility is the partial derivative of the solution m(J, h) of the consistency equation with respect to the parameter h, so the two expressions coincide. Hence, (20) is proved. B Extended Laplace's method. Control at the second order. The usual Laplace method works with integrals of the form $\int_{\mathbb{R}} (\psi(x))^n u(x)\,dx$ as n → ∞. In this appendix we prove an extension at the second order of the previous method when the functions ψ and u may depend on n (see [7] for the control at first order). We have used this result in Appendix A. Theorem B.1. For all n ∈ N, let ψ_n : R → R and u_n : R → R. Suppose that there exists a compact interval K ⊂ R such that ψ_n, u_n > 0 on K, so that in particular $\psi_n(x) = e^{f_n(x)}$ for all x ∈ K. Suppose that f_n ∈ C^4(K) and that u_n ∈ C^2(K). Then, as n → ∞, $\int_{\mathbb{R}} (\psi_n(x))^n u_n(x)\,dx = \sqrt{\frac{2\pi}{-n f_n''(c_n)}}\; e^{n f_n(c_n)}\, u_n(c_n)\,(1+o(1))$, where c_n denotes the interior maximum point of f_n on K and the error is controlled at the second order. In the proof we use an elementary fact on Gaussian integrals. We proceed with the proof of the theorem. Proof. Since c_n is an interior maximum point of f_n (hypothesis 4), f_n'(c_n) = 0.
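For orientation, the display below recalls the standard leading-order Laplace computation that the appendix refines to second order; it is a sketch under the theorem's hypotheses (interior maximum c_n with f_n'(c_n) = 0 and f_n''(c_n) < 0), not a reproduction of the omitted second-order correction terms.

```latex
% Leading-order Laplace step (the second-order corrections handled in Appendix B are omitted).
% Around the interior maximum c_n one has f_n'(c_n) = 0 and f_n''(c_n) < 0, so that
\[
  f_n(x) = f_n(c_n) + \tfrac12 f_n''(c_n)\,(x - c_n)^2 + O\big((x - c_n)^3\big),
\]
\[
  \int_{\mathbb{R}} \big(\psi_n(x)\big)^{n} u_n(x)\,dx
   = \int_{\mathbb{R}} e^{\,n f_n(x)}\,u_n(x)\,dx
   \;\approx\; e^{\,n f_n(c_n)}\,u_n(c_n)\int_{\mathbb{R}} e^{\,\frac{n}{2} f_n''(c_n)(x - c_n)^2}\,dx
   = \sqrt{\frac{2\pi}{-n f_n''(c_n)}}\; e^{\,n f_n(c_n)}\,u_n(c_n).
\]
```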
Molecular Characterization of Velogenic Newcastle Disease Virus (Sub-Genotype VII.1.1) from Wild Birds, with Assessment of Its Pathogenicity in Susceptible Chickens Simple Summary Newcastle disease virus (NDV) is a highly contagious viral disease affecting a wide range of avian species. The disease can be particularly virulent in chickens, resulting in high mortality and morbidity. In this study, we characterized velogenic NDV sub-genotype VII.1.1 from wild birds and assessed its pathogenicity in susceptible chickens. One hundred wild birds from the vicinity of poultry farms with a history of NDV infection were examined clinically. Pooled samples from the spleen, lung, and brain were screened using real-time reverse transcriptase polymerase chain reaction (RRT-PCR) and reverse transcriptase polymerase chain reaction (RT-PCR) to detect the NDV F gene fragment, and phylogenetic analysis was carried out for identification of the genetic relatedness of the virus. Chickens were infected with the strains identified, and the major histopathological changes were assessed. Interestingly, NDV was detected in 44% of cattle egret samples and 26% of house sparrow samples by RRT-PCR, while RT-PCR detected NDV in 36% of cattle egrets examined and 20% of house sparrow samples. Phylogenetic analysis revealed close identity, of 99.7–98.5% (0.3–1.5% pairwise distance), between the isolates used in our study and other Egyptian class II, sub-genotype VII.1.1 NDV strains. Histopathological examination identified marked histopathological changes that are consistent with NDV. These findings provide interesting data in relation to the detection of NDV sub-genotype VII.1.1 in wild birds and reveal the major advantages of the combined use of molecular and histopathological methods in the detection and characterization of the virus. More research is needed to determine the characteristics of this contagious disease in the Egyptian environment. Abstract Newcastle disease (ND) is considered to be one of the most economically significant avian viral diseases. It has a worldwide distribution and a continuous diversity of genotypes. Despite its limited zoonotic potential, Newcastle disease virus (NDV) outbreaks in Egypt occur frequently and result in serious economic losses in the poultry industry. In this study, we investigated and characterized NDV in wild cattle egrets and house sparrows. Fifty cattle egrets and fifty house sparrows were collected from the vicinity of chicken farms in Kafrelsheikh Governorate, Egypt, which has a history of NDV infection. Lung, spleen, and brain tissue samples were pooled from each bird and screened for NDV by real-time reverse transcriptase polymerase chain reaction (RRT-PCR) and reverse transcriptase polymerase chain reaction (RT-PCR) to amplify the 370 bp NDV F gene fragment. NDV was detected by RRT-PCR in 22 of 50 (44%) cattle egrets and 13 of 50 (26%) house sparrows, while the conventional RT-PCR detected NDV in 18 of 50 (36%) cattle egrets and 10 of 50 (20%) of house sparrows. Phylogenic analysis revealed that the NDV strains identified in the present study are closely related to other Egyptian class II, sub-genotype VII.1.1 NDV strains from GenBank, having 99.7–98.5% identity. The pathogenicity of the wild-bird-origin NDV sub-genotype VII.1.1 NDV strains were assessed by experimental inoculation of identified strains (KFS-Motobas-2, KFS-Elhamoul-1, and KFS-Elhamoul-3) in 28-day-old specific-pathogen-free (SPF) Cobb chickens. 
The clinical signs and post-mortem changes of velogenic NDV genotype VII (GVII) were observed in inoculated chickens 3 to 7 days post-inoculation, with 67.5–70% mortality rates. NDV was detected in all NDV-inoculated chickens by RRT-PCR and RT-PCR at 3, 7, and 10 days post-inoculation. The histopathological findings of the experimentally infected chickens showed marked pulmonary congestion and pneumonia associated with complete bronchial stenosis. The spleen showed histocytic cell proliferation with marked lymphoid depletion, while the brain had malacia and diffuse gliosis. These findings provide interesting data about the characterization of NDV in wild birds from Egypt and add to our understanding of their possible role in the transmission dynamics of the disease in Egypt. Further research is needed to explore the role of other species of wild birds in the epidemiology of this disease and to compare the strains circulating in wild birds with those found in poultry. Introduction Newcastle disease (ND) is a highly contagious disease caused by Newcastle disease virus (NDV) [1]. This disease is ranked the third-most-significant poultry disease, having been reported in 109 member countries of the World Organization for Animal Health (OIE) [2,3]. The disease has attracted the attention of several researchers over the past decades due to its global impact on the poultry industry [1][2][3][4]. The frequent incidence of NDV infection, even in vaccinated birds, is due to improper vaccination and may also be associated with mutations of the virus that alter its biological properties and virulence [5][6][7]. According to the OIE, the most virulent strains of the virus are fatal and their intracerebral pathogenicity index is around 0.7 or higher [8]. Avian paramyxoviruses 1 and ND viruses were classified by the International Committee on Taxonomy of Viruses as Avian orthoavula virus serotype 1 (formerly Avian avulavirus 1) in the new subfamily Avulavirinae and family Paramyxoviridae [9]. To the best of the author's knowledge, three strains of the NDV are known: lentogenic, mesogenic, and velogenic. Among these, the velogenic strain is considered to be the most virulent, producing high mortality and severe respiratory and nervous symptoms [10]. NDV is divided into class I NDV strains, grouped into a single genotype and 3 sub-genotypes, and class II NDV strains, divided into at least 20 distinct genotypes (I-XXI) made up of several subgenotypes [11]. Genotype VII is subdivided into three sub-genotypes [12], while genotypes I, V, VI, VII, XII, XIII, XIV, and XVIII are each divided into several sub-genotypes [11,[13][14][15]. The fourth NDV panzootic was caused by viruses of a single genotype (VII.1.1) that includes former sub-genotypes VIIb, VIId, VIIe, VIIj, and VIIl; sub-genotype VIIf is considered to be a separate sub-genotype, VII.1.2 [11]. The groups of viruses involved in the fifth NDV panzootic that affected Africa, Asia, the Middle East, and Europe [16][17][18][19][20][21], were merged into a single sub-genotype, VII.2, together with five other sequences identified as sub-genotype VIIk and then also assigned to VII.2 [22]. The predominant NDV sub-genotype in Egypt is VIId (VII.1.1), which has led to several outbreaks in poultry [23]. The disease is a significant biosecurity risk in NDV-free zones, where sporadic outbreaks might have significant impacts on trade. 
NDV remains one of the major causes of huge economic losses, is a harmful pathogen for poultry breeding, and possesses limited zoonotic importance [24]. ND is highly contagious, affecting more than 250 domestic and wild bird species, as well as reptiles and humans [1,25,26]. The infection is transmitted through exposure to fecal matter and other excretions of infected birds, as well as direct and indirect contact with contaminated food, water, and utensils [27]. The main reservoirs of virulent strains are poultry, while wild birds, such as house sparrows, crows, hawks, and waterfowls, could harbor the low-virulent strains [28][29][30]. However, virus exchange among wild birds produces high risk for both bird populations [30,31], as some viruses pose threats when introduced into new geographic locations and new host species [30][31][32]. As mentioned above, a wide range of wild bird species can contract the infection by NDV strains with varying degrees of pathogenicity and genetic diversity [21,24,33]. Cattle egrets are susceptible to infection with velogenic viscerotropic NDV (VVNDV) and act as potential carriers in the transmission of VVNDV among poultry flocks [34]. However, the vast majority of the NDV genotypes reported in wild birds rarely result in severe or clinically significant lesions in infected birds [35]. Despite this fact, understanding the extent of the viral burden and the pathotypic and genotypic characteristics is valuable for assessing the possible risks of emerging disease and consequently developing appropriate control measures for combating the disease [36]. The present study was initially undertaken to assess the molecular nature of the NDV genotype circulating in cattle egrets and house sparrows associated with recent outbreaks in poultry flocks in Kafrelsheikh Governorate, Egypt. Ethical Considerations Ethical approval was obtained from the Research, Publication and Ethics Committee of the Faculty of Veterinary Medicine, Kafrelsheikh University, Egypt. The research complied with all relevant Egyptian legislation. The ethical approval number is KFS 2017/3. Research and all experimental procedures were performed in accordance with the pertinent guidelines concerning animal handling, following international and national guidelines for animal care and welfare. Study Area, Sample Collection, and Sample Preparation The study was conducted from October 2017 to October 2019 (Table 1). Fifty cattle egrets and fifty house sparrows (total N = 100) were sampled from the vicinity of poultry farms with a history of NDV infection in El-Hamoul, Kafrelsheikh, Balteem, and Motobas cities, Kafrelsheikh Governorate, Egypt. The birds were trapped alive overnight, using nets, from the trees around poultry farms suspected to be infected with velogenic NDV. The wild birds were euthanized and slaughtered humanely after intravenous injection of diazepam tranquilizer (2.5 mg/kg) to reduce stress, as previously described [37]. Birds were clinically examined, and clinical signs and post-mortem changes were recorded. Lung, spleen, and brain tissue samples were aseptically collected and pooled from each bird. Tissue samples from the lung, spleen, and brain were also pooled from three healthy chickens, confirmed to be NDV-free by RT-PCR, as negative control samples. Samples from the wild birds and negative control samples were homogenized in phosphate-buffered saline (pH 7.2) with an antibiotic mixture (50 IU/mL penicillin and 50 µg/mL streptomycin) and mycostatin as an antifungal (50 mg/mL). 
Tissue homogenates were then centrifuged at 2000 rpm for 10 min, and the clarified supernatants were collected and stored at −80 °C until further use in virus isolation and viral RNA extraction [38]. The wild birds were confirmed, using real-time reverse transcriptase polymerase chain reaction (RRT-PCR), to be free from other infectious agents causing diarrhea, such as avian influenza, infectious bursal disease virus, and salmonellosis. Standard NDV The standard velogenic mans1 NDV strain (GenBank accession no. MN537832) from a previous study was used as a positive control sample in RRT-PCR and RT-PCR [39]. This velogenic mans1 NDV strain was a field strain isolated from 45-day-old broiler chickens from the Dakahalia Governorate, Egypt [39]. Viral RNA Extraction Viral RNA was extracted from the supernatant fluids of homogenized pooled lung, spleen, and brain samples collected from cattle egrets and house sparrows, and from positive and negative control samples, using commercial kits (QIAamp® MinElute® Virus Spin Kit; QIAGEN GmbH, Hilden, Germany) in accordance with the manufacturer's guidelines. The extracted RNA was then stored at −80 °C until further use. Real-Time Reverse Transcriptase Polymerase Chain Reaction QuantiTect Probe RT-PCR Master Mix (QIAGEN, Qiagen Str. 1, 40724 Hilden, Germany) was used for RRT-PCR amplification of the NDV F gene fragment (101 bp) from the extracted RNA in accordance with the kit guidelines. RRT-PCR amplification of velogenic and mesogenic strains of the NDV F gene fragment was conducted using a previously reported set of primers, as shown in Table 2 [40]. The RRT-PCR amplification was performed in a final volume of 25 µL, containing 7 µL of RNA template, 12.5 µL of 2× QuantiTect Probe RT-PCR Master Mix, 3.625 µL of PCR-grade water, 0.25 µL (50 pmol) of each primer (F+4839 and F-4939), 0.125 µL (30 pmol) of the probe (F+4894 (VFP-1)), and 0.25 µL of QuantiTect RT Mix. A Stratagene MX3005P real-time PCR machine was adjusted to 50 °C for 30 min (reverse transcription) and then 94 °C for 15 min (primary denaturation), followed by 40 cycles of denaturation at 94 °C for 15 s, annealing at 52 °C for 30 s, and extension at 72 °C for 10 s. Table 2. Details of the two sets of primers and probes used for amplification of the NDV gene fragment in real-time reverse transcriptase polymerase chain reaction (RRT-PCR) and RT-PCR. Reverse Transcriptase Polymerase Chain Reaction One-step RT-PCR kits (QIAGEN, Hilden, Germany) were used for RT-PCR amplification of the virulent NDV F gene fragment (400 bp) from the extracted RNA in accordance with the kit guidelines. RT-PCR amplification of the virulent NDV F gene fragment was conducted using a previously reported set of primers (Figure 1 and Table 2) [41]. The 50 µL reaction mixture consisted of 10.0 µL of 5× QIAGEN OneStep RT-PCR Buffer, 1 µL (10 pmol) of each primer (NDV-F330 and NDV-R700), 2 µL of deoxyribonucleotide triphosphate (dNTP) mix (10 mM of each dNTP), 2 µL of QIAGEN OneStep RT-PCR Enzyme Mix, 5 µL of extracted RNA, and RNase-free water up to 50 µL. The PCR protocol was performed on a T3 Biometra thermal cycler (Germany) as follows: a single cycle of initial denaturation at 94 °C for 2 min, followed by 40 cycles of denaturation at 95 °C for 30 s and annealing at 50 °C for 45 s, and the reaction was completed by a final extension at 72 °C for 1 min, with a final incubation step at 72 °C for 10 min.
After amplification, 5 µL of PCR products were analyzed by gel electrophoresis (100 volts for 40 min) in 1.5% agarose gel in 0.5 X Tris-Borate Ethylenediaminetetraacetic acid (EDTA) buffer with 0.5 µg/mL of ethidium bromide, against a 100 bp DNA ladder (Jena Bioscience, Germany), after which the DNA bands were visualized with a UV transilluminator. Sequencing and Phylogenetic Analysis of the Selected Samples The RT-PCR products of three selected samples (sharp bands) were excised from the gel, and their DNA was purified with QIAquick PCR gel purification kits (QIAGEN, Valencia, CA, USA) in accordance with the manufacturer's guidelines. The purified DNA from PCR products of the selected samples was sequenced using the Sanger method, using Seqscape ® software for raw data analysis. The nucleotide sequences were then placed in GenBank (http://www.ncbi.nlm.nih.gov/Genbank accessed on 12 December 2020) with accession numbers MT878465 (KFS-Elhamoul-1 strain), MT878466 (KFS-Motobas-2 strain), and MT878467 (KFS-Elhamoul-3 strain), as shown in Table 3. ClustalW2 (https: //www.ebi.ac.uk/Tools/msa/clustalw2/ accessed on 12 December 2020) was used for the analysis of the sequences. The output alignment files were used for phylogenic maximumlikelihood analysis, with 1000 repeat bootstrap tests in MEGA X software [42]. The obtained nucleotide and deduced amino acid sequences were aligned with other sequences from GenBank using the Clustal W algorithm of BioEdit software Version 7.1, with the Damietta6 strain as a reference strain [43]. One-day-old commercial Cobb 500 ® chicks (n = 160) were purchased from a certified local commercial hatchery and housed in separate pens with all appropriate biosecurity restrictions. Pens were physically separated from each other to avoid transmission of the infection between groups. Water and food were provided ad libitum throughout the experimental period. The chicks were kept for 28 days without any medication or vaccination. All chicks were bred according to the experimental animal care and welfare guidelines of the Animal Health Research Institute, Kafrelsheikh, Egypt. The 28-day-old chickens were used for assessment of the pathogenicity of the strains with genotype VII.1.1 originating in wild birds. Virus Propagation in Embryonated Chicken Eggs Specific-pathogen-free 10-day-old embryonated chicken eggs were obtained from an Egyptian SPF egg production farm (Nile SPF), Fayoum, Egypt. About 0.2 mL of supernatant fluid from each sample (KFS-Motobas-2, KFS-Elhamoul-1, and KFS-Elhamoul-3 strains) and a negative control sample from normal healthy chickens were inoculated into the allantoic cavity of the SPF-ECEs (five ECEs per sample) for three successive passages. Inoculated eggs were incubated at 37 • C for 5 days; dead embryos at 24 h post-inoculation (PI) were eliminated. Eggs with dead embryos 2-5 days PI were collected and examined for embryonic lesions, and their allantoic fluids were collected. Live embryos 5 days PI were preserved at 4 • C for 12 h, and then the allantoic fluids from the dead embryos 2 to 6 days PI were collected and used for further egg passage. The allantoic fluids collected from the third egg passage were preserved at −80 • C until use in virus titration and experimental infection of susceptible chickens [44]. Experimental Design Chickens used for experimental design were divided into four groups (G1-G4) of 40 birds each at 28 days old. Each group was kept in a separate pen with strict biosecurity measures to avoid cross infection. 
Taking into account that intramuscular inoculation does not mimic the natural route of exposure, groups G1-G3 were experimentally infected with the KFS-Motobas-2, KFS-Elhamoul-1, and KFS-Elhamoul-3 sub-genotype VII.1.1 NDV strains, respectively, by intramuscular inoculation (dose = 10^6 EID50/mL). Birds in G4 were inoculated intramuscularly with negative control samples of allantoic fluid from the third egg passage [8,46]. Infected chickens were observed daily for 10 days post-infection (dpi), and mortality, clinical signs, and pathological lesions were recorded. At 3, 7, and 10 dpi, three birds were randomly selected from each group and euthanized, and lung, spleen, and brain tissue samples were aseptically collected for NDV detection using RRT-PCR and RT-PCR, as previously described [40,41]. At the fifth dpi, lung, spleen, and brain tissue samples were aseptically collected from freshly dead and/or euthanized birds (three birds per group) and fixed in 10% neutral formalin for histopathological examination. Histopathological Examination Formalin-fixed lung, spleen, and brain tissue samples from experimentally infected chickens were dehydrated and embedded in paraffin wax. Tissue sections (5 µm) were then de-paraffinized, stained with hematoxylin and eosin stain, and microscopically examined [47]. Clinical Signs and Post-Mortem Lesions In the present study, most of the clinically examined cattle egrets and house sparrows were apparently healthy with no clinical signs. However, four cattle egrets and three house sparrows showed ruffled feathers, with three cattle egrets and two house sparrows also showing whitish-green diarrhea. The post-mortem examination of euthanized birds showed enteritis, whitish-green intestinal contents, and cloudiness of the air sacs in a few birds. Real-Time Reverse Transcriptase Polymerase Chain Reaction Of the 50 cattle egrets and 50 house sparrows subjected to RRT-PCR, 22 cattle egret samples (44%) and 13 house sparrow samples (26%) were positive, with cycle threshold (Ct) values ranging from 15.45 to 39.65 (Table 3). Reverse Transcriptase Polymerase Chain Reaction Conventional RT-PCR was used to amplify a 370 bp fragment of the NDV F gene from 18 samples (36%) from cattle egrets, 10 samples (20%) from house sparrows, and a positive control sample. The other 32 samples (64%) from cattle egrets, 40 samples (80%) from house sparrows, and a negative control sample showed no band at 370 bp (Table 3). The BioEdit nucleotide and deduced amino acid sequences, aligned with the Damietta6 strain as a reference strain, revealed that the three strains identified in this study had two common nucleotide substitutions (G609A and T675C). The KFS-Elhamoul-1 and KFS-Elhamoul-3 strains showed two nucleotide substitutions (T523C and A540G), while KFS-Motobas-2 had a single nucleotide substitution (T634C). All of these nucleotide substitutions were silent, without any amino acid substitutions (Figures 3 and 4). Virus Propagation in Embryonated Chicken Eggs KFS-Motobas-2, KFS-Elhamoul-1, and KFS-Elhamoul-3 strains were propagated in 10-day-old ECEs.
Inoculated embryos were dwarfed and congested, with sub-cutaneous hemorrhages in the head, legs, and the back area and had edema, abnormal feathering, and gelatinous material on the skin. The embryos' mortality was recorded at 72 h PI (first and second passages) and 48 h into the third passage (Figure 5). Clinical Signs and Post-Mortem Changes in Experimentally Infected Chickens Twenty-eight-day-old Cobb chickens in groups G1 (KFS-Motobas-2), G2 (KFS-Elhamoul-1), and G3 (KFS-Elhamoul-3), together with G4 (negative control group) were clinically examined daily for any clinical signs, mortality, or post-mortem changes in dead birds. Clinical signs started to appear at the third dpi as ruffled feathers, depression, and decrease in feed and water intake, with mortality in some birds. Severe depression was observed during the fourth dpi, with anorexia, greenish diarrhea, a swollen head, and respiratory signs, with a high mortality rate (Table 4). Nervous signs appeared at the fifth dpi, with other, previously mentioned signs. These signs remained until the end of the experiment (tenth dpi). Mortality rates were 80%, 67.5%, and 62.5% in groups G1, G2, and G3, respectively, while group G4 showed no mortality.
The post-mortem examination of dead birds showed hemorrhagic tracheitis, lung congestion, enlarged hemorrhagic cecal tonsils, enlarged spleen, hemorrhages on the proventricular gland tips with greenish proventricular contents (Figure 6A), and a hemorrhagic intestinal serosal surface with greenish intestinal contents (Figure 6B). NDV was successfully detected by RT-PCR in all tested samples at 3, 7, and 10 dpi from groups G1, G2, and G3, while group G4 was negative (Table 5). Histopathological Examination Histopathological examination of the lung, spleen, and brain tissue samples collected at the fifth dpi from groups G1, G2, and G3 revealed that the affected lungs showed congestion of blood capillaries and bronchial obstruction attributed to the infiltration of peribronchial inflammatory cells (Figure 7B), with marked endodermal hyperplasia around the parabronchi associated with obvious inflammatory cell infiltration (Figure 7C). Focal pneumonia associated with infiltration of inflammatory cells was also observed (Figure 7D), with mild congestion and mostly patent bronchi and air capillaries (Figure 7E) and mild endodermal hyperplasia with an increase in the functional respiratory spaces (Figure 7F). The spleen of experimentally infected Cobb chickens at 5 dpi showed marked lymphoid depletion associated with marked histiocytic cell proliferation in group G1 (Figure 8B) and marked histiocytic cell proliferation in group G2 (Figure 8C), and normal lymphoid nodules were also observed (Figure 8D). Moreover, increased lymphoid cell proliferation within the white pulp was also observed (Figure 8E,F). The brain of experimentally infected Cobb chickens at 5 dpi showed spongiosis of nerve fibers, with diffuse and focal conglomerate aggregation of glial cells (Figure 9B) and gliosis associated with neuronophagia (Figure 9C). Ischemic neuronal injury was also observed, with marked neuronal tigrolysis (Figure 9D-F). (Figure 7, panels B-F: lungs of experimentally infected birds in G1-G3 showing peribronchial inflammatory cell infiltration, marked endodermal hyperplasia around the parabronchi, focal pneumonia, mild congestion with mostly patent bronchi and air capillaries, and mild endodermal hyperplasia with increased functional respiratory spaces; hematoxylin and eosin, ×200.) Discussion ND is an economically devastating viral disease that affects the poultry industry [5,24]. The disease is considered to be endemic in various areas of the world, such as Central and South America, Asia, the Middle East, and Africa [4]. Wild birds play a critical role in the evolution of NDV [48]. Clearly, surveillance of NDV in wild birds is important to reduce the risk of possible spreading of NDV to poultry flocks. The present work characterized NDV in wild cattle egrets and house sparrows.
The work involved the molecular characterization and phylogenetic analysis of the NDV strains circulating in wild birds collected from Kafrelsheikh Governorate, Egypt. The study also included pathogenicity testing of the isolates in chickens, followed by histopathological examination and molecular identification of the detected virus for verification of the results. Several previous reports have documented the role played by different species of wild birds, including cattle egrets (Bubulcus ibis) and house sparrows (Passer domesticus), migratory waterfowl, and other aquatic birds, in the transmission of different strains of NDV [21,28,49-59]. A review of this literature identified the possible release of highly virulent viruses into poultry or wild birds and the existence of epidemiological links between field isolates [20,59,60]. At the national level, a previous study reported and identified two velogenic viscerotropic pathotypes and one mesogenic pathotype in cattle egrets in the EL-Marg area in Cairo, Egypt [61]. NDV was also characterized in sparrows in several previous reports [49,62,63]. Many efforts have been made over the past decades to develop efficient diagnostic methods, such as virus isolation in embryonated chicken eggs and conventional serological methods using enzyme-linked immunosorbent assay, hemagglutination (HA), and hemagglutination inhibition (HI) tests [5,59,60,62-64], but a high incidence of false-positive results and low sensitivity have been reported with these methods [5,64-66]. In addition, virus isolation from oropharyngeal or cloacal swabs or tissues from infected birds has been used, but the method is tedious and time consuming [59,67]. PCR-based assays targeting the amplification of a specific region of the NDV genome offer many advantages for identification of the virus, in addition to their important role in differentiating the large number of virus strains [5,52,68]. Amplification of the NDV F gene using RT-PCR is usually used for NDV detection, and the resulting PCR product can be used for assessment of the virulence of NDV [69]. In the present study, RRT-PCR detected virulent NDV in 22 of 50 cattle egret samples (44%) and 13 of 50 house sparrow samples (26%), while conventional RT-PCR amplified a 370 bp fragment of the NDV F gene from 18 samples (36%) of cattle egrets and 10 samples (20%) of house sparrows. A previous study in Egypt detected NDV by RT-PCR in 3.6% (4/112) of tested tracheal and cloacal samples [70]. Schelling et al. [71] reported that RT-PCR failed to amplify the NDV RNA extracted from cloacal swabs of 115 different wild bird species. NDV genotype VII has caused fatal infections in susceptible birds and is thought to be responsible for the fourth major NDV panzootic worldwide [21]. Analysis of GVII subtypes in the present study revealed that they are highly divergent. We found a 10.7% pairwise distance (89.3% identity) between the Namibia-5620 (GVII.2) strain and the Egyptian GVII.1.1 strains (KFS-Elhamoul-1, KFS-Elhamoul-3, MN51, and MR84). These results were supported by Xue et al. [72], who concluded that NDV genotype VII is the most predominant genotype worldwide, with complex genetic diversity. The three strains identified in the present study (KFS-Motobas-2, KFS-Elhamoul-1, and KFS-Elhamoul-3) were clustered with other Egyptian strains in sub-genotype VII.1.1 (formerly GVIId). The present findings are also consistent with those of Kim et al.
[73], who mentioned that NDV genotype VII is the prevalent genotype in the Middle East and that most NDV isolates from wild birds are aligned to this genotype. Another previous study concluded that sub-genotype VII.1.1 is the predominant NDV sub-genotype, causing several outbreaks in Egypt [23]. Dimitrov et al. [11] reported that the viruses responsible for the fourth NDV panzootic were grouped together into a single genotype (VII.1.1). Despite the geographical separation of the hosts, and although our study did not analyze the genetic relatedness of the identified strains to poultry-origin strains, the close antigenic and genetic relationship among the isolated NDV strains may reflect the possible role of wild and migratory birds in maintaining the transmission cycle of the disease [24,28,74]. As mentioned above, ND is an acute contagious disease affecting birds of all ages [3]. As shown in our work, the wild birds were apparently healthy, but a few birds showed ruffled feathers with whitish-green diarrhea, while post-mortem examination showed enteritis, whitish-green intestinal contents, and cloudiness of the air sacs. The present clinical findings are consistent with previous data reporting respiratory, gastrointestinal, circulatory, and nervous signs in infected chickens [75,76]. The clinical signs might vary depending on several factors, such as the pathogenicity of the virus; host factors such as age, species, and immune status; the infectious dose, duration, and extent of exposure; and concurrent infections [77]. In the present work, the pathogenicity of the sub-genotype VII.1.1 NDV strains from wild birds was assessed through experimental inoculation of the identified strains in 28-day-old Cobb chickens. Specific clinical signs and post-mortem changes of velogenic NDV genotype VII were observed in inoculated chickens 3 to 7 days PI, with 62.5-80% mortality rates. NDV was successfully detected by RRT-PCR and RT-PCR in all NDV-inoculated chickens at 3, 7, and 10 dpi. These results are consistent with those of a previous study in Egypt in which NDV was detected in cloacal swabs from chickens challenged with cattle-egret-origin NDV 4-10 dpi [70]. In the same study [70], the authors revealed that the NDV signs that started to appear on the fourth dpi in chickens challenged with cattle-egret-origin NDV were anorexia, depression, mild respiratory sounds, ocular/nasal discharges, and severe neurological disorders, with 100% mortality. Histopathological examination of the experimentally infected chickens in the present study revealed that the affected lungs showed congestion with mostly patent bronchi and air capillaries, the spleen exhibited increased lymphoid cell proliferation, and the brain showed severe ischemic neuronal injury. Similarly, several previous reports have documented microscopic pictures of chickens challenged with cattle-egret-origin NDV that revealed severe histopathological changes in the lungs, such as congested blood vessels, pneumonia, focal pulmonary hemorrhage, and mononuclear infiltration of the air capillaries [70,78]. The spleen showed marked depletion with fibrinoid and lymphocytic necrosis, while the brain exhibited congested blood vessels, neuronal edema, and necrotic neurons, with neuronophagia, consistent with several previous reports [78,79].
Conclusions The present study reinforced the importance of the combined use of molecular methods and pathogenicity testing for the characterization and identification of the major circulating strains of NDV in wild birds and for assessing their genetic relatedness. Given the economic importance of the poultry industry, the present findings reveal the necessity of applying stricter hygienic measures and management practices in the poultry industry to prevent contact between wild birds and poultry flocks, in order to avoid possible spreading of the infection. Our data suggest that future research should compare NDV from wild birds and poultry flocks. Obtaining this information would help to better understand the epidemiological pattern and transmission dynamics and, consequently, to combat this viral disease.
A Long-Term Evaluation on Transmission Line Expansion Planning with Multistage Stochastic Programming The purpose of this paper is to apply multistage stochastic programming to the transmission line expansion planning problem, especially when uncertain demand scenarios exist. Since the problem of transmission line expansion planning requires an intensive computational load, dual decomposition is used to decompose the problem into smaller problems. Following this, progressive hedging and proximal bundle methods are used to restore the decomposed solutions to the original problems. Mixed-integer linear programming is involved in the problem to decide where new transmission lines should be constructed or reinforced. However, integer variables in multistage stochastic programming (MSSP) are intractable since integer variables are not restored. Therefore, the branch-and-bound algorithm is applied to multistage stochastic programming methods to force convergence of integer variables.In addition, this paper suggests combining progressive hedging and dual decomposition in stochastic integer programming by sharing penalty parameters. The simulation results tested on the IEEE 30-bus system verify that our combined model sped up the computation and achieved higher accuracy by achieving the minimised cost. Introduction Development of renewable energy has been expedited by the global effort to reduce greenhouse gas emissions. These include emissions from fossil fuels used in transportation, which are declining as electric vehicle use increases [1][2][3]. The global move to electric vehicles and new facilities to support renewable resources now requires much higher electricity consumption and generation capability. If transmission line capacity fails to catch up to the growth of them, the failure will cascade and cause shortages in the grid requiring expensive repairs for the power system network [4]. Transmission line expansion planning (TLEP) has been suggested to avoid future shortages. Economic effects of probable transmission line failures are compared against the investment budget spent on new transmission lines [5]. The TLEP is usually implemented by system planners to analyse old transmission lines that would be uneconomical in a long-term evaluation of the growth of electricity demand [6]. In the evaluation process, the planner decides locations for a new construction from the present point of view to optimise the future cost. Thus, optimal decisions for the planner are conveyed by considering probable uncertain scenarios of the growth of demand and representing the most suitable candidate among all possible realisations. To find optimal expansions in the TLEP process, the planner reconciles system reliability and energy economy over a long-term time horizon. The typical optimisation formulation of TLEP determines a minimised cost for organising transmission lines and operating generators. In this regard, the investment budget for expansions can be found by optimising the use of resources. Considering more detailed scenarios of future demand usually helps planners to mitigate this uncertainty and yields lower costs than only considering the worst-case [7]. Many stochastic optimisation processes such as the robust method, the chance constraint method and multistage stochastic programming (MSSP) have been developed to account for uncertainty in the power system [8][9][10]. 
Typically, they predict the impact of future events by calculating the value of their possibility as costs of the objective function and compare them to the current decisions. MSSP shows an outstanding performance among the three methods but usually requires many scenarios and variables depending on time horizons [11]. Considering many scenarios and variables helps to verify results, but the size of the computational load becomes intractable. Therefore, decomposition methods are used to break the original problem into several smaller problems, called subproblems. However, the decomposition is followed by the coupling constraint added between separated subproblems. The constraints are treated as a price for solving them independently so that one subproblem can consider the others' decisions as they should pay for not having a unified value [12]. Meanwhile, there are two types of decomposition methods to consider for uncertain electricity demands in MSSP. One is the primal decomposition, represented by Bender's method and the L-shape method [13], which divides the problem into two parts: the master and the slave problem. Yet, it is hard to calculate more than three stages, since too many master and slave pairs are generated as the time span is extended, so it is not appropriate for a long-term evaluation process. The second, dual decomposition (DD), horizontally separates the scenario tree by its time horizon, so that it is inherently possible to consider a long-term horizon since subproblems are sufficiently smaller than the original. Instead, subproblems for the DD method could generate inconsistent results of subproblems since they also separate decision variables, thus the coupling constraint is added to obtain the unified solution. The coupling constraint is defined as a nonanticipativity constraint (NC) in DD methods, which explains the indistinguishable state of variables in which all separations should be converged to the one nodal solution. However, optimality of expansions is not achieved due to the independence of subproblems when integer variables are decomposed. Indeed, some decision variables used to construct a new line in the TLEP problem are comprised by integers to define a disjunctive decision; therefore, the system planner cannot decide on the construction when results diverge. To solve the variable convergence issue, various DD methods have been developed and a few of them are adopted in this paper. Progressive hedging (PH) can transform the NC into a Lagrangian function then heuristically approach the optimal based on the proximal operator and augmented Lagrangian [14,15]. PH can quickly obtain a reliable solution, but its convergence is not guaranteed when integer variables are involved, and the integrated solution could be infeasible if the convergence rate is not sufficient. A generalised process for the dual decomposition for stochastic integer programming (DDSIP) in [16] was developed to consider integer variables, and it solves the Lagrangian dual problem of the NC [16]. Dual variables for the NC are obtained by the proximal bundle (PB) method [17], which determines a delegate among candidates from subproblems only if it is plausible. However, it is slow because of the branch-and-bound (BB) process, which bounds constraints for integer variables and usually takes a long time to calculate branched cases. Recent studies have consistently reported improved DD methods, and numerical results have proved reliable. 
Studies in [12,18] mitigated the penalty of the NC by formulating it in various ways, which verified that a better matrix formulation of the NC can improve the result. Studies in [19,20] suggested methods to transform dual functions. Since the NC can be relaxed as a dual function, better objective bounds could be obtained in those studies by reversing the approach to the formulation to solve dual variables. Studies in [21][22][23] developed methods to support the heuristic approach inherent in DD methods to achieve high convergence speed and accuracy. However, those improvements were not properly implemented in practical studies of power system. The TLEP in [24], where a long-term scenario for transmission lines was decomposed and bundled according to its similarity, also in [8,25], where the DD was introduced to make generation resource schedules with many integer variables, could have been assessed better if they adjusted their DD methods according to upper and lower bounds of results and the consistency of the results was verified over a long-term time span if methods were generalised regardless of the program size. Numerical results reported in [26,27] achieved the cost-effectiveness of plans through the methodological improvement, enhancing the convergence of stochastic variables. In this paper, a long-term evaluation problem by MSSP is proposed for the TLEP. Typical TLEP formulations are used, where decisions on new construction and reinforcement of existing transmission lines can be made by considering DC optimal power flow (OPF). However, we extend the problem in a form of MSSP to involve 30 years (with six stages) in the plan. As the time stage increases, binary variables devastate the solving process to obtain unified results and slow down the process. By adjusting DD methods in various cases studies with different sizes, we verify that stochastic variables can deliver varying convergence and expansion results with respect to algorithm. With the analysis on those spectral outputs, general forms of stochastic optimisation of DD methods are examined, and the solving process is unfolded to construct a generalised optimisation environment on varying sizes. We combine the PH and PB methods where the penalty value in PH is concatenated as a warm-start value of the PB method since they have an equal target value. Exchanging the penalty value could speed up the medium-iteration convergence and improve the quality of the solution. This contribution can make a combined method less affected by the program size. The expansion results are compared in three aspects: simulation time, nonanticipativity and objective cost. The nonanticipativity results are delivered to rectify and classify the failed results. Thus, the method termination and the feasibility are separately considered in a scope on the results. Moreover, with modified IEEE 30-bus system, the overall optimisation formulation and expansion results are given over a 30-year time span. We observed that a methodological improvement enables cost-effective evaluation, finding the best with the coherency among the feasible candidates obtained by improved boundaries of objective functions for the NC. The organisation of this paper is summarised as follows. In Section 3, the overall process for solving MSSP with mixed-integer variables is introduced. In Section 2, TLEP problems are modelled and are assessed from the viewpoints of economy and reliability. In Section 4, the numerical results of the TLEP are presented and analysed. 
In Section 5, conclusions are drawn. Transmission Line Expansion Planning Modelling In this subsection, we introduce our multistaged transmission line expansion planning formulation. Objective Function of TLEP The formulation represents one subproblem, where stage is allocated to a five-year investment period. Symbols used to formulate the TLEP problem and their descriptions are presented in Table 1. The objective function of TLEP is structured as follows: In (1), we take into account the following terms in our objective functions as a view of power system planner: transmission line expansion cost (investment cost), power generation cost (operation cost) and the cost of probable load curtailments due to single line outage (load shedding cost). The investment cost is further divided into new construction cost and reinforcement cost of existing transmission lines. We only consider peak load scenarios. K c/r is a unit costs' vector of line investments, which is the hourly long-run marginal cost (LRMC) of transmission line capacity. Its dimension is $/MWh, which balances the finance scale with operation and investment cost. The investment cost is calculated by dividing annuitized LRMC by the number of hours in a year. For this parameter, discount rates can be utilised [28], but in this paper we do not consider the present value of annual cost. K g/s is the unit cost of generation and outage, respectively, whose dimension is also $/MWh. Constraints of TLEP For the constraints of TLEP problems, first, a balance between power supply and demand should be obtained. This constraint can be represented as follows: In (2), the sum generator outputs PG and load shedding amount PS should be equal to the sum of power flow f whose direction is out of each bus, and power load PD. Second, constraints on expansion decision u are enumerated as follows: In (3)-(5), the DC-OPF constraints are considered. Power flows f are defined by the phase angles of the buses θ and the reactance of the transmission lines X. In (4) and (5), the power flow of newly constructed lines is constrained by a disjunctive decision u with a large number Q. In this constraint, power flow of newly constructed lines and expansion decisions are linked. In (6) and (7), power flows in all existing lines and expansion candidates abide by the maximum capacity of line loading. This capacity can be increased depending on expansion decisions. In (8) and (9), construction decisions are expressed as binary variables and reinforcement decisions are expressed as integer variables. Finally, in (10) and (11), the sum of expansion is constrained by one, meaning transmission lines can only undergo one construction event. Constraints on the generator output PG and load shedding amount PS are enumerated as: In (12), the maximum capacity and minimum generation output is considered to decide generation output. In (13), the load shedding amount of each bus should be within the load of each bus. Finally, in (14), the generator's cost function is simplified into piece-wise linear functions for constant a and b with H pieces. Replaced transmission lines include old lines whose regular lifetimes or life expectancies are less than their initial operation periods plus the panning time horizon. Decomposition Method Formulations In this section, we introduce two dual decomposition (DD) methods, PH and DDSIP, and their theoretical backgrounds. PH and PB methods propose a pathway to the optimal solution, recursively updating the decisions. 
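Before the decomposition machinery is detailed, it may help to see the shape of the single-stage model just described. The following is a minimal LaTeX sketch of a standard DC-OPF-based expansion core consistent with the constraints discussed above (power balance, DC flow, big-M linking of candidate-line flows to the construction decision u, and capacity limits). The notation is generic and illustrative, it is not a reproduction of the paper's numbered equations (1)-(14), and reinforcement decisions (integer in the paper) are omitted for brevity.

% Generic DC-OPF-based TLEP core (illustrative notation, not the paper's exact equations)
\begin{align}
\min_{u,\,f,\,\theta,\,P_G,\,P_S}\quad
  & K_{c}^{\top}u \;+\; K_{g}^{\top}P_{G} \;+\; K_{s}^{\top}P_{S} \\
\text{s.t.}\quad
  & \sum_{g\in G(i)} P_{G,g} + P_{S,i} \;=\; \sum_{\ell\in\delta(i)} f_{\ell} + P_{D,i}
    && \forall i \;\text{(power balance)} \\
  & f_{\ell} \;=\; \tfrac{1}{X_{\ell}}\,(\theta_{i}-\theta_{j})
    && \forall \ell \in \mathcal{L}^{\mathrm{exist}} \\
  & \bigl|\,f_{\ell} - \tfrac{1}{X_{\ell}}(\theta_{i}-\theta_{j})\,\bigr| \;\le\; Q\,(1-u_{\ell})
    && \forall \ell \in \mathcal{L}^{\mathrm{cand}} \\
  & |f_{\ell}| \;\le\; \bar{f}_{\ell} + \Delta\bar{f}_{\ell}\,u_{\ell},\qquad
    0 \le P_{G} \le \bar{P}_{G},\qquad
    0 \le P_{S} \le P_{D},\qquad
    u_{\ell}\in\{0,1\}
\end{align}

Each scenario subproblem of the multistage formulation repeats such a block for every stage with its own demand P_D, which is what makes the coupled problem large and motivates the decomposition described next.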
The BB process enforces the convergence of integer variables. Basic Form of MSSP and Nonanticipativity Constraint The most intuitive and intrinsic approach to represent uncertainty in possible realisations is to formulate a finite number of scenarios with relative probabilities. This is the basic form of the MSSP called the extensive-form (EF) problem and requires the scenario tree which is a set for bundles of scenarios. A scenario tree is structured by branching to possible realisations from a precedent scenario, which gives conditional probability for its succeeding scenario, as depicted in Figure 1 (left). The basic form of the MSSP problem for the scenario tree that structured over the time horizon T stages with decision variables x = [x 1 , x 2 , ..., x T ] T can be defined as follows: In (15), a simplified form of MSSP problem for TLEP is given, where f (x) is the objective function, and G is a simple expression of a set of all constraints involved in the problem for x t . The operator E[·] represents expectation of costs over related scenarios, and the scenario tree Ξ consists of all nodes of scenarios ξ t , such that ξ t ∈ Ξ. The objective function and constraints in MSSP can be decomposed into subproblems. Therefore, f (x s ) and G s represent the objective function in (1) and constraints (2)- (14), which can be extended to the multistage problem by the time span T. In (16), the representative formulation of DD is presented, where x s is set of variables for each scenario subproblem, and ξ s is the corresponding scenario, and separated variables x s is equal to the node solutionx, which is the NC. The structure of scenario tree and subproblems are presented in Figure 1, where DD can divide the scenario tree horizontally so the number of subproblems is determined by the number of branches. In the same figure, if the NC forx b , which is the solution of node b is satisfied in the corresponding subproblems, then it is also satisfied for the second stage variable in subproblems s 1 and s 2 , such that x s 1 ,2nd = x s 2 ,2nd =x b . However, NC cannot be presupposed since x s is the optimal solution of MSSP. DD methods add a relaxed form of NC in the objective function in (16) to mitigate the constraint. The Lagrangian function in (17) can be solved for PH and PB methods by using heuristic approaches as follows: 1. The PH designates implementablex by the projection but derives λ S proximally. 2. The PB solves the dual problem of (17) to get a representative for λ s and verify its degree of improvement. Progressive Hedging Method We provide formulations of the PH from the Lagrangian function. Independent scenario projection to the nonanticipativity solution space is assumed in the PH. The projection operator averages the subproblem solutions by considering probabilities of scenarios. The projected solution can be regarded as optimal when the NC is satisfied for all nodes in the scenario tree. An implementable solutionx can be generated by the projection. The assumption in (18) that the optimal is the average of subproblem solutions inherently makes the NC solution space N . The solutionx ∈ N deduces the direction toward the optimal solution, and it is aligned by the probabilities ρ s . Hence, the alignment to N is possible just once so that Proj 2 = Proj and then Equation (19) is satisfied. Decision variables in PH are updated as follows. First, in the middle of iterative process of the PH, subproblems untied from the scenario tree are solved and saved in the subproblem space S. 
Second, node solutions are updated by the quadratic formulation of the augmented Lagrangian. Each subproblem in (17) is transformed to be quadratic since a penalty constant ω of the NC is updated by the proximal operator for subdifferential of the dual function, ∂ λ D(λ) = (x s −x), with step size γ [15]. To update the dual variable λ s , the PH uses constant penalties for the NC in (20), which represents the distance between S and N such that ω s = x s −x. The dual variables satisfy the NC in (21) because of (19), only if the initial value for penalties is zero, and make an even boundary for the dual variables as shown in Figure 2. In short, ω represents the NC penalties according to the projected solutions, and it is orthogonal to the Proj as in (21) such that S ⊥ N . Finally, the augmented Lagrangian is derived from iterative updates of penalties in (20) and the Lagrangian function in (17). The first order necessity condition to find the optimal solution for (22) yields the augmented Lagrangian as follows: The overall process is summarised in Algorithm 1. Algorithm 1 Progressive Hedging Algorithm Convergence strategy comparison between progressive hedging (left) and proximal bundle (right). Proximal Bundle Method DDSIP uses PB method to solve subproblems and bounds integer variables after PB method is terminated. The PB method solves the dual problem of (17) by sharing the dual variable λ with corresponding subproblems. The lower boundary (LB) of (17), which is z, can be updated by finding the better upper bounds of the D(λ). The NC in (24) is relaxed and added to the objective in (25), where H s is structured artificially for each scenario as studied in [16,29]. Therefore, dual variable λ is shared with corresponding scenarios in same nodes, assuming that the difference is indicated by H s respective to the same radius λ as depicted in Figure 2. The underlying problem becomes maximising objectives concerned with λ, andx is ignored as derived in the second line of (25). It should be noted that subproblems are still minimising objectives regarding x s as denoted in the third line of (25). The differential of D(λ) and its expected bounds z are used to update λ. We can derive (26) by differentiating (25) with λ. The z in (27) is decision variables for the LBs for the dual function, and its differential is decided by step size p. PB uses the stability centre pointλ to represent the implementable value of λ and it decides the better LB of (17), giving higher value of bounds with the smallest possible radius from the optimal point as depicted in Figure 2. Therefore, a new λ is obtained by maximising the following optimisation problem as follows: In (28), z s is maximised to satisfy its optimal condition ∂z − p k (λ −λ) ∈ 0, which is the maximisation of the objective with constraints for its cutting planes that are indicating the possible movement for z s with the differential of subproblems (26). During iterations, k ∈ [1, ..., k max ] , PB updatesλ only if the improvement is higher than the plausible minimum value. The differential of subproblems given in (29) represents the actual improvement of solutions with a new λ k . The difference between the feasible boundary z s and current boundary D(λ k−1 ) given in (30) represents the expected improvement of subproblem solutions with λ k . PB accepts new λ asλ when the improvement of the solution which can be confirmed by comparing subproblem solutions, ∂D(λ) to ∂z. 
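Before continuing with the serious-step test of the bundle method, it is worth making the progressive-hedging loop of Algorithm 1 concrete. The following is a minimal Python sketch of one PH pass. It assumes a user-supplied solve_subproblem(scenario, x_bar, omega, gamma) that minimises the scenario objective plus the linear penalty term and the proximal term (gamma/2)*||x - x_bar||^2; the solver, the variable layout, and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

def project_nonanticipative(x_by_scenario, prob):
    # Probability-weighted average of scenario solutions: the implementable
    # solution x_bar used as the nonanticipativity target (Eq. (18)-style step).
    scenarios = list(x_by_scenario)
    w = np.array([prob[s] for s in scenarios], dtype=float)
    stacked = np.stack([np.asarray(x_by_scenario[s]) for s in scenarios])
    return (w[:, None] * stacked).sum(axis=0) / w.sum()

def progressive_hedging(scenarios, prob, solve_subproblem, gamma=1.0,
                        max_iter=100, tol=1e-4):
    omega = {s: None for s in scenarios}   # per-scenario NC penalty vectors
    x_bar = None                           # None on the first pass: plain subproblems
    for _ in range(max_iter):
        # 1) Solve each scenario subproblem with the current penalty/proximal terms.
        x = {s: solve_subproblem(s, x_bar, omega[s], gamma) for s in scenarios}
        # 2) Projection step onto the nonanticipative space.
        x_bar = project_nonanticipative(x, prob)
        # 3) Penalty update: omega_s <- omega_s + gamma * (x_s - x_bar).
        for s in scenarios:
            step = gamma * (np.asarray(x[s]) - x_bar)
            omega[s] = step if omega[s] is None else omega[s] + step
        # 4) Terminate when all scenario solutions agree with the projection.
        if max(np.linalg.norm(np.asarray(x[s]) - x_bar) for s in scenarios) < tol:
            break
    return x_bar, omega

In a full implementation the projection is applied per node of the scenario tree, so only the stages shared by the scenarios branching from that node are averaged; and because convergence of the binary expansion decisions is not guaranteed, the branch-and-bound layer described below is still needed.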
If the improvement is higher than mL it will updateλ, or else it will go through the null step [29]. Thus, the stability centreλ can be updated if the actual improvement is larger than mL ∈ (0, 0.5] of the estimated improvement as given in (31). The convergence of variables is thus indicated when there is no expected improvement of the bounds of (27) such that ∂z ∈ 0. In each iteration, the step size p k is calculated according to results of (31) and (32) to smooth the heuristic searching process. When the actual improvement is too small, the step size decreases; on the other hand, when the actual improvement is sufficiently high with mR ∈ [0.5, 1) as given in (32), the step size increases. The step size is determined as follows: In (33), new step size for the next iteration h k is yielded by assuming that the actual improvement is the same as the expected improvement as follows: Therefore, p k+1 is adjusted by h k and initial configurations. In (35), the step size is decreased when (31) is satisfied, and in (36), the step size is decreased when (32) is satisfied. Note that h k ≤ u k when the factor mR ∈ [0.5, 1) as follows: The detailed process to update the step size is featured in [17], and the detailed process of the PB method is summarised in Algorithm 2. Branch and Bound in MSSP Solutions of MSSP given in Section 3.1 can be obtained by using the methods suggested in former subsections. However, the solution is infeasible if the integrality and the NC are not satisfied. The BB algorithm bounds unconverged variables with all corresponding subproblems. A search tree is generated to observe varying results of the bounding process for the NC and manages multiple scenario trees in it; the bounding process is not limited to the single MILP problem. Moreover, the gap between the true solution of (17) and the boundary of (25) is resolved by controlling the additional cost from BB constraints. Thus, the gap between the true solution of (17) and the boundary of (25) is resolved during the process of BB. First, after the process of Algorithm 2 is done, it generates a search tree where branches B are structured to find solutions with different conditions. Constraints added by the branching are applied to all corresponding subproblems. These constraints are added in addition to the original constraints G, so that G B includes G for all branches in the B. Second, to branch variables which violate integrality, it splits B into B and B by adding two disjunctive constraints. In (38) and (39), x t is a node variable for time stages t = [1, ..., T], and the projection in (18) is used to generate the integrated solutionx t . We can choose a node variable having the most fractional value. On the other hand, it is possible to branch over continuous variables where the NC is not satisfied. In (40) and (41), NC is a tolerance of the NC to make a disjunction. The measurement of violation can be used to choose the variable, e.g., the highest distance between subproblems. Termination of BB is determined when there is no need to branch further or no feasible solution. We can define a node as fathomed when the branching on this node is clearly meaningless. This node becomes a permanent leaf. The process will be terminated when all branches are fathomed. If all variables satisfy NC and the objective value is lower than the current best upper boundary (UB) and also feasible, the node is fathomed as the best UB. On the contrary, a node can be also fathomed if the solution is infeasible or is higher than the best UB. 
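The branching rules just outlined can be sketched in Python as follows: an integer-constrained node variable that is still fractional is split by flooring/ceiling its projected value, and otherwise the continuous variable with the largest spread across scenarios is split around the projected value. The branch representation as a dictionary of extra bounds, the tolerance handling, and the selection heuristics are illustrative assumptions rather than the paper's exact rules (38)-(41).

import math

def most_fractional_index(x_bar, integer_idx, tol=1e-6):
    # Pick the integer-constrained node variable whose projected value is
    # farthest from an integer; return None if all are (near-)integral.
    best, best_frac = None, tol
    for i in integer_idx:
        frac = abs(x_bar[i] - round(x_bar[i]))
        if frac > best_frac:
            best, best_frac = i, frac
    return best

def branch_node(node_bounds, x_bar, x_by_scenario, integer_idx, nc_tol=1e-4):
    # node_bounds maps ('lb', i) / ('ub', i) to bound values that are imposed
    # on node variable i in every subproblem belonging to this branch.
    i = most_fractional_index(x_bar, integer_idx)
    if i is not None:
        # Integer branching: x_i <= floor(x_bar_i) in one child,
        # x_i >= ceil(x_bar_i) in the other.
        left = {**node_bounds, ('ub', i): math.floor(x_bar[i])}
        right = {**node_bounds, ('lb', i): math.ceil(x_bar[i])}
        return left, right
    # Continuous branching on the largest nonanticipativity violation.
    spreads = {
        j: max(x[j] for x in x_by_scenario.values())
           - min(x[j] for x in x_by_scenario.values())
        for j in range(len(x_bar)) if j not in integer_idx
    }
    if not spreads:
        return None
    j = max(spreads, key=spreads.get)
    if spreads[j] <= nc_tol:
        return None   # nothing violates the NC: no further branching is needed
    # One plausible disjunction around the projected value.
    left = {**node_bounds, ('ub', j): x_bar[j]}
    right = {**node_bounds, ('lb', j): x_bar[j] + nc_tol}
    return left, right

Each child inherits all bounds of its parent, the added bounds are imposed on every corresponding subproblem, and the fathoming tests described in the text then decide whether a child becomes the new best upper bound, a candidate lower bound, or a permanent leaf.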
The best LB is designated by the lowest boundary among leaf nodes that is not fathomed, which makes it possible to have better results than the current best UB. Therefore, the BB algorithm can also be terminated when LBs closely approach the best UB. Since BB yields the higher boundary of (25), so that Z LB ends up with the same value of the UB or higher, it is obviously not required to branch on nodes that satisfy the inequality in (42). Finally, Z UB is accepted as the final solution of MSSP that obtains the condition of global optimality. The termination process can be summarised as follows: 1. The node would be "fathomed" if new bound D(λ) is higher than the current optimal or is infeasible. 2. The node would be updated as the Best UB if new bound D(λ) is lower than the current Best UB, and the NC is satisfied. 3. The node would be updated as a candidate for the Best LB if new bound D(λ) is lower than the current Best UB, but the NC is still not satisfied. The overall process of BB methods is detailed in Algorithm 3. Simulation Results Our simulation conditions are described, and the results are discussed in this section. The time horizon is considered up to 30 years with five years per stage; therefore, six stages are considered at maximum horizon. We use predicted data of economic growth and electrification rates to forecast the demand scenario tree, where both components are considering the electrical demand growth [30,31]. Test System Configuration Our test bed system contains 22 load buses with 41 existing lines and 6 generators, modified from the IEEE 30-bus network. In the scenario tree, we consider 2, 4, 6 stages with 10 way, 7 way, 5 way splits for demand scenarios, respectively. The split for the scenario tree is the number of branchings from a node in the tree to the succeeding nodes. Candidate lines for construction are marked with dotted lines in Figure 3 and featured in Table 2, so that the planner can construct lines, or else augment the capacity of existing lines. In Table 2, there are 12 candidate lines for the system network. Among candidates, there are 8 lines A-H (red) selected from the IEEE 30-bus network, and 4 lines I − L (blue) nearby bus 11, where additional load of 17 MW will be added. We assume that no more generation resources will be added, and the demand will increase every five years. That is, the number of generators is enough to cover expected demand loads, but power flows will not be delivered without expansions. Stopping Criteria Testing In this paper, estimations required to obtain valid results from the DD methods are reported through empirical results concerning the initial parameters of each method. Those parameters are associated with the intensity of the NC as the methods to deduce the optimal decision. We measure the method performance on the PH's step size γ, PB's NC matrix H and its step size p to verify the impact on the termination condition. Moreover, branching results from BB are reported. Progressive Hedging The step size γ represents the slope of the NC to update the penalty ω. A steep slope for the NC with a small γ takes less progressive steps for the gaps enclosing the optimal decisions. On the other hand, a gentle slope with a large γ more rigorously forces convergence. However, the simulation time does not directly follow the step size; in fact, subproblem solutions are more likely to cycle around the optimal solution when the step size increases. 
The cycling effect of decisions can cause a weak convergence rate that disturbs the integrated solution to satisfy the original constraints of the TLEP. As shown in Table 3, very small or large step sizes lead to expensive termination in our cases, requiring a lot of time to solve. Figure 4 depicts 4 cases with different γ values tested on the 4 stage 7 split TLEP problem. Cases with step size 1 and 5 successfully terminated, yet cases with 0.1 and 10 failed to reach the stopping criteria 0.0001 under the given 100 iteration limit. We observed that the simulation time to solve subproblems was longest using the largest step size and the objective function was not properly minimised. This case shows poor convergence rate with no further updates of the penalty for NC ( Table 3). The objective obtained by the smallest step size was better than others but failed to terminate. The convergence rate is not sufficient when the step size is too small or too large. Valid solutions were obtained with step size 1 and 5, however, the objective was not minimised compared to the value for step size 0.1. Hence, there was a weak convergence rate in later iterations, which is entailed by using a fixed step size and inevitable when the problem size increases. This clearly shows the requirement of parameter tunings to accelerate the process and obtain high-quality solutions, by continuously modifying the step size accordingly. However, it is difficult to designate the appropriate NC penalty for every decision variable. Proximal Bundle The measurements for the PB method are compared in Figure 5. Initial parameters described in Section 3.3 represent the step size p, which is proportional to the improvement of the dual function, and NC coefficient H s , which finds the gradient for cutting planes. We measured the variable convergence (nonanticipativity), simulation time, and the objective cost of the TLEP problem. Given parameters are described by per-unit values based on the objective cost and the stopping criteria was 0.0001. Given NC coefficient H s in Figure 5, different simulation results were observed according to the initial step size. When the initial step size is very low or H s is very large (grey), simulations take more steps with little improvements to the solution. Iterations wasted to draw redundant cutting planes take a long time to deduce the dual function. On the contrary, immediate terminations were observed for the highest p of 100 respective to 0.5 and 1 H s . However, they did not perform the best on the NC results since expected new boundary of the dual function was not able to be improved without changes of dual variable λ. Among results with the same H s (green, magenta), the step size 0.1 performs the best for the NC and objective. As a result, we verified that the wrong choice for parameters can bring premature termination or expensive termination. It is difficult to distinguish which cost performs the best as the process is premised to incline from the minimum to the maximum, yet the original problem involves minimisation. Thus, we excluded the high costs that violated the NC from acceptable solutions. Branch and Bound The BB algorithm was applied to the results from the previous subsection, and the associated results are analysed by comparing the best performance and the worst performance in the same search tree. Intermediate n-th branching results until they are fathomed are marked in Figure 6, where the best and the worst are distinguished. 
The best case can quickly converge integer variables with low additional cost for the NC, which support the objective function closely approaching to the optimal value. However, the result highlights the importance of first branching nodes, and objectives increased by the NC do not always guarantee the optimal solution. In Figure 7, results for every node in the branching tree are marked, in which nodes are distinguished, following the best case or the worst case. There were significant differences between the two groups. First, the best-case nodes converge close to the optimal cost, however, the worst-case nodes dramatically increase for the first few branchings and produce an overestimated cost. Second, the worst-case nodes spend more time solving additional PB processes to converge the integer variables. Therefore, the quality of the objective function is not guaranteed if we follow the worst branching results and this is determined from almost the first branching. Numerical Results Analysis Based on the empirical test for the initial parameters, we report the TLEP results based on MSSP with different program sizes in Table 4. Line construction and generation cost amounts are displayed to show which line has been augmented or constructed according to DD methods. The termination conditions considered two points: method termination and feasibility. Method termination indicates whether the stopping criteria was reached within a time limit of 48 hours. Node feasibility indicates that the integrated node solutions from the projection satisfy constraints involved in the TLEP. In general cases, if obtained solutions satisfy the NC, the node feasibility also is satisfied. The PH method contains quadratic terms in the objective function in subproblems and updates penalty parameters andx by projection. The quadratic terms in the subproblems usually take more time to solve single subproblems than that of PB. However, the PB method has the main optimisation problem of updating the dual variable λ, which takes more time than calculating the projection. According to the numerical results, solutions from the PH method satisfy the NC because its termination condition pertains to the NC. However, the PB method can violate the NC since the termination is based on boundary improvement not on variables. The DDSIP which implements the BB algorithm after the PB termination finds integer variables implementable across corresponding subproblems. Therefore, we used medium-iteration penalties and concatenated them for the warm-start of the PB method to speed up the process. We convert the penalty parameters for the PH method as follows: The parameter exchange enables the penalty matrix from the PH method to support the PB method to find the initial point. Among the cases featured in Table 4, constructions for new lines occur in Case 2 and 3, whereas Case 1 decides to upgrade existing lines. In general, construction for a new line is more expensive than upgrades so the results reveal the cost savings that construction can provide over 30 years. The selected candidate lines are J and L, which would lower the cost for generating power flow by linking end bus 11 to the central area off the network. Line 8-28 was the most frequent upgrade choice, which is the connection for end bus 8. On inspection, there is a candidate line E for bus 8 that obviously links the network to bus 8 but this is not as beneficial as linking the bus 11. 
As shown in Case 1, PH and PB determine an upgrade for 16-17 whereas others determine an upgrade for 21-22; the cost effectiveness can be verified by the value of objectives and proximity to the EF results. Growing the system demand, it becomes obvious that the TLEP problem in Case 2 requires new lines as the PH method gave the worst objective in spite of using the best plan from Case 1 (note that there were no more expansions after the first stage in Case 2). The expansion for J is displaced by L in Case 3, observing that most of the generators were fully contributed in a 30-year time span and subproblems determined expansion L and upgraded 21-22 to work better with other plans. This change accounts for the more severe demand conditions of the few scenarios where further upgrades to 23-24 after the first stage are decided. Specific optimisation results are shown in columns 7-10, Table 4. The objective comprises the cost for generation, line expansions, and supply deficits. This paper depicts the sum of costs for individual results to illustrate all obtained objectives for comparison (even though the solution is infeasible). Generation cost rises as expensive fuel generators are committed, which can be seen when phase angles are not physically modifiable or line capacity is limited. Therefore, line expansions to provide energy from cheap generators can minimise the generation cost, yet the mitigated cost should be balanced against the cost of deficits. Despite the decrease in speed as problem size is enlarged and the slight overestimation of the objective entailed by projection, PH in [14] sufficiently satisfies the NC if it can terminate and it obtains the solutions in a reasonable time. Whereas PB in [17] rarely succeeds at the same feasibility test; actually, the results of PB rank lowest but are not implementable (the expansion result in Case 1 merely fulfils the integrality). Feasibility can be satisfied by the BB algorithm with an adequately reduced objective as verified in DDSIP [16] and PH + DDSIP (proposed) results. Moreover, simulation time for the DDSIP process is lowered by the penalty matrix of the PH. The reconciliation of two different DD methods is observed from this result that mitigates both problems for finding the parameter organisation in the PB and the cycling effect that the PH undergoes. The EF results formulated as (15) can represent a reference result for the optimality in Case 1 and 2, but it requires myriad time and a larger MIP gap. Considering the test bed is based on a simplified 30-bus network with some electrical equipment omitted, it is not suitable to use EF in a practical evaluation process. In Case 3, the EF failed to solve the problem within the given time. Instead, the objective 6256.58 was obtained through a 3-split problem, which was slightly overestimated. Considering more detailed scenario trees usually lower the cost, the expected cost for extreme scenarios is discouraged in DDSIP and PH + DDSIP for Case 3 as the gap between scenarios is smaller compared to the 3-split case. As a matter of fact, the objective does not necessarily need to be the same to obtain the unified results as shown in Case 3. DDSIP has a higher objective as many scenarios chose expansions to avoid supply deficits. On the other hand, PH + DDSIP chose expansions in only a few extreme scenarios, and the additional cost for the deficit was not that significant. 
Regardless of the extremes, expansions for L and upgrades for 8-28, 21-22 are necessary to provide energy throughout the 30 year time span. Particularly, as the number of scenarios increases as the scenario tree branches, so probabilities for single scenarios decrease and an influence to first stage decision is shrunk. Finally, required transmission lines for the network and times needed for completion can be estimated in chronological order. The purpose of our simulation was not to find the one true solution for the test system. Yet, we focused on verifying variable convergence of MSSP problem, composed of typical TLEP problem, and we extended up to six stages. In this long-term evaluation process, we accomplished the feasibility of the results making the integrated variables from DD methods implementable. Various test cases revealed that the expansion plan can be relative according to method type regardless of the feasibility condition. Not only considering the accuracy, our test revealed the problem of time. The PH method shows contradictory results depending on the problem size where decision variables do not converge despite many iterations of the long-term problem. Whereas the proposed method that combines PH and DDSIP shows coherency over various sizes. Moreover, expansion results over different time spans sufficiently explain the consistency in response to the electricity demand. Many studies have focused on the financial benefit of resources in the objective function. It represents the expense, which apparently appears during the production or is induced subsequently [7,28,32]. Instead of that, we provide a framework for DD methods to handle integer variables in MSSP. Through the experiments, algorithms are tuned to cover multistaged problems and they give variety to the assessment results when there is uncertainty in the network organisation. Conclusions In this paper, we introduce the methodological basis of dual decomposition methods for multistage stochastic programming, tailored toward the transmission line expansion problem. Since the role of transmission lines is as substantial as generation resources are, careful planning is required as expansions are not retractable. We evaluate this investment decision on the 30-bus test network by considering three costs-reliability, operation and investment-based on future demand uncertainties. Despite the shortcoming in integer variable convergence in conventional dual decomposition algorithms, the proposed method enables the utilisation of several scenarios over a long-term horizon by finding correlation between scenario subproblems represented by the nonanticipativity constraint. Furthermore, we bring three dual decomposition methods-progressive, proximal bundle, dual decomposition in stochastic integer programming-and combined them to account for the convergence issue by adjusting warm-start values for internal parameter settings. The results not only convey optimal decisions among furcated future realisations but also achieve the economical use of time. We illustrate the significance of a long-term evaluation by showing that empirical tests over different time spans can yield various combinations of expansions, enabling planners to confirm them according to potential benefits under uncertainty. However, further studies for realising adequate reliability cost and evaluating power system flexibility with newly constructed lines should be considered in the future studies.
The Laws on Tourism Promotion through Practice in Binh Duong Province Tourism promotion activities have become an extremely effective tool for attracting and persuading visitors to tourist destinations, and they are an essential factor in promoting tourism development. Tourism promotion has therefore become a pressing issue for today's destinations, because in developed countries tourism has been quite successful thanks to promotion activities, which have contributed to the stronger development of the tourism industry. Tourism promotion is considered one of the key tasks of Vietnam's tourism sector in general and of the provinces and cities of the country in particular. This study clarifies the role and importance of the law on tourism promotion in fostering tourism development through the practice of Binh Duong province and, on that basis, proposes solutions to improve this law. INTRODUCTION Today, tourism is an indispensable human need in the modern world and has become one of the leading sectors of the world economy. For many countries, tourism is the most important source of foreign currency in foreign trade [1]. The World Travel and Tourism Council (2019) announced that tourism is the largest economic sector in the world, surpassing the automotive, steel, electronics, and agricultural industries. Tourism has become a global issue, and many countries use residents' tourism as a criterion to assess quality of life. According to the World Tourism Organization (2019), the number of international tourists worldwide will reach 1.6 billion by 2020, of which East Asia-Pacific is the fastest-growing region in the world, at an annual rate of 6.5% for the period 1995-2020. This is an opportunity for Vietnam in general, and Binh Duong province in particular, to promote the tourism industry in a new period. Binh Duong tourism's greatest advantages are its large land area, beautiful natural landscape, temperate climate, and its rivers, streams, and lakes. Its many landscapes and historical relics attract tourists from all over the world. In addition, because traffic connections are now good, people from all other provinces and cities can travel to Binh Duong province conveniently. Binh Duong province is also close to Tan Son Nhat international airport (and, in the future, Long Thanh international airport) and to seaports, so it is very convenient for international tourists to reach. Binh Duong province has three big rivers running through the areas of Ho Chi Minh City, Dong Nai province, and Binh Duong province, which help cool the air and allow green vegetation to develop easily. The fertile soil conditions are ideal for river tourism, riverside resorts, and river entertainment such as boating, sightseeing yachts, swimming, and water skiing. Binh Duong province has many hills, lakes, rivers, and streams that hold potential for the development of nature-based tourism, which is thriving today: sightseeing, relaxation, water sports, fishing, boating, and other recreation in natural settings such as Chau Thoi mountain (a national scenic spot) and Nui Cau beside the vast and famous Dau Tieng lake. Binh Duong tourism also has many historical and cultural relics that have been ranked at the national level, such as the Southwest Ben Cat Tunnel Area, Phu Loi Prison, and Hoi Khanh Pagoda; the province also has nearly 30 provincial-level monuments, and more than 500 other monuments have not yet been ranked.
Binh Duong was originally a land in the South that formed in parallel with Ho Chi Minh City and Dong Nai; it holds many ancient cultural relics such as pagodas, communal houses, temples, old houses, and ancient tombs that tourists want to explore. However, the tourism industry in Binh Duong province still reveals many limitations: compared to other provinces its performance is still low (lagging quite far behind some domestic provinces: in 2019 the number of international visitors to Binh Duong province was 4,750,000 arrivals, while Dong Nai received 4,937,000 arrivals and Ho Chi Minh City 36,500,000 arrivals); the contribution of the tourism industry to the socio-economic development of Binh Duong province is not really superior to that of other industries; infrastructure is outdated; and the quality of tourism products and services has not yet met regional and international standards. There are many reasons for this situation (the economic background, human factors, the impact of the global financial crisis, etc.), but there are also direct causes stemming from the legal framework. After more than three years of implementation, the 2017 Tourism Law [2] and its guiding documents have revealed certain limitations and shortcomings that need to be amended and supplemented: some contents of the 2017 Tourism Law are not reasonable, or some of its provisions are only tentative and not feasible, so they cannot be implemented; some provisions of the Law on Tourism, after many years, still have no guiding documents; many problems that have arisen in practice are not yet regulated by law; and many legal regulations on tourism are not consistent with other relevant laws. To a certain extent, the current legal provisions on tourism are both insufficient and redundant, and their systematic coherence is low, making them difficult to implement and slow to take effect in practice. This situation poses an objective requirement to complete the legal provisions on tourism promotion of Vietnam in general and of Binh Duong province in particular. General Objective The general objective of the thesis is to analyze and evaluate the current legal status of tourism promotion in Binh Duong province, its inadequacies and their causes, and to clarify the emerging theoretical and practical issues; on that basis, it proposes solutions to perfect the law on tourism promotion in Binh Duong province. SPECIFIC OBJECTIVES In order to achieve the above general objective, the thesis pursues the following specific objectives: the first is to clarify the rationale for completing the law in the tourism field; the second is to analyze, evaluate, and explain the current legal status of tourism promotion in Binh Duong province; the third is to propose a number of solutions to improve the law on tourism promotion in Binh Duong province. LITERATURE REVIEW Over the past years, in order to serve professional work as well as the need for study and research, a number of individuals and organizations working in the tourism field have carried out research topics and dissertations related to this field, such as the following. One topic approaches a comprehensive research process from theory to practice and international comparison, applying methods and techniques in branding research, market research, and promotion.
The topic performed a two-way analysis: on the one hand, it assessed the status of product development and promotion and the process of forming Vietnamese tourism brands at the national, regional, local, and enterprise levels; on the other hand, it researched market perception of Vietnamese tourism brands through reference to research results accumulated over many years and through 1,000 interviews and multi-object surveys covering international tourists, domestic tourists, state management agencies, businesses, and communities participating in tourism, from which comparisons were drawn to determine the market's brand awareness. The research results were continuously compared with competing countries in the region to identify the core elements of the most recognizable Vietnamese tourism brand. The topic also follows the development directions of the Strategy and Master Plan for tourism development in Vietnam to 2020, with a vision to 2030, to propose orientations and solutions for developing tourism brands. Another study systematized the theoretical basis of responsible tourism: the concepts, relationships, interests, and behaviors of the parties participating in tourism activities; the specific experience of a number of tourist destinations in the world and in Vietnam regarding policy, organization, management, control, regulation, and evaluation mechanisms; the current status of responsible tourism activities in Vietnam through field surveys of supply-side activities, demand-side activities, and the roles, responsibilities, and participation of all parties; and solutions for promoting responsible tourism. Subject: "Current situation and some solutions to improve state management effectiveness in the tourism sector", project manager Nguyen Thi Bich Van, General Department of Tourism, 2001. This research assessed the current effectiveness of state management in the tourism sector and proposed some solutions to improve it. Thesis: "Vietnam's marine tourism resources for resort tourism development", Mai Hien, Master's thesis in Tourism Studies, 2007. The thesis presented theories about resort tourism as a type of tourism; described the composition, characteristics, and nature of marine resort tourism resources; listed and evaluated the suitability and attractiveness of the basic types of Vietnam's marine tourism resources, pointed out the areas with many advantages in terms of natural resources, and assessed exploitation conditions; and examined the current status of exploitation and use of marine tourism resources in order to give directions for improving the efficiency of exploitation and the sustainable use of natural resources. RESEARCH RESULTS AND DISCUSSIONS Tourism promotion activities in Binh Duong province Visitors to Binh Duong province have grown strongly in recent years in both the number and the structure of tourists. The average growth rate of visitors in the period 2017-2019 reached 156.87%/year. International Tourists The Binh Duong Provincial People's Committee has a plan to implement stimulus packages to attract international tourists, so the number of international tourists to Binh Duong province increased sharply in the period 2017-2019. In 2017 it was 2,390,183 visitors, accounting for 34.69% [5]; in 2018 it increased to 3,439,620 visitors [6]; and the highest increase was in 2019 with 4,750,000 visitors, equivalent to an increase of 38.09% compared to 2018 [7].
In terms of the structure of international tourists to Binh Duong province, visitors were initially mainly Chinese, Russian, and Korean; by now the tourist structure has become very rich and diverse, with over 50 different nationalities. Chinese tourists come to Binh Duong province mainly for visiting and shopping, while European tourists from countries such as France, Germany, England, and Sweden come mainly to visit provincial landmarks with unique humanistic tourism resources. Domestic Tourists: Domestic tourists to Binh Duong province come mainly from Ho Chi Minh City and the southern provinces for the purposes of sightseeing, rest, attending cultural festivals, and service tourism; in addition, a portion of the people of Binh Duong province themselves join the flow of weekend tourists. As can be seen from the data table, the number of domestic visitors to Binh Duong province increased gradually in the period 2017-2019. This is the result of Binh Duong province organizing many promotional activities, such as folk-culture festivals, while tourism businesses actively implemented measures to stimulate tourism such as promotions and discounts. In particular, the long holidays created conditions for visitors from the central and northern provinces to come to Binh Duong province. As a result, although the number of international visitors decreased, as in the whole country, domestic tourists to Binh Duong province increased, contributing to the strong growth of domestic tourism in the province in 2019 with 6,573,080 arrivals, an increase of 45.12% over the same period in 2018 [7]. Most domestic tourists come to Binh Duong province to enjoy the cool, fresh air. Thus, it can be seen that the number of tourists coming to Binh Duong province over the years has been uneven, while international visitors to Binh Duong province are increasing, with many different nationalities. The average length of stay was 1.9 days in 2017, 2.95 days in 2018, and 3.25 days in 2019 [5][6][7]. Law on tourism promotion Law is an objective social phenomenon, especially important but also extremely complicated, so there have been many different conceptions and perceptions of law from the past to the present. In the most universal and basic sense, and as applied to contemporary socio-economic conditions, law can be defined as follows: "Law is a system of generally compulsory behavioral rules established or recognized by the State, expressing the will of the ruling class on the basis of recognizing the needs and interests of the entire society, guaranteed by the state to be implemented in order to regulate social relations for the purpose of social order and stability and the sustainable development of society" [8, p 288]. Law is a phenomenon of the superstructure, reflecting the level of socio-economic development while also guaranteeing and promoting socio-economic development. Law is an important tool for the State to manage and regulate social relations. The state cannot exist without the law, and conversely the law can only bring its effectiveness and efficiency into play if it is guaranteed by the power of the state apparatus. As the cited formulation puts it, the ruling will must be manifested in the form of a law imposed by the government; otherwise, that will "is just a vibration of the air caused by empty sounds" [9, p 51]. Therefore, the law always expresses the will of the class holding state power. In a socialist rule-of-law state, the law is an instrument for social development.
Our State is a state of the people, by the people, for the people, the law must belong to the people, by the people, for the people. The law is the basis for building "civil society" and is an indispensable value in the rule of law state. The document of the Ninth National Congress of the Party affirmed: "Our State is the main tool to exercise the people's mastery, the rule of law of the people, by the people and for the people. State power is unified, with assignment and coordination among state agencies in the exercise of legislative, executive and judicial powers. The State manages the society by law" [10, pp. 131,132]. Document of the Tenth National Congress of the Party also affirmed: "Our State is a socialist rule-oflaw state exercise the legislative, executive, and judicial powers. Completing the legal system, increasing the specificity and feasibility of provisions in legal documents, etc." [11, p. 125]. Document of the 11th National Congress of Delegates continues to affirm: "Continue to promote the building and perfecting of the socialist rule of law state, ensuring that our State is true of the people, by the people for the people and for the people, led by the Party, etc. Improve the State's management and governance capacity according to the law, strengthen the socialist legislation" [12, p 126]. Since the implementation of "Innovation" up to now, the Party and the State of Vietnam have always considered the continuous improvement of the legal system a great, regular and important concern of the country. This is reflected in most of the Party's documents, especially the Resolution of the 8th Central Implementing the policy of building a socialist rule-of-law state of the Party, the National Assembly, the Government, and state agencies in charge of tourism have been making efforts to fully develop legislation in the tourism sector and efficiency. However, it should be emphasized that the current system of legal documents in the tourism sector is both lacking, redundant, unspecific, and systematic, and its impact on the implementation process is still limited institutions need to be revised, supplemented, and improved. On the theoretical basis, so far there is no unified definition of the tourism promotion law. However, the legal norm in the tourism sector, like other legal regulations, is generally compulsory, is a template for all subjects to comply and is an assessment criterion of human behavior issued or recognized by the competent authority and applied many times in life until it is changed or canceled. Another point is that the legal norm in the tourism sector only regulates social relations arising in the process of organizing and operating tourism. Thus, the law in the tourism sector is a system of legal regulations promulgated or recognized by the State and implemented to regulate social relations arising in the process of organizing and operating tourism. calendar, including the provisions of tourism resources; tourism development planning; tourist area, tourist destination, tourist route, and tourist urban area; tourists; Business Travel; Tour guide; tourism promotion; international cooperation on tourism; tourism inspection, handling requests and recommendations of tourists. The current situation of law application on tourism promotion in Binh Duong province in recent years In recent years, Binh Duong province has issued many legal documents on tourism promotion in the province, specifically: Decision No. 
2303/QD-UBND "On approving the tourism development planning of Binh Duong province to 2025, with a vision to 2030" issued on August 15, 2011. With the aim of: To build tourism into an economic branch with an important position in the economic structure, contributing to the process of economic restructuring of the province; at the same time, it is a tool to improve the quality of people's lives, to meet the needs of rest and enjoy the spirit of the local people. Effectively exploit the advantages of geographical location and tourism potential to form branded tourism products with local cultural characteristics. Tourism development, using revenues from tourism activities to contribute to the conservation and efficient exploitation of historical relics, cultural heritage values, ecological environmental values, ensuring development sustainable in both tourism and ecological environment. Orientation planning south space: The spatial scale in the South includes the area of Thu Dau Mot Town, Thuan An Town, Di An Town, and a part of Ben Cat District: Main tourism products include: Ecotourism (garden tourism, river tourism), cultural tourism (visiting cultural-historical sites, festivals, craft villages, spiritual tourism, and beliefs), entertainment, weekend travel, vacation travel, shopping travel, MICE travel, and highend sports tourism; Priority areas for investment: Areas along the edge of Lai Thieu garden (Thuan An town), areas along Saigon River (in Ben Cat district). Development Center: Thu Dau Mot Town service tourism. Orientation of spatial planning in the Northwest: Space in the Northwest includes the area of Dau Tieng lake, Cau mountain, Saigon river corridor, and the vicinity of Dau Tieng district and Ben Cat district; Main tourism products include: resort tourism, cultural tourism, ecotourism, high-class sports tourism; Areas of investment priority: Area of Dau Tieng lake, area along Saigon river, area of Can Nom lake. Development center: Dau Tieng town service and tourism. Orientation of space planning in the East: The spatial scale in the East includes the area along the basin of Dong Nai and Be rivers in Tan Uyen District, Phu Giao District; Main tourism products include Ecotourism with types of river eco-tourism, resort tourism, weekend tourism, and high-class sports tourism. Plan No. 3088/KH-UBND on the implementation of the program "Vietnamese to travel to Vietnam" in Binh Duong province, issued on June 29, 2020, by the People's Committee of Binh Duong province. Accordingly, the Department of Culture, Sports and Tourism coordinates with departments, sectors, tourism associations, People's Committees of districts, towns, cities, and tourism service businesses to launch the program. "Vietnamese traveling to Vietnam" on media and communication channels. Promote tourism communication on the media, widely information about the safety level, ready to attract tourists; at the same time guide regulations on ensuring safety for tourism activities, tourists, workers, and local communities. Announcement on the exemption of entrance fees at the province's historic and scenic destinations to travel businesses, tourists inside and outside the province Mobilize and encourage tourismrelated businesses in the province to participate; guide, monitor, and urge businesses to seriously implement promotional commitments when participating in the "Vietnamese traveling to Vietnam" program to ensure a safe, friendly and quality tourism environment. 
Along with that, implementing the tourism demand stimulus program, focusing on exploiting the domestic market including tourists within the province and other provinces (Hanoi, Ho Chi Minh City, and Southeastern and Delta Mekong River provinces) from June to the end of December 2020 in the whole province. The Department of Culture, Sports and Tourism coordinates with business units of tourism services, traditional craft villages in the province to organize and participate in promoting Binh Duong tourism on events taking place inside and outside the province during the period next time: 60th Anniversary Vietnam Tourism Day (July 9, 1960 -July 9, 2020), The 3rd Binh Duong Food Festival 2020, International Tourism Fair ITE-HCM 2020, Can Tho National Amateurs Festival 2020, and other events. Draft plan "Implementing Vietnam's tourism development strategy to 2030 in the province of Binh Duong" dated November 6, 2020. The Provincial People's Committee has just issued a plan to implement the Vietnam tourism development strategy until 2030 in the province of Binh Duong. The Provincial People's Committee requested a plan to implement the Vietnam tourism development strategy to 2030 in the province in a synchronous manner, with the initiative and coordination of departments, functional agencies, and the People's Committees of districts, towns and cities to support and facilitate the rapid and sustainable development of the tourism sector in the coming time. The target is set in the period of 2021-2025, striving to increase the average number of visitors to visit and stay by about 15%/year or more; Revenue from tourism is about 20%/year or more. By 2025, to attract tourists to visit and stay about 5,250,000 visitors (of which international visitors are about 320,000); revenue from tourism is about 2,090 billion VND; training, retraining, and professional training in tourism and skills related to tourism activities for about 400-500 turns of people; proposing the establishment of a tourism development fund to support training, propaganda, tourism promotion, and tourism product development. In the period of 2026-2030, strive for an average growth rate of about 7%/year of visitors coming to visit and stay; Revenue from tourism increases about 12% / year. By 2030, attract tourists to visit and stay about 8,100,000 visitors (of which about 480,000 international visitors); revenue from tourism is about 3,700 billion VND; training, retraining, and professional training in tourism and skills related to tourism activities for about 600-700 turns of participants [7,8]. Decision 1877/QD-UBND dated 1-8-2013 of the Provincial People's Committee on approving the Project of propaganda, promotion and promotion of tourism in Binh Duong province for the period of 2013-2015, orientation to 2020 and Decision No. 1878/QD-UBND dated August 1, 2013, of the Provincial People's Committee approving the Project to develop Binh Duong specific tourism products to 2015, with a vision to 2020, the Provincial Tourism Promotion Center is implementing activities to develop tourism in the province in the right direction. In recent years, in addition to tourist sites in the province, new types and programs have been built to serve the needs of tourists for sightseeing, entertainment, and relaxation, promotion, and promotion. Tourism of the provincial tourism promotion center also contributes to promoting and introducing Binh Duong tourism and bringing tourists to Binh Duong. Mr. 
Nguyen Duc Minh, Director of the provincial tourism promotion center, said that Binh Duong tourism website (dulichbinhduong.Org.vn) is one of the advertising channels of Binh Duong tourism that has brought into full play and created favorable conditions event for tourists to find out information about Binh Duong tourism in recent years. In 2018, the website received more than 150,000 visitors looking for travel information; since the website was put into operation, there have been more than 458,000 visitors. Proposing perfecting the law on tourism promotion Currently Thus, to a certain extent, the current legal provisions on tourism can be said to be both insufficient and redundant, and the system is not high, so it can cause certain difficulties for the implementation. Regulations on specific sectors need to be amended and supplemented for tourism to develop as a key economic sector. This Resolution has approved the amendment and supplementation of the Law on Tourism at the 2nd session of the 14th National Assembly (scheduled for October 2016). Amending and supplementing the 2005 Tourism Law, the 2017 Tourism Law and the guiding documents system will contribute to the complete legal system in the tourism sector, creating a synchronous, unified innovation is suitable not only in content but also in form with basic requirements: legal documents of the state must be issued in accordance with competence, form, order, and procedures; ensure the hierarchy of legal documents; both synchronous with domestic law, and consistent with international law qualifications and practices. One of the important contents of the 2017 Tourism Law [2] is the regulation: The travel business does not need to have legal capital but must deposit a deposit to ensure responsibility for tourists. According to Article 16 of Decree 168/2017/ND-CP on the management and use of the deposit, the purpose of the deposit is: "In case a tourist dies, has an accident, risks, harming the life and needing to bring to the residence or urgent treatment but the enterprise is unable to arrange the funds for timely settlement, the enterprise shall send a request for temporary clearance of the deposit to the issuing authority travel service business license. Within 48 hours from the time of receiving the request of the enterprise, the agency granting the travel service business license shall consider and request the bank to allow the enterprise to deduct the deposit account to use or refuse". Thus, it can be seen that "deposit" is to solve the problems of tourists when facing risks. Specifically, the fund is only used when visitors are killed, have an accident, risk, or have had their life compromised and need to be taken back to the residence or for urgent treatment. This purpose seems to be "overlapped" with regulations on compulsory insurance in the Vietnamese legal system: First, the travel service business is the construction, sale, and implementation of a part of the whole tourism program for guests. A travel program is a document showing the itinerary, services, and prices that are predetermined for a tourist's trip from the origin to the end of the trip. Therefore, using passenger transportation will be an indispensable part of most trips. According to the Law on Insurance Business, motor vehicle owners and airline carriers are required to purchase civil liability insurance for passengers. 
The law also stipulates that the inland waterway transport operator must purchase the vehicle owner's civil liability insurance for the passengers and the third person, the passenger carrier by sea, is obliged to buy liability insurance the carrier's civil liability for the passenger. In addition, the Law on Tourism also stipulates that organizations and individuals doing tourist transport business must buy insurance for tourists by means of transport. Thus, when transporting tourists, owners of means of transport must buy insurance for tourists. Second, in addition to the insurance purchased by the carrier, the tourist can also be reimbursed for medical treatment, or even the costs of death, by health insurance. The Law on Health Insurance stipulates that health insurance is a form of compulsory insurance that is applied to subjects in accordance with the Law. Although the beneficiaries of health insurance have been greatly expanded, in addition to workers, students, students, etc. (even foreigners studying in Vietnam are granted scholarships. from the budget of the State of Vietnam), however, it still does not cover all the scope of tourists. But it can be seen that the majority of tourists who are Vietnamese when having an accident, health and life-related risks will be covered by health insurance. Third, the Law on Tourism also stipulates that the travel business must buy insurance for tourists during the travel program unless the tourist already has insurance for the entire tour program. . The insurance company will pay the cost of the loss of health, property, and luggage for people living in Vietnam (including those of Vietnamese nationality and foreign nationals living in Vietnam) want to travel, visit, and work or study abroad at home as well as abroad. Thus, in Vietnam, with these three types of insurance, the payment for risks, accidents, or loss of health and life for visitors can be made in a timely manner. In which, we believe that selling insurance to tourists is the most important. Because, in order for insurance companies to sell their products, they will have to inspect and validate the product supply process, so they will closely monitor and monitor the activities of the businesses where they sell their products. . Unreasonable acts and activities may be excluded by insurance companies. No insurance company continues to sell or impose low premiums on businesses that do poorly or keep incidents. In addition, according to the provisions of Article 16 of Decree 168/2017/ND-CP, the fund is only used in case tourists die, have an accident, risk, or have their life compromised but the business does not the ability to arrange to fund for timely settlement, the need to bring to residence or urgent treatment but the enterprise is unable to arrange the fees for timely settlement. Intangible, this rule has pushed risks for tourists. State regulators really need to reconsider this issue. Because travel business is a profession that faces many risks such as natural disasters, traffic accidents, climbing accidents, waterfall climbing, food poisoning, etc. If a travel business if you do not have enough money to pay and solve problems for customers, you should not let these businesses exist. Although Vietnam is mobilizing all resources for tourism development, but not developing tourism at all "prices", leading to the interests of tourists is not guaranteed. 
Recently, the mass media has reported that some Vietnamese travel companies have even accepted visitors on 0 VND tours [8]; due to their small scale and low management costs they lower tour prices, provide poor-quality products that cause unfair competition, and deceive tourists, causing public frustration and damaging the national tourism brand. Therefore, businesses that lack the financial capacity should be excluded from the market. CONCLUSION Tourism, and law in the tourism sector, is a complex issue, directly and indirectly related to many aspects of social life and to the system of state management agencies from the central to the local level, as well as to businesses and people. After 60 years of establishment and development, the system of legal documents in the tourism field has been promulgated by a relatively large number of state agencies; among them, the 2017 Tourism Law, implemented for three years, has revealed many problems and shortcomings and contains many provisions that have not come to life. Therefore, in order to further promote the construction and completion of the law and create a favorable legal framework for tourism to develop commensurate with its position and role as a "key economic sector", the law in the tourism sector must be appropriately adjusted. On the other hand, in order to continue the cause of building a socialist State of Vietnam in general and Binh Duong province in particular in the period of accelerated industrialization and modernization and in the context of deeper and broader international integration, perfecting the legal system in general and the law in the tourism sector in particular is necessary to meet the requirements of social life and to serve the country's renovation.
Prenatal cardiac care: Goals, priorities & gaps in knowledge in fetal cardiovascular disease: Perspectives of the Fetal Heart Society Perinatal cardiovascular care has evolved considerably to become its own multidisciplinary field of care. Despite advancements, there remain significant gaps in providing optimal care for the fetus, child, mother, and family. Continued advancement in detection and diagnosis, perinatal care and delivery planning, and prediction and improvement of morbidity and mortality for fetuses affected by cardiac conditions such as heart defects or functional or rhythm disturbances requires collaboration between the multiple types of specialists and providers. The Fetal Heart Society was created to formalize and support collaboration between individuals, stakeholders, and institutions. This article summarizes the challenges faced to create the infrastructure for advancement of the field and the measures the FHS is undertaking to overcome the barriers to support progress in the field of perinatal cardiac care. Introduction Perinatal cardiovascular care including imaging of the fetal heart has evolved considerably over the past few decades and is now established as its own multidisciplinary field of care "bridging the gap and providing continuity of care for the fetus, child, and mother." [1,2]. Fetal cardiac imaging has benefited from significant technological advancements in two-dimensional imaging and Doppler echocardiography in obstetrics and pediatric cardiology since its inception [3][4][5]. Acquisition and interpretation of serial fetal imaging have enhanced the understanding of the natural history of cardiac lesions in fetal life [6][7][8][9]. More recently this knowledge has led to efforts to critically evaluate our ability to affect the natural history of some types of fetal cardiac disease [10]. Advances have also allowed pediatric cardiologists and maternal fetal medicine specialists to work together to better understand cardiac physiology in high risk fetal or maternal conditions that affect pregnancy [11,12]. The main goals of perinatal cardiac care are to optimize care on many fronts including improving the accuracy of detection and diagnosis, predicting in utero and postnatal outcomes, risk stratifying fetal patients to inform perinatal care and delivery planning, supporting and counseling affected families, and altering disease course if applicable, with fetal treatment. While significant advances have been made in these areas, there are still many gaps in achieving and optimizing these goals. Since the field of fetal cardiology spans multiple specialties and provider types, multidisciplinary collaboration is essential to achieve continued progress towards these goals. Such a collaborative approach has resulted in the most recent consensus American Heart Association (AHA) Statement on Diagnosis and Treatment of Fetal Cardiac Disease [13], the Joint Opinion of the International Fetal Medicine and Surgical Society and the North American Fetal Therapy Network [14], and recent American Institute of Ultrasound in Medicine (AIUM) Practice Parameter for the Performance of Fetal Echocardiography. [15] The benefit of formalizing such efforts with a multidisciplinary organization has gained recognition [16], and it is clear that an infrastructure facilitating continued collaborative efforts is important to address the challenges and gaps to providing optimal prenatal diagnosis and care, and move the field further forward. 
Challenges to advancements in fetal cardiovascular care A primary challenge to optimizing diagnostic capabilities and care of the in utero cardiac patient is the lack of rigorous studies and therefore the lack of strong evidence-based data on which to base counseling and care. Multiple factors contribute to this, including the fact that heart disease in the fetus is relatively rare [17], care varies by center, multicenter collaboration in fetal cardiology has been uncommon, and fetal cardiology as a field has often not been able to obtain sufficient research funding to support multi-center studies. The Pediatric Heart Network is an NIH-funded national research collaborative that is one of the few groups that has successfully conducted funded multicenter pediatric cardiology studies in the United States. This group serves as an excellent example that rigorous studies can be successfully performed in congenital cardiology. However, performing a query of indexed PubMed studies 2015-2019 including "prenatal" or "fetal" and "echocardiography" or "cardiology" show that to date less than 3% of published studies in fetal cardiology are multicenter [18]. In addition, it should be noted that most of the recommendations and practice protocols in the AHA Fetal Cardiac Diagnosis and Treatment Statement are based on single center studies, case studies, or expert consensus (level of evidence B or C) and not multicenter studies or randomized case-control investigations [13]. In response to this limited data, a handful of single centers have initiated collaborative multicenter studies for defects such as fetal Ebstein anomaly or tricuspid valve dysplasia [19] and isolated complete atrioventricular block [20]. These studies, by including multiple centers, have been able to enroll a larger number of fetuses to identify key fetal predictors of morbidity and mortality while accounting for known confounders. While most such multicenter collaborations have been retrospective, there have also been a smaller number of prospective studies undertaken such as that investigating the use of home monitoring for the surveillance of SSA related complete atrioventricular block [21]. Further, collaborative registries have organized prospective collection of fetal cardiac data. The International Fetal Cardiac Intervention Registry (IFCIR), in particular, is collecting data on maternalfetal dyad referred for fetal cardiac interventions from 39 centers in 18 countries, pooling procedural and maternal and fetal outcome data [22][23][24][25]. Similarly, the Fetal Atrial Flutter and Supraventricular Tachycardia (FAST) registry is collecting data on tachyarrhythmia treatments and outcomes from 43 centers in 13 countries [26,27]. The FAST trial importantly also encompasses the first international randomized controlled multicenter fetal trial on therapy for fetal tachycardia. Harnessing the power of prospective data collection at multiple centers allows for more robust and adequately powered observational studies in our field. Despite these significant accomplishments, the many challenges to collaborative research have likely prevented many other such studies from being undertaken. Individual centers establishing multicenter collaborations must navigate challenging and often variable regulatory requirements or "paperwork" and cope with often complex technical aspects for data transfer and sharing across institutions. The infrastructure to support these efforts is lacking at many centers. 
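The multicenter-share estimate quoted above rests on a PubMed query. The exact search string is not reported here, but a count of this kind can be approximated with NCBI's public E-utilities. The sketch below is illustrative only: the field tags, the date restriction, and the use of the "Multicenter Study" publication type as a proxy for multicenter work are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch: counting fetal-cardiology publications (2015-2019) and the
# subset indexed as multicenter studies via NCBI E-utilities (esearch).
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching `term` (no records fetched)."""
    params = {
        "db": "pubmed",
        "term": term,
        "retmax": 0,          # only the count is needed
        "retmode": "json",
        "datetype": "pdat",   # restrict by publication date
        "mindate": "2015",
        "maxdate": "2019",
    }
    with urllib.request.urlopen(f"{BASE}?{urllib.parse.urlencode(params)}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

base_term = ("(prenatal[Title/Abstract] OR fetal[Title/Abstract]) AND "
             "(echocardiography[Title/Abstract] OR cardiology[Title/Abstract])")
total = pubmed_count(base_term)
multicenter = pubmed_count(base_term + ' AND "multicenter study"[Publication Type]')
print(f"{multicenter}/{total} = {100 * multicenter / max(total, 1):.1f}% flagged as multicenter")
```

Any such indexing-based proxy will miss multicenter studies that are not tagged as such, so the resulting percentage should be read as an approximation rather than a definitive figure.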
An additional concern is that individual or center driven efforts may not always be open to equitable participation and be based only on established relationships without an open playing field where centers can network and communicate. Furthermore, academic structures do not always reward multicenter collaborative efforts, taking note of only "first" or "senior" author publications. Finally, even when rigorous research exists, unified multidisciplinary dissemination of knowledge and advocacy to ensure its incorporation into clinical care is often lacking. The Fetal Heart Society To address these challenges, the Fetal Heart Society, Inc. (FHS) was established and incorporated as a 501c non-profit organization in October of 2014. The mission of FHS as outlined in its bylaws is: • To advance the cause of research and education relating to the field of fetal cardiology and other reasonably related medical or scientific pursuits • To promote and encourage the development and advancement of the field of fetal cardiovascular diagnosis, management, and therapy • To promote the establishment of mutually beneficial relationships among the FHS members to enable the sharing of ideas and research collaboration • To foster and facilitate multicenter research and collaboration, and • To advance the field of fetal cardiovascular science and clinical practice by the establishment of a Fetal Cardiovascular Research Collaborative within the Society's auspices. Since its inception, membership has grown to include 285 members across specialties and provider types representing 100 institutions and 12 countries (Fig. 1). Institutional sponsorship of the FHS was initiated to enable individual centers the opportunity to support the efforts of the Society including collaborative fetal cardiovascular research, multidisciplinary education, and advocacy efforts. Currently, 16 institutions have joined, giving their staff, including physicians, nurses, sonographers, and trainees the advantages of membership, including access to senior members and experts in the field (Table 1). In pursuit of their mission, the FHS leadership and its members have identified several priorities/gaps in the field of fetal cardiovascular care which can be summarized as follows: 1. Improving diagnosis and detection 2. Improving the understanding of fetal cardiovascular hemodynamics, the progression of disease, and factors that predict outcomes 3. Standardizing protocols for fetal cardiac imaging and management across disciplines and 4. Advancing fetal therapy The pursuit of these priority objectives is supported by a collaborative approach within the three main pillars of the Society -Research, Education, and Advocacy (Fig. 2). Improving diagnosis and detection The accuracy of fetal echocardiography, for those referred due to maternal or fetal risk factors or suspicion for CHD on screening ultrasound, has improved greatly in diagnosing most complex CHD in the second trimester [28,29]. Diagnoses of more challenging defects such as total pulmonary venous return and coarctation remain limited with room for progress [30,31]. In addition, providing earlier diagnosis of CHD in the fetus (particularly for high risk mothers) is an active area of study as is improving the ability to assess fetal cardiac anatomy and function with advanced techniques such as tissue Doppler, strain and three-dimensional imaging [32][33][34][35][36][37]. 
However, the goal of "advancing the art and science of fetal cardiovascular medicine" continues to be hampered by low rates of prenatal detection of CHD on screening ultrasounds in low risk mothers [38]. Understanding the prenatal progression of disease, establishing appropriate perinatal care and delivery planning, and risk stratifying patients to optimize outcomes all depend on early detection of fetal CHD. While prenatal detection rates have slowly improved over time, detection of CHD before birth remains low both nationally and internationally [39][40][41]. Reducing the inadequacy in prenatal detection is a key priority of the FHS and requires engagement and interface with frontline providers who perform obstetric screening ultrasounds. To address this, the FHS membership has recognized the need to address this issue on multiple fronts at both ends of the spectrum: from research studying the socioeconomic barriers to prenatal detection to supporting efforts to develop novel interventions to improve detection (Table 2; abbreviations: CHD, congenital heart disease; HLHS, hypoplastic left heart syndrome; dTGA, d-transposition of the great arteries). The FHS has also developed educational lectures on protocols and performance of screening ultrasounds and is playing a key role in advocacy efforts to emphasize the importance of recognizing this most common birth defect in utero to improve outcomes after birth [42][43][44]. Improving understanding of fetal cardiovascular hemodynamics, the progression of disease and factors that predict outcomes The goals of fetal cardiac assessment include enhancing the understanding of fetal hemodynamics, predicting outcomes in utero such as fetal demise, identifying requirements for a successful delivery room transition including the need for postnatal interventions, and minimizing postnatal morbidity and mortality [45][46][47][48]. Accurate risk stratification and outcome prediction have been achieved with varying success depending on the lesion, often due to limited numbers [45,49,50], and may not always be generalizable from single center studies due to variations in practice. By prioritizing multicenter research and establishing an infrastructure for its conduct (see Research below), the FHS hopes to overcome these limitations. Standardizing fetal cardiovascular imaging and management The indications for referral, the technical requirements for imaging, and protocols for the performance of fetal echocardiography have varied across published guidelines from the individual professional organizations that support the professions performing prenatal ultrasound. These organizations include the American Institute of Ultrasound in Medicine (AIUM) [51], the American Society of Echocardiography (ASE) [52], the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) [53] and the Association for European Pediatric Cardiology [54]. Since fetal echocardiography can be performed by a variety of providers with different training experiences, including pediatric cardiologists, radiologists, obstetricians, and maternal fetal medicine specialists, it is critical to strive towards protocols which standardize the technical requirements, imaging views, sweeps and components, and reporting practices to ensure the provision of equal and optimal care. Appropriate monitoring and delivery planning for the fetus with CHD are critical to providing optimal care; however, protocols are not standard across institutions [10,55,56]. Center-specific delivery room
protocols have been shown to improve transitions from fetal to neonatal life as well as communication among multispecialty teams involved in the care of these families [57]. Appropriate surveillance and timing of delivery can be especially challenging when competing conditions coexist in fetuses with CHD including additional non-cardiac defects, placental insufficiency, or growth restriction. No clear guidelines for perinatal surveillance in particular currently exist resulting in wide variation in practices [13]. As a multidisciplinary society, the FHS can work with key stakeholders across other vested organizations to address these gaps in care standards (see Advocacy below). Advancing fetal therapy Increased understanding of the natural history and evolution of cardiac disease in the fetus has also led to efforts to alter the in utero course of CHD in certain conditions. In utero invasive interventions for severe pulmonary stenosis or pulmonary atresia with intact ventricular septum to prevent single ventricle physiology or for HLHS with intact atrial septum to improve mortality have been studied but are limited by relatively small numbers and experience to make robust conclusions regarding utility [24,[58][59][60]. Fetal aortic valvuloplasty for aortic stenosis with evolving HLHS, the most extensively studied [61,62], has been associated with an increased likelihood of biventricular repair compared to those with no intervention or an unsuccessful intervention, but impact on long term outcomes is still variably reported [22,[63][64][65]. The FHS and its members are committed to supporting continued multicenter research collaboratives as well using its infrastructure to facilitate collaborative multicenter prospective studies into novel therapies such as the use of maternal nonsteroidal anti-inflammatory drugs to promote ductal restriction in severe fetal Ebstein anomaly [18,66]. Research infrastructure To address the above gaps in knowledge, a primary mission of the FHS is to support continued collaborative research in fetal cardiovascular hemodynamics, the progression of disease, and factors that predict outcome in CHD. Housed within the infrastructure of the FHS is the Research Collaborative Committee whose purpose is to oversee a formalized process to solicit, review, and provide feedback for fetal cardiovascular study protocols (Fig. 3). To streamline the process for multicenter studies, the FHS utilizes a Data Coordinating Center (the University of Utah DCC) to provide consistent and reliable support for research. The DCC provides comprehensive project management including overseeing regulatory requirements such as Business Use Agreements, Data Use Agreements, and Institutional Review Boards for participating institutions and coordinating the preparation of regulatory documents that can be used across multiple studies. With centralized data management and informatics expertise, the DCC serves as the repository for clinical and imaging data, aids in the creation of databases and forms, performs data queries and quality checks, and provides online training for data entry. The DCC is also able to provide clinical research design and statistical expertise to aid study investigators with study design and implementation. Currently, the FHS supports multiple investigator initiated multicenter studies (11 studies to date, Table 2) both retrospective and prospective in design. 
Another important benefit is that the opportunities for research funding are likely to be increased with multicenter studies, due to increased patient numbers, and the support of seasoned investigators with an established track record in obtaining grant funding. Thus far, funding has been successfully obtained for a prospective study of fetal d-TGA initiated by three institutions working together to approach the diagnostic challenges of predicting outcomes from multiple vantage points (Mend a Heart Foundation). Education The educational mission of the FHS is essential for continued advancement in the field of fetal cardiovascular medicine. Given the multidisciplinary nature of the field, educational efforts must span current providers and trainees across disciplines and provider types. These include but are not limited to sonographers, physicians, nurses, and other allied providers in pediatric cardiology, maternal fetal medicine, general obstetrics, and radiology who screen for and provide care to families affected by fetal cardiac disease. While educational offerings exist to different extents from professional organizations within these disciplines, the FHS has embarked on creating a core curriculum on prenatal cardiology for all levels of practice including screening for CHD, basic fetal echocardiography, and advanced fetal echocardiography and perinatal cardiac care. For sonographers and obstetricians, the screening curriculum prioritizes advancing knowledge and skills to increase the detection of CHD in the low risk patient. The education initiatives for specialists in fetal cardiology are broader and encompass a large variety of subjects including 1) detailing the technical aspects of cardiovascular imaging, 2) assessing critical imaging features that predict outcomes for specific CHD, 3) creating content and a standardized approach to counseling families, 4) detailing strategies to reduce sociodemographic disparities, and 5) optimizing communication and collaborative care between specialties. In addition, the FHS Education Committee has expanded on the efforts of a sponsoring institution (Stanford University) to create CHD specific provider information sheets that collate the most recent data on incidence, fetal imaging predictors of outcome, available fetal interventions, prognosis, and associated problems [67]. The FHS has pursued these education efforts with the assistance of several educational grants (GE Healthcare). More recently, the FHS began a monthly webinar to supplement the website lecture series. The content of the webinar is directed towards all levels of practice and covers multiple topics including a review of guidelines for the performance of fetal echocardiography, a lecture on advanced arrhythmia management, a journal club discussion of genetics, and presentation of a series of interesting and challenging fetal cases. Building on its collaborative foundation, the FHS has had the opportunity to partner with several organizations to plan the fetal educational content for national and international conferences including the first fetal track in the upcoming World Congress in Pediatric Cardiology and Cardiac Surgery in 2021. Advocacy The FHS has worked with key stakeholders across institutional Fig. 3. The Fetal Heart Society (FHS) research review process. The research review process is overseen by the Research Collaborative Committee (RCC). 
The process steps include submission of a concept proposal, review by the RCC, convening of a collaborative study group from FHS members, preparation and submission of a full proposal responsive to RCC feedback with the study group and upon approval, working with the FHS Data Coordinating Center (DCC) on the preparation of regulatory documents and database creation. organizations to decrease variation in the practice and performance of fetal echocardiography and perinatal care of fetuses affected by cardiovascular conditions. Collaborative input from across these organizations was sought and incorporated into the recent AIUM Practice Parameters for Fetal Echocardiography [15]. To ensure the quality of care in the transition period from fetus to neonate, the FHS is collaborating with the Neonatal Heart Society, an international society dedicated to the care of neonates with cardiac disease, in the creation of fetal/neonatal cardiac guidelines for care. Finally, the FHS has worked with ASE and the Society of Pediatric Echocardiography to generate guidelines in response to COVID-19 that included pandemic modifications for indication, timing, and performance of fetal echocardiograms to reduce exposure and assure the safety of staff and patients. Advocating for the multidisciplinary resources required to provide optimal care to fetuses and families affected by fetal cardiovascular disease has been critical in recent years. It has led to the inclusion of genetic counselors, social workers, palliative care teams, and psychologists in the cadre of providers supporting and caring for these families. Continued advocacy for the need for psychosocial support for families is required across disciplines in our field as our understanding increases of how such interventions improve both mental and physical health [68,69]. Including these as metrics for centers of excellence will raise the bar for practice in our field. The FHS will serve to coalesce and lend weight to such crucial requests as an institutional single voice. Conclusion The field of perinatal cardiac care has achieved significant advances since its inception more than four decades ago. Within this history, tremendous opportunity for further progress exists in the pursuit of optimal care of patients and families. The continued evolution of the field of fetal cardiology will proceed much faster with multidisciplinary collaboration in investigation, education, and advocacy. The Fetal Heart Society is at the forefront of the effort to foster such collaboration and provide the infrastructure and support to succeed. Funding sources Work for this publication was not supported by any funding sources. The Fetal Heart Society is supported by the institutional sponsors listed in Table 1 and has received support from the Mend a Heart Foundation for a study on d-Transposition of the Great Arteries and from GE Healthcare to support educational initiatives described herein.
Peru: A Wholesale Reform Fueled by an Obsession with Learning and Equity After decades of expansion, the Peruvian education system had relatively high levels of access, but low and heterogeneous quality. The depth of the learning crisis was seen in 2013, when Peru ranked last in PISA. The country responded by implementing an ambitious reform which built on previous efforts, which is described in detail in this chapter. The reform was composed of four pillars: (i) Revalorize teachers’ career by making selection and promotion meritocratic, attracting the best into the profession, and supporting teacher professional development through school-based coaching; (ii) Improve the quality of learning for all by revising the curriculum, expanding early childhood education and full-day schooling, providing direct support to schools (through lesson plans and school grants) and carrying out several deep institutional reforms to the university system; (iii) Effective management of the school and the education system, including the use of learning assessment data for school planning. This entailed increasing school autonomy, introducing meritocracy in the selection of principals, and creating a culture of evidence-based decision making; and (iv) Close the infrastructure gap. The reform process required strong political and financial commitment and resulted in impressive improvements in student learning. Most importantly, it led to a change in mindsets towards a focus on learning. Introduction Peru has been growing steadily for the last 20 years, but economic growth has not been accompanied by a strong investment in human capital. During this time, there was a large expansion of the education system and enrollment rates steadily increased. However, financial investments were not accelerating at the same pace, and expenditures per pupil gradually fell. It was a clear case of a quantity and quality trade-off. Today, more children are in school, with net enrollment rates close to 100% at the primary level and 80% at the secondary level. Peru is a middle-income and that the sector was condemned to mediocrity. Actually, in the Annual Conference of Business Executives (CADE, for its acronym in Spanish) of 2012, one of the discussion topics was how the private sector could be used as a provider of education, given the understanding that the reforms needed in the public sector were short of impossible. So not only was a reform needed, but the reform had to be wellcommunicated and implementable, to clearly demonstrate to the public that change in the public sector-and improvement -was possible. There had to be a plan, but as important as that, there had to be a public perception that there was a clear and an implementable plan. The low learning levels were accompanied by a very large heterogeneity in quality. Peru, as a highly unequal country, could trace a large portion of that inequality of outcomes to a profound inequality of opportunities. Parental socioeconomic background, location and ethnicity determined the quality of a child's education, as well as access to early childhood and tertiary education. For instance, differences in quality between public and private institutions were large, and even within those categories there was a lot of heterogeneity. Differences in quality between urban and rural schools were also immense. Given these inequities, the reform described here from 2013-2016 focused on improving the public system and provide more opportunities for the poor to access quality education. 
There were many shortcomings of public education at the start of the reform. First, the school infrastructure was in a very poor condition; in part, because despite some investments over the years, current expenditures to maintain buildings were not included in the public budget. Textbooks were insufficient and would not arrive on time to all schools, and in many cases, there were school supplies' shortages. Almost all current expenditures were for teachers' salaries. From the public's perspective, most of the problems of public education could be attributed to the lack of preparation and commitment of public teachers, who were seen as a group of unionized traditional public servants who only cared about their labor rights and job stability and not children's learning. In fact, there was a group of teachers that fit this perception, but many did not. There was a problem of motivation and low salaries, but there were also many teachers that were in the profession because of an intrinsic motivation. And the magic of learning comes from the interaction between teachers and students. Thus, reforming the teacher career path, attracting talented individuals to the profession, and getting the best possible performance out of the existing teachers were key elements of the reform. Increasing the Social Value of the Teaching Career A school is as good as its teachers. In the United States, students in a class with an effective teacher advance 1.5 grade levels or more over a single school year, compared with just 0.5 grade levels for those with an ineffective one (World Development Report 2018). Similar effects of the quality of teachers on learning are also found in Ecuador, Uganda, Pakistan and India (Bau and Das 2017). Beteille and Evans (2019) find that some of the most effective interventions to improve student learning rely on teachers. They compare the effect of three types of programs on student learning in low-and middle-income countries: teacherdriven interventions (e.g. structured pedagogy), community-based monitoring, and computer-assisted learning programs. They find that while teacher-driven interventions raised student's language scores by around 9 months, communitybased monitoring had half the effect and computer-assisted learning program less than one-twentieth. This is supported by other evidence, such as an analysis of the education systems with the best global performance, which concluded that country's learning levels depend on the quality of teachers and that the best interventions to improve learning involved teacher training (Barber and Mourshed 2007). Despite this evidence, the teaching profession in Peru is not well paid. Professionals with similar characteristics to teachers in Peru received an average salary which was 42% higher than Peruvian teachers'. In fact, Peru is the second Latin American country with the greatest wage gap between teachers and other professionals (Mizala and Ñopo 2016) (Fig. 6.2). Until the 1970s, a teaching career was a typical profession in an emerging middle class. However, with the massification of education that started in that decade, teachers' salaries started falling slowly in real terms. By 2010, teachers' salaries were about one-third of what they were in the late 1960s (in real terms). The career lost social recognition slowly but steadily. Standards to hire teachers in the public system were lowered and the quality of pre-service institutions fell, while the number of institutions (including universities) increased. 
Teacher incentives were not related to performance or professionalism and, most importantly, were not linked in any way to student learning. A survey applied by IPSOS, a polling firm, in 2015 1 showed that most Peruvians did not have a positive perception of teaching: 30% believed teachers in public schools did low or very low-quality work, 55% believed teaching was an easy job, and 64% would not want their children to pursue a teaching career, especially among those of higher socioeconomic status. Additionally, teachers themselves had a poor perception of their job: 63% thought that society minimizes the value of their profession and 53% would not want their children to become teachers. 2 Overall, Peruvian society looked upon teaching as a poor career choice. Low wages and low social value, among other factors, decreased youth's interest in pursuing a teaching career. Between 1999 and 2012, the percentage of teachers under 35 years old fell from 51% to 21%. In the same period, the percentage of teachers over age 44 increased from 15% to 47%. 3 Worryingly, the best graduates of secondary school did not describe teaching as their preferred professional path. This situation is not unique to Peru. Elacqua et al. (2018) document this widespread phenomenon in Latin America. They observe that the rapid expansion of coverage between 1960 and 1980 required hiring a larger number of teachers. This could only be achieved by decreasing the standards to become a teacher and increasing the number of teacher training institutions, often without much regulation. As per pupil expenditures went down, the working conditions of teachers also deteriorated. Both factors contributed to the loss of prestige of the teaching career. The Teacher's Reform Law (the Ley de Reforma Magisterial) was passed in 2012. The Law aimed to attract and retain the best candidates into the teaching profession by implementing a new teaching career pathway based on meritocracy. In this new pathway, entrance to the profession is based on teachers' effort and performance, and retention and promotion are related to performance, not only tenure and age. It also included a new scheme of professional development. (ENAHO 1997-1999 and 2010-2012)

Elements of the reform to increase the social value of the teaching career:

Attracting and Selecting the Best Candidates into the Teaching Profession

To incentivize secondary school graduates to become teachers, the Ministry launched the Teacher Vocation Scholarship (Beca Vocacion de Maestro) in 2014. This scholarship offered full merit-based funding for undergraduate studies in pedagogy in the best universities of the country. About 500 scholarships were awarded per year. Although it was a small number, it was a signal that the public sector was starting to attract good students into the career. A huge challenge was the implementation of the mandates of the law, including executing a fair and transparent process of recruitment and retention, intensifying in-service training, and improving the social value of the career (Vargas and Cuenca 2018). The first evaluations to enter the public teaching profession were implemented in 2015. There were 202,000 applicants for 19,632 government teaching jobs. Teachers were evaluated through a written test that measured their basic skills (reading comprehension and logic), curricular knowledge, pedagogy and specialty.
Those with a minimum score went through a second phase of evaluation which required teaching a class and being directly assessed by peers. To ensure that the best performers joined the teaching career, the Ministry granted an economic incentive of USD 6000 to the top third of the teachers with the best scores in the examination process. 4 This was consistent with a broader government strategy of giving more incentives to the best candidates to join the teaching career. There were 20,000 job posts available but only 8137 teachers (out of the 202,000 applicants) were selected. This implied an entry rate of 4%, which was more demanding than that of the most prestigious universities. Fifty-four percent went to work in rural schools. This process was complex from a pedagogical as well as a logistical perspective. It is not easy to correctly discern the best professionals with a written examination -which was used as a first stage, but it was too difficult to implement other methods given the numbers involved. Importantly, it was critical for the acceptance of the reform for the process to be perceived as fair, transparent and free of any hint of corruption or clientelism as had been common practice in the past. Part of the success of the process was that public opinion, and most importantly, teachers, saw this process as meritocratic and free of any political interference. Rewarding Teacher's Performance and Effort The first teacher promotion contests in more than 20 years were held in 2014. For the first time, promotions were not defined by years of service. More than 180,000 teachers participated, taking a written exam simultaneously in over 60 cities. Around one third received a promotion, moving forward in the teaching career scale and achieving an average salary increase of 32% (the actual raises ranged from 70% to 0%). In the previous 5-year period, the salary increase had been only 8%, and mostly flat. The exams were transparent, objective and fair, and there was no hint of corruption or clientelism. In the next 3 years, 11 evaluation processes were held with several opportunities for teachers to raise their salaries. To incentivize teachers' effort and their focus on learning, a school bonus (Bono Escuela) was awarded to all teachers and principals who taught in schools that attained the largest student gains. 5 The bonus was given to the top third of schools, which were ranked among schools of the same regions and according to changes in enrollment and retention rates, and well as school learning scores. A rigorous impact evaluation by León (2016) found that this incentive had a statistically significant positive impact on student learning, as well as on attendance of teachers and principals. But incentives were not only monetary. The relationship with the union was complex but in general positive. To start, the union could have boycotted all examinations and evaluation processes, which didn't happen. Teachers, including union leaders, participated massively. The position of the Ministry was that teachers were part of the solution. Any improvement in quality of education depended necessarily on increased teacher participation and performance; all material inputs needed were important, but the human factor was the most critical. Teachers were partners in the reform. That was the constant message. Teachers were going to be better rewarded, but this would be based on performance and student learning. Some symbols were important. 
For example, there was a policy aimed at communicating directly with teachers. At the beginning of the school year, a text message was sent to 180,000 teachers across Peru saying "Maria, you are critical for education in Peru. We count on you to make sure that our students are the best. Signed, Jaime". Previously, teachers would never have received a personalized message from the minister. More opportunities were also established for teachers to show and share innovations, good practices and new ideas through national contests. Additionally, greater emphasis was placed on teacher health and welfare issues.

Teacher's Professional Development

The Ministry of Education approved new policy guidelines for in-service teacher training, which present an articulated plan of systemic and diversified training that tackles the needs of new teachers and accompanies those already in classrooms by promoting a deepening of their knowledge and competencies. For the first time, in 2016 the Ministry implemented a Teacher Induction Program directed to those with less than 2 years of experience in public school teaching. This program aims to strengthen their professional and personal competencies, ease their entry into the profession and promote their commitment and institutional responsibility. Experienced teachers act as mentors to new teachers for 6 months. In addition, new teachers can access online materials and remote guidance. In 2018, the program served 1694 newly hired teachers in 1559 schools and 26 regions. 6 Further, in 2014 the Ministry also started implementing a continuous professional development program for teachers in community-based early childhood education centers and single-teacher/multi-grade primary schools. This is a school-centered permanent coaching program. It aims to provide planned, continuous, pertinent and contextualized guidance to teachers working in complex settings. The intervention includes school visits where immediate feedback is given to teachers after classroom observation, micro-workshops and courses, and refresher programs. Majerowicz and Montero (2018) evaluated this intervention and found it increased student learning outcomes between 0.25 and 0.38 standard deviations as measured by standardized tests. Importantly, the gains are not concentrated in high-performing students and benefit low-performing students equally. The authors estimate that the impact persists for at least 1 year after the training ends. The program is relatively cost-effective, with benefits that range from 0.72 to 1.12 standard deviations per 100-dollar investment, even taking into account teachers exiting the school system and the fact that training will wear off or become obsolete.

Improving the Quality of Learning for all

Peruvian schools aim to educate students who can innovate, be creative, ask questions and shape their own opinions. Students who have the tools to make the best use of their potential when becoming part of an increasingly challenging world. Students who later become engaged and caring citizens, committed to the development of the country. Achieving this requires a quality system which is guided by a modern curriculum. A system that gives every child the education that they require, which might not be the one required by others.
6 http://www.minedu.gob.pe/n/noticia.php?id=47032

Elements of the reform to improve the quality of learning for all:
• Curriculum update
• Pedagogical support to primary schools
• Full-day secondary schools
• Equality: bilingual intercultural education, special basic education, high performance schools, and alternative basic education
• Expansion of Early Childhood Education (ECE)
• Institutional arrangements for quality in higher education
• National Program of Scholarships and Educational Credit

Curriculum Update

The process of reforming the National Curriculum (NC) started in 2010, with the development of the learning standards (Tapia and Cueto 2017). Building on these advances to elaborate a new NC, the Ministry of Education carried out a nationwide consultation process between 2012 and 2016 with national and regional public sector institutions (including the National Council of Education), civil society, teachers, as well as national and international experts in curriculum structure and content (MINEDU 2017). In addition, the new NC took into account the results of reviews of curricula from a variety of countries and regions. The new NC was approved in June 2016. The NC establishes the learning outcomes that students are expected to reach by the end of initial, primary, and secondary education. The NC comprises the "Exit Profile" of students of basic education, the cross-cutting approaches, and the curricular programs (per cycle of education) to develop the required competencies, among other components. The "Exit Profile" defines 11 learning outcomes that students must reach by the end of basic education:
• The student recognizes herself as a valuable individual and identifies with her culture across different contexts.
• The student recognizes her rights and duties and understands the historical and social processes of Peru and the world.
• The student practices an active and healthy lifestyle and takes care of her physical health through day-to-day activities or sports.
• The student values artistic works and understands their contribution to culture and life in society. She is able to use art to communicate her ideas.
• The student communicates in her mother tongue, in Spanish as her second language (when her mother tongue is different), and in English as a foreign language.
• The student inquires and understands the natural and artificial world using scientific knowledge in dialogue with local knowledge to improve livelihoods and preserve nature.
• The student interprets reality and makes decisions based on mathematical knowledge adapted to her context.
• The student coordinates economic or social entrepreneurship projects in an ethical manner. These allow her to connect to the job market and to the environmental, social, and economic development of her livelihood.
• The student responsibly uses communication and information technologies to learn and communicate.
• The student develops autonomous processes of learning to continuously improve her learning outcomes.
• The student understands and appreciates the spiritual and religious dimension of peoples' lives and society.
The NC included seven cross-cutting approaches that should inform the pedagogical work of teachers in the classroom and relate to the competencies students should develop to achieve the exit profile (MINEDU 2016):
• Rights-based approach: Promotes the recognition of rights and duties of the student and promotes other democratic values such as liberty, responsibility, and collaboration.
• Inclusive and diversity-aware approach: Teaches students to value all people equally and to avoid discrimination, exclusion, and inequality of opportunities.
• Intercultural approach: Promotes the interchange of ideas and experiences emerging from diverse cultural perspectives.
• Gender equality approach: Recognizes the need for equality of opportunity between males and females.
• Environmental approach: Seeks to educate students to take care of the environment.
• Common good approach: Promotes the development of socio-emotional skills such as empathy, solidarity, justice and equity.
• Pursuit of excellence approach: Incentivizes students to give their best effort to achieve their goals and contribute to their community.
There were several structural changes between the new NC and previous versions (Tapia and Cueto 2017). First, there was a stronger focus on learning, with a clear definition of learning process maps and standards that guided the expected levels of achievement per education cycle. The NC was competency oriented and practical. A competency is mastered through integrated learning resources and not through fragmented or disjointed teaching. There was a focus on progression and continuity in the student's learning process and a strong emphasis on in-class assessment as part of the overall planning and as a source of information to guide pedagogical practices on a day-to-day basis, and not only at the end of each cycle. Finally, gender equality was included as a cross-cutting approach. The objective was to emphasize the existence of similar rights, duties and opportunities for boys and girls, men and women. The Ministry of Education faced opposition from socially conservative groups and was accused of promoting a "gender ideology" to destroy family principles (The Economist 2017), in part because of the inclusion in the curriculum of the teaching of tolerance and respect for sexual orientation and of sexual education, which was a critical task in a country with very high levels of teenage pregnancy and gender-based violence. Political pressure led to some minor language modifications of the curriculum in March 2017 around concepts such as gender, sex, and sexuality. The Ministry of Education planned for a progressive and gradual implementation of the NC, as it required a change in teaching practices. The NC was first implemented in 2017 in public and private primary schools in urban areas of the country, and after 2019 the Ministry expects to implement it in all modalities and school levels (RM 712-2018, MINEDU).

Pedagogical Support to Primary Schools

To improve the quality of the learning process in primary schools, the Ministry implemented a Pedagogical Support (Soporte Pedagógico) strategy with the following components: (i) support for teachers and principals through sample lesson plans to guide and facilitate teachers' work, training workshops for primary school teachers to foster creativity and innovation in pedagogical practices, mentors or coaches to guide teachers in their classrooms, peer-learning groups with teachers and principals, and virtual pedagogical counseling; (ii) personalized math and language tutoring for students in grades 1 through 3 with different learning styles; (iii) delivery and use of educational resources; and (iv) community and parental involvement activities such as workshops with caregivers where they are taught how to support students' learning in everyday situations, or gatherings where parents and children can have fun and learn together.
As of 2016, 1.1 million students (43% of the total students in primary school) from 18 regions are in primary schools with Pedagogical Support. It should be noted that the use of lesson plans was controversial. Some critics in the educational community argued that prescriptive lesson plans would reduce teachers' autonomy and creativity, and that teachers should be free to prepare their classes independently following the guidance of the curriculum. The proposal, however, was that the use of the lesson plans was not mandatory; teachers who wanted to prepare their own classes were welcome to do so, but the lesson plans could serve as a base for those who found them useful. The fact was that in most cases the teachers who complained were those in more advanced grades who did not receive lesson plans. These teachers demanded that lesson plans be made available for them as well.

English and Physical Education

In 2015 the National English Language Use and Teaching Policy was approved. Instead of teaching English for 2 h per week, high schools with full-day schooling now include 5 h of English per week, using a blended learning system that combines self-learning software and face-to-face sessions. To provide this service, between 2015 and 2016 nearly 3000 face-to-face and virtual teachers were trained. 800 of these teachers participated in summer and winter school face-to-face trainings provided by the British Council and Pearson. Nearly 600 teachers were awarded scholarships to the United States and the United Kingdom, countries with which government-to-government agreements were signed. The National Plan for Strengthening Physical Education and School Sports includes the extension of class time to 5 h, as well as teacher training and the provision of sports equipment. As of 2016, 5076 physical education teacher slots had been created nationwide and 500,000 children and adolescents between 7 and 17 years of age performed physical training in adequate conditions. With this effort, which started in 2014, Physical Education returned to the regular curriculum, after being abandoned at the end of the 1980s.

Full Day Secondary School (Jornada Escolar Completa, JEC)

To cope with the rise in secondary school enrollment of the 1970s, the Peruvian secondary school day was divided into three shifts. The full-day secondary school model seeks to improve the quality of the educational service by extending the school schedule from 35 to 45 teaching hours per week, which allows more and better teaching time for math, communication, English, sciences, physical education and job training. This model brought to public schools the same hours and school regime traditionally followed in all private schools. The new model includes revamped management with support from psychologists, social workers, tutors and pedagogical coordinators. It also includes better equipment and infrastructure. The model started in 2015 with 1000 schools (345,000 students), reached 1601 schools (more than half a million students) by 2016 and 2001 schools by 2017. The long-term objective is to reach all 8000 public secondary schools in Peru. To ease the implementation of the curriculum, schools are receiving hardware, software, digital facilities and teacher training that links technology to the curriculum. For example, for the full-day secondary school model, laptops have been acquired and distributed, along with software licenses to integrate ICT in the English, communication, math and science courses.
In addition, tablets are currently being distributed to primary schools in 15 regions of the country to be used as an educational resource. Agüero (2016) evaluates the impact of this program and finds that it improved academic performance in math by between 0.14 and 0.23 standard deviations in its first year. The program also had positive effects in communication in the first year, but these were less robust. These results are greater than the effects found in similar interventions in Latin America and are among the highest found worldwide. Importantly, the effects are higher in the poorest districts.

Initiatives to Provide each Student with the Service That She or He Requires

In line with the guiding thread of equality of opportunity, part of the narrative of the reform was that equality of opportunity implied very different services, and different expenditures per student, according to circumstances and needs. One critical dimension in Peru was the huge ethnolinguistic diversity. Peru has 55 native or indigenous communities which speak 47 different languages (Vílchez and Hurtado 2018). Thus, an important number of children in Peru speak a language different from Spanish (e.g. quechua, aimara, awajún, shipibo-conibo, asháninka, etc.) at home. By law, all these children have the right to a bilingual intercultural education that teaches them to read and write in their home tongue and in Spanish (which is a national language together with quechua) so that they can fully participate socially and culturally. Peru is one of the countries that has made the most progress in the region in terms of bilingual intercultural education. The country has a strategy of cultural and linguistic strengthening which involves the production of materials, curriculum and teacher training. To date, more than 500 titles (workbooks, books for school libraries, curricular guides for teachers) have been produced in 19 native languages. Likewise, the competences of 9000 teachers have been strengthened through a coaching program. The latter has been rigorously evaluated, and it was found that receiving intercultural pedagogical support has an average impact on student learning of 28 percentage points in math and 21 in reading, which is equivalent to 0.28 and 0.29 standard deviations respectively (Majerowicz 2016a). Despite progress in terms of production of materials, Peru still lacks sufficient teachers who speak both Spanish and a native language. By 2017, only about half of primary education students who required it had a teacher trained in their native tongue. The Ministry also implemented a Special Basic Education strategy to serve children with any type of disability. It included, on one hand, strengthening the Special Basic Education Centers and, on the other, promoting the inclusion and increasingly better learning of students with mild or moderate disabilities in regular classrooms. As part of this strategy, 56 regular schools (1500 teachers) that already had expertise in the management of students with disabilities received training in inclusive education and specialized texts for various types of disabilities. In addition, and for the first time, regular schools received the necessary technological equipment to provide a quality service. Interpreters of Peruvian Sign Language have been hired in inclusive schools that serve students with hearing impairment in 7 regions of the country.
Between 2015 and 2016, 26 Special Basic Education Resource Centers were created throughout the country to support the work of regular schools with students with disabilities. Between 2013 and 2016, the budget for Special Basic Education tripled, but despite that increase, still only about 25% of children with special needs had the required services. Third, the Ministry created a network of High-Performance Schools (Colegios de Alto Rendimiento, COAR) to serve exceptionally talented youth, similar to magnet schools in the US. COARs were public boarding schools designed to give talented young people the possibility of developing their full potential using a more demanding curriculum. These schools were certified by the International Baccalaureate (IB), which provided an internationally recognized high standard. 7 Each COAR serves 100 students per grade, selected meritocratically by demonstrating their academic, athletic or artistic excellence, and covers the last three grades of secondary education. IB schools existed previously in Peru but only in a few private fee-based elite schools, so this network provided, for the first time, free meritocratic access to an IB education. Between 2014 and 2016 the system expanded such that there was one COAR per region, serving a total of 4350 students nationwide. After the expansion of the program, there are more students studying in an IB school in the public system than in the private system. Most teachers in those schools were public school teachers who received additional training but returned to their schools of origin after 2 years. An impact evaluation of this program, implemented by the CAF and the Ministry of Education, is currently underway.

Expansion of Early Childhood Education Services

Access to early childhood education for children aged 3-5 years in Peru was much lower among the poor, so it was a policy priority to expand coverage. From 2011 to 2016, the net attendance rate increased from 73% to 86%. In this period, more than 4150 villages in rural areas received early childhood services for the first time. This led to a complete elimination of the urban-rural access gap. Educational resources and materials were provided to existing and new early childhood centers and more than 3000 teachers were trained in initial education. Rigorous impact evaluations find that participating in public early childhood interventions had a positive effect of 8.7 points in reading comprehension and 2.5 points in math among second grade students (Majerowicz 2016b).

Institutional Arrangements for Quality in Higher Education

The tertiary education system in Peru had expanded during the last two decades. As observed in many emerging economies, the expansion was fast and chaotic. The system was almost completely unregulated, and about 90% of the growth in higher education enrollment was explained by an expansion of the private sector, with extreme heterogeneity in quality. After decades of chaos, a new university law (Ley Universitaria) was promulgated in July 2014. The law defined the Peruvian university system as one of academically autonomous public and private institutions responsible for training professionals and citizens, prioritizing research, and contributing to solving the country's development challenges.
The Quality Assurance Policy of Higher University Education was also approved, establishing four pillars for quality improvement: (i) Management and Information Systems, (ii) Quality improvement, (iii) Accreditation for continuous improvement and (iv) Licensing as a guarantee of basic quality conditions. For instance, as part of the first pillar, "Ponte en Carrera" (www.ponteencarrera. pe) was launched in July 2015. Ponte en Carrera is a virtual platform that offers detailed labor market outcomes information. The portal offers information on the income earned from different careers according to university or technical institute, as well as the characteristics of educational institutions. Yamada et al. (2016) analyze the social value of information using the data of this portal. They find that only 62% of the university-career combinations have a positive economic return. Thus, they estimate a high social value for the portal given that if only 1% of recent graduates that opted for a career in a university with a negative return had instead entered the labor market directly (given that the information of that negative return is now available), they would gain 4.5 million Peruvian soles (USD 1.3 million) additional earnings during their lifetime. As mandated by the new Law, the National Superintendence of Higher University Education (SUNEDU) was created. This entity is in charge of licensing universities based on basic quality standards and overseeing the proper use of public resources. In the case of public institutions, this is because they are almost exclusively financed by public funds, and in the case of private institutions, because they were exempt of any sales tax and enjoyed an extremely generous income tax regime. In November 2015, the SUNEDU approved the Basic Quality Conditions. In December, it approved the Regulation of Infractions and Sanctions. The licensing process began in 2016. The vast majority of private universities have adapted their statutes to the new Law, while most public universities have renewed their authorities with the universal voting mechanisms established by the Law. The approval of the Law and, in particular, the implementation of a new regulatory framework and the establishment of a new regulatory agency was politically very contentious. In a pattern that is observed throughout emerging economies, and in particular in South Asia and Africa, owners of low-quality private universities usually have political representation and were not in agreement with the establishment of basic standards or more stringent supervision of the use of tax exemptions. Despite the government having a relative majority in Congress at that time, the Law was passed by a slim margin, and in part by public opinion being very favorable to the establishment of more effective regulation of universities. The law was contested in court several times and a legal action was filed with the Constitutional Tribunal. In all instances, the University Law was cleared. Moreover, very strong political support was received from student organizations from both public and private universities, who on several occasions vocally expressed their support for the reforms via social media and public demonstrations. 8 National Program of Scholarships and Educational Credit A government priority was to increase access of the poor to quality higher education. For the first time, a national policy of large-scale public scholarships was established in 2012. 
The National Program of Scholarships and Educational Credit (PRONABEC) delivered almost 100,000 scholarships between 2012 and 2016, reaching an annual budget of USD 280 million, making it one of the largest public fellowship programs in Latin America. PRONABEC has several scholarships including the following: • Beca 18 offered full scholarships for undergraduate studies of Peruvian youth with high academic achievement from low socioeconomic backgrounds. From 2011 to 2016, it has financed undergraduate studies for almost 50,000 Peruvian youth with limited resources from 94% of the country's districts. Rigorous evaluations find that the program increases the probability of access to higher education (33 percentage points to universities and 40 percentage points to institutes) and student welfare. Further, those that receive the scholarship tend to access better universities and start studying earlier. However, fellows report higher levels of perceived discrimination and have a lower percentage of approved classes (which could be linked to them accessing higher quality institutions) (MEF 2019). • Beca Presidente de la Republica, supports postgraduate studies (Masters and PhDs) in prestigious universities that rank among the top 400 according to the main global rankings. The scholarships cover all expenses. Applicants must be among the top third of their undergraduate cohort, have an admission letter from a top university, and demonstrate that his/her monthly income is insufficient to pay for their postgraduate degrees. By 2016, it had been awarded to almost 1500 fellows. • Beca Docente Universitario finances Masters and specialization studies for public university teachers in recognized universities in Peru and abroad. By 2016, 11,742 teachers had benefitted. • In 2015, the Beca Doble Oportunidad was launched for young people who did not complete high school. Through it, beneficiaries can finish studying the 5th grade of secondary school and obtain a technical certification. The Peruvian government also launched an educational loan called Credito 18 in 2015. It allows young people to access the best universities and institutes in Peru 9 using their own future incomes as a loan guarantee. The program only involves institutions whose graduates have high employability and are willing to guarantee 50/50 of the loan with the state. The credit is only accessed by young people that attain high grades in high school and throughout their university career. Effective Management of the School System The Peruvian education system needs to provide a daily quality service to almost seven million students and their families in 52,000 public schools. In the case of Peru, the public sector provides the service and also regulates private sector provision. Thus, it also needs to regulate the activities of thousands of private providers. Additionally, the system administers 800 technological and pedagogical institutes and regulates almost 150 universities. The system is managed by 3500 staff who work in the Ministry of Education; close to 4000 government officials who work in the 25 Regional Directorates of Education (Direcciones Regionales de Educacion, DRE) and 220 Local Education Management Units (Unidades de Gestion Local, UGEL); around 380,000 teachers and school principals who work in public schools; and 100,000 teachers and principals in the private sector. The educational service is a very complex service to provide and regulate. The education system shapes people's lives. 
It must equip students with knowledge, values and life-skills that enable them to become citizens that define their own destiny and attain a productive and fulfilled life. One thing that is not always emphasized in education reforms is that such a complex service requires a highly qualified multidisciplinary bureaucracy, even in systems where schools enjoy autonomy. The quality of the service, and the implementation of all reforms mentioned in this chapter depends on management. Designing, implementing, evaluating and constantly adapting the provision of services requires a management that allocates tasks and monitors their completion, sets the pace of work and administers human and physical resources effectively. At the establishment level, the quality of school management has a very high impact on the effectiveness of teachers, on the quality of the service provided, and on the operation of the institution as a whole. Evidence supports this claim: "Correlational evidence from within and across countries…, coupled with a growing number of impact evaluations, show that higher-skilled managers and the use of effective management practices improve teaching and learning. Evidence from across countries participating in PISA supports this idea: moving from the bottom to the top quartile of school management quality is associated with approximately an additional 3 months of schooling for one year alone" (pg. 2, Adelman and Lemos, forthcoming). Barber and Mourshed (2007) reference The National College for School Leadership (2006) regarding their findings of diverse studies which show that schools that achieve good performance in student learning might differ in their management practices, but all share the characteristic of having good school leadership from their director. Despite the importance of management in the education system, management of public schools in Peru was characterized by a rigid organizational structure in which principals devoted an extremely large amount of time to administrative tasks, which were not centered on learning or flexible to fit the different contexts of the country (MINEDU 2005). Principals also lacked administrative support staff. 10 The principal was supposed to perform many administrative activities in schools where there were no other personnel aside from principals and teachers. In Peru, there were 32,000 administrative staff to support 50,000 schools which means there was less than one administrative staff person per school. 11 This resulted in most principals focusing on routine and administrative work and having little time for pedagogical leadership and human resource management. Regional units faced similar challenges. In fact, in 2013, only 30% of the local education management units personnel provided pedagogical support to schools (the remaining 70% performed administrative tasks). The modernization of educational management was focused primarily on strengthening the management of individual schools, recognizing them as complex institutions to administer that required strong and independent leadership and adequate staff. Progress was made in redefining the role of the school principal and improving their selection. For the first time in 2015, 15,000 -in about a third of existing schools-school principal positions were assigned based on a meritocratic process. Further, training processes for principals were improved and school principals started receiving specialized training in school management. 
Finally, principals received greater autonomy and are now responsible for the use and allocation of minor maintenance and purchase resources, and for the first time they are part of the process of appointing teachers. To support the role of principals, the reform also included the recruitment of administrative workers. In 2015, about 8000 administrative positions (psychologists or social workers, administrators, secretaries, caretakers, cleaning and maintenance staff) were hired for the 1000 secondary schools that benefitted from full-day schooling. This policy continued in the 600 additional JEC schools that started operating in 2016, aiming to gradually close the gap in administrative personnel in schools. 10 While most private schools have administrative and finance staff, psychologists, security personnel, and coordinators of different tasks, public schools are usually composed only of the principal and the teachers. 11 Information on non-teaching staff in primary and secondary education obtained from Escale, based on the 2013 School Census (Censo Escolar).

Elements of the reform to foster effective management of the education system:
• Strengthening school management: boosting the role of principals and hiring administrative staff
• Improving management in the middle and central level: modernizing processes, creating commitments of performance
• Improving data collection and use

The reform also included improvements to middle management in the education system (the Regional Directorates of Education (DRE) and Local Educational Management Units (UGEL)) and at the Ministry, or central, level. For example, Performance Commitments (Compromisos de Desempeño) were designed as a tool to allow the transfer of additional resources to DREs based on their performance on the sector's priority goals. These goals were linked to improving the planning processes, having the conditions for an adequate start of the school year and improving management throughout the year. Further, the Ministry of Education underwent a substantial modernization process which included execution control mechanisms, dashboards and control panels, and a simplification of purchasing processes. Finally, information systems were strengthened to counter the lack of information about what was effectively happening at the school, UGEL, DRE and aggregate levels. One such initiative was the School Traffic Light (Semaforo Escuela) tool. This management tool collects critical information about school functioning: attendance of students, teachers and principals; availability of educational materials; and access to basic services. During 2015, 32,000 educational institutions were visited and more than 250,000 teachers were interviewed. In 2016, data from more than 10,000 additional schools were added. Currently, the tool covers the entirety of the system. The School Traffic Light generates representative information on the 220 UGELs on a monthly basis and serves as a means of accountability for school principals and the UGELs. The data generated allowed the publication of information at the UGEL (roughly equivalent to provincial) and regional levels. As expected, the publication of data on, say, teacher absenteeism was initially not well received by some regional governors, but those reactions were, in the long run, a good sign that the availability of public information creates incentives for governments to increase the quality of their services.

Closing the Education Infrastructure Gap

Having minimum infrastructure is critical to achieve student learning.
Murillo and Román (2011) find that, although there are differences between countries, the availability of basic infrastructure and services (electricity, water, sewage) and didactic resources (libraries, labs, sport facilities, books, computers) has an effect on student achievement levels in primary education in Latin America. Leon and Valdivia (2015) use Peruvian data and find a significant effect of school resources on academic achievement. They state that previous estimates of school effects underestimate the relevance of school resources, particularly in the poorest areas. In 2014 the National Institute of Statistics and Informatics (INEI) collaborated with the Ministry of Education to assess the status of the infrastructure of the public education sector for the first time in the history of Peru. The Education Infrastructure Census showed a dire scenario: 7 of every 10 schools needed to be strengthened or reconstructed, 60% of schools had high seismic risk, one third of the plots lacked physical or legal resolution, and more than 80% of rural schools lacked access to water and sewage. After decades of insufficient investment and lack of maintenance, the Peruvian education system had accumulated a deficit of basic educational infrastructure of more than USD 20 billion, approximately 10% of GDP. The number reached USD 23 billion when taking into account the investments required for the conversion of the whole secondary school system to a single shift, the universalization of early childhood education and the improvement of multi-grade schools, including necessary furniture and equipment (MINEDU 2016). The government also accelerated its investment in education. From 2011 to 2015, public investment in infrastructure for education, including all levels of government, exceeded the equivalent of USD 5 billion (15,000 million Peruvian soles), 150% higher than in the previous 5 years. These investments financed the rehabilitation or construction of about 4000 schools nationwide. Most of the investments took place in rural areas: in 2016 the per student investment in infrastructure in rural areas was 6.5 times that of urban areas. To ensure the sustainability of school infrastructure, the Program of Maintenance of Educational Infrastructure was created in 2012. This program increased the resources received by principals and schools to maintain the school infrastructure. Until 2016, it had financed more than 1800 million Peruvian soles (USD 530 million) in repairs for more than 50,000 schools. To increase the efficiency of the management of infrastructure investments made by the central government (Ministry of Education) and accelerate the process of closing the educational infrastructure gap, the National Educational Infrastructure Program (PRONIED, for its acronym in Spanish) was created in 2014 with administrative and financial autonomy. The institution created standard construction models that increased the speed with which technical construction files were generated and established monitoring systems that tracked every project. The education reform included innovative infrastructure investments tailored to the needs of particular regions or students. For instance, PRONIED started implementing the Plan Selva in 2014.
This plan targeted jungle communities, which had amongst the highest infrastructure needs countrywide. Before Plan Selva, schools in the jungle were built in the same way as in cities: with concrete. These structures were not well suited for the jungle, as they were not resistant to the heavy rains and reached high temperatures in the summer. The Plan Selva designed and built schools suited to the area: made out of wood, with solar panels and special roofs that could withstand the rain and help manage the high temperatures, and built high above the ground to avoid flooding when the river rose. The first set of ten schools earned second place in the recognized architectural prize of the Venice Biennale (MINEDU 2016).

Elements of the reform to close the infrastructure gap:
• Census of Educational Infrastructure
• Increased investment
• Program of Maintenance of Educational Infrastructure
• Creation of PRONIED
• Innovative initiatives for specific regions
• Public Private Partnerships and Public Works Tax Deduction Programs

To increase the speed in closing the infrastructure gap, the Ministry of Education increased cooperation with the private sector through Public Private Partnerships (PPPs) and Public Works Tax Deduction (Obras por Impuestos, OxI). PPPs started being designed in 2014. By 2016, education PPPs had been formulated to address the infrastructure challenge for 66 schools, 7 COARs and 3 higher education technological institutes (with a potential investment of 2200 million Peruvian soles, USD 648 million).

Financing

In 2003, a National Agreement signed by all political parties, business councils, and civil society organizations committed to an annual increase in educational expenditures of 0.25% of GDP, starting from a base of about 3% of GDP until reaching 6% of GDP. Ten years later, expenditures were still around 3% of GDP. To implement the educational reform, between 2011 and 2016, the education budget grew from 2.8% to 3.9% of GDP, an 88% increase in nominal terms. This is reflected in the educational budget as a percentage of the total state budget increasing from 15% to 18%. This has been a significant and unprecedented increase. The increase in resources came accompanied by a higher pressure to spend faster and better. Traditionally, a significant portion of the budget assigned was not effectively spent. To accelerate spending, several measures were taken: there was a mechanism of "single balance window" which allowed ministry units that were underspending to release funds to other units; closer supervision of procurement processes to shorten timelines; and dashboards to identify administrative bottlenecks, among others. This allowed for a dramatic increase in spending by all units under direct control of the ministry (which did not include, for example, public universities, which were autonomous in their processes) (Figs. 6.3, 6.4, and 6.5). The higher spending is reflected in a significant increase in per student spending at the three educational levels (early childhood education, primary and secondary). However, expenditures are still far from those of other Latin American countries or OECD countries. Despite the significant increase observed between 2011 and 2016, Peru was spending only about USD 1200 per student in primary education, less than Colombia, about half of what was spent in Chile and a fifth of the OECD average.
The main route to continue increasing per pupil expenditures is not an increase in the share of education in the public budget; there is little margin, as education already accounts for almost 20% of the public budget. The main routes are to increase the size of the state, which in Peru is relatively small at about 16% of GDP because of low tax collection, and to continue a strong economic growth process.

Results in Student Learning

Peru is the Latin American country with the largest progress in student test scores in PISA (Programme for International Student Assessment) for the period 2009 to 2015. 12 Student outcomes have improved constantly, increasing 8% in reading and science and 6% in math during the period 2009-2015. This growth is reflected in a lower number of students scoring below the minimum competences required to participate in society. This number decreased 10 percentage points in science, 7 percentage points in math, and 11 percentage points in reading. Importantly, Peru's improvement has been particularly stark among public institutions (Moreano et al. 2017). The biggest improvements in PISA scores took place from 2012 to 2015, which coincides with the period in which the reform was implemented (2013-2016). A similar improvement is observed in the national assessments: from 2011 to 2015, the percentage of students who reached the satisfactory level increased from 30% to 50% in Reading Comprehension and from 13% to 27% in Mathematics (Figs. 6.6, 6.7, 6.8, and 6.9).

Pending Challenges

Despite important recent advances, the challenges ahead are immense. The quality and equity of the Peruvian education system is still far from where it should be. For instance, most students still scored below the minimum proficiency threshold in PISA in 2015. The percentage of students that did not reach basic competencies was 58% in science, 66% in math, and 54% in reading. A similar picture is seen in the National School Census, where the percentage of second-grade children who scored below the satisfactory level was 50% in reading comprehension and 73% in math. This was a dramatic improvement compared to previous years, but it is still an underperforming system. Moreover, test scores reflect the inequality of the system. In PISA, those of lower socioeconomic background, from rural areas, and who attend public schools score lower than the rest (Moreano et al. 2017). Further, despite improvements, the differences between learning outcomes in rural and urban settings are still very large. Between 2007 and 2014, the percentage of students with a satisfactory level in reading went from 21% to 50% in urban areas, and from 6% to 17% in rural settings. The percentage of students with a satisfactory performance in math went from 9% to 30% in urban areas and from 5% to 13% in rural ones (Fig. 6.10). Most of the policies described above will take years to be solidified and universalized. Early childhood education coverage is still not universal, especially if one limits the analysis to quality provision. As seen above, basic schooling does not provide each student with basic skills, let alone the opportunities that each student needs to develop to their fullest potential. Bilingual intercultural education has started but is still not complete, and full-day secondary schooling is still not widespread. Higher education also has limitations both in access and quality. In 2013, only 39% of graduates from secondary school accessed higher education. 14
Those that did access higher education did not enter a system that guarantees quality: out of the 140 universities in Peru, none is among the 500 best in the world 15 and only 3 are among the 100 best universities of Latin America. 16 Equity gaps have moved in the right direction. Gaps in access to early childhood education have closed, and more students in public secondary schools have access to a full school day, as in private schools. Poor, talented children have access to a High-Performance School (COAR) and to public fellowships for university studies. But urban-rural quality gaps in basic education are still large, and access to good universities or technical institutions is easier for the rich. Improving and expanding educational services will require an unprecedented financial effort and political commitment. By 2021, the bicentennial year of Peru, there is a commitment to double teachers' salaries (as compared to 2015). This will require continuing the process of increasing spending efficiency as well as a substantial increase in the budget. But what is most critical is to continue with an obsessive focus on learning and on improving the quality of children's experiences in school. This requires maintaining meritocracy in the selection and promotion of teachers and principals, and a deepening of the culture of continued professional development, if the reform is to have a lasting impact on learning. The institutional changes required have been implemented and there is clear political and public support for maintaining a meritocratic career, free of any political interference or clientelism. At the tertiary education level, the institutional changes that support the reforms aimed at increasing the quality of the system have been advanced, and the political support of young people and public opinion is clear. However, as in other countries where university reforms have dispersed winners (the current and future students) and concentrated, politically powerful losers (low-quality institutions), the threat of a change in the political balance is always present.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Dual-Active Bridge Series Resonant Electric Vehicle Charger: A Self-Tuning Method

This paper presents a new self-tuning loop for a bidirectional dual-active bridge (DAB) series resonant converter (SRC). For different loading conditions, the two active bridges can be controlled with a minimum time displacement between them to assure zero voltage switching (ZVS) and minimum circulating current. The tuning loop can instantly reverse the power direction with fast dynamics. Moreover, the tuning loop is not sensitive to series resonant tank tolerances and deviations, which makes it a robust solution for power tuning of SRCs. For simplicity, the power is controlled based on the power-frequency control method with a fixed time displacement between the active bridges. The main design criteria of the bidirectional SRC are the time displacement, the operating frequency bandwidth, and the minimum and maximum power, which are derived and formulated in a simple form based on the self-tuning loop's parameters. Based on the parameters of the tuning loop, a simplified power equation and power control method are proposed for DAB-SRCs. The proposed control method is simulated in static and dynamic conditions for different loadings. The analysis and simulation results show the effectiveness of the new tuning method.

Introduction

Bidirectional direct current-direct current (DC-DC) converters are widely employed in applications such as renewable energy, battery chargers of electric vehicles (EVs), DC micro-grids and other battery energy storage system (BESS) applications. Typically, to achieve soft switching with superior electro-magnetic interference (EMI) performance, these converters are implemented as resonant converters [1][2][3][4]. These converters come in different topologies according to their power ratings, voltage levels and the required input or output voltage (current) source models. Among them, series resonant converters (SRCs) are the most resilient and practical topologies, offering various control methods, a reduced number of passive elements and improved efficiency. Moreover, SRCs are more compatible with BESSs due to their intrinsic voltage source behavior [3][4][5][6][7]. For high power applications such as fast charging of EVs, the dual-active bridge SRC (DAB-SRC) is the well-known structure for bidirectional power flow control in EV charging infrastructure [4][5][6][7]. Figure 1 shows a charging stage for EVs based on DAB-SRCs that achieves bidirectional power flow capability between the grid and the electric vehicle, i.e., V2G capability [8][9][10]. Another advantage of the DAB-SRC is that it can be implemented in a modular structure without the need for extra input or output filters, as shown in Figure 1. The most important problems associated with the state of the art are the power and frequency tuning of these power converters with minimum phase displacement or circulating current [7,[10][11][12]. Current research is mainly based on algorithms which utilize phase-locked loop (PLL) techniques. PLLs are well-known techniques to tune the switching frequency of resonant converters; however, they are sensitive to uncertainty and tolerances in resonant tank circuits [3]. Due to the high power rating of fast DC chargers, about 100-200 kW, robustness and the ability to cope with deviations or tolerances in the resonant tank are essential [13]. To address this problem, a self-tuning method is proposed which has fast dynamics and is not sensitive to these tolerances.
Recently, this method has been utilized for power and frequency tuning of wireless charging of EVs using inductive power transfer (IPT) technology, DC-DC converters and various battery chargers [3,[14][15][16][17]. Moreover, in self-tuning methods, all the switching methods such as pulse width modulation (PWM), pulse density modulation (PDM) and the phase-shift control method can be implemented. However, in the available researches, the bidirectional capability of the SRCs is not considered in the self-tuning method. Moreover, the formulations and analysis of the selftuning are derived for an ohmic load [14][15][16][17]. In this paper, design considerations of the DAB-SRC such as DC-link voltages, bandwidth of the switching frequency, the minimum and maximum of the transferred power and the time displacement between the two active bridges are derived based on the self-tuning loop's parameters. In addition, a simple control method is proposed which can change the direction of the output power with less transients and power fluctuations. Another advantage is direct control of the phase displacement between the dual active bridges hence; the minimum phase displacement can be achieved simply, which is essential for a DAB-SRC [11,12]. As presented in [16], the proposed method can be implemented based on phase shift and PWM modulations which is essential for DAB-SRCs with different DC-link voltages. However, for the sake of simplicity, the power and frequency tuning is devised based on phase shift controlling between the two active bridges which is a usual solution for approximately equal DC-link voltages [7]. The rest of this paper is organized as follows: a description of DAB-SRC with and its mathematical model are presented in section 2. Section 3 presents modeling and formulation of the proposed tuning loop for the DAB-SRC. Section 4 presents simulation results and modification of the analysis to verify the proposed control method in the transient and steady-state conditions and the main conclusions of the paper are summarized in section 5. Modeling of a Dual-Active Bridge Series Resonant Converter (DAB-SRC) In this section, the principle of operation for a DAB-SRC is set forth as follows. Figure 2 shows the two active bridges with DC-link voltage of Vin for the charger side and VBatt for the EV side as the battery voltage level. In this figure, for simplicity, the input/output filters are not considered and a high-frequency transformer (HFT) is utilized for isolation and better impedance matching between these two sides. In the proposed tuning loop, a current transformer (CT) is used for the resonant tank current sensing, io, while Io represents phasor state of io. Two simple first order phase shifters are considered for phase displacement between the two bridges, while the Micro-Controller Unit (MCU) can digitally implement these two phase shifters in the discrete time mode. The active bridge that is supposed to send the power is responsible to regulate the power, hence only the leading phase shifter is considered variable. By digital implementation, the system will be more compact and less sensitive to tolerances occurring in the phase shifter circuits. For analog implementation, the variable leading phase shifter resistor, RT1, can be implemented using a digital-potentiometer or combination of a light-emitting diode and light-dependent resistor (LED-LDR) as discussed in [17]. In this implementation, the digital-potentiometer or light of LED can be controlled directly from the MCU [17]. 
In this paper, the two bridges are switched with a duty cycle of about 50%, i.e., neglecting the dead time, td, between the low-side and high-side switches. Hence, the two bridges can be considered as two square-wave voltage sources connected together by the series resonant tank, i.e., the resonant inductor, Lr, and the resonant capacitor, Cr. Using fundamental components, each square-wave voltage source can be modeled as a sinusoidal voltage source, as shown in Figure 3. In this figure, Rs is the winding resistance of the HFT and Io is considered as the reference phasor. E1 and E2 represent the fundamental harmonics of the output voltages of the charger side and the EV side referred to the primary side of the HFT, while e1 and e2 represent their instantaneous values, respectively. Moreover, δ1 and δ2 are the phase angles of the aforementioned sinusoidal voltages with respect to the reference phasor, i.e., Io. Hence, for forward power transfer (battery charging) and considering zero voltage switching (ZVS) for the two active bridges, the phase angles must satisfy δ1 > 0 and δ2 < 0. This rule is reversed when the battery sends its energy back to the station side while ZVS is maintained [6][7][8][9][10]. In the following equations, each phasor is expressed by its maximum value. E1, E2 and the natural angular frequency of the power converter follow from the system parameters, and the impedances presented in Figure 3 and the equivalent impedance Xt between the two bridges, neglecting the series resistor (Rs ≈ 0), are defined accordingly, where ωs is the angular switching frequency (the standard fundamental-harmonic forms assumed here are sketched below). Regarding Figure 3 and Rs ≈ 0, the power-angle equation in the forward power direction, i.e., battery charging, is given by (5) [7]. Regarding (5), there are different strategies for power flow regulation, namely frequency control, phase control, or a combination of them [4][5][6][7][8][9][10]. Using frequency control, Xt changes to regulate the power, while phase control acts directly on the two phase angles, δ1 and δ2. The relation between E1, E2 and Io is derived by (6), considering Rs ≈ 0. Hence, the relation between the angles and the amplitudes of the input and output voltages is derived by (7b). Moreover, the relation between the amplitude of the resonant tank current, the angles and the voltage amplitudes follows directly. To achieve ZVS, the snubber capacitors, Cs, of the power switches should be considered. Hence, a minimum required phase displacement for each active bridge is defined for the worst case, i.e., the minimum-current or light-load condition. For the charger-side and battery-side bridges, the minimum phase displacements with respect to the minimum resonant tank current Io min are derived by (9) and (10), respectively, to assure ZVS for the two active bridges. Equations (9) and (10) are derived assuming that the resonant inductor is large enough to keep the current constant during the dead time, td, of switching. For a DAB-SRC there are six possible conditions regarding the voltage ratio of the two DC-link sides; three of them are depicted in Figure 4 for forward power flow, i.e., the battery is charging. Regarding Figure 4, the equivalent impedance is considered purely inductive, meaning that Rs ≈ 0 and ωs > ωn. For Figure 4a and Figure 4c, the phase angle corresponding to the lower-voltage bridge turns out to be considerably smaller than that of the other side.
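The display equations for this fundamental-harmonic model are not shown above; the sketch below restates the standard forms they are expected to take for a DAB-SRC with square-wave bridge voltages, peak-value phasors and an HFT turns ratio n. It is a reconstruction under these assumptions, not a verbatim transcription of the original equations.

```latex
% Standard fundamental-harmonic relations assumed for the DAB-SRC
E_1 = \frac{4\,V_{in}}{\pi}, \qquad
E_2 = \frac{4\,V_{Batt}}{n\,\pi}, \qquad
\omega_n = \frac{1}{\sqrt{L_r C_r}}

% Equivalent reactance between the two bridges (R_s \approx 0)
X_t = \omega_s L_r - \frac{1}{\omega_s C_r}

% Power-angle equation in the forward direction (peak-value phasors)
P = \frac{E_1 E_2}{2\,X_t}\,\sin\!\left(\delta_1 - \delta_2\right)
```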
In the conditions of Figure 4a and Figure 4c, therefore, the lower angle should be kept at the minimum phase angle required for ZVS, even at full load. This is essential for achieving the minimum phase displacement and reducing the circulating current, but it requires precise control methods that depend on identification of the system parameters. Another consequence of unequal voltages is that the system requires higher operating frequencies to assure the ZVS condition, due to the large phase difference between the two bridges. In this condition the waveforms are far from sinusoidal, and achieving ZVS is more difficult for the lower-voltage active bridge. Figure 4b is an ideal condition for DAB-SRCs: the two bridges share an equal phase displacement, and ZVS is obtained intrinsically simply by operating above the natural frequency of the system. This condition, E1 ≈ E2, is proposed for high-power applications, where power and frequency control methods can be applied simply. Moreover, in this condition the operating frequency can be selected near the natural frequency to obtain almost purely sinusoidal waveforms and reduce power losses in the HFT. Regardless of the three possible conditions and their corresponding applications, previous power- and frequency-tuning approaches required complex control methods that depend mainly on the parameters of the DAB-SRC [7]. These control methods are sensitive to possible system tolerances. Moreover, systems based on PLLs exhibit transients before ZVS is reached, which imposes extra EMI in transient conditions and is not recommended for high-power applications. In the next section, the proposed tuning loop is devised based on the self-oscillating methods presented in [17], and the main formulation of the system is derived for the DAB-SRC. The new tuning method can simply assure the ZVS condition regardless of system tolerances. Moreover, the tuning loop has fast dynamics at start-up and during power-direction changes.
In the self-tuning method, the state of the switches is reversed at the zero crossings of the phase shifters' output signals. In the positive half cycles of the leading phase shifter output, Sa2 and Sa3 are turned on, and in the negative half cycles Sa1 and Sa4 are turned on. For the EV side, Sb1 and Sb4 are turned on in the negative half cycles and Sb2 and Sb3 are turned on in the positive half cycles of the output signal of the lagging phase shifter. Hence, considering the time constants of the two phase shifters, the phase displacements of the two active bridges are given by (12) and (13), where CT1 and RT1 are the tuning capacitor and resistor of the leading phase shifter; RT1 is a variable resistor which can be implemented with a digital potentiometer connected to the MCU. RT2 and CT2 are the tuning resistor and capacitor of the lagging phase shifter, which are constant and should be designed to produce the minimum phase displacement that assures ZVS at the predefined light-load condition, Io min or Pmin. Regarding (10) and (13b), τ2 is derived by (14b), where ωmax is the predefined maximum switching frequency of the DAB-SRC at the light-load condition. Unlike previous research on self-tuning methods, which is based on one active bridge, here the phase displacements of both active bridges must be satisfied. Hence, to derive a relationship between the time constants of the proposed tuning loop and the angular switching frequency of the converter, Equation (7) is combined with the phase-displacement expressions, yielding (15). For conditions where E1 ≈ E2, as described in Figure 4b, Equation (15b) simplifies to (16), which shows that the angular switching frequency equals the reciprocal of the geometric mean of the two time constants. Hence, the minimum and maximum values of τ1 with respect to the predefined minimum, ωmin, and maximum, ωmax, operating frequencies of the converter can be derived as in (17). The phase displacements can be derived by (18) with respect to the time constants. It is worth noting that, regarding (18), the converter intrinsically reduces the required phase displacement for high-power conditions. Equation (19) derives the time displacement, tδ, between the two bridges and shows that for ωmin < ωs < ωmax the time displacement is approximately constant, which is essential for high-power loads, where a better power factor is needed. Regarding (5) and (17), the transferred power between the two bridges is expressed in (20) in terms of the time constants τ1 and τ2. Regarding (2), (16) and (18), the resonant inductor and capacitor can be designed according to the maximum power and ωmin. Comparing Equations (5) and (20), it is concluded that the new tuning method yields a simplified equation for power regulation, which not only depends on ωs or the variable τ1 but also intrinsically applies the required phase displacement at light load and gradually decreases the phase displacement for high-power conditions, as described (these relations are summarized in the sketch below). Simulation Results and Improved Feedback Circuit In this section, the DAB-SRC is tuned based on the proposed self-oscillating power and frequency control loop. In the following simulations, the main parameters of the power converter are Lr = 100 µH, Cr = 100 nF, Vin = 200 V, VBatt = 200 V, τ2 = 1 µs, and τ1 varies between 2 µs and 10 µs depending on the desired output power. The DC-links of the two active bridges are compensated by LC filters with a filter inductor of 100 µH and a capacitor of 200 µF.
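Before examining the simulations, the tuning-loop relations described above can be summarized as follows. Since the corresponding display equations are not shown here, these forms only restate what the surrounding text implies and are consistent with the numerical examples quoted later (e.g., τ1 = 2 µs and τ2 = 1 µs giving about 112.5 kHz from (16)); the arctangent form of the phase displacements is an assumption based on first-order RC phase shifters.

```latex
% Assumed first-order phase-shifter displacements (\tau_1 = R_{T1}C_{T1}, \ \tau_2 = R_{T2}C_{T2})
\delta_1 \approx \arctan(\omega_s\,\tau_1), \qquad
\delta_2 \approx \arctan(\omega_s\,\tau_2)

% Switching frequency for E_1 \approx E_2 (cf. Eq. (16)): reciprocal geometric mean
\omega_s \approx \frac{1}{\sqrt{\tau_1\,\tau_2}}

% Bounds on the variable time constant that follow from (16) (cf. Eq. (17))
\tau_1^{\min} = \frac{1}{\omega_{\max}^{2}\,\tau_2}, \qquad
\tau_1^{\max} = \frac{1}{\omega_{\min}^{2}\,\tau_2}
```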
For simplicity, the power switches and HFT are considered as ideal elements and the maximum and minimum operating frequencies are considered 85.5 kHz and 53.5 kHz, respectively. The maximum and minimum power for equal 200 V level for the two active bridges are designed as 4.5 kW and 700 W, respectively. Figure 5 shows the output voltages of the active bridges at start-up, while the tuning loop is set at the lowest output power, the worst case for start-up and achieving ZVS condition. In this simulation, Sb2 and Sb3 are switched on for about one cycle of the resonant current, about 20 µs, for charging-up of the series resonant tank, as can be seen from Figure 5. The time constants of the tuning loop are considered as τ2 = 1 µs and τ1 = 2 µs and the converter is considered in forward power flow, meaning that the switching states of the charger bridge is determined by zero crossings of the lead phase shifter. As can be seen from Figure 5, the converter properly starts without current stresses according to its nominal peak current rating for io, which is about 40 A, moreover, ZVS is achieved at the beginning of the cycles. Figure 6 shows the output voltages of the active bridges and the resonant tank current at steadystate condition and forward power direction for τ2 = 1 µs and τ1 = 2 µs. It can be seen that ZVS is achieved for the two bridges with minimum phase displacements or circulating currents. In condition of Figure 6, the transferred power from the right side bridge to the EV side is almost 700 W with operating frequency of about 87.5 kHz. Regarding Equation (16) and τ2 = 1 µs and τ1 = 2 µs, the operating frequency is derived about 112.5 kHz. The difference between analysis and simulation result is caused by the non-sinusoidal behavior of io at light load conditions. To improve this, a new feedback circuit is devised at the end of this section. For the maximum power, the converter is simulated for τ2 = 1 µs and τ1 = 10 µs and the power direction is considered forward. The output voltages and io are shown in Figure 7 which shows that the converter properly operates with the proposed minimum phase displacement at the high power condition. In Figure 7, the transferred power is about 4.5 kW with operating frequency of 53.4 kHz, which is approximately in fair agreement with Equations (16) and (20). To show the effectiveness of the proposed tuning loop, the unequal DC-link voltages condition is simulated for Vin = 200 V and VBatt = 180 V. In the following simulation, the maximum output power is achieved for τ1 ≈ 4.7 µs while the operating frequency is about 54.5 kHz, as shown in Figure 8. As can be seen, in this simulation, minimum phase displacement required for ZVS is achieved which shows the proper performance of the proposed tuning loop under unequal DC-link voltages. Regarding simulations, for VBatt < Vin, the required time constant to achieve maximum output power occurs between τ1 max and τ1 min derived by (17). Regarding Figure 6 and Figure 7, tδ is approximately constant from low power up to high power conditions. Moreover, regarding simulation results, Equation (19) should be modified to Equation (21). Moreover, the DC-link of charger sides is connected to an approximately constant DC bus, as shown in Figure 1. Hence, a well-design HFT can be used to achieve VBatt ≤ Vin for different conditions and state of charges of batteries. As a result, the tuning loop time constants and DAB-SRC parameters can be designed according to the analysis presented in section 3. 
For large difference in DC-link voltages, the two phase shifters must be considered variable to transfer power in both directions from light load condition up to full load condition. To show the performance of the proposed tuning loop under direction changing of the power flow, a simulation has been undertaken as shown in Figure 9. In this simulation at t = 20 ms, the phase shifter of the charger bridge is changed to the fixed lagging phase shifter while the phase shifter of the EV bridge is changed to the variable leading phase shifter. Figure 9, shows that the tuning loop properly changes the direction of the power flow under full load condition. Figure 10 To improve the accuracy of the proposed tuning loop, a new feedback circuit for the power generator bridge is devised, as shown in Figure 11. In this circuit, output of the CT is directly connected to a capacitive load, Co, in comparison with conventional CT circuits which utilizes a resistive load. Hence, regarding Figure 11 and effect of Co, the output signal of the CT is 90 0 lead with respect to io. Using a variable lagging phase shifter with the same RT1 and CT1, δ1 can be derived by (22) which is similar to Equation (12). Therefore, without any change in the previous analysis, Co and the variable low pass filter, effectively suppress the high order harmonics. Figure 12 shows dependency of the converter's operating frequency against RT1 using (16) beside the simulation results, based on the new feedback circuit for the charger bridge, i.e., the forward power direction. In this figure, the operating frequency is derived for different tuning resistors, RT1, for the variable lagging phase shifter and in a wide range of operating frequencies, i.e. 51 kHz up to 85 kHz. Regarding Figure 12, there is only a fixed offset, fo ≈ 4.6 kHz, between the analysis and the simulation results. Hence, Equation (16) should be revised to the following equation: Figure 11. New feedback circuit for the power generator active bridge using a capacitor as the load of the current transformer (CT) and a variable lagging phase shifter. To validate the above modification, a simulation is undertaken using previous parameters for the DAB-SRC, as shown in Figure 13. In the following simulation, the new feedback circuit is used for the charger bridge with CT1 = 1 nF and RT1 = 5 kΩ. At the steady state, the operating frequency of the system is about 75.78 kHz, while using (22) the operating frequency is derived at about 75.77 kHz, which shows the accuracy of the tuning loop based on the new feedback circuit. Moreover, regarding Figure 13, the time displacement between, tδ, active bridges is about 1.59 µs which is compatible with Equation (21). For the fixed offset, fo, a sensitivity analysis based on more than 200 simulations with different resonant tank parameters and different τ1 and τ2 has been undertaken. Based on the analysis, fo is only related to the fixed time constant, i.e., τ2. The relationship between fo and τ2 is derived by the following which has maximum error 3% for all scenarios in the sensitivity analysis. Figure 13. Output voltage of the charger bridge, EV bridge and io under equal voltages condition with τ1 = 5 µs and τ2 = 1µs while the charger side is controlled based on the new feedback circuit. Experimental Results For simplicity, experimental results of the SRC are derived based on dual-active half-brides, connected together by resonant capacitor and inductor, as shown in Figure 14a. 
In this setup, the resonant inductor is about 15 µH and different resonant capacitors are used to assess the validity of the proposed tuning loop under different resonant frequencies. The power switches are IRFP260N power MOSFETs while the drivers are implemented by IR2104S bootstrap gate drivers. The MCU is ATMEGA16 with 8 MHz clock pulse and the zero detector circuits are constructed using LT1016 comparators with sensitivity of 5 mV and propagation delay of about 10 ns. It is worth noting that using 32-bit micro-controllers is essential for final fabrication and better dynamic responses. The CT turn ratio is 100 and a 100 Ω resistor is connected to the CT, i.e. 1ampere/volt gain. In this setup, one of the half bridge DC-link is directly connected to a battery bank with voltage of about 20 V. Moreover, for the power transmitter half-bridge, the DC-link voltage is set to 20 V. Figure 14b shows io and Trigger1 signal at start-up condition while τ1 ≈ 2.2 µs and τ2 ≈ 1 µs. Regarding this figure, the operating frequency is about 109.6 kHz which is derived from about 111.9 kHz using Equations (23) and (24). In this result the resonant capacitor is about 180 nF. Figure 14c shows the start-up condition for the same signals while τ1 ≈ 16 µs, τ2 ≈ 1 µs and the resonant capacitor is about 1200 nF. Regarding this figure, the operating frequency is about 44.4 kHz which is also compatible with Equations (23) and (24) with less than 3 % error. Figure 14d, shows the output current and voltage of the charger inverter at steady state, while ZVS is achieved with a minimum phase displacement with τ1 ≈ 3.6 µs, τ2 ≈ 1 µs and the resonant capacitor is about 400 nF. Operating frequency is about 89.6 kHz while using Equations (23) and (24), the operating frequency is about 88.5 kHz. Experimental results show the accuracy of the previous analysis and the new tuning loop can be considered as a simple and fast tuning loop for DAB-SRC. Conclusions The presented method of this paper improves the performance of bidirectional DAB-SRCs using a new self-tuning method. The new tuning method intrinsically creates a constant time delay between the two active bridges from light load up to full load while ZVS is achieved for all the switches. A simple, straightforward designing procedure has been presented for DAB-SRCs parameters based on the tuning loop variables. The new method proposes a simplified equation for power control while intrinsically making the minimum required phase displacement between the two active bridges. The tuning loop can instantly change the direction of power flow without any extra control circuit. The simulations and analysis show the validity of the proposed method for high power applications, for which uncertainty tolerance, fast response in frequency tracking and power flow direction changing are required. Experimental results show the accuracy of the previous analysis and the new tuning loop cab be considered as a simple and fast tuning loop for DAB-SRC.
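As a quick numerical cross-check of the revised tuning relation, Equations (23) and (24), against the values reported in the simulation and experimental sections above, the sketch below evaluates fs ≈ 1/(2π√(τ1τ2)) + fo with fo ≈ 4.6 kHz. Since the closed form of Equation (24) is not reproduced here, the fixed offset is an assumption that holds only for τ2 = 1 µs, as used in all the quoted cases.

```python
import math

F_OFFSET = 4.6e3  # Hz, offset reported for tau2 = 1 us (Eq. (24) itself not reproduced here)

def switching_frequency(tau1, tau2, f_offset=F_OFFSET):
    """Revised tuning relation, Eq. (23): f_s ~ 1/(2*pi*sqrt(tau1*tau2)) + f_o."""
    return 1.0 / (2.0 * math.pi * math.sqrt(tau1 * tau2)) + f_offset

# Cases quoted in the text: (tau1, tau2, reported frequency in Hz)
cases = [
    (5.0e-6, 1.0e-6, 75.77e3),   # simulation with the new feedback circuit
    (2.2e-6, 1.0e-6, 111.9e3),   # experimental start-up, Cr ~ 180 nF
    (16e-6,  1.0e-6, 44.4e3),    # experimental start-up, Cr ~ 1200 nF
    (3.6e-6, 1.0e-6, 88.5e3),    # experimental steady state, Cr ~ 400 nF
]
for tau1, tau2, f_reported in cases:
    f_calc = switching_frequency(tau1, tau2)
    print(f"tau1={tau1*1e6:.1f} us: calculated {f_calc/1e3:.1f} kHz "
          f"vs reported {f_reported/1e3:.2f} kHz")
```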
Modeling of Thermal Mass in a Small Commercial Building and Potential Improvement by Applying TABS With a resistor-capacitor model built in Matlab/Simulink, the role of envelope/interior thermal mass (eTM/iTM) in a small commercial building is investigated systematically. It concludes that light-weight concrete is a little worse than normal-weight concrete but much better than wood as eTM or iTM for controlling operative temperature variation in the building. In order to combine the advantages of radiant cooling/heating with the heat storage of massive building structure, an attractive technique called TABS (thermally activated building systems) is applied to the building to investigate the potential improvement. Simulations demonstrate that TABS can keep the operative temperature level around the comfort zone with small variations. As TABS is a low-temperature heating and high-temperature cooling technique, it suggests that natural energy gradient driven low-power equipment, such as cooling tower and rooftop solar thermal panels, can be used to achieve free cooling/heating combining Introduction Thermal mass [1] "has the ability to absorb and store heat energy during a warm period of heating and to release heat energy during a cool period later." Building thermal mass can be classified as exterior (envelope) thermal mass (eTM) and interior thermal mass (iTM) based on its location and function. Thermal mass is a powerful tool for controlling indoor temperature for thermal comfort. The green building engineer M. Schuler [2] noted, "Many of us have experienced that moment during a warmer summer day, upon entering an old church. One step into the space and the climate changes totally…allows us to feel a temperature even below the air temperature". Most of us have the same intuitive appreciation of thermal mass in making a building thermally comfortable. There is a large number of architectural and/or engineering publications on the use of thermal mass in building applications [3][4][5][6][7][8][9][10][11][12][13][14]. By analytical and numerical methods, two papers investigated the dynamic heat transfer performance of interior and exterior planar thermal mass [1,15] subject to sinusoidal heating and cooling. In these two papers, the indoor air temperature acted as an input function rather than an output result, which made it hard to predict the effect of thermal mass on the indoor air temperature. The resistor-capacitor (RC) model, which can solve the problem and was validated widely [16][17][18][19][20][21][22][23][24][25], will be used to systematically investigate the role of thermal mass in a small commercial building in this paper. Unlike residential buildings that are envelope (externally) load dominated type, commercial buildings are usually internally load dominated type, which uses the majority of energy for internal needs (such as lights, computers, equipment, etc.) leading to large internal heat generation [26]. This paper is organized as follows: the details of the small commercial building will be given in Section 2; Section 3 will present the modeling of the building in Matlab/Simulink, and the role of thermal mass in the building will be investigated in Section 4 using the built model; Section 5 will explore the potential improvement by applying TABS (thermally activated building systems) in the building; and the paper will be closed with main conclusions in Section 6. 
The Small Commercial Building The small commercial building, which is called the Suffolk County Health Center [27], also known as the Farmingville Health Center, is a one-story south-facing building located in Farmingville, Long Island, New York, United States. Its usages include environmental health, alcoholism and substance abuse, mental health, children's clinic, public health, and so on. The dimension of the health center is about 47.24 m × 30.48 m × 3.96 m. So the total area and volume are about 1440 m 2 and 5700 m 3 , respectively. A picture of the health center is shown in Figure 1 and the floor plan is shown in Figure 2. There are four exterior entrance doorways, all of which are made of aluminum and glass. In the middle of the south wall, there is a main entrance, which has three doors with a vestibule and three solid wood doors to the interior. Over the main entrance, there is a 2.800 m long overhang. The other three of the entrance ways have double doors with a vestibule and solid wood double doors to the interior. The health center has 71 identical 0.965 m × 1.829 m high efficiency windows, where 7, 24, 14 and 26 are east-, south-, west-and north-facing, respectively. The exterior walls surrounding the main entrance are made of light colored, precast concrete and marble. The rest of the exterior walls are light in color and made of precast concrete. All of the walls use foam insulation. The health center has an upper level roof and a lower level roof. The height difference of the two roofs is 0.152 m. The ceiling heights under the upper and lower roofs are 2.438 m and 2.591 m, respectively. The dimension of the upper level roof is 13.818 m × 7.620 m, which is only about 7% of the total roof area. Inside of the health center, there exist 36 staff offices, a small file storage area, a large conference room, a computer room, three small storage areas and a very small janitor's closet along the perimeter of the building. There is a big reception/waiting lobby near the center of the building. On the east of the lobby, there contains four offices, three inspection services rooms, a large computer room, a small storage room and a pantry. The section on the west of the lobby is a little bigger than the east one. It contains five offices, three examination rooms, a children's play room, three storage rooms, one large file storage room, a janitor's closet and six restrooms. It is also the location of the only staircase (to the186 m2 semiconditioned basement) in the building. There are 57 Suffolk County employees in the health center during regular working hours and the designed occupancy capacity is 155 persons (assumed 9.290 m 2 per person). The health center opens from 9:00 AM to 9:00 PM on Monday-Thursday and 9:00 AM to 5:00 PM on Fridays. Thermal Resistance of the Envelope The thermal resistance of the building envelope contains two parts: the building material resistances and the surface air film resistances. Building materials also have heat capacities, which are modeled as capacitors. The thermal network of the building envelope is in Figure 3. The thermal resistances of the building envelope can be calculated as follows. Windows: All of the 71 windows of the health center are "double-glazed, anodized, aluminum thermal break frame". From Table 5-16 in [28], the thermal resistance is 0.345 m 2 K/W. As the total area of the windows is 125.46 m 2 , the total thermal resistance due to the windows is: R window = 0.002750 K/W. 
Roofs: The structure of the upper and lower level roofs is the same: new roof membrane, adhesive, new 0.038 m thick insulation, existing built-up roofing, existing insulation and roof deck (both the insulation and the deck are 0.038 m; the deck was filled with lightweight concrete). The thermal resistance of the roof materials is 1.980 m²K/W. As the total area of the roofs is 1440 m², the total thermal resistance due to the roofs is: R roof = 0.001375 K/W. Doors: For the four entrances, there are nine operable swing doors made of aluminum and glass. The thermal resistance of this kind of door is 0.143 m²K/W [28]. Behind the glass doors, the thermal resistance due to the indoor air film is 0.141 m²K/W, calculated from Chapter 15 in [29]. Beyond the vestibules, there are nine solid wood doors, whose thermal resistance is 0.383 m²K/W [28]. On the wood door surfaces, the thermal resistance due to the air film is 0.121 m²K/W [29]. The dimension of the doors is 0.914 m × 2.134 m, and thus the total area of the doors is 17.56 m². Therefore, the total thermal resistance due to the doors is: R door = 0.044875 K/W. Exterior walls: Most of the exterior walls were constructed with 0.076 m thick precast concrete panels, insulated with foam and finished with plaster. The thermal resistance due to the wall materials is about 2.200 m²K/W. The walls adjacent to the main entrance were constructed with concrete and marble block, whose thermal resistance is a little higher; as their area is much smaller, the difference can be neglected. After subtracting the area of windows and doors, the total area of the walls is 472.93 m². Therefore, the total thermal resistance due to the walls is: R wall = 0.004652 K/W. Floor: The health center has a slab-on-ground floor, which was constructed with 0.102 m normal-weight concrete slabs directly on the ground. The bottom of the basement has a 0.051 m concrete slab. The wall below grade is 0.305 m concrete and has a 0.051 m insulated panel. The estimated thermal resistance for the below-grade wall is 2.997 m²K/W, which is much higher than that of the other parts of the envelope. Since the weather condition is moderate and the temperature of the ground is much more stable than that of the outdoor air, "for floors in direct contact with the ground, or over an underground basement that is neither ventilated nor conditioned, heat transfer may be neglected for cooling load estimates", as stated in Chapter 7 of [28]. Therefore, the floor and ground resistance is treated as infinitely large. Air films: The thermal resistances of the indoor and outdoor air films are 0.141 and 0.037 m²K/W, respectively. Therefore, the total thermal resistances due to the indoor and outdoor air films are R i = 0.000040 K/W and R o = 0.000018 K/W, respectively. (The per-component envelope resistances above can be reproduced with the short check sketched below.) Thermal Resistance Of Infiltration And Ventilation There are two classifications of air exchange between outdoor air and indoor air [28]: infiltration/exfiltration and ventilation. The difference between the two is that, in the former, the air flow through cracks and other openings is unintentional and uncontrolled by humans, while in the latter, the outdoor air is intentionally introduced into the building. According to the condition of the health center, the thermal resistance is 0.992 m²K/W for infiltration and 0.594 m²K/W when ventilation is included.
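Each lumped resistance above is simply the per-unit-area resistance divided by the corresponding area, with the door layers summed in series per unit area. A minimal check using only the numbers quoted in the text:

```python
def lumped_resistance(r_per_area, area_m2):
    """Lumped thermal resistance in K/W from a per-area resistance in m^2*K/W."""
    return r_per_area / area_m2

# Windows: 0.345 m^2K/W over 125.46 m^2
print(round(lumped_resistance(0.345, 125.46), 6))        # ~0.002750 K/W
# Roofs: 1.980 m^2K/W over 1440 m^2
print(round(lumped_resistance(1.980, 1440.0), 6))        # ~0.001375 K/W
# Doors: glass door + indoor film + wood door + film in series, over 17.56 m^2
print(round(lumped_resistance(0.143 + 0.141 + 0.383 + 0.121, 17.56), 6))  # ~0.044875 K/W
# Exterior walls: 2.200 m^2K/W over 472.93 m^2
print(round(lumped_resistance(2.200, 472.93), 6))        # ~0.004652 K/W
```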
The value of 0.667 m 2 K/W is selected in the simulation, and the total thermal resistance due to the infiltration and ventilation is R vi = 0.000463 K/W, which is parallel to the building envelope thermal resistance. Radiation Solar Energy Input Through Windows One important part of building energy gains is the radiation solar energy input through windows. Using the method given in [30], a program is written in Matlab to calculate the solar irradiance, which is the solar intensity that is incident perpendicular (normal) to one unit area of the plane surface. The total solar irradiance incident upon a flat surface, I T [W/m 2 ], includes the direct beam component, the diffuse component and the reflected component. As the window type is double glazing with low-e coating, the shading coefficient, C sc , is 0.32~0.60 and choose 0.46. In most commercial buildings, besides using glazing, shading strategies are also used to control solar heat gain to minimize cooling requirements. In Chapter 15 of [29], many details about shading are given. It mentions that outdoor shading devices reduce solar heat gain more effectively than indoor devices, but indoor devices are easier to operate and adjust. It also points out that "fenestration products fully shaded from the outside reduce solar heat gain by as much as 80%". Assume that 20% solar energy can go into the building and this gives a coefficient C sd = 0.20. Therefore, on clear days (i.e., sunny and cloudless), the heat flux of solar energy gain, Solar energy input through windows of the health center is shown in Figure 4. Internal Heat Gains The internal heat gains, which are the heat energy emitted by occupants, lighting and equipment, contribute a significant amount of heat to the total sensible and latent heat gains in a commercial building. According to Table 1 in Chapter 18 of [29], the heat gain, including sensible and latent, from occupants is about 130 watts per person. The energy gains due to lighting and equipment are 0.75 and 1.00 W/ft 2 , respectively. The hour-by-hour internal heat gains on typical workdays are shown in Figure 5. Comparing with Figure 4, it is clear that the internal heat gains are about ten times more than the solar energy input through windows. This is why commercial and office buildings are called internally load dominated buildings. For this kind of buildings, cooling is a dominate concern. In contrast, for the so called envelope (externally) load dominated building (typical residential buildings), the heating requirement is often at least as important as cooling and in many cases (in cold climate) much more important. Interior Thermal Mass The iTM in the health center can be divided into thermal mass due to internal walls/doors and thermal mass due to office furniture. The main material composition of the internal walls in the health center is "one layer of 5/8″ gypsum wallboard on both sides". As the thermal-physical properties of gypsum board are similar to that of wood, [1] after ignoring the air and studs between the two gypsum wallboards, the internal walls can be treated as 0.032 m wood boards. From Figure 2 Building Modeling In Matlab/Simulink In this paper, the health center is modeled by the RC method in Matlab/Simulink as shown in Figure 6. In the figure, the units of the symbols are: C (J/K); T (K or °C); R (K/W); and q (W=J/s). This model has elements of the building envelope, the solar energy input, the internal heat gains, the iTM, the indoor air, and the outdoor air. 
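To make the RC approach concrete, here is a minimal two-node sketch in the spirit of the model just described: an indoor-air node coupled to an interior-thermal-mass node, driven by a sinusoidal outdoor temperature and a simple internal-gain schedule, integrated with the same 10 s time step. All numerical values are illustrative placeholders, not the calibrated parameters of the paper's Simulink model.

```python
import math

# Minimal single-zone RC sketch (illustrative placeholder values only).
DT = 10.0          # s, simulation time step (same step size as in the paper)
R_ENV = 0.0006     # K/W, lumped envelope + infiltration resistance (placeholder)
R_ITM = 0.00004    # K/W, air-to-iTM surface resistance (placeholder)
C_AIR = 7.0e6      # J/K, indoor air node heat capacity (placeholder)
C_ITM = 2.0e8      # J/K, interior thermal mass heat capacity (placeholder)

def t_out(t):
    """Sinusoidal outdoor temperature between roughly 14 and 23 C, 24 h period."""
    return 18.5 + 4.5 * math.sin(2.0 * math.pi * t / 86400.0)

def q_int(t):
    """Crude internal-gain schedule: high during opening hours, low otherwise."""
    hour = (t / 3600.0) % 24.0
    return 30e3 if 9.0 <= hour <= 21.0 else 3e3   # W

T_air, T_itm = 24.0, 24.0
for step in range(7 * 8640):                       # one week of 10 s steps
    t = step * DT
    q_env = (t_out(t) - T_air) / R_ENV             # W, heat flow through the envelope
    q_itm = (T_itm - T_air) / R_ITM                # W, heat flow from the iTM into the air
    T_air += DT * (q_env + q_itm + q_int(t)) / C_AIR
    T_itm -= DT * q_itm / C_ITM
print(f"indoor air temperature after one week: {T_air:.2f} C")
```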
Most of them were described in Section 2. The indoor air temperature, T in (K or °C) is allowed to "float". September 15 is selected for the calculation. The average high and low temperatures of the outdoor air in this month [31] are 14°C and 23 °C, respectively. The outdoor air temperature, T out (K or °C) is supposed as a sinusoidal function with a period of 24 hours. The time step of the simulation is 10 s. For the sake of simplicity for simulation, except the windows, the health center envelope in the model is made of four materials: wood (with low thermal conductivity and small volumetric heat capacity), normal-weight concrete (with high thermal conductivity and large volumetric heat capacity), light-weight concrete (with moderate thermal conductivity and moderate volumetric heat capacity), and insulation (with very low thermal conductivity and almost no heat capacity). The roofs, the exterior walls and the floor are insulated by the insulation from outside. Main properties of the materials are listed in Table 1. In Figure 7 (a), it is clear that because of higher thermal resistance values, the mean temperatures of the windows, the doors, the roofs, the exterior walls and the floor increase stepwise, and the temperature variations of them decrease stepwise. The temperature peaks of the windows and the doors are at 3:00 PM, which coincide with the peak of the internal heat gains. The temperature peak of the roofs lags several hours due to its large heat capacity. The temperature peaks of exterior walls and the floor even lag to the evening because of their large heat capacity and high thermal conductivity. In Figure 7 (b), the temperatures of the indoor air and the iTM almost coincide, because the iTM thickness is small and the solar energy input is much smaller than the internal heat gains. The indoor air temperature variation is about 3.68 °C, which may cause thermal discomfort for occupants in the health center. Combining all the temperatures in the health center, the operative temperature is in Figure 8. Operative temperature is the "uniform temperature of a radiantly black enclosure in which an occupant exchanges the same amount of heat by radiation plus convection as in the actual nonuniform environment". [28] Operative temperature is one of the most important parameter in the comfort zone chart specified in the ANSI/ASHRAE Standard 55. In Figure 8, the operative temperature varies between 24.23 °C and 27.60 °C. According to [32], in summer, the comfort range of operative temperature is 24.5 °C ~ 26.0 °C for a maximum 6% dissatisfied permissible rate and is 23.5 °C ~ 27.0 °C for a maximum 10% dissatisfied permissible rate. Therefore, the 3.37 °C variation is corresponding to nearly 10% dissatisfied permissible rate. Thermal Mass in the Health Center Although the thermal mass in the health center practically cannot be varied much, we can investigate the role of thermal mass easily with the model developed in the previous section. Some cases in this section may be not realistic in buildings and are only for reference purpose. Exterior Thermal Mass For the health center, the eTM is mainly from the 0.038 m lightweight concrete filled in the roof, the 0.076 m precast concrete panels in the exterior walls, and the 0.102 m normal-weight concrete slabs in the floor. Now let us replace all the concrete to normal-weight concrete, lightweight concrete or wood and vary their thickness. 
Meanwhile, the corresponding thermal resistances will be kept the same as the original ones by changing the thickness of the insulation. Simulation results are shown in Table 2. In the table, except the extreme case in the first column, the thickness is doubled from 0.025 m to 0.400 m. Cells are colored in green when the operative temperature variation is smaller than 2 °C; buildings are considered with good thermal comfort in this region. When the variation is bigger than 4 °C, cells are colored in red; such big variations are not accepted for occupants and extra heating or cooling is needed. Between them, cells are colored in yellow; people can accept the variation range, but buildings are not considered to be thermally comfortable. In Table 2, the operative temperature variation can be up to 12.57 °C in the extreme case with no eTM. When the amount of the eTM is small, the variation is large and in the red zone. However, when the concrete thickness is a little bigger, the variation reaches the yellow zone. By contrast, no matter how thick the wood eTM is, the variation is still in the red zone. There is no values in the last two cells, because when the wood thickness is 0.400 m, the insulation thickness becomes negative in the simulation. Unfortunately, there is no variation entering the green zone. From Table 2, we can conclude that as eTM for controlling the operative temperature variation, normalweight concrete is the best, wood is the worst and lightweight concrete is in between. Now let us investigate the role of the iTM in the health center. In order to show the iTM effect more clearly, the health center envelope is assumed to be built by insulation only, i.e., the heat capacities of the envelope is assumed to be zero. Originally, the iTM in the health center is considered as 0.028 m thick wood with a total area of 5952 m 2 (two sides). In this section, besides wood, the materials will be replaced to normal-weight concrete or light-weight concrete, the thickness will be varied from 0 to 0.400 m, the area (two sides) will be changed from 1488 m 2 to 5952 m 2 . Simulation results are listed in Table 3. The cells are colored following the rules in Table 2. Interior Thermal Mass From Table 3, we can find: for wood iTM, all of the cells are in red, which means that wood is really a bad iTM for controlling the operative temperature variation; for normal-weight concrete iTM, although most of the calls are in red, the variation can enter the yellow and green zones if the iTM amount and the thickness are big enough; for light-weight concrete iTM, the variation can only reach the yellow zone. In summary, light-weight concrete is a little worse than normal-weight concrete but much better than wood as iTM. Potential Improvement By Using TABS The calculations in Section 4 shows that a great amount of building thermal mass is necessary for controlling the operative temperature variations in a small range. However, besides the temperature range, we are also concern with the temperature level when considering the thermal comfort of a building. Until now, all simulations are under the assumption that the ambient temperature is neither too high nor too low. Therefore, the operative temperatures float around the comfort zone. However, beyond this assumption, extra heating or cooling is needed to control the temperature level. 
One attractive technique is the so-called TABS (thermally activated building systems), which "combines the advantages of radiant cooling [and/or heating] with the thermal storage of massive concrete ceilings [and/or floors]" [33]. In this section, TABS will be applied to the health center and the potential improvement under cooling and heating conditions will be investigated. Brief Background Of TABS The practice of thermally activating building mass was originally established by a Swiss engineer, Robert Meierhans, who published two important papers [33,34] in the 1990s. It later became known as thermally activated building systems (TABS). The TABS movement has since gathered great momentum in Europe, especially in Switzerland and Germany. In the past two decades, much research on TABS has been published [18,21,22,23,32,[35][36][37][38][39][40]. TABS is "a most attractive strain of radiant cooling" [23], but it is still not well known outside of Europe, including in North America [41]. This paper aims to take a small step towards the success of TABS worldwide. The key innovation of TABS is the replacement of the old air-based paradigm with a water-based paradigm. Water is 832 times denser than air; a 1 cm water pipe delivers the same amount of heat as an 18 cm air duct and is much easier to install in small spaces. Compared with air, water is a more space-efficient method of transferring heat and cold around a building, as well as a more energy-efficient one: heat loss from air ducts is eliminated; air infiltration energy loss is reduced; building elements can be used effectively as thermal mass; and so on. In TABS, water-carrying pipes are embedded in the building structure (concrete slabs of roofs, floors or walls). The whole slab (with its large thermal mass) is activated by the water pipes placed in it. As a result, a building's operative temperature remains within the thermal comfort zone in the daytime, and at night the water can be cooled by the cold ambient air using a cooling tower, or by off-peak power using a chiller. Notice that TABS does not sacrifice the thermal comfort of the building. In fact, this kind of thermal environment is more comfortable, as pointed out by Alexander et al. in A Pattern Language [42]: "It turns out that people are more comfortable when they receive radiant heat at a slightly higher temperature of the air around them. The two most primitive examples of this situation are: (1) outdoors, on a spring day when the air is not too hot but the sun is shining. (2) around an open fire, on a cool evening." Total Thermal Resistance Of TABS The total thermal resistance of the TABS is built from four components (the series form assumed here is sketched below): R z is the equivalent resistance in the z-direction (along the pipe), R w is the resistance caused by convection, R p is the resistance due to the cylindrical pipe wall, and R x is the resistance due to the inserted pipe. A program is written in Matlab to calculate the total thermal resistance of TABS. Applying TABS to the Health Center Now let us apply TABS to the roofs and floor of the health center. The 0.038 m lightweight concrete of the roof and the 0.102 m normal-weight concrete slabs of the floor are replaced with 0.250 m normal-weight concrete slabs. The water pipes are inserted at the middle of the slabs. The total heat capacity of the roof or floor slabs is 6.64 × 10^8 J/K. The mass of the water in the pipes is about 1221 kg, and thus the heat capacity of the water in the roof or floor is 0.05 × 10^8 J/K, which is much smaller than that of the slabs.
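The display equation for the TABS resistance is not shown above; in the standard TABS resistance model the four components named in the previous subsection combine in series, which is what is assumed here:

```latex
% Assumed series combination of the TABS resistance components
R_t = R_z + R_w + R_p + R_x
```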
The TABS total thermal resistance R t is 0.9191 m 2 K/W with the reference surface 1166 m 2 . Suppose the supply-water temperature of the TABS is 22°C, in September the operative temperature of the health center is shown in Figure 9. In the figure, the operative temperature varies between 23.20 °C and 25.84 °C. Compared to Figure 8, the temperature variation decreases 21.7% thanks to the large heat capacity of the TABS concrete, and the temperature level drops 1.4 °C because of the cooling from the TABS. In the hottest month July, the average high and low temperatures of the outdoor air are 19 °C and 28 °C, respectively. In the coldest month January, they are -5 °C and 3°C, respectively. The operative temperatures of the health center with/without TABS in these two months are shown in Figure 10. In July, the TABS supply-water temperature is set to be 18 °C, and in January it is set to be 53 °C. In the figure, the operative temperature variations with TABS are nearly 1 °C smaller than that without TABS; the operative temperature level without TABS increases about 5 °C in July and drops nearly 18 °C in January. The 18 °C TABS supply-water temperature for cooling is obviously not low (which is called high-temperature cooling). It is more important to notice that the 53 °C TABS supply-water temperature for heating actually is very low comparing with conventional heating equipment. This confirms the statement in [23]: "A common misconception is that radiant heating requires high temperature source, or radiant conditioning requires large temperature difference (or gradient). The fact is exactly the opposite…" For conventional heating equipment, the temperature set point is typically about 180 °F (82 °C). So the TABS heating is one of the so-called low-temperature heating techniques. The attraction of high-temperature cooling or lowtemperature heating techniques is that we can use low-power equipment driven by natural energy gradient, such as cooling tower and rooftop solar thermal panels. With low-power equipment, the free cooling or heating can be achieved while combining with solar photovoltaics, and thus it can save a great amount of precious electrical power. The possibility of using low-power equipment is not considered here and requires further future study. Conclusion Thermal mass has the ability to absorb and store heat energy during a warm period of heating and to release heat energy during a cool period later. Thermal mass is a powerful tool for controlling building temperatures. In order to investigate the role of exterior and interior thermal mass, a resistor-capacitor model of a small commercial building in the great New York City area is built in Matlab/Simulink. To explore the potential improvement, a TABS, which combines the advantages of radiant cooling and heating with the heat storage of massive building structure, is applied to the building. After more than 80 case studies under moderate ambient air temperature, we demonstrate that light-weight concrete is a little worse than normal-weight concrete but much better than wood as exterior or interior thermal mass for controlling the building operative temperature variation. In summer or winter when the ambient air temperature is not moderate, extra cooling or heating is necessary and we apply TABS to the building. It concludes that TABS can control both the range and the level of the building operative temperature for thermal comfort. 
It also shows that with TABS, the building can achieve high-temperature cooling and low-temperature heating. Therefore, we suggest using natural energy gradient driven low-power equipment combining solar photovoltaics in the building to reach free cooling and heating.
Audio Content based Geotagging in Multimedia In this paper we propose methods to extract geographically relevant information in a multimedia recording using only its audio component. Our method primarily is based on the fact that urban acoustic environment consists of a variety of sounds. Hence, location information can be inferred from the composition of sound events/classes present in the audio. More specifically, we adopt matrix factorization techniques to obtain semantic content of recording in terms of different sound classes. These semantic information are then combined to identify the location of recording. INTRODUCTION Extracting information from multimedia recordings has received lot of attention due to the growing multimedia content on the web. A particularly interesting problem is extraction of information related to geographical locations. The process of providing information about geographical identity is usually termed as Geotagging [11] and is gaining importance due its role in several applications. It is useful not only in location based services and recommender systems [1] [2] [12] but also in general cataloguing, organization, search and retrieval of multimedia content on the web. Location specific information also allows a user to put his/her multimedia content into a social context, since it is human nature to associate with geographical identity of any material. A nice survey on different aspects of geotagging in multimedia is provided in [11]. In all of the contexts mentioned above the central point is the availability of geographical information about the multimedia. Although, there are applications which allows users to add geographical information in their photos and videos, a larger portion of multimedia content on the web is without any geographical identity. In these cases geotags needs to be inferred from the multimedia content and the associated metadata. This problem of geotagging or location identification also features as the Placing Tasks in yearly MediaEval [13] tasks. The goal of Placing Tasks [6] in Me-diaEval is to develop systems which can predict places in videos based on different modalities of multimedia such as images, audio, text etc. An important aspect of location prediction systems is the granularity at which location needs to be predicted. The Placing Task recognizes a wide range of location hierarchy, starting from neighbourhoods and going upto continents. In this work we are particularly interested in obtaining city-level geographical tags which is clearly one of the most important level of location specification for any data. Most of the current works on geotagging focus on using visual/image component of multimedia and the associated text in the multimedia ( [19] [11] [18] [8] to cite a few). The audio component has been largely ignored and there is little work on predicting location based on audio content of the multimedia. However, authors in [5] argue that there are cases where audio content might be extremely helpful in identifying location. For example, speech based cues can aid in recognizing location. Moreover, factors such as urban soundscapes and locations acoustic environment can also help in location identification. A few works such as [10] [17] did tried to exploit audio cues for geotagging in videos. However, the approaches proposed have been simplistic relying mainly on basic acoustic features. 
The general schema is to either directly use basic acoustic features such as Mel-Cepstra Coefficient (MFCC) or to obtain audio-clip level features such GMM-Supervectors or Bag Of Audio Words histograms and then build classifiers on these features. In this work we show that geotagging using only audio component of multimedia can be done with reasonably good success rate. Our primary assertion is that the semantic content of an audio recording in terms of different sound events can help in predicting locations. We argue that soundtracks of different cities are composed of a set of sound events. If we can somehow capture the composition of audio in terms of these sound events then they can be used to train machine learning algorithms for geotagging purposes. We start with a set of base sound events or classes and then use methods based on matrix factorization to find the composition of soundtracks in terms of these sound events. Once the weights corresponding to each base sound class have been obtained, we build higher level feature using these weights which are further used to obtain kernels representations. The kernels corresponding to each basis sound are then combined to finally train Support Vector Machines for predicting location identification of the recording. The rest of the paper is organized as follows. In Section 2 we describe our proposed framework for audio based geotagging. In Section 3 we present our experiments and results. In Section 4 we discuss scalability of our proposed method and also give concluding remarks. AUDIO BASED GEOTAGGING Audio based geotagging in multimedia can be performed by exploiting audio content in several ways. One can possibly try to use automatic speech recognition (ASR) to exploit the speech information present in audio. For example, speech might contain words or sentences which uniquely identifies a place, I am near Eiffel Tower clearly gives away the location as Paris with high probability irrespective of presence or absence of any other cues. Other details such as language used, mention of landmarks etc. in speech can also help in audio based geotagging. However, in this work we take a more generic approach where we try to capture semantic content of audio through occurrence of different meaningful sound events and scenes in the recording. We argue that it should be possible to train machines to capture identity of a location by capturing the composition of audio recordings in terms of human recognizable sound events. This idea can be related to and is in fact backed by urban soundscapes works [3] [14]. Based on this idea of location identification through semantic content of audio, we try to answer two important questions. First how to mathematically capture the composition of audio recordings and Second how to use the information about semantic content of the recording for training classifiers which can predict identity of location. We provide our answers for each of these questions one by one. Let E = {E1, E2, ..EL} be the set of sound events which we want to capture in audio recordings. E1 to EL are different sound events or classes. We assume that each of these sound classes can be characterized by a basis matrix M l . For a given sound event E l the column vectors of its basis matrix M l essentially spans the space of sound event E l . Mathematically, this span is in space of some acoustic feature (e.g MFCC) used to characterize audio recordings and over which the basis matrices have been learned. How we obtain M l is discussed later. 
Any given soundtrack or audio recording is then decomposed with respect to a sound event E_l as X ≈ M_l W_l^T (Eq 1), where X is a d × n dimensional representation of the audio recording using acoustic features such as MFCC. For MFCC, this implies that each column of X is a d-dimensional vector of mel-frequency cepstral coefficients and n is the total number of frames in the audio recording. The sound basis matrices M_l are d × k dimensional, where k represents the number of basis vectors in M_l. In principle k can vary with each sound class; however, for the sake of convenience we assume it is the same for all E_l, l = 1 to L. The idea behind Eq 1 is to obtain W_l, which captures how the sound class E_l is present in the recording. The weight matrix W_l captures the presence of each sound event throughout the duration of the recording. Obtaining W_l for each l provides us with information about the structural composition of the audio in terms of the sound classes in E. Hence, these W_l can be used for differentiating locations. Now, the problem boils down to learning M_l for each E_l and then using it to compute W_l for any given recording. Obtaining M_l and W_l Let us assume that for a given sound class E_l we have a collection of N_l audio recordings belonging to class E_l only. We parametrize each of these recordings through some acoustic features. In this work we use MFCC features augmented by delta and acceleration coefficients (denoted by MFCA) as basic acoustic features. These acoustic features are represented by a d × n_i dimensional matrix X^i_El for the i-th recording; d is the dimensionality of the acoustic features and each column represents the features of one frame. The basic features of all recordings are collected into one large matrix X_El to get a large collective sample of acoustic features for sound class E_l. Clearly, X_El has d rows; let T be the number of columns in this matrix. To obtain the basis matrix M_l for E_l we employ matrix factorization techniques. More specifically, we use the Non-Negative Matrix Factorization (NMF)-like method proposed in [7]. [7] proposed two matrix factorization methods, named semi-NMF and convex-NMF, which are like NMF but do not require the data matrix to be non-negative. This is important in our case, since employing classical NMF [9] algorithms would require our basic acoustic features to be non-negative, which can be highly restrictive given the challenging task at hand. Of the two methods, semi-NMF and convex-NMF, semi-NMF yielded better results in our experiments, and hence due to space constraints we present the description and results for only semi-NMF in this paper. Semi-NMF considers the factorization of a matrix X_El as X_El ≈ M_l W^T. For the factorization, the number of basis vectors k in M_l is fixed to a value less than min(d, T). Semi-NMF does not impose any restriction on M_l, that is, its elements can have any sign. The weight matrix W, on the other hand, is restricted to be non-negative. The objective is to minimize ||X_El − M_l W^T||^2. Assuming that M_l and W have been initialized, [7] gave the following iterative update rules for the factorization: in each step of the iteration, M_l = X_El W (W^T W)^{-1} (Eq 2) and W_rs ← W_rs · sqrt( [(X_El^T M_l)^+_rs + (W (M_l^T M_l)^-)_rs] / [(X_El^T M_l)^-_rs + (W (M_l^T M_l)^+)_rs] ) (Eq 3). The process is iterated until the error drops below a certain tolerance. The + and − signs represent the positive and negative parts of a matrix, obtained as Z^+_rs = (|Z_rs| + Z_rs)/2 and Z^-_rs = (|Z_rs| − Z_rs)/2. Theoretical guarantees on the convergence of semi-NMF and other interesting properties, such as invariance with respect to scaling, can be found in the original paper.
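A minimal sketch of how these semi-NMF updates could be implemented is given below. It follows the multiplicative rules of [7] as reconstructed above; the k-means-based initialisation is one choice suggested in that reference, and scikit-learn plus the variable names are assumptions of this sketch rather than part of the original work.

```python
# Sketch of semi-NMF for learning a basis matrix M_l (d x k) from X_El (d x T).
import numpy as np
from sklearn.cluster import KMeans

def semi_nmf(X, k, n_iter=200, eps=1e-9):
    d, T = X.shape
    # initialise: cluster frames; W = soft cluster indicators, M from the M-update
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X.T)
    W = np.full((T, k), 0.2)
    W[np.arange(T), labels] += 1.0                      # T x k, non-negative
    for _ in range(n_iter):
        M = X @ W @ np.linalg.pinv(W.T @ W)             # M = X W (W^T W)^-1
        A = X.T @ M                                     # T x k
        B = M.T @ M                                     # k x k
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        W *= np.sqrt((Ap + W @ Bn) / (An + W @ Bp + eps))  # multiplicative W update
    return M, W

# M_l, _ = semi_nmf(X_El, k=40)    # k = 40 basis vectors, as in the experiments
```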
One interesting aspect of semi-NMF described by the authors is its analysis in terms of the K-means clustering algorithm. The objective function ||X − M W^T||^2 can be related to the K-means objective function, with M_l holding the k cluster centers. Hence, the basis matrix M_l also represents the centers of a group of clusters. We actually exploit this interpretation in the next phase of our approach. The initialization of M_l and W_l is done as per the procedure described in [7]. Once the M_l have been learned for each E_l, we can easily obtain W_l for any given audio recording X by fixing M_l and then applying Eq 3 to X for several iterations. For a given X, W_l contains information about E_l in X. With the K-means interpretation of semi-NMF, the non-negative weight matrix W_l can be interpreted as containing soft-assignment posteriors to each cluster for all frames in X. Discriminative Learning using W_l We treat the problem of location prediction as a retrieval problem where we want to retrieve the most relevant recordings belonging to a certain location (city). Put more formally, we train binary classifiers for each location to retrieve the most relevant recordings belonging to the concerned location. Let us assume that we are concerned with a particular city C and that the set S = {s_i, i = 1 to N} is the set of available training audio recordings. The labels of the recordings are represented by y_i ∈ {−1, 1}, with y_i = 1 if s_i belongs to C and y_i = −1 otherwise. X_i (d × n_i) denotes the MFCA representation of s_i. For each X_i, weight composition matrices W^l_i are obtained with respect to all sound events E_l in E. W^l_i captures the distribution of sound event E_l in X_i, and we propose 2 histogram-based representations to characterize this distribution. Direct characterization of W_l as posterior As we mentioned before, semi-NMF can be interpreted in terms of K-means clustering. For a given E_l, the learned basis matrix M_l can be interpreted as a matrix containing cluster centers. The weight matrix W^l_i (n_i × k) obtained for X_i using M_l can then be interpreted as posterior probabilities for each frame in X_i with respect to the cluster centers in M_l. Hence, we first normalize each row of W^l_i to sum to 1, to convert them into a probability space. Then, we obtain a k-dimensional histogram representation for X_i corresponding to M_l as h^l_i = (1/n_i) Σ_{t=1}^{n_i} ŵ_t (Eq 4), where ŵ_t is the t-th row of the row-normalized W^l_i. This is done for all M_l, and hence for each training recording we obtain a total of L k-dimensional histograms, represented by h^l_i. GMM based characterization of W_l We also propose another way of capturing the distribution in W_l, where we actually fit a mixture model to it. For a given sound class E_l, we first collect W^l_i for all X_i in the training data. We then train a Gaussian Mixture Model G^l on the accumulated weight vectors. Let this GMM be G^l = {λ^l_g, N(μ^l_g, Σ^l_g), g = 1 to G_l}, where λ^l_g, μ^l_g and Σ^l_g are the mixture weight, mean and covariance parameters of the g-th Gaussian in G^l. Once G^l has been obtained, for any W^l_i we compute the probabilistic posterior assignment of the weight vectors w_t in W^l_i according to Eq 5 (Pr(g|w_t)); the w_t are again the rows of W^l_i. These soft assignments are added over all t to obtain the total mass of weight vectors belonging to the g-th Gaussian (P(g)^l_i, Eq 5). Normalization by n_i is done to remove the effect of the duration of the recordings. Pr(g|w_t) = λ^l_g N(w_t; μ^l_g, Σ^l_g) / Σ_{p=1}^{G_l} λ^l_p N(w_t; μ^l_p, Σ^l_p); P(g)^l_i = (1/n_i) Σ_{t=1}^{n_i} Pr(g|w_t) (Eq 5). The vector v^l_i = [P(1)^l_i, ..., P(G_l)^l_i] is a G_l-dimensional feature representation for a given recording X_i with respect to E_l.
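The two recording-level features can be sketched as follows, assuming the reconstructed Eq 4 and Eq 5 above. Scikit-learn's GaussianMixture stands in for the GMM G^l, and the function and variable names are illustrative rather than taken from the paper.

```python
# Sketch of (i) inferring W for a new recording with M_l fixed (W-update of Eq 3 only),
# (ii) the k-dim histogram h_li (Eq 4), and (iii) the GMM soft-assignment feature v_li.
import numpy as np
from sklearn.mixture import GaussianMixture

def infer_W(X, M, n_iter=50, eps=1e-9):
    """W for a new recording X (d x n_i), keeping the learned basis M fixed."""
    n, k = X.shape[1], M.shape[1]
    W = np.random.rand(n, k)
    A, B = X.T @ M, M.T @ M
    Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2
    Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
    for _ in range(n_iter):
        W *= np.sqrt((Ap + W @ Bn) / (An + W @ Bp + eps))
    return W

def h_feature(W):
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)   # rows as cluster posteriors
    return P.mean(axis=0)                            # k-dimensional histogram (Eq 4)

def v_feature(W, gmm: GaussianMixture):
    post = gmm.predict_proba(W)                      # n_i x G_l posteriors (Eq 5)
    return post.mean(axis=0)                         # G_l-dimensional soft histogram

# gmm_l = GaussianMixture(n_components=64).fit(np.vstack(train_W_list))  # hypothetical
```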
The whole process is done for all E_l to obtain L different soft-assignment histograms for a given X_i. Fusing All Information The h^l_i or v^l_i features capture sound event information for any X_i. We then use kernel fusion methods in a Support Vector Machine (SVM) to finally train classifiers for geotagging purposes. We explain our strategy here in terms of h^l_i; for v^l_i the same steps are followed. For each l, we obtain a separate kernel representation K^l using h^l_i for all X_i. Since exponential χ2 kernel SVMs are known to work well with histogram representations [20] [4], we use kernels of the form K^l(i, j) = exp(−D(h^l_i, h^l_j)/γ), where D(h^l_i, h^l_j) is the χ2 distance between h^l_i and h^l_j, and γ is set to the average of all pairwise distances. Once we have all K^l, we use two simple kernel fusion methods: average kernel fusion, where K^h_S = (1/L) Σ_{l=1}^{L} K^l(:, :), and product kernel fusion, where K^h_P = ∏_{l=1}^{L} K^l(:, :) (element-wise product). The final kernel representation K^h_S or K^h_P is used to train the SVM for prediction.
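A sketch of this kernel construction and fusion step is given below, using the χ2 distance and the two fusion rules described above. The precomputed-kernel SVM, the γ convention and all names are assumptions of this sketch.

```python
# Sketch: per-class exponential chi-squared kernels on h_l features, combined by
# averaging or element-wise product, then a precomputed-kernel SVM for one city.
import numpy as np
from sklearn.svm import SVC

def chi2_dist(H):                                   # H: N x k histogram features
    num = (H[:, None, :] - H[None, :, :]) ** 2
    den = H[:, None, :] + H[None, :, :] + 1e-12
    return 0.5 * (num / den).sum(axis=2)

def exp_chi2_kernel(H):
    D = chi2_dist(H)
    gamma = D[np.triu_indices_from(D, k=1)].mean()  # average pairwise distance
    return np.exp(-D / gamma)

def fuse(kernels, how="product"):
    K = np.ones_like(kernels[0]) if how == "product" else np.zeros_like(kernels[0])
    for Kl in kernels:
        K = K * Kl if how == "product" else K + Kl / len(kernels)
    return K

# K_list = [exp_chi2_kernel(np.stack([f[l] for f in train_feats])) for l in range(L)]
# svm = SVC(kernel="precomputed", C=1.0).fit(fuse(K_list), y_train)  # C via 5-fold CV
```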
EXPERIMENTS AND RESULTS We evaluate our proposed method on a subset of videos from the MediaEval Placing Task [10]. The dataset contains a total of 1079 Flickr videos, with 540 videos in the training set and 539 in the testing set. We work with only the audio of each video, and we will alternatively refer to these videos as audio recordings. The maximum duration of the recordings is 90 seconds. The videos belong to 18 different cities, with several cities having very few examples in the training as well as the testing set, such as just 3 for Bangkok or 5 for Beijing. We selected 10 cities for evaluation for which the training as well as the test set contains at least 11 examples. These 10 cities are Berlin (B), Chicago (C), London (L), Los Angeles (LA), Paris (P), Rio (R), San Francisco (SF), Seoul (SE), Sydney (SY) and Tokyo (T). As stated before, the basic acoustic features used are MFCC features augmented by delta and acceleration coefficients. 20-dimensional MFCCs are extracted for each audio recording over a window of 30 ms with 50% overlap. Hence, the basic acoustic features for audio recordings are 60-dimensional and referred to as MFCA features. For our proposed method we need a set of sound classes E. Studies on urban soundscapes have tried to categorize urban acoustic environments [3] [14] [16]. [15] came up with a refined taxonomy of urban sounds and also created a dataset, UrbanSounds8k, for urban sound events. This dataset contains 8732 audio recordings spread over 10 different sound events from the urban sound taxonomy. These sound events are car horn, children playing, dog barking, air conditioner noise, drilling, engine idling, gun shot, jackhammer, siren and street music. We use these 10 sound classes as our set E and then obtain the basis matrices M_l for each E_l using the examples of these sound events provided in the UrbanSounds8k dataset. The number of basis vectors is the same for all M_l and is fixed to either 20 or 40; we present results for both cases. Finally, in the classifier training stage, SVMs are trained using the fused kernel K^h_S (or K^h_P, or K^v_S, or K^v_P) as described in Section 2.2.3. Here the slack parameter C in the SVM formulation is set by performing 5-fold cross-validation over the training set. We compare our proposed method with a bag of audio words (BoAW) representation built over MFCA acoustic features. This method builds bag-of-words features directly over basic acoustic features and is known to work well. The first step in this method is to train a GMM G_bs with G_b components over MFCA features, where each Gaussian represents an audio word. Then, for each audio recording, clip-level histogram features are obtained using the GMM posteriors for each frame in the clip. The computation is similar to Eq 5, except that the process is done over MFCA features. We will use b to denote these G_b-dimensional bag of audio words features. Exponential χ2 kernel SVMs are again used to train the classifiers. In this case, the parameter γ in the kernel, along with C, is optimized by cross-validation over the training set. As stated before, we formulate the geotagging problem as a retrieval problem where the goal is to retrieve the most relevant audio recordings for a city. We use Average Precision (AP) to measure performance for each city and Mean Average Precision (MAP) over all cities as the overall metric. Due to space constraints we are not able to show AP results in every case and will only present the overall MAP metric. Table 1 shows MAP results for the bag of audio words method (top 2 rows) and our proposed method (bottom 3 rows) using the h^l features described in Section 2.2.1. For the baseline method we experimented with 4 different component sizes G_b for the GMM G_bs. k represents the number of basis vectors in each M_l. K^h_S represents average kernel fusion and K^h_P product kernel fusion. First, we observe that our proposed method outperforms the bag of audio words plus χ2 kernel SVM by a significant margin. For BoAW, G_b = 256 gives the highest MAP of 0.478, but MAP saturates with increasing G_b, and hence any significant improvement in MAP by further increasing G_b is not expected. Our proposed method with k = 40 and product kernel fusion gives 0.563 MAP, an absolute improvement of 8.5% in MAP when compared to BoAW with G_b = 256. The MAP in the other cases of our proposed method is also better than the best MAP for BoAW. We also note that for h^l features, product kernel fusion of the different sound class kernels performs better than average kernel fusion. Also, for h^l, k = 40 is better than k = 20. Table 2 shows results for our v^l features from Section 2.2.2, which use the GMM-based characterization of the composition matrices W^l. We experimented with 4 different values of the GMM component size G_l. Once again we observe that overall this framework works better than BoAW with the exponential χ2 kernel. We can also observe that if we fix the GMM size to be the same for BoAW and v^l, that is G_l = G_b = G, then v^l outperforms BoAW in all cases, by up to 9% in absolute terms for k = 20 and G = 32. This shows that the composition matrices W^l are actually capturing semantic information from the audio, and this semantic information, when combined, helps in location identification. If we compare the v^l and h^l methods, then overall h^l seems to give better results. This is worth noting since it suggests that the W^l on their own are extremely meaningful and sufficient. Another interesting observation is that for v^l, average kernel fusion is better than product kernel fusion. Figure 1 shows city-wise results for the three methods. For each method the AP shown corresponds to the case of best MAP. This implies that the GMM component size in both BoAW and v^l is 256, that is G_l = G_b = 256, and that for h^l, k = 40 with product kernel fusion is used. DISCUSSIONS AND CONCLUSIONS We presented methods for geotagging using the audio content of multimedia recordings. We proposed that if the semantic content of the audio can be captured in terms of the different sound events which occur in our environment, then this semantic information can be used for location identification purposes.
It is expected that the larger the number of sound classes in E, the more distinguishing elements we can obtain, and the better it is for geotagging. Hence, it is desirable that any framework working under this idea be scalable in terms of the number of sound classes in E. In our proposed framework the processes of learning the basis matrices M_l are independent of each other and can easily be parallelized. Similarly, the composition weight matrices W^l_i can be computed in parallel for each E_l, as can the features h^l_i (or v^l_i) and the kernel matrices. One can also easily add any new sound class to an existing system if required. Hence, our method can easily be scaled in terms of the number of sound events in E. Even with 10 sound events from the urban sound taxonomy we obtained reasonably good performance. Our proposed framework outperformed the well-known bag of audio words method by a significant margin. Currently, we used simple kernel fusion methods to combine the event-specific kernels. One can potentially use established methods such as multiple kernel learning at this step. This might lead to further improvement in the results. One can also look into other methods for obtaining basis matrices for sound events. A more comprehensive analysis on a larger dataset with a larger number of cities could throw more light on the effectiveness of the proposed method. However, this work does give sufficient evidence for the success of audio-content-based geotagging in multimedia.
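As one possible illustration of the parallelisation argument above, the per-class basis learning could be distributed with joblib (an assumed tool, not mentioned in the paper), reusing the earlier sketches; the same pattern applies to the weight matrices and kernels.

```python
# Sketch: learn the basis matrices M_l for all sound classes in parallel, since the
# per-class computations are independent. semi_nmf and class_feature_matrix are the
# hypothetical helpers sketched earlier; joblib is one convenient choice.
from joblib import Parallel, delayed

def _learn_one(paths, k):
    return semi_nmf(class_feature_matrix(paths), k)[0]

def learn_all_bases(class_to_paths, k=40, n_jobs=-1):
    names = list(class_to_paths)
    bases = Parallel(n_jobs=n_jobs)(
        delayed(_learn_one)(class_to_paths[c], k) for c in names
    )
    return dict(zip(names, bases))
```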
Evaluating the Effects Related to Restocking and Stock Replenishment of Penaeus penicillatus in the Xiamen Bay, China : The quantitative evaluation of restocking and stock replenishment is essential for providing operational feedback and implementing adaptive management for future restoration projects. Since 2010, approximately 700 million juvenile shrimp (Penaeus penicillatus) have been released into Xiamen Bay, Fujian Province, China, each year, through stock replenishment programs. The recruited shrimp were sampled through three-year bottom trawl surveys from 2014 to 2017. The biological characteristics and the catch equation were used to evaluate the effect of restocking and stock replenishment. The analysis uses the FAO-ICLARM Stock Assessment Tool (FiSAT II) program. In general, there are two sources of recruitment—one from the spawning brood stock and the other from the released juvenile shrimp. We constructed an evaluation model based on Baranov's catch equation to separate the initial recruitment volume using the survey data. The relationship between body weight and total length was W = 1.638 TL^2.9307. There is no statistically significant difference between males and females. The von Bertalanffy growth parameters derived for the prawns, using FiSAT II, were L∞ = 209.6 mm and K = 0.51 per year. In spring 2014, the initial resource amount was 49,200, while the ratio of effective recruitment to parent amount was 3.92. The survival rate of the released shrimp larvae, 1.88‱, seems to be very unsatisfactory. The resource amount in summer and autumn is higher than in winter and spring. Obviously, the restocking effect is low and the programs need to be improved. To improve the restocking effect, the replenishment performance should be adjusted to reduce the mortality rate and increase its release effectiveness. Therefore, corresponding implementations are recommended, including standard extensive culture, reduction in stress during transportation, and temporary culture. Introduction In many offshore and pelagic fisheries in the world, nearly 90% of marine fish resources have been fully exploited, overexploited, or depleted, and are affected by the degradation of aquatic habitats [1]. Global capture fisheries production has stagnated, while seafood demand has steadily increased [1,2]. However, for some coastal fisheries, restocking, stock replenishment, and sea ranching via the application of aquaculture technology are also expected to help to restore the lost production and possibly increase harvests beyond the original level [3]. The improvement of fisheries based on aquaculture is a set of management methods that involve the release of cultured organisms to enhance, protect, or restore fisheries. To manage shrimp fisheries resources and sustainability in Xiamen waters, biological information and population assessments are needed. This research aims to analyze the biological aspects of the redtail prawn (P. penicillatus) in Xiamen waters, consisting of the sex ratio, length-weight analysis, growth parameters, level of exploitation (natural mortality rate, fishing mortality rate, and total mortality rate), and an evaluation of stock replenishment and restocking. We have particularly constructed an assessment model to evaluate the effect based on Baranov's catch equation, which can separate the amount of initial recruitment using survey data from 2014 to 2017. The aim of this research, which analyzes the biological characteristics and restocking effects of P. penicillatus in Xiamen Bay, is to provide an idea of the effect of fisheries replenishment and restocking, to provide suggestions on how to improve the effects, and to serve as a reference for formulating redtail prawn management policies and the conservation of fisheries resources. Study Area The study area is in Xiamen Bay (117°50′–118°20′ E and 24°14′–24°42′ N) and has a total marine area of 1281 km² (Figure 1).
It is a semi-enclosed bay, and the whole area includes the eastern areas (including Tong'an Bay, the Dadeng-Xiaodeng region, and Weitou Bay), with a sandy substrate, and the western areas (including the Western Harbor and the Jiulong River Estuary), which are characterized by estuarine processes [25]. The water depth around the bay ranges from 6 to 25 m, and the bay has a deep-water coastal area of 30 km [26]. The bay contains many habitat types (including mangrove wetlands and sandy beaches) and diverse biological groups [27]. Xiamen Bay is an important economic region for Xiamen as well as for the whole of Fujian Province. There are numerous ports, transport infrastructure, shipbuilding, and petrochemical industries. Therefore, the area is also affected by intensive development activities, including shipping, aquaculture, reclamation, and tourism. These activities have led to several issues, including a decline in marine biodiversity, habitat loss, and water pollution [28]. The biotic survey was conducted at six sample stations (Figure 1).
These stations are briefly described as follows: XM01 includes the estuary of the Jiulong River, with larger freshwater exchange and mud sedimentation, where the seabed substrata are muddy because of the river's material deposition. XM02 and XM03 are located within the shallow waters facing the open sea, with a mixed mud-sand substrate and fluent water exchange. Compared to site XM02, site XM03 is better sheltered from monsoon wind waves. In addition, XM02 is at the front of the open zone with the greatest depth; it stands where the wave current is the strongest, but it also has the most drifted sand. Sites XM04, XM05, and XM06 are behind Kinmen Island and Xiaokinmen Island, especially site XM06, which is located at the mouth of Tong'an Bay, a semi-enclosed water environment. A lot of sludge is deposited in Tong'an Bay behind the bay mouth because of the backflow of sea water. Biological Sampling From May 2014 to February 2017, field surveys were conducted to explore the effect of stock replenishment and restocking of the P. penicillatus population in Xiamen Bay, China. A total of 6 sites (Figure 1) were set up for biological sampling. The "Minlongyu 62,678 Fishing Boat" in Longhai City was used, and twelve cruises were performed as seasonal sampling in February, May, August, and November. The vessel was a single-boat truss bottom trawler with a main engine power of 330 kilowatts. The height of the bottom trawl net mouth was 2.5 m, the length of the net was 24 m, the mesh of the bag net was 20 mm, the width of the truss was 27 m, and the average towing speed was 2-3 n mile/h. Surveys were conducted in accordance with the "Regulations for the Survey of Marine Fishery Resources (SC/T 9403-2012)". Towing took place approximately parallel to the coastline at a speed of 2.5 knots for 1 h. After lifting the net, all the catch was poured onto the deck. All unwanted debris, plants, and garbage were first removed from the catches; thereafter, the remainder was sorted into fish and shrimp categories. The shrimp and fish were put in marked plastic bags, and the samples were frozen for species identification and further analysis in a laboratory. The crustacean samples were thawed in the laboratory and rinsed, and species identification, counting, weighing (g), and the determination of biological parameters were performed; males and females were identified, and the total length (mm) and carapace length (mm) of all samples of P. penicillatus were measured. A total of 312 shrimps (172 females and 140 males) were collected for the present study. Biological Characteristics The length-weight relationships were determined according to the allometric equation given by Sparre and Venema [29], W = aL^b, where W = weight (g), L = total length (mm), a is the proportionality constant, and b is the power exponent or slope, with b = 3 indicating isometric growth. The weight-length relationships were also determined for each sex. The t test was used to analyze differences in the total lengths and weights of the sexes. Growth in length and weight was analyzed separately for each sex using the von Bertalanffy growth function (VBGF). Using the length-frequency data as input to the FiSAT II program, the asymptotic length (L∞) and the growth coefficient (K) were estimated. Data analysis was conducted with the most recent version of the FiSAT statistical software [30,31].
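For illustration, the two fits described above could be carried out as in the sketch below: the allometric length-weight relation W = a·TL^b via log-log least squares, a t test between sexes, and the von Bertalanffy growth function. The variable names, the Welch variant of the t test, and the t0 = 0 choice are assumptions made only for this sketch.

```python
# Sketch of the length-weight fit and the von Bertalanffy growth function (VBGF).
import numpy as np
from scipy import stats

def fit_length_weight(TL_mm, W_g):
    """Fit W = a * TL^b by linear regression of log(W) on log(TL)."""
    slope, intercept, r, p, se = stats.linregress(np.log(TL_mm), np.log(W_g))
    return np.exp(intercept), slope, r ** 2           # a, b, R^2

def vbgf_length(t_years, L_inf, K, t0=0.0):
    """L(t) = L_inf * (1 - exp(-K * (t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (np.asarray(t_years) - t0)))

# a, b, r2 = fit_length_weight(tl_all, w_all)                  # hypothetical arrays
# t, p = stats.ttest_ind(w_female, w_male, equal_var=False)    # sex difference test
# vbgf_length([0.5, 1.0], L_inf=209.6, K=0.51)                 # roughly 47 and 84 mm
```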
The instantaneous rate of natural mortality (M) was obtained using Pauly's empirical formula [32]; the annual average surface water temperature T was 21.6 °C in Xiamen Bay. The instantaneous rate of total mortality (Z) was found from the estimates of the growth parameters (K, L∞), using the length-converted catch curve method (FiSAT II software) [31]. The instantaneous rate of fishing mortality (F) was calculated by subtracting the estimate of M from Z, as follows: F = Z − M. The exploitation rate was calculated as follows: E = F/Z. Then, the total mortality (A) was obtained using the following equation: A = 1 − e^(−Z). The derived estimates were then compared with other studies to allow further assessment. Biomass Estimation A method for the assessment of stocking was developed and applied based on stock assessment theory and methods. Shrimp biomass was calculated using the swept area method [33]. The swept area (a, nm²), or 'effective path swept', for each tow was calculated as a = v × t × B, where a is the swept area of each tow, v is the towing speed, t is the duration of the tow, and B is the width of the path swept by the trawl (the width of the truss bar in this study is 27 m). Catch rates were calculated as the catch (C, kg) divided by the time spent trawling (t, h) and converted to catch per unit area (CPUA, kg/nm² = biomass b per unit area) by dividing by the swept area ((C/t)/(a/t) = C/a). The average abundance (N, kg), or total biomass, was calculated from N = [(1/n) Σ_{i=1}^{n} (c_i/a_i)] × A_r / q, where C/a is the CPUA of all tows (kg per square kilometre), a_i is the swept area of the i-th station, c_i is the catch at the i-th station, q is the proportion of the shrimps in the path of the tow that are captured (0.5 in this study), A_r is the overall area (154.18 square kilometres in this study), and n is the total number of tows, which is derived from the number of survey stations in all seasons. The survey frequency is quarterly in this study; therefore, the average abundance in a year is the average of the values from the four quarters.
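The mortality and swept-area bookkeeping above, together with the Baranov-type back-calculation used in the model section that follows, can be sketched as below. Note the assumption that Pauly's formula is applied with L∞ in millimetres, which reproduces the reported M ≈ 0.59 (the formula is conventionally stated with L∞ in centimetres); all other names and unit conversions are illustrative choices of this sketch.

```python
# Sketch of the mortality, swept-area and back-calculation steps described here.
import numpy as np

def pauly_M(L_inf, K, T):
    """Pauly's (1980) empirical natural mortality; L_inf taken in mm here."""
    log10M = -0.0066 - 0.279 * np.log10(L_inf) + 0.6543 * np.log10(K) + 0.4634 * np.log10(T)
    return 10 ** log10M

Z = 1.15                              # length-converted catch curve (FiSAT II)
M = pauly_M(209.6, 0.51, 21.6)        # ~0.59 per year
F = Z - M                             # fishing mortality, ~0.56 per year
E = F / Z                             # exploitation rate, ~0.485
A = 1 - np.exp(-Z)                    # annual death fraction, ~0.683

def swept_area_km2(v_knots, t_hours, B_m):
    """Swept area of one tow, a = v * t * B, converted to square kilometres."""
    return (v_knots * 1.852) * t_hours * (B_m / 1000.0)

def swept_area_abundance(catches, areas, Ar, q=0.5):
    """Mean CPUA scaled to the whole area: N = mean(c_i / a_i) * Ar / q."""
    cpua = np.asarray(catches) / np.asarray(areas)
    return cpua.mean() * Ar / q

def initial_abundance(N_bar, Z):
    """Generic Baranov back-calculation, N_bar = N0 * A / Z, hence N0 = N_bar * Z / A."""
    return N_bar * Z / (1 - np.exp(-Z))
```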
Model Building and Biomass Assessment Generally, the spawning season of P. penicillatus is between April and May in Xiamen Bay. To facilitate the calculation and the setting of conditions, we assume that all spawning is completed on 1 May (ts: time of spawning). The release time is generally 15 June; this time is usually around International Ocean Week (8 June). Therefore, we assume that P is the number of spawning shrimps in the natural sea area, te is the annual release time of juvenile shrimp (te: release time of stock replenishment), and tr and NR are the recruitment time and the number of recruits at which the juvenile shrimp enter the fishery, respectively. At this time, there are two sources of recruitment NR: one from the spawning brood stock, NRP, and the other from the released juvenile shrimp, NRr. The schematic diagram of the assessment model is shown in Figure 2. NR = NRP + NRr. According to Baranov's catch equation, the average abundance during the interval is N̄ = N × A / Z, where N̄ is the average abundance, N is the initial abundance, and N × A is the total number of deaths. Then, a simplified algorithm can be constructed for the three years, which can estimate the number of initial recruits. The average abundance for the first year can be constructed in the form of the following equations: The average abundance for the second and the third year can be constructed in the form of the following equations: where the following apply: P is the amount of female brood stock in the natural sea; this value represents the amount of brood stock in the previous period, which will determine the number of next-generation juvenile shrimp. C_r is the conversion ratio, which represents the ratio between the recruitment amount entering the fishing area in the natural sea and the amount of brood stock P. t is the time period within the year at which shrimps are released. R_t is the number of shrimps released from artificial farming each year. α_t is the ratio of the number of females to the total resources at time t. S is the survival rate of released shrimps entering the fishing area. A_t is the total mortality of shrimp in the natural sea area at time t. Z_t is the total mortality coefficient of shrimp in the natural sea area at time t. N̄_t is the average abundance of shrimp in the natural sea area at time t. The above continuous Equation (4) includes a total of eight parameters, where R_t can be obtained from the website of the Xiamen Municipal Bureau of Marine Fisheries, A_t and Z_t can be obtained by the length-converted catch curve method in the FiSAT II software, and N̄_t can be calculated by the swept area method and substituted into Equation (12). Finally, we can calculate the values of N_0, C_r, and S by solving the continuous equations. We assumed that (1) the population is an annual species, that is, it reaches sexual maturity in one year and the parents die after spawning; (2) there is no difference between the released population and the natural one in the sea area, and they are fully mixed; (3) after the release activity, the fishing and natural mortality coefficients of this population generation are constant; (4) the reproduction rate is consistent every year; and (5) the species has poor mobility, and it is assumed that there is no population moving into or out of the sea area. Biological Information and Growth Estimation During the period from 2014 to 2017, a total of 312 shrimps (172 females and 140 males), with a total weight of 7573.18 g, were collected from the 12 cruises. Among the stations, the largest number, a total of 77 prawns weighing 1423.51 g and accounting for 24.68% of the total catch number and 18.80% of the total catch weight, was obtained at station XM05. In contrast, the smallest number, 20 individuals weighing 436.6 g and accounting for 6.41% of the total catch number and 5.77% of the total catch weight, was obtained at station XM02. The number of shrimps caught in summer and autumn is higher than that in winter and spring. Among them, 107 shrimps were caught in each of summer and autumn, each accounting for 34.29% of the total catch, while 21 shrimps were caught in winter and 77 in spring, which accounted for 6.73% and 24.68% of the total catch, respectively.
The highest catch weight was in autumn and the lowest in spring, at 3623.30 g and 754.13 g, accounting for 47.84% and 9.96% of the total catch weight, respectively. The size frequency data are plotted in Figure 3. The total length of the shrimps was 62 to 191 mm, with the main range being 100 to 150 mm (63.14%); the weight range was 2.8 to 79.6 g, with the main range being 10.0 to 20.0 g (31.41%). There are obvious seasonal changes. The figure also shows the season when the recruitment group enters the sea and the state of seasonal growth. For females, the smallest individuals (75 mm TL) were observed in the months of May, June, August, and September, while the largest sizes (>215 mm) were found in all months (Figure 2). Most individuals, however, fell within the range of 145 to 195 mm TL. The smallest male individual (65 mm TL) was found in the month of June, while large sizes (>215 mm) were found in the month of May (Figure 3). Most female shrimps fell within the range of 115 to 155 mm TL. These results reveal that males are generally larger than females. The weight-length equations obtained were as follows (Figure 4): In general, there are two kinds of allometric growth patterns, positive allometric (b > 3) and negative allometric (b < 3). Positive allometry indicates that growth in weight is dominant compared to length, while negative allometry indicates that growth in length is more dominant than growth in weight. This seems to imply that, here, growth in length is more dominant than growth in weight. The estimation of the growth parameters of P. penicillatus uses the von Bertalanffy equation, with the length-frequency data as the input data in the FiSAT II program (Figure 5). The growth coefficient (K) value is 0.51 year−1, and the asymptotic total length (L∞) can reach 209 mm. Regarding the weight-length equation obtained in this paper, W = 1.638 TL^2.9307.
Compared with the weight-length parameters from the study of Chen and Hong [34], the coefficients a and b are very close (Table 1). The results of the statistical F test show that there is no difference between males and females. However, Liu and Zhong [35] showed, in 1986, that males are heavier than females of the same body length. In contrast, the results of Wang et al. showed no difference between males and females [36]. Comparing the results of Liu and Zhong in 1986 with this 2014-2017 study, both males and females are heavier in the 1986 study than in the results of this study [35]. This seems to imply that the weight of P. penicillatus is lighter in this study. The reason for the difference may come from the different sea areas; Liu and Zhong conducted their study in the South China Sea [35], and this study was conducted in the waters of Xiamen, using the same ocean fishing method. However, comparing the different environments, aquaculture and natural sea, the body size in aquaculture [36] is smaller than in the natural sea [35] and in this study. Regarding the difference in growth between the marine environment and the aquaculture environment, the growth of shrimp may be affected by environmental differences. Estimation of Mortality The results of the analysis showed that the instantaneous rate of natural mortality (M) of P. penicillatus was 0.5919 year−1. In this study, the temperature used was 21.6 °C, which was the average water temperature over the three years in Xiamen waters. Then, the instantaneous rate of total mortality (Z) was obtained using the length-converted catch curve method in the FiSAT II software, Z = 1.15 year−1 (Figure 6). Hereafter, the total mortality (A) was 0.683, the instantaneous rate of fishing mortality was F = 0.5581 year−1, and the exploitation rate was E = 0.485. According to Sparre and Venema (1992), the optimum exploitation rate Eopt is 0.5, beyond which a stock is considered over-exploited [29]. The estimated E = 0.485 therefore implies that the stock of P. penicillatus in Xiamen waters is exploited at close to, but still below, this optimum level. Komi et al. reported a fishing mortality of F = 0.39 year−1, natural mortality of M = 1.640 year−1, total mortality of Z = 2.030 year−1, and exploitation rate of E = 0.1906 for Metapenaeus elegans in the Andoni River, Nigeria, which indicated that the stock was under-exploited [37]. Another population of Metapenaeus elegans, in Segara Anakan Lagoon, Cilacap, Central Java, showed overexploitation, with F = 6.760, M = 1.430, and E = 0.8830 [38]. Nwosu also reported an over-exploitation case of P. notialis, with E = 0.77 in 2007 and E = 0.69 in 2008 [39]. Most species of penaeid shrimps are overfished [40][41][42]. The exploitation rate could help in understanding the losses caused by fishing effort and natural deaths, so that recommendations for management can be made. The catch deaths and natural deaths were almost equal, which also means that the biomass was not wasted in natural death. Therefore, the exploitation rate of P. penicillatus in Xiamen waters was conducive to management purposes. Such a stock could be harvested sustainably to maintain the socio-economic capacity and development of coastal communities. Assuming that the instantaneous rate of total mortality (Z) and the total mortality (A) are the same in all three years, the values obtained above, 1.15 year−1 and 0.683, respectively, are substituted. According to the survey data, the proportions of females in these three years are 0.5, 0.6512, and 0.5046, respectively. At the same time, using the swept area method, it can be calculated that the average abundance in the past three years is 76,000, 103,100, and 64,600. According to Equation (4), it can be calculated that the initial female resource of P. penicillatus in the waters of Xiamen was 49.2 thousand in the spring of 2014, while the ratio between the effective recruitment amount and the brood stock amount was 3.92, and the survival rate of released shrimp larvae was 1.88‱. Estimation of Biomass and Replenishment Effect Assessment Jiag et al.
evaluated the recaptured yield and abundance of spawners originating from the hatchery-released juveniles of Nibea albiflora in Xiangshan Bay, with a mark-recapture experiment [11]. The natural and fishing mortalities of the hatchery-released individuals were 0.51/a and 1.31/a, respectively. The effectiveness of N. albiflora stock replenishment is strongly dependent on the level of fishing effort, and an appropriate reduction in the fishing effort would benefit both the recapture yield and the abundance of spawners originating from hatchery-released juvenile N. albiflora. In this study, the survival rate of the larvae of P. penicillatus stock replenishment released in Xiamen Bay from 2014 to 2016 was 1.88‱. The result showed that only a very small number of individuals could survive during the release process. This means that although billions of prawn larvae were released, the amount of resources in the sea was not well restored. Wang used historical production to evaluate the release effect of P. penicillatus in Luoyuan Bay and believed that the recapture rate of release was between 5% and 7% [16]. An in vitro listing method evaluation of the reproduction and release effect of P. penicillatus in Sanduao waters was conducted [21], and the recapture rates of the two times release were 9.3% and 20.45%, respectively. Lin et al. conducted multiple releases of prawns in Daya Bay waters. In vitro plastic labeling was carried out, and the results showed that the recovery rate of P. penicillatus was 5.0% [43]. The abovementioned studies seem to have very different results from the results of this article. We speculate that the main reason may be twofold, as follows: the size of the released juveniles is different, and the release process is different. We discussed the two release procedures and conditions of Guan et al. [21]. The average body lengths of the shrimp larvae were 7 cm and 10.72 cm, respectively, and they were temporarily raised before release. Obviously, the use of larger juveniles and good temporary rearing is beneficial [21]. In contrast, regarding the release process and conditions of Lin et al., the body length of juvenile shrimp is between 1 and 5.7 cm, which also seems to represent a small juvenile shrimp [36]. Compared with our conditions, the average body length of the shrimp released in Xiamen Bay, from 2014 to 2016, was 1 cm. In short, the two conditions, including small individuals and no temporary rearing, may be the reason behind the higher mortality and lower fishing rate. At that time, due to the small size of juvenile shrimp, it was not easy to use the method of marking in vitro, and the damage of the body surface was expected to cause many deaths, so this study did not use the method of marking and recapture evaluation. To overcome this problem, molecular labeling and isotope labeling seem to be more suitable methods for smaller release species. This is also an expected topic of ongoing research in the future. As we could not use the method of marking and recapture, we chose traditional fisheries resource evaluation. In this study, we used mathematical modeling to separate the two sources of recruitments. The highlight of this approach is that it confirms the effect of the release. Furthermore, the data used can basically be obtained through years of field surveys. Conservation Suggestions At this moment, the scientific basis for marine restocking, stock replenishment, and sea ranching continues to advance rapidly. 
The government assesses the possible contribution of these methods to fisheries management goals before designing enhancements and needs to be judged as having good potential, with the plan implemented effectively and responsibly [44]. It is widely recognized that there is a need to reduce fishing efforts and restore habitats to increase the resilience of capture fisheries [44]. However, the application of this technology still has a long way to go before an integrated management system that successfully solves all biological, ecological, social, cultural, and economic problems is in place [5]. The main challenges include determining when and where to use interventions to add value to management, integrating these initiatives with institutions and fisheries management regimes, monitoring the success of interventions, considering the cost effectiveness of culturing farmed juveniles, and releasing them into the wild, so that they survive in high proportions [5,44]. Marine ranching, which is considered a sustainable fisheries mode that has advantages for the ecosystem approach to fisheries [45], aquaculture, and capture-based aquaculture, is rapidly growing in China [45]. Habitat restoration and construction technology, stock replenishment, and the behavioral control of fisheries resources are some of the important methods of marine ranching [23,[44][45][46]. Due to the small body length of the released juvenile P. penicillatus, their ability to resist impact is low. In addition, there are many predators in the environment. Ultimately, the number that can survive is limited. Therefore, in order to improve the survival rate of juvenile shrimps, corresponding implementations should be taken, including standard crude culture, reduction in stress during transportation, and temporary breeding. These implementations can improve the resources of P. penicillatus for the purpose of restoration in Xiamen Bay. In addition, the factor of the habitat environment is also very important. This factor will have a great impact on the survival and growth of the prawn. With the development of the economy in Xiamen Bay, there are a lot of human-made disturbances, such as estuary pollution, marine garbage, seabed sand mining, and artificial land reclamation, etc. Reducing the damage to the habitat of the prawns requires the relevant departments to formulate corresponding implementations. For this, strengthening sewage treatment and monitoring, resolutely combating illegal sand mining, and salvaging seabed garbage are recommended, among others [46]. Improving the living environment and optimizing the habitat of P. penicillatus are meaningful implementations to increase the survival rate of the released prawns. Illegal fishing is also one of the influencing factors [46]. Jiag et al. mentioned that the effectiveness of N. albiflora stock replenishment is strongly dependent on the level of fishing effort, and an appropriate reduction in the fishing effort would benefit both the recapture yield and the abundance of spawners originating from hatchery-released juvenile N. albiflora [11]. Thus, an appropriate reduction in the fishing effort is essential to improve its effectiveness. Therefore, we make the following recommendations: 1. Vigorously crack down on illegal fishing activities, including the confiscation and destruction of illegal fishing gear, and strengthen the punishment of offenders. 2. 
Conduct legal and ecological education to make the relevant fishermen understand the importance of ecology, the environment, and resources. 3. Increase the number of patrols, regular patrols, and random inspections, so that illegal fishing personnel have nowhere to hide, and increase the cost of illegal fishing crimes. 4. Organize professionals from relevant scientific research institutions to regularly carry out publicity and education on marine biological resources and environmental protection, enter primary and secondary schools or communities, and improve the ecological awareness of ordinary people regarding protecting and caring for the ocean. In the future, after the restoration of P. penicillatus in Xiamen Bay, the genetic diversity of the species must be ensured on a sustainable basis. If all the released individuals were close relatives, the genetic integrity of Penaeus prawns in natural seas would be damaged. Therefore, in the process of releasing, it is necessary to collect and hatch eggs from different genetic groups. Finally, enhancements enter into complex fisheries systems. It is crucial to consider the fisheries system, the broad objectives for management, and the full range of management options when assessing the potential for developing and using enhancements [44,45]. Conclusions Since 2010, approximately 700 million juvenile shrimp (P. penicillatus) have been released into Xiamen Bay, Fujian Province, China, each year, through stock replenishment programs. An analysis of the biological aspects, consisting of length-weight analysis, growth parameters, and level of exploitation (natural mortality rate, fishing mortality rate, and total mortality rate), together with an evaluation of stock replenishment and restocking, was conducted. An assessment model for effect evaluation, based on Baranov's catch equation and able to separate the amount of initial recruitment, was constructed using survey data from 2014 to 2017. The survival rate of the released shrimp larvae, 1.88‱, seems to be very unsatisfactory. Obviously, the restocking effect is low and the programs need to be improved. To improve the restocking effect, the replenishment performance should be adjusted to reduce the mortality rate and increase its release effectiveness. Therefore, corresponding implementations are recommended, including standard extensive culture, reduction in stress during transportation, and temporary culture. Of course, the results of this evaluation are unsatisfactory. We believe that the main limitations of the created evaluation model are the consequence of certain assumptions that were made. Therefore, identifying how to improve our ability to carry out successful evaluations is expected to be the focus of efforts in the future.
Sequential stub matching for uniform generation of directed graphs with a given degree sequence Uniform sampling of simple graphs having a given degree sequence is a known problem with exponential complexity in the square of the mean degree. For undirected graphs, randomised approximation algorithms have nonetheless been shown to achieve almost linear expected complexity for this problem. Here we discuss sequential stub matching for directed graphs and show that this process can be moulded to sample simple digraphs with asymptotically equal probability. The process starts with an empty edge set and repeatedly adds edges to it with a certain state-dependent bias until the desired degree sequence is fulfilled, while avoiding the placement of a double edge or self-loop. We show that uniform sampling is achieved in the sparse regime, when the maximum degree $d_\text{max}$ is asymptotically dominated by $m^{1/4}$, where $m$ is the number of edges. The proof is based on deriving various combinatorial estimates related to the number of digraphs with a given directed degree sequence and controlling the concentration of these estimates in large digraphs. This suggests that sequential stub matching can be viewed as a practical algorithm for almost uniform sampling of digraphs, and we show that this algorithm can be implemented to feature linear expected runtime $O(m)$. A graph is simple when it has no multi-edges or self-loops. Given a degree sequence, the existence of a corresponding simple graph can be checked with the Erdős-Gallai criterion. However, enforcing this property when sampling a graph uniformly at random is difficult. One straightforward but computationally expensive approach is rejection sampling with the configuration model [1]. The idea is to construct a multigraph by randomly matching the stubs of a given graphical degree sequence. Repeating such constructions multiple times will eventually produce a simple graph, in time that is exponential in the square of the average degree [2]. This strategy can be improved if, instead of rejecting every multigraph, one 'repairs' it by switching edges to remove edge multiplicity. Such a procedure was shown to implement exact uniform sampling in polynomial time for undirected graphs; see, for example, [3]. Uniform generation of simple graphs is used in the analysis of algorithms and networks [4,5,6]. In algorithmic spectral graph theory, fast sampling is required to study the spectra of sparse random matrices [7,8,9], where, beyond the case of undirected graphs, heuristic algorithms had to be postulated. Moreover, generating random graphs is closely related to the counting and generation of binary matrices with given row and column sums. In all of these areas, the generation of directed graphs is of equal importance; however, this is a markedly different problem from sampling undirected graphs. Indeed, the adjacency matrix of a directed graph is non-symmetric, and hence it must satisfy separately prescribed row and column sums. Another reason to study directed random graphs is that, as a special case, they give a simple representation of bipartite graphs and hypergraphs that can be exploited for sampling [10]. To represent a hypergraph, for example, consider a digraph with all vertices being either sinks (identified with hypervertices) or sources (identified with hyperedges). Note that bipartite graphs are special cases of directed graphs but not the other way around, as the in- and out-degrees of a given vertex may be dependent.
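As a point of reference for the rejection strategy described above, a minimal sketch of the directed configuration model with whole-configuration rejection is given below (Python, with illustrative names). Conditioned on success, the output is uniform over simple digraphs with the prescribed degrees, but, as noted, the expected number of attempts grows rapidly with the mean degree.

```python
# Sketch: directed configuration model with rejection of non-simple configurations.
import random

def directed_configuration_sample(d_in, d_out, max_attempts=10_000):
    in_stubs = [v for v, d in enumerate(d_in) for _ in range(d)]
    out_stubs = [v for v, d in enumerate(d_out) for _ in range(d)]
    for _ in range(max_attempts):
        random.shuffle(in_stubs)
        random.shuffle(out_stubs)
        edges = list(zip(out_stubs, in_stubs))          # (source, target) pairs
        no_multi = len(set(edges)) == len(edges)
        no_loops = all(i != j for i, j in edges)
        if no_multi and no_loops:
            return edges        # uniform among simple digraphs with these degrees
    return None                 # give up after many failed attempts
```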
A substantial amount of research has been conducted to generalise sampling algorithms to more general graphs. Here we distinguish the two main algorithmic families, while for an exhaustive review the reader is referred to [11]. Firstly, Markov chain (MC) algorithms approximate the desired sample by taking the last element of an ergodic Markov chain [12,13,14,15,16]. For a given number of nodes n, one can always improve the expected error bound on the output distribution in an MC method by running the algorithm longer. For this reason, initiating the MC chain with a seed that itself is chosen with minimal bias benefits such algorithms. At the same time, ensuring that this sample is sufficiently independent of the initial seed after a number of iterations, i.e. estimating the mixing time of the chain, is generally a difficult problem and has been achieved only for several classes of random graphs. Various MC algorithms were suggested for graphs with arbitrary degree sequences by Kannan, Tetali and Vempala [17], whereas the rapid mixing property was shown in [15] for the class of P-stable degree distributions and, more recently, for other stability classes by Gao and Greenhill [16]. See also Jason [18] for an analysis of the convergence to uniformity. Berger and Müller-Hannemann suggested an MC algorithm for sampling random digraphs [14], with some relevant rapid mixing results shown by Greenhill [19,20] and Erdős et al. [21]. Further generalisations were also proposed for degree-correlated random graphs and bipartite graphs [22,10]. As an alternative to MC, sequential algorithms construct simple graphs by starting with an empty edge set and adding edges one by one by stub matching. The crux is to employ state-dependent importance sampling and select a new edge non-uniformly from the set of possible pairs while updating the probability after each edge placement [23,24,25]. Because the number of steps is fixed, sequential algorithms run in almost linear time. The price to pay is that uniformity is achieved only asymptotically for a large number of edges, which creates a practical niche for assessing asymptotic properties of large graphs, e.g. to study sparse random matrices and complex networks. Moreover, one may eliminate the uniformity error also for finite graphs by post-processing with a degree-preserving MC method. The other advantage is that sequential algorithms produce an a posteriori estimate for the total number of graphs with the given constraints, which makes them potentially useful for statistical inference [26]. Sequential sampling has been realised for regular graphs, with the running time shown to be O(m d_max^2) by Kim and Vu [24]. Bayati, Kim and Saberi [25] generalised the sequential method to an arbitrary degree sequence, yet maintaining a near-linear (in the number of edges) algorithmic complexity. For these algorithms, the maximum degree may depend on n with some asymptotic constraints, and the bounds on the error in the output distribution asymptotically vanish as m tends to infinity. Other approaches [27,28,29] realise non-uniform sampling while also outputting the probability of the generated sample a posteriori; hence they may be used to compute expectations over the probability space of random graphs.
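The degree-preserving switches that underlie the MC samplers cited above can be illustrated with a small sketch. The exact move sets and acceptance rules differ between the cited algorithms; the function below only shows the basic directed 2-swap together with the simplicity checks, and all names are illustrative.

```python
# Sketch of a degree-preserving directed edge switch: replace (a, b), (c, d) by
# (a, d), (c, b) if this creates no self-loop or parallel edge.
import random

def try_edge_switch(edges: set) -> bool:
    (a, b), (c, d) = random.sample(sorted(edges), 2)
    if a == d or c == b:                          # would create a self-loop
        return False
    if (a, d) in edges or (c, b) in edges:        # would create a parallel edge
        return False
    edges.difference_update({(a, b), (c, d)})
    edges.update({(a, d), (c, b)})
    return True
```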
A given degree sequence can be tested for graphicality by applying Fulkerson's criterion [30] (a concrete sketch of this test is given below, just before Algorithm 1). In this work, we provide a sequential algorithm for asymptotically uniform sampling of simple directed graphs with a given degree sequence by generalising the method of Bayati, Kim and Saberi [25], which requires a more delicate analysis because of the two-component degree sequence. We show that the expected runtime of our algorithm is O(m) and that the bound on the error between the uniform and output distributions vanishes asymptotically for large graphs. Our algorithm provides a good trade-off between speed and uniformity, and additionally computes an estimate for the number of directed graphs with a given directed degree sequence. Furthermore, we would like to stress that for finite n our algorithm is complementary to the available MC-based methods, as it produces good seeds for further refinement with directed edge-switching techniques. We expect that the transition from univariate degree distributions to degree distributions with two types of stubs (in- and out-edges) may open an avenue for further generalisations, for example, to coloured edges or random geometric graphs. From the sampling perspective, coloured graphs comprise a less tractable class of problems: even answering the question of whether a given coloured degree sequence is graphical is an NP-hard problem for more than two colours [31]. The algorithm is explained in Section 1. The proof that the algorithm generates graphs distributed within a factor of 1 ± o(1) of uniformity is presented in Section 2 and is inspired by the proof of Bayati, Kim and Saberi [25], wherein Vu's concentration inequality [32] plays a significant role. Our algorithm may fail to construct a graph, but it is shown in Section 3 that this happens with probability o(1). This work is completed with the expected runtime analysis of the algorithm in Section 4.

Sequential stub matching

Our process for generating simple digraphs is best explained as a modification of the directed configuration model. This model generates a configuration by sequentially matching a random in-stub to a random out-stub. Generating a uniformly random configuration is therefore not difficult; however, a random configuration may induce a multigraph, which we do not desire. This issue can be remedied by the following procedure: a match between the chosen in- and out-stub is rejected if it leads to a self-loop or multi-edge. Then the resulting configuration necessarily induces a simple graph. Note that this rejection of specific matches destroys the uniformity of the generated graphs. To cancel out the non-uniformity bias, we accept each admissible match between an in- and an out-stub with a cleverly chosen probability, which restores the uniformity of the samples. Namely, we show that the distribution of the resulting graphs is within a factor of 1 ± o(1) of uniformity for large graphs. Another consequence of the constraint on acceptable matches is that it may result in a failed attempt to finish a configuration, for example, if at some step of the matching procedure the only remaining stubs consist of one in-stub and one out-stub belonging to the same vertex. In this case, we reject the entire configuration and start from scratch. As we will show later in Section 3, a failure is not likely to occur, i.e. the probability that a configuration cannot be finished is o(1).
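Algorithm 1 below assumes that its input is graphical. The following sketch shows how that precondition can be checked; it implements the Fulkerson–Chen–Anstee inequalities, a standard concrete form of Fulkerson's criterion for simple digraphs, with the vertex pairs sorted in non-increasing lexicographic order. The function name and the quadratic-time layout are illustrative only.

def is_digraphical(d_out, d_in):
    """Fulkerson-Chen-Anstee test: does a simple digraph (no self-loops,
    no parallel edges) with these out- and in-degrees exist?"""
    n = len(d_out)
    if n != len(d_in) or sum(d_out) != sum(d_in):
        return False
    if any(x < 0 for x in d_out) or any(x < 0 for x in d_in):
        return False
    if n == 0:
        return True
    # sort the pairs (out-degree, in-degree) in non-increasing order
    a, b = zip(*sorted(zip(d_out, d_in), reverse=True))
    for k in range(1, n + 1):
        lhs = sum(a[:k])
        rhs = sum(min(b[i], k - 1) for i in range(k)) + \
              sum(min(b[i], k) for i in range(k, n))
        if lhs > rhs:
            return False
    return True

For instance, is_digraphical([2, 1, 1], [1, 2, 1]) returns True (a witness is the edge set {(1,2), (1,3), (2,1), (3,2)}), whereas any sequence with mismatching in- and out-degree sums is rejected immediately.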
The stub matching process can be formalised as pseudo-code, shown in Algorithm 1. We use the following notation: let $d = ((d^-_1, d^+_1), \dots, (d^-_n, d^+_n))$ be a graphical degree sequence, and let $m = \sum_{i>0} d^-_i = \sum_{i>0} d^+_i$ be the total number of edges.

Algorithm 1: generating simple directed graphs obeying a given degree sequence
Input: d, a graphical degree sequence without isolated nodes
Output: G_d = (V, E), a digraph obeying d, and N, an estimate of the number of simple digraphs obeying d; or a failure
1. V = {1, 2, ..., n} // set of vertices
2. d̂ = d // residual degrees
3. E = ∅ // set of edges
4. P = 1 // probability of generating this ordering
5. while edges can be added to E do
   ...
   Return failure

We wish to construct a simple directed graph G_d = (V, E) with vertex set V = {1, ..., n} and edge set E that satisfies d. At each step, Algorithm 1 chooses an edge (i, j) with probability P_{i,j} and adds it to E, where the residual in-degree $\hat d^-_i$ (respectively, the residual out-degree $\hat d^+_i$) of vertex i is the number of unmatched in-stubs (out-stubs) of this vertex and E is the set of edges constructed so far. If for all pairs i, j ∈ V with $\hat d^+_i > 0$ and $\hat d^-_j > 0$ it happens that either i = j or (i, j) ∈ E, no edge can be added to E and the algorithm terminates. If the algorithm terminates before m edges have been added to E, it has failed to construct a simple graph obeying the desired degree sequence and outputs a failure. If the algorithm terminates with |E| = m, it returns a simple graph that obeys the degree sequence d by construction. In this case the algorithm also computes the total probability P of constructing this instance of G_d in the given order of edge placement. We will show that asymptotically each ordering of a set of m edges is generated with the same probability; hence, the probability that the algorithm generates the digraph G_d is asymptotically m!P. We will also show that each digraph is generated within a factor of 1 ± o(1) of uniformity, and therefore N = 1/(m!P) is an approximation to the number of simple digraphs obeying the degree sequence. The value of N is also returned by the algorithm if it terminates successfully. To make these statements more precise, let us consider a degree progression $\{d_n\}_{n\in\mathbb{N}}$, that is, a sequence of degree sequences indexed by the number of vertices n. The algorithm has the following favourable properties.

Theorem 1.1. Let all degree sequences in $\{d_n\}_{n\in\mathbb{N}}$ be graphical and such that, for some τ > 0, the maximum degree $d_{\text{max},n} = O(m^{1/4-\tau})$, where m is the number of edges in $d_n$. Then Algorithm 1 applied to $d_n$ terminates successfully with probability 1 − o(1) and has an expected runtime of O(m).

Theorem 1.2. Let d be a graphical degree sequence with maximum degree $d_\text{max} = O(m^{1/4-\tau})$ for some τ > 0. Let G_d be a random simple graph obeying this degree sequence. Then Algorithm 1 generates G_d with probability

Note that the probability in Theorem 1.2 depends on the degree sequence but is asymptotically independent of G_d itself, which indicates that all graphs that satisfy d are generated with asymptotically equal probability. The remainder of this work covers the proofs of Theorems 1.1 and 1.2, which are split into three parts, discussing the uniformity of the generated digraphs, the failure probability of the algorithm, and its runtime.

The probability of generating a given digraph

The goal of this section is to determine the probability P_A(G_d) that Algorithm 1 outputs a given digraph G_d on input of a graphical d, which will prove Theorem 1.2.
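Before turning to the analysis, the following Python sketch restates Algorithm 1 in executable form. It is illustrative only: the displayed formula for P_{i,j} did not survive into this text, so the sketch uses a Bayati–Kim–Saberi-style weight proportional to $\hat d^+_i \hat d^-_j (1 - d^+_i d^-_j/4m)$ as an assumed stand-in, and it recomputes all candidate weights at every step rather than using the linear-time bookkeeping discussed in Section 4.

import math
import random

def sequential_stub_matching(d_out, d_in):
    """Illustrative rendering of Algorithm 1.  Returns (edges, logP) on
    success, or None on failure, where logP is the log-probability of the
    generated ordering of edge placements."""
    m = sum(d_out)
    dout_res, din_res = list(d_out), list(d_in)   # residual degrees
    edges, logP = set(), 0.0
    while len(edges) < m:
        candidates, weights = [], []
        for i, doi in enumerate(dout_res):
            if doi == 0:
                continue
            for j, dij in enumerate(din_res):
                if dij == 0 or i == j or (i, j) in edges:
                    continue                      # skip self-loops and repeated edges
                candidates.append((i, j))
                # assumed BKS-style bias; positive when d_max = O(m^{1/4})
                weights.append(doi * dij * (1.0 - d_out[i] * d_in[j] / (4.0 * m)))
        total = sum(weights)
        if not candidates or total <= 0:
            return None                           # no admissible pair left: failure
        k = random.choices(range(len(candidates)), weights=weights, k=1)[0]
        i, j = candidates[k]
        logP += math.log(weights[k] / total)
        edges.add((i, j))
        dout_res[i] -= 1
        din_res[j] -= 1
    return edges, logP

On success, the estimate of the number of digraphs can be read off in logarithmic form as log N = −(lgamma(m + 1) + logP), which avoids evaluating m! directly.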
To begin, notice that the output of Algorithm 1 can be viewed as a configuration in the following sense. Definition 2.1. Let d be a degree sequence. For all i ∈ {1, 2, . . . , n} define a set of in-stubs W − i consisting of d − i unique elements and a set out-stub W + i containing d + i elements. Let W − = ∪ i∈{1,2,...,n} W − i and W + = ∪ i∈{1,2,...,n} W + i . Then a configuration is a random perfect bipartite matching of W − and W + , that is a set of tuples (a, b) such that each tuple contains one element from W − and one from W + and each element of W − and W + appears in exactly one tuple. A configuration M prescribes a matching for all stubs, and therefore, defines a multigraph with vertices V = {1, 2, . . . , n} and edge set The output of Algorithm 1 can be viewed as a configuration since at each step an edge (i, j) is chosen with probability proportional tod + id − j , i.e. the number of pairs of unmatched outstubs of i with unmatched in-stubs of j. Let R(G d ) = {M | G M = G d } be the set of all configurations on (W − , W + ) that correspond to G d . Since the output of Algorithm 1 is a configuration, Different configurations correspond to the same graph if they differ only in the labelling of the stubs. Since the algorithm chooses stubs without any particular order preference, each configuration in R(G d ) is generated with equal probability. However, the probability to match an out-stub of i to an in-stub of j at a given step of the algorithm depends on the partial configuration constructed so far. Hence the order in which the matches are chosen influences the probability of generating a configuration M. Let for a given M ∈ R(G d ), S (M) be the set of all the orderings N in which this configuration can be created. Because the configuration already determines the match for each in-stub, an ordering of M can be thought of as an enumeration of edges N = (e 1 , e 2 , . . . , e m ) , e i ∈ E, defining which in-stub gets matched first, which second, etc. There are m! different orderings of the configuration M. This implies that Hence, we further investigate P A (N ). If the algorithm has constructed the first r elements of N , it is said to be at step r ∈ {0, 1, . . . , m − 1}. There is no step m, as the algorithm terminates immediately after constructing the m th edge. Let d − i (r) (respectively d + i (r) ) denote the number of unmatched in-stubs (out-stubs) of the vertex i at step r. Let E r be the set of admissible edges that can be added to the ordering at step r, . . , e r } . With this notation in mind, we write the probability of generating the entire ordering N as where P [e r+1 = (i, j)|e 1 , . . . , e r ] = 1 − . Here we slightly abuse the notation as this is the conditional probability that a given out-stub of i is matched with a given in-stub of j, rather than the conditional probability that the edge (i, j) is created. The probability that the algorithm generates the graph G d is then where Ψ r (N ) = By formally comparing the expression (2) with the statement of Theorem 1.2, we observe that to complete the proof it is sufficient to show that Ψ r (N ) sharply concentrates on some ψ r , such that and Indeed, combining the latter two equations with (2) and using that 1 − x = e −x+O(x 2 ) we find: which coincides with the statement of Theorem 1.2. Thus proving equations (4) and (5) suffices to show validity of Theorem 1.2. Defining ψ r We abbreviate Ψ r (N ) by Ψ r whenever N follows from the context. 
It is clear that ψ r plays role of an expectation in some probability space. To define this space, note that Ψ r (N ) can be viewed as a function on the subgraph of G d induced by the first r elements of the ordering N , which we denote by G Nr . Hence, when taking the expected value of Ψ r over all orderings, we use random subgraph of G d with exactly r edges. This can be further relaxed by introducing a random graph G pr modelling a subgraph of G d in which each edge is present with probability p r = r m . We define ψ r := E pr [Ψ r ], and spend the remainder of this section to derive its expression. Let us split Ψ r , as defined in equation (3), into a sum of two terms: Here ∆ r counts the number of unsuitable pairs at step r, i.e. the number of pairs of the unmatched in-stubs with out-stubs that will induce a self-loop or multi-edge if added, and Λ r counts the number of suitable pairs (multiplied by the importance sampling factor). In the sequel we refer to a combination of an unmatched in-and out-stub as a pair. Furthermore, we also split ∆ r = ∆ 1 r + ∆ 2 r , into the sum of the number pairs leading to self-loops, ∆ 1 , and the number of pairs leading to double edges, ∆ 2 r = ∆ r − ∆ 1 r . For the suitable pairs, we split relates to total number of possible pairs in the whole graph, Λ 1 subtracts pairs that are self loops, Λ 3 r 2m further reduces this quality by already matched edges to obtain stuitable pairs. We will now derive several bounds on the latter quantities, to be used in Section 2.3. Proof. (i) At step r, there are m − r unmatched in-stubs left. Each unmatched in-stub can form a self-loop by connecting to an unmatched out-stub of the same vertex. The number of unmatched out-stubs at each vertex is upper bounded by d max , hence ∆ 1 r ≤ (m − r)d max . The vertex to which an unmatched in-stub belongs has at most d max − 1 incoming edges. The source of such an edge has at most d max − 1 unmatched outstubs left. Thus the number of out-stubs an unmatched in-stub can be paired with to create a double edge is at most for all v, the claim follows. Next, the following expected values, are defined with respect to random graph model G pr : For each 0 ≤ r ≤ m − 1 the following equations hold: (ii) ∆ 2 r counts the number of pairs leading to a double edge. Choose a random (i, j) ∈ G d . To add an additional copy of this edge at step r, the edge must be already present in G pr , which happens with probability p r . Let a pair of edges (i, k), (l, j) be in G d but not in G pr . This means that in G pr there are unmatched in-stubs and out-stubs such that one could instead form the edges (i, j) and (l, k), creating a double edge. The number of combinations of such l and k, is . By taking the expected value of this value, summing it over all edges of G d and multiplying it by the probability p r that (i, j) ∈ G pr , the claimed expected value of ∆ 2 r follows. (iii) Remark that Λ 1 Bernoulli variables representing the out-stubs (in-stubs). If (i, j) ∈ G d , one fixed in-stub of j forms an edge with a fixed out-stub of i. This implies that the corresponding Bernoulli variables always need to take on the same value. Let us denote these Bernoulli variables by d + i j and d − j i . 
Now that we have characterised the dependence between d + i (r) and d − j (r) , we are ready to determine For the covariance we have The covariance of any random variable X and a Bernoulli variable Y with expectation p * equals: Plugging this back into the expression for E pr Λ 1 In the proof of (i) we have already showed that (v) From equation (9) it follows that Λ 3 we can use the proof of (ii). This implies each to the sum, proving the claim. Next, we will use the following asymptotic estimates, , to obtain and approximation of ψ r that we will work with. Combing these estimates with Lemma 2.3 we find This allows us to state the following Lemmas, which will be useful in Sections 2.2 and 2.3. with error term Proof. Combing equation (10) with the asymptotic estimate , and since r ≤ m and d 2 max = o(m), the latter equation becomes (5) With help of Lemmas 2.4 and 2.5 we are now ready to prove equation (5). We start by multiplying the left hand side of equation (5) Proof of equation Applying Lemma 2.4 to the numerator and Lemma 2.5 to the denominator the right hand side of the latter equation becomes: and after using that and some asymptotic expansions, we obtain: which proves equation (5). Proof of equation (4) Let us define Then equation (4) becomes equivalent to which we will demonstrate instead in the remainder of this section. We start by rewriting the latter expectation as a sum of expected values of mutually disjoint subsets covering S (M) in the following fashion. Partitioning S (M) The set of orderings S (M) is partitioned as follows: and let S (M) \ S * (M) be the first element of the partition. As the second element of the partition we take where the family of functions T r is defined below, see equation (22), and δ is a small positive constant, e.g. 0 < δ < 0.1. The next element of the partition is chosen from 4. We define as last element as the complement We will now show that the following asymptotic estimates hold Since E [f (N )] is a sum of the above expected values, it remains to introduce suitable definitions for T r and prove equations (17)- (21) to finish the proof of (12). The family of functions T r . We define the family of functions T r : with The quantity c is a large positive constant, which will be defined later, and q r is the probability that an edge of G d is not present in G pr . The intuition behind the definition of this family of functions will become apparent in the remainder of this section. Let λ 0 := ω ln(n) and λ i : We have the following relation between T r (λ i ) and T r (λ i−1 ). Proof. As the function T r is defined piecewise, we distinguish three cases: . Both β r (λ) and γ r (λ) are square roots of a 6 th -order polynomial in λ. As λ i = 2λ i−1 and This completes the proof, because m − r ≥ λ i ω and m − r < λ i−1 ω never holds, as λ i > λ i−1 . In order to prove equations (17) and (18) we subpartition A and B. Let us define the chain of subsets To ensure that we cover S * (M) entirely, we also introduce Now equation (14) implies that Next, we partition A 0 . The goal of this partition is to write B as the union of some smaller Hence there is some n 0 such that for all n > n 0 : Without loss of generality we may assume that n > n 0 . Let K be the unique integer such that Then for all r ≥ m − ωλ 0 : This allows us to define the chain of subsets From equations (15) and (16) it immediately follows that These descriptions of A, B and C enable us to show the validity of equations (17), (18), (19) and (20). First, we prove equation (17). 
The proof also contains statements that hold for any ordering in S * (M), which are also used in the proof of equations (18), (19) and (20). We finish with the proof of equation (21), which requires a different technique as it concerns all orderings not in S * (M). (17) Based on the definition of A in terms of A i 's and A ∞ , we now prove equation (17). For this we use the following Lemmas. Proof of equation Lemma 2.8. For a large enough constant c, Together these lemmas imply that thus proving equation (17). First, we prove Lemma 2.7 (a) and Lemma 2.8 (a). This is done by showing a stronger statement, This implies that to prove Lemma 2.7 (a) and Lemma 2.8 (a), it suffices to show that for all i ∈ {0, 1, . . . , L} and 0 ≤ r ≤ m − 1, Determining the value of Ψ r is more challenging than the value of Ψ pr in random graph model G pr , where each edge is present with probability p r . As mentioned in Section 2.1, the graph G Nr is a random subgraph of G d with exactly r edges for a random ordering N ∈ S (M). Denoting the number of edges in G pr by E [G pr ] we find: Bayati, Kim and Saberi showed the following bound on the probability that the random graph G pr contains exactly r edges. Using this Lemma we obtain As λ i = 2 i ln(n) 1+δ ≫ ln(n), ne −Ω(λ i ) = e −Ω(λ i )+ln(n) = e −Ω(λ i ) . Hence, to prove equation (32) it suffices to show that As T r is defined piecewise, we formulate separate Lemmas distinguishing two cases: i) m − r < ωλ i and ii) m − r ≥ ωλ i . Proof. Instead of showing the desired inequality, we show an even stronger statement: Combining the fact that Ψ pr ≤ λ 2 i 8ω with Ψ pr = ∆ pr + Λ pr and Lemma 2.2, we find As mq r = m − r < ωλ i and ω 4 d 2 max < m 5 for large n we have Let G qr be the complement of G pr in G d and define N 0 (u) := {v ∈ V | (u, v) ∈ G qr } ∪ {u}. Let d + Gq r (u) (respectively d − Gq r (u)) be the out-degree (in-degree) of u in G qr . By definition of ∆ pr , By combining the latter inequality with the lower bound on ∆ pr we have just derived, we find This equation implies that at least one of the following statements must hold true: (a) G q has more than ω 2 λ i 40 edges; If (a) is violated then Hence if (a) and (b) are both violated, we find This violates equation (34). Thus it is not possible that (a) and (b) are simultaneously violated. This implies that at least one of the statements holds. Using the proof of [25,Lemma 20], the probabilities that statements (a) and (b) hold, are both upper bounded by e −Ω(λ i ) . Since Ψ pr ≥ λ 2 i 8ω implies that at least one of these statements holds, this completes the proof. Theorem 2.12. [Vu's concentration inequality [32]] Consider independent random variables t 1 , t 2 , . . . , t n with arbitrary distribution in [0, 1]. Let Y (t 1 , t 2 , . . . , t n ) be a polynomial of degree k with coefficients in (0, 1]. For any multi-set A let ∂ A Y denote the partial derivative with respect to the variables in A. Proof. To prove each of the above equations, we write the quantity as a polynomial and apply Theorem 2.12 to it. This polynomial will be a function of m Bernoulli variables. Each variable t e represents an edge e ∈ G d , that is if e ∈ G pr then t e = 0 and if e / ∈ G pr , t e = 1. Remark that by definition of G pr , see Section 2.1, E [t e ] = q r for all e. Also by definition of G pr , variables t e are independent of each other. (i) Recall that ∆ 1 pr counts the number of pairs creating a self-loop. Each vertex v has d − v in-stubs and d + v out-stubs. 
The number of those out-stubs (respectively in-stubs) that are matched equals the number of outgoing (incoming edges) for v in G pr . Thus the number of unmatched in-stubs (respectively out-stubs) of vertex v is e=(•,v)∈G d t e e=(v,•)∈G d t e . The number of ways to create a self-loop at v is Hence we find pr ≤ md max q 2 r . Let us take the partial derivative with respect to one variable t e for some e = (u, v), then we obtain f =(•,u)∈G d t f + f =(v,•)∈G d t f . This is upper bounded by 2d max q r . As ∆ 1 pr is a polynomial of degree 2 with all coefficients 1, it is clear that E ∂ te ∂ t f ∆ 1 pr ≤ 1 for all e, f . Thus we find The maximization follows from the definition of E j (Y ). Let us define, E 0 := 9λ 2 i + 2md max q 2 r , E 1 := 9λ i + 2d max q r and E 2 := 1. We claim that together with λ = λ i , they fulfil the conditions of Theorem 2.12. It is obvious that E 2 ≥ E 2 ∆ 1 pr . Also E 1 ≥ E 1 ∆ 1 pr as λ i ≥ 1 for all n ≥ 3. Furthermore E 0 ≥ E 0 ∆ 1 pr as λ i ≥ 1 and mq r = m − r implies that 2md max q 2 r ≥ 2d max q r . This shows the first condition of Theorem 2.12. For the second condition, remark that λ i ≥ ln(n) and ln(m) ≤ 2 ln(n) as m ≤ n 2 . This implies Furthermore, showing that the second condition of Theorem 2.12 is fulfilled as well. Thus we may apply Vu's concentration inequality to obtain (23) completes the proof. (ii) Recall that ∆ 2 pr counts the number of pairs that create an edge already present in G pr , i.e. a double edge. Pairing an out-stub of u with an in-stub of v creates a double edge only if (u, v) ∈ G pr , i.e. if for e = (u, v), t e = 1. Recalling the expressions for the number of unmatched in-stubs and out-stubs at a vertex v from the proof of (i) and defining a set of non-cyclic three-edge line subgraphs, Vu's inequality will be applied to Y 1 and Y 2 separately. To upper bound the expected value of Y 1 , we need an upper bound on the size of Q. Given f , the source of e and the target of g are fixed. Hence there are at most d 2 max triples in Q with a fixed edge f . As f may be any edge, |Q| ≤ md 2 max . Together with E [t e t g ] = q 2 r this implies that E [Y 1 ] ≤ md 2 max q 2 r . We differentiate Y 1 with respect to t e , to obtain: (e,f,g)∈Q e= e t g + (e,f,g)∈Q g= e t e . Since we have E [∂ t e Y 1 ] ≤ 2d 2 max q r , and since Y 1 is a polynomial of degree 2 with all coefficients equal to 1, all second derivatives are at most 1. Together, these observations yield: Similar to (i) it can be shown that λ = λ i and E 0 = 9λ 2 i + 2md 2 max q 2 r , E 1 = 9λ i + 2d 2 max q r and E 2 = 1, fulfil the conditions of Theorem 2.12. Applying Vu's inequality and assuming c ≥ 8 · 9c 2 , we thus obtain Moving on to Y 2 , we see that E [Y 2 ] ≤ md 2 max q 3 r as |Q| ≤ md 2 max and E [t e t f t g ] = q 3 r . Differentiating Y 2 to with respect t e , we obtain This implies that E [∂ t e Y 1 ] ≤ 3d 2 max q r . Differentiating Y 2 to with respect t e and t f for e = f , we obtain In each of the sums, there is freedom to choose only one edge. As the source, the target or both are fixed for this edge, each summation is upper bounded by d max q r . According to the definition of Q, at most two of the summations are non-zero, implying that E ∂ t e ∂ t f Y 2 ≤ 2d max q r . As Y 2 is a polynomial of degree 3 and all of its coefficients are 1, any third order partial derivative of Y 2 can be at most 1. 
We thus find: Vu's inequality is applied to Y 2 using λ = λ i and E 0 = 85λ 3 i + 3md 2 max q 3 r , E 1 = 85λ 2 i + 3d 2 max q 2 r , E 2 = 17λ i + 2d max q r and E 3 = 1, to obtain If we choose c large enough, this implies that Next, remark that This implies that completing the proof. (iii) To prove that P separately. The construction is almost identical to the proofs of (i) and (ii). First consider Start with Z 1 . Since for a Bernoulli variable t 2 e = t e , Z 1 is a polynomial of degree one. Since its coefficients are at most 1, it is clear that any first order partial derivative of Z 1 is upper bounded by 1. The expected value of Z 1 is upper bounded by mq r . This implies that, Hence, E 0 = mq r +λ i and E 1 = 1, satisfy the constraints of Theorem 2.12 with λ = λ i . Applying this theorem we find Next, consider Z 2 . This is a sum over all pairs of distinct edges, hence it contains fewer than m 2 terms. Combining this with d − Taking the partial derivative with respect to a variable t g and writing g = (i, j) leads to Each term of the summations is upper bounded by q r . Each summation contains m − 1 terms. Thus we find: E ∂ tg Z 2 ≤ 2mq r . As Z 2 is a second order polynomial with coefficients upper bounded by 1, all second order partial derivatives will be at most 1. Combining these observations we find: Similar to the proof of (i) it can be shown that λ = λ i and E 0 = 9λ 2 i + 2m 2 q 2 r , E 1 = 9λ i + 2mq r and E 2 = 1, satisfy the constraints of Vu's concentration inequality, which gives Pulling the factor d 2 max m inside the root and taking c > 8 (c 1 + 9c 2 ), we also find Next, we consider Note that this is the same expression as for ∆ 1 pr where the coefficient of each term is replaced by . Hence using the same argument as for (i) we obtain Again pulling d 2 max m inside the square root, we find Since β = c λ i (λ i + d 2 max q r ) λ 2 i + md 2 max q 2 r , this completes the proof by taking c > 8(18c 2 + c 1 ). (iv) This argument is exactly the same as for (ii), since Hence we obtain P Combining all inequalities from the statement of Lemma (2.13), we find that for all i ∈ {0, 1, . . . , L} and 0 ≤ r ≤ m − 1. By definition of ψ r this shows equation (35) and hence it proves Lemma 2.11. This completes the proofs of Lemma 2.7 (a) and 2.8 (a). Next, we prove Lemma 2.7 (b) and 2.8 (b). This requires the following Lemma. Proof. The first claim follows by changing the summation 2m−2 m−r=2 into m m−r=1 in the proof of Lemma 15(b) [25]. The second claim follows by applying a similar change to the proof of Lemma 18 [25]. We will now determine an upper bound on f (N ) for all N ∈ S * (M). According to the Using 1 + x ≤ e x the latter inequality becomes Let us consider N ∈ A i \ A i−1 for i ∈ {1, 2, . . . , L} and apply Lemma 2.14 to equation (37), to obtain: This completes the proof of Lemma 2.7 (b). It remains to prove Lemma 2.8 (b). As A ∞ ⊂ S * (M) we have: . Since 0 < Ψ r (N ) and ψ r < (m − r) 2 , we further have: From Lemma 2.2 it follows that Ψ r = ∆ r + Λ r ≤ 2(m − r)d 2 max , which, when inserted in the latter inequality, gives: Using (1 + x) ≤ e x , we find: and since τ ≤ 1 3 and m ≤ nd max , we have: This proves Lemma 2.8 (b), completes the proofs of Lemma's 2.7 and 2.8, and therefore, completes the proof of the asymptotic estimate (17). (18) The next step is showing that equation (18) holds. To this end, we first prove the following Lemma. Proof of equation Proof. 
(a) The probability that N ∈ B j \ B j−1 is upper bounded by the probability that N ∈ B c j−1 := S (M) \ B j−1 . Hence if we show that P N ∈ B c j ≤ e −Ω(2 j/2 ln(n)) , the claim is proven. Remark that B c j−1 ⊂ N ∈ S (M) | ∃ r, s.t. m − r ≤ ωλ 0 and Ψ r ≥ 2 j−1 . Therefore, we need to consider only those r for which m − r ≤ ωλ 0 . We will therefore prove inequality (38) instead. Fix an arbitrary r such that m − r < ωλ 0 and assume that Ψ r ≤ 2 j−1 . Then by definition of Ψ r and applying Lemma 2.2, we have Since m − r ≤ ωλ 0 < 2 j−1 ωλ 0 and d 2 max ω 2 λ 2 0 < m, The remainder of the proof is similar to the proof of Lemma 2.10 wherein equation (34) is replaced by This inequality can be shown to imply one of the following statements holds true: (a) G q has more than 2 j/2−1 edges; Indeed, the probability that either of those statements holds, is upper bounded by e −Ω(2 j/2 ln(n)) , by using the same argument as in the proof of Lemma 2.10. Since r is arbitrary this shows that P Ψ r ≥ 2 j−1 ≤ e −Ω(2 j/2 ln(n)) for all r such that m − r < ωλ 0 , completing the proof. for all N ∈ B j \ B j−1 . According the definition of B j , we have and since B j ⊂ A 0 the second statement from Lemma 2.14 can be applied, giving: Hence for all Now, we give a proof of asymptotic estimate (18). Lemma 2.15 implies that for all Recall that j ≤ K, and, in combination with equation (30), this yields 2 j−1 2 ≤ ln(n). Hence, proving equation (18). Proof of equations (19) and (20) We bound the expected value of f (N ) for all N ∈ C. We, start with proving upper bound (19), for which it suffices to show that for all N ∈ C, f (N ) ≤ 1 + o(1). As C ⊂ S * (M), in analogy to equation (37), we have Because C ⊂ A 0 , we obtain from Lemma 2.14 that Also by definition of C, Ψ r (N ) ≤ 1 for all m − r ≤ ωλ 0 . Hence for all N ∈ C, proving equation (19). Next, we derive a lower bound on E f (N ) ½ S * (M) . As C ⊂ S * (M) this will prove equation (20). Take any ordering N ∈ S * (M). Lemma 2.11 states that holds for all r, such that m − r ≥ ωλ 0 . Thus the probability that |Ψ r (N ) − ψ r | ≥ 4β r (λ 0 ) + 2 min (γ r (λ 0 ) , ν r ) holds for at least one r is small. Now consider an ordering N ∈ S * (M) such that for all r with m − r ≥ ωλ 0 , Combining this with the definition of f (N ), we find: From Lemma 2.14 and the definition of T r , we find m To approximate the remaining product, we apply Lemma 2.5 in combination with 1−x ≥ e −2x and an asymptotic estimate λ 3 0 ωd 2 max = o(m) to obtain: Now, for each N ∈ S * (M) we have shown that either f (N ) ≥ 1 − o(1) or that its probability is upper bounded by o(1), which this completes the proof of equation (20). Remark that in fact we have proven Additionally, the proofs of equations (17) This corollary will be used to prove equation (21). Proving equation (21). This equation is the last bit that remains to prove equation (5). It concerns the expected value of f (N ) for the orderings in S (M) \ S * (M). Equation (13) implies that for any N ∈ S (M) \ S * (M), there is at least one 0 ≤ r ≤ m − 1 such that the inequality is violated. This inequality can only be violated for specific values of r. To determine these values, we assume that the above inequality is violated and investigate what are the implications for ∆ r . Recall that Ψ r = ∆ r + Λ r . By using Lemma 2.2 to bound Λ r , we obtain: Since d 4 max = o(m), there is such n 0 that for all n > n 0 , d 2 max m < τ 2 . 
Let n > n 0 , then Assuming the opposite inequality to (41) holds, this becomes: Lemma 2.2 states that ∆ r ≤ (m − r)d 2 max , and hence, we deduce that (m − r) 1 − τ 2 ≤ d 2 max , which is equivalent to Therefore, inequality (41) can only be violated if m − r ≤ 2d 2 max 2−τ . This allows us to partition with S t (M) being the set of all orderings N violating inequality (41) with r = m − t and not violating it for all r < m − t. To prove equation (21), it suffices to show that We will now prove equation (43). According to the definition of Ψ r , we have (m−r) 2 For the algorithm to finish successfully, there must be at least m − r suitable pairs left at each step r, implying that (m − r) 2 In analogy to equation (37) it can also be shown that Combing these observations with inequality (41), which holds for all r < m − t, we find: Next, we take the expected value of the above equation and apply Hölder's inequality to obtain: Using Corollary 2.16 this becomes Hence, to prove equation (43), it remains to show that This requires an upper bound on P [N ∈ S t ], which we derive in the following manner: As the first step, we show that if N ∈ S t , then G Nr always contains a vertex with some special property. We use the probability that such a vertex exists as an upper bound for P [N ∈ S t ]. Let us assume that N ∈ S t , fix r = m − t and define Γ(u) := {v ∈ V |(u, v) ∈ G Nr }. By definition of ∆ r , this allows us to write Because N ∈ S t , inequality (42) must hold. Inserting the above expressions for ∆ r and (m−r) into this inequality yields: which implies that there exists a vertex u ∈ V such that Thus we have shown that if N ∈ S t , there must exist a vertex u obeying (45), and therefore, probability that G Nr contains such a vertex u provides an upper bound for P [N ∈ S t ]. As the second step, we derive an upper bound on the probability that u obeys (45). Recall that G Nr contains the first r edges of the ordering N . Adding the remaining t edges of N completes and l ≥ (1 − τ )t. We derive an upper bound on the probability that k ≥ 1 and l ≥ (1 − τ )t for a random ordering N ∈ S (M). That is to say we fix all m edges in the graph, but the order in which they are drawn N is a uniform random variable. To obtain a fixed value of k, exactly k of the d + u edges with u as source must be in N \ N r . Choosing these edges determines Γ(u). To obtain the desired value of l, exactly l edges with target in Γ(u) ∪ {u} must be in N \ N r . There are v∈Γ(u)∪{u} (d − v − 1) + d − u edges to choose from, since for each v ∈ Γ(u) the edge with v as the target and u as the source is already in N r . The remaining t − l − k edges that are not in N r may be chosen freely amongst all the edges that do not have u as a source or an element of Γ(u) ∪ {u} as target. Thus the probability to get a specific combination of k and l is We therefore write the upper bound for the probability that a randomly chosen vertex u satisfies (45) as For N ∈ S t at least one vertex satisfies inequality (45), thus we have: This gives Finally, we approximate the sum over k and l. Since adding t edges completes the ordering, and ((d + u − k + 1)(d max )) = O 1 m 1/2 , the term inside the summation is maximal for k = 1 and l = (1 − τ ) t. This gives: Here we used that τ ≤ 1 3 , m k ≤ 2 m and u∈V d + u = m. 
Plugging this into (44) yields: Since t ≤ 2d 2 max 2−τ , we have: and since τ ≤ 1 3 , for any x ≥ 1, x 1−τ ≤ x, we find: Inserting the estimate d max = O m 1/4−τ yields, and using t = o m 1/2 and that 4 2−τ is constant when m goes to infinity with n, we find: This completes the proof of inequality (43) and hence it shows that equation (21) holds. This completes the prove of equation (12) and hence proves equation (4). Together with the results from the beginning of Section 2, and Section 2.2 this completes the proof of Theorem 1.2. The probability of failure of the Algorithm Here we show that the probability the algorithm fails is o(1). The proof is inspired by [25,Section 5]. If at step s, every pair of an unmatched in-stub with an unmatched out-stub is unsuitable -the algorithm fails. In this case, the algorithm will necessarily create a self-loop or double edge when the corresponding edge is added to G Ns . First, we investigate at which steps s ∈ {0, 1, . . . , m − 1} the algorithm can fail. Then, we derive an upper bound for the number of vertices that are left with unmatched stubs when the algorithm fails. For a given number of unmatched stubs, this allows us to determine the probability that the algorithm fails. Combining these results, we show that this probability is o(1). The following lemma states that the algorithm has to be close to the end to be able to fail. The number of vertices that have unmatched stubs when the algorithm fails is also bounded. Suppose a vertex v ∈ V has unmatched in-stub(s) left when the algorithm fails. Since the number of unmatched in-stubs equals the number of unmatched out-stubs, this implies that there are also unmatched out-stubs. Because the algorithm fails, every pair of an unmatched in-stub and an unmatched out-stub induces either a double edge or self-loop. Hence, only v and vertices that are the source of an edge with v as a target can have unmatched out-stub(s). As v has at least one unmatched in-stub, there are at most d max − 1 edges with v as a target. Thus at most d max vertices have unmatched out-stub(s). Symmetry implies that at most d max vertices have unmatched in-stub(s) when a failure occurs. Let (s) be the event that the algorithm fails at step s with v i 1 , . . . , v i k − ∈ V being the only vertices with unmatched in-stubs and v j 1 , . . . , v j k + the only vertices having unmatched out-stubs. The amount of unmatched in-stubs (respectively outstubs) of such a vertex i l (j l ) is denoted by (s) ). Since k − (respectively k + ) denotes the number of vertices with unmatched in-stubs (out-stubs) that are left, k − , k + ≤ d max . This allows to write the probability that Algorithm 1 fails as We apply this bound to the degree sequence d K − ,K + (s) . A graph obeying this degree sequence has s − k + k − + k ± edges, with k ± = |K ± |. Thus we must show that Thus we may apply inequality (48) to d K − ,K + (s) to obtain Following the derivation in Section 2 we find In the latter expression, the factor with factorials accounts for the number of different configurations leading to the same graph G Ms , which equals the number of permutations of the stub labels. However for i ∈ K − there are only permutations of the labels of the in-stubs of v i that lead to a different configuration. Remark that changing the label of an in-stub that remains unmatched with another in-stub that remains unmatched does not change the configuration. By the same argument for i ∈ K + there are only ways to permute the labels of the out-stubs of v i . 
We can now determine First, we look at the product of the exponentials in the asymptotic approximations of P [G Ms ] and L d (s) k − ,k + , which after some transformations, and using that m > s ≥ m − d 2 max , becomes: By using the latter estimate, we obtain P A Using that m − s ≤ d 2 max , k + , k − ≤ d max and 0 ≤ k ± ≤ min (k − , k + ), we obtain: Thus the upper bound on the probability of A Combining equation (46) with Lemma 3.2, we are able to prove the desired result. Proof. In the statement of Lemma 3.2, the fraction d 2 max m k + k − −k ± is either 1 if k + k − = k ± or smaller than d 2 max m if k + k − = k ± . Since k ± ≤ min (k − , k + ), k + k − = k ± implies that k + = k − = 1. Together k + = k − = 1 and the conditions under which the algorithm can fail imply that K + = K − . First we consider this case. Since K + = K − = K ± = 1 we have d − Next, assume that k + k − = k ± , which implies that This proves the claim of Theorem 1.1 about the failure probability of Algorithm 1. Running time Algorithm 1 When implementing Algorithm 1 one has a certain freedom to decide how exactly to choose random samples with probability proportional to P i,j . Our implementation of Algorithm 1 uses the three-phase procedure introduced for regular graphs in [23] and extended to nonregular undirected graphs in [25]. We also distinguish three phases depending on the algorithm step r, however, our sampling probability is proportional d + , and the corresponding criteria that determine the phase of the algorithm are different. We also benefit from the fact that looking up an element in a list with a dictionary requires constant time, and therefore, one can check in constant time whether an edge (i, j) is already present in the graph constructed thus far. In what follows, we show that the expected running time of our algorithm is linear in the number of edges. Proof. Phase 1. Let E be the list of edges constructed by the algorithm so far, and let E be supplied with an index dictionary. In the first phase, a random unmatched in-and out-stubs are selected. We may check whether this is an eligible pair in time O (1) , as this is the time needed to look up an entry in a dictionary. If eligible, the pair is accepted with probability and (i, j) is added to E. We select edges according to this procedure until the number of unmatched in-stubs drops below 2d 2 max . This marks the end of phase 1. As a crude estimate, each eligible pair is accepted with probability at least 1 2 , and at most 1 2 of all stub pairs are ineligible, see Lemma 2.2(a). Hence, creating one edge in phase 1 has an expected computational complexity of O (1), and the total runtime of this phase is O (m). Phase 2. In this phase we select a pair of vertices instead of a pair of stubs. This requires us to keep track of the list of vertices with unmatched in-stubs/out-stubs. These lists are constructed in O (n) and can be updated in a constant time. Draw uniformly random vertices i and j from the lists of vertices with unmatched out-stubs and in-stubs correspondingly. Accept i (respectively j) with probability . If both vertices are accepted, we of one edge also takes at least O (d max ) in every phase, this does not change the overall complexity of the algorithm. The initial value is which can be computed in O (n). As n ≤ m this does not change the order of the expected running time, and hence, this completes the proof. This lemma completes the proof of Theorem 1.1.
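To illustrate the constant-time ingredients mentioned above, the sketch below shows one attempt of phase 1: stubs are stored in arrays so that removal is a swap-and-pop, and the set of placed edges provides the O(1) eligibility check. The acceptance probability shown is the same assumed stand-in for P_{i,j} used earlier, not necessarily the exact expression of the paper.

import random

def phase1_attempt(out_stubs, in_stubs, edges, d_out, d_in, m):
    """One phase-1 attempt: draw a random unmatched out-stub and in-stub,
    check eligibility against the edge set in O(1), and accept with a
    state-dependent probability.  Returns True if an edge was placed."""
    oi = random.randrange(len(out_stubs))
    ij = random.randrange(len(in_stubs))
    u, v = out_stubs[oi], in_stubs[ij]
    if u == v or (u, v) in edges:                                 # ineligible pair
        return False
    if random.random() > 1.0 - d_out[u] * d_in[v] / (4.0 * m):    # assumed bias
        return False
    edges.add((u, v))
    out_stubs[oi] = out_stubs[-1]; out_stubs.pop()                # O(1) stub removal
    in_stubs[ij] = in_stubs[-1]; in_stubs.pop()
    return True

Repeating such attempts until the number of unmatched in-stubs drops below 2 d_max^2 mirrors the stopping rule of phase 1; since at most half of the stub pairs are ineligible and each eligible pair is accepted with probability at least 1/2 in this regime, the expected cost per placed edge stays O(1).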
Role of CD40 ligation in dendritic cell semimaturation Background DC are among the first antigen presenting cells encountering bacteria at mucosal surfaces, and play an important role in maintenance of regular homeostasis in the intestine. Upon stimulation DC undergo activation and maturation and as initiators of T cell responses they have the capacity to stimulate naïve T cells. However, stimulation of naïve murine DC with B. vulgatus or LPS at low concentration drives DC to a semimature (sm) state with low surface expression of activation-markers and a reduced capacity to activate T-cells. Additionally, semimature DC are nonresponsive to subsequent TLR stimulation in terms of maturation, TNF-α but not IL-6 production. Ligation of CD40 is an important mechanism in enhancing DC maturation, function and capacity to activate T-cells. We investigated whether the DC semimaturation can be overcome by CD40 ligation. Results Upon CD40 ligation smDC secreted IL-12p40 but not the bioactive heterodimer IL-12p70. Additionally, CD40 ligation of smDC resulted in an increased production of IL-6 but not in an increased expression of CD40. Analysis of the phosphorylation pattern of MAP kinases showed that in smDC the p38 phosphorylation induced by CD40 ligation is inhibited. In contrast, phosphorylation of ERK upon CD40 ligation was independent of the DC maturation state. Conclusion Our data show that the semimature differentiation state of DC can not be overcome by CD40 ligation. We suggest that the inability of CD40 ligation in overcoming DC semimaturation might contribute to the tolerogenic phenotype of semimature DC and at least partially account for maintenance of intestinal immune homeostasis. Background Dendritic cells (DC) are among the first antigen presenting cells encountering bacteria at mucosal surfaces and play an important role in maintenance of regular homeostasis in the intestine. Stimulation of DC with e.g. TLR agonists leads to activation and maturation of DC by activation of NF-κB and mitogen-activated protein kinase (MAPK) family members [1]. This results in a rapid production of costimulatory molecules, cytokines and pro-inflammatory mediators that affect T-cell differentiation, for instance. We identified Escherichia coli mpk, a commensal E. coli strain which induces colitis in genetically predisposed hosts and Bacteroides vulgatus mpk which does not elicit colitis and even prevents the colitis caused by E. coli mpk [2,3]. Stimulation of bone marrow derived dendritic cells (BMDC) with E. coli [4] or lipopolysaccharide (LPS) at high concentration [5] induced TNF-α, IL-12 and IL-6 secretion and expression of activation-markers, whereas stimulation with B. vulgatus or [4] LPS at low concentrations [5] only led to secretion of IL-6 and DC were driven to a semimature state with low expression of activationmarkers. Those semimature DC were nonresponsive to subsequent TLR stimulation in terms of maturation and TNF-α but not IL-6 production [4,5]. Moreover, the low positive expression of activation surface marker like e.g. CD40 on semimature DC, was not overcome by a subsequent stimulus via TLR4 [4]. This might contribute to the reduced activation of T-cells by semimature DC [4] as binding of the CD40 ligand (CD40L) on naïve T-cells to CD40 is a crucial signal for generation of effective CD4 + and CD8 + T-cell responses [6,7]. CD40 ligation results in upregulation of CD83, CD80 and CD86 as well as MHC molecules on DC. 
Additionally, the expression of adhesion molecules ICAM-1 and CD58 [8][9][10][11] is upregulated and survival of DC is supported by CD40 ligation [12,13]. Furthermore, CD40 ligation of mature DC results in secretion of proinflammatory cytokines e.g. IL-1, IL-6 and IL-12 [9][10][11] [14,15]. IL-12 plays an important role in T-cell polarization by promoting Th-1 responses. Its bioactive heterodimer IL-12p70 consists of a p40 and p35 subunit, which are encoded by different genes and therefore independently regulated. IL-12p40 can also form homodimers (IL-12p80) which were shown to inhibit IL-12p70 mediated immune responses [16,17]. Mitogen-activated protein kinase (MAPK) signal transduction pathways play a crucial role in many aspects of immune mediated inflammatory responses [18]. The MAPK ERK, JNK and p38 are important regulators of host immune responses to e.g. bacterial stimuli. Extracellular stimuli induce phosphorylation of MAPK-kinasekinase (MKKK) which in turn phosphorylate MKK. Specific MKK are necessary to phosphorylate and activate MAPK, which results in activation of downstream kinases and transcription factors [18][19][20]. The products of inflammatory genes include e.g. cytokines, chemokines and adhesion molecules which promote recruitment of immunocompetent cells to inflammatory sites. Additionally, the MAPK p38 enhances the mRNA stability of many proinflammatory cytokines, e.g. IL-8, TNF-α or IL-6 [21][22][23]. Within the present study we analyzed the effects of DC semimaturation on cellular responses to CD40 ligation and showed that the semimature differentiation state of DC, induced by stimulation with B. vulgatus or LPS lo can not be overcome by CD40 ligation. Mice C57Bl/6x129sv mice were obtained from own breeding. All mice were kept under SPF conditions. Male and female mice were sacrificed at 6-12 weeks of age. Animal experiments were reviewed and approved by the responsible institutional review committee (Regierungspräsidium Tübingen, December 19 th 2008). Western blotting For p38 and pERK Western blot analysis proteins (50 μg) were solubilized in Laemmli sample buffer. They were separated on SDS-PAGE gels and transferred to nitrocellulose membranes. The membranes were blocked for 1 h at room temperature in 5% dry milk in TBS/T (TBS containing 2,01% Tween). After that the membranes were incubated with mouse anti-p38 MAPK (pT180/pY182) or with mouse anti-ERK 1/2 (pT202/pY204) (both BD Pharmigen, Heidelberg, Germany) at 4°C over night. The antibody solution was diluted 1:1000 in 5% dry milk in TBS/ T. After incubation the membranes were washed three times in TBS/T and were treated with the secondary antibody (polyclonal rabbit anti-mouse conjugated to horseradish peroxidase, DakoCytomation, Hamburg, Germany; diluted 1:1000 in 5% dry milk in TBS/T) for 2.5 h at room temperature. After repeating the washing step the proteins were detected by enhanced chemiluminescence. Before using ß-actin (mouse anti-mouse β-actin; Sigma, Munich, Germany) as a control for protein loading, the blots were stripped for 20 min with 10 ml stripping-solution (10 mM NaOH and 250 mM guanidinium chloride). Bacteria and cell lines The bacteria used for stimulation of the murine dendritic cells were Escherichia coli mpk [2] and Bacteroides vulgatus mpk [2] . The E. coli strain was grown in Luria-Bertani (LB) medium under aerobic conditions at 37°C. Bacteroides vulgatus was grown in Brain-Heart-Infusion (BHI) medium and anaerobic conditions at 37°C. 
In some experiments, J558L/CD40L cells were used for CD40 ligation. The cells were cultured in DMEM (Dulbecco's modified Eagle's medium,Invitrogen, Darmstadt, Germany) supplemented with 1 g/l glucose, L-glutamine, pyruvate, 50 μmol/l 2-mercaptoethanol, 10% FCS and penicillin/ streptomycin. Mouse DC isolation Bone marrow cells were isolated and cultured as described previously [4] with minor modifications. Cells were harvested at day 7 and used to evaluate the effects of cellular challenge with E. coli mpk, B. vulgatus mpk and LPS on subsequent CD40 ligation. Cytokine release and expression of surface markers were determined after CD40 ligation as described below. Stimulation of isolated DC At day 7, DC were stimulated with viable bacteria at a MOI of 1 at 37°C, 5% CO 2 . Gentamicin was added one hour after stimulation and cells were incubated for 24 hour. To exclude bacterial overgrowth, CFU of viable bacteria was determined at the end of incubation period. Respectively, DC were stimulated with LPS (1 ng/ml and 1 μg/ml). After 24 h cell culture supernatant was harvested for analysis of cytokine expression and cells were used for flow cytometry of surface marker expression. CD40 ligation To determine the effects of CD40 ligation on DC cytokine production and expression of surface markers DC were restimulated using an agonistic anti-CD40 mAb (BD, Heidelberg, Germany). Therefore, DCs pretreated with E. coli, B. vulgatus or LPS were harvested, washed twice and cultured at 1.5x10 5 DC in the presence of 5 μg/ml anti-CD40 mAb in DC culture medium at 37°C, 5% CO 2 . As a control, DCs were incubated with 5 μg/ml of the IgM isotype control antibody (BD, Heidelberg, Germany). After 48 h, DC culture supernatants were harvested and analyzed for cytokine concentrations by ELISA. The expression of CD80 and CD86 on the DC surface was determined by FACS analysis. For determination of CD40 expression of DC upon CD40 ligation the J558/LCD40L cell line was irradiated with 180 Gy in a Gammacell 1000 Elite W (Nordion, Ottawa, Canada) prior to co-culture with DC. 5x10 4 J558L/CD40L cells were cultured with 1.5x10 5 DC in DC culture medium for 48 h at 37°C, 5% CO 2. DC were harvested and analyzed for expression of CD40 by FACS. Inhibition of MAP kinase signaling DC were incubated with the p38 MAP kinase inhibitor SB202190 (2 μmol/l) or the ERK inhibitor PD98059 (50 μmol/l) for 30 min prior to CD40 ligation. After 30 min the cells were washed. CD40 ligation with anti CD40 mAb was performed for 24 h. Cell culture supernatants were harvested and used for determination of cytokine concentrations. Cytokine analysis by ELISA For analysis of IL-6, IL-12p40 and IL-12p70 concentrations in cell culture supernatants commercially available ELISA kits (BD, Heidelberg, Germany) were used according to the manufacturer's instructions. Flow cytometry analysis 3x10 5 DC were incubated in 150 μl PBS containing 0.5-μl of fluorochrome conjugated antibodies and applied to flow cytometry analysis. 30,000 cells were analyzed using a FACS Calibur (BD, Heidelberg, Germany). Statistical analysis Statistical analysis was performed using the two sided unpaired Student's t-test. P values < 0.05 were considered significant. Error bars represent ± SEM. CD40 ligation does not overcome DC semimaturation Stimulation of DC with B. vulgatus or LPS lo (1 ng/ml) leads to induction of DC semimaturation [4,5] whereas stimulation of immature DC with E. coli or LPS hi (1 μg/ml) induces DC maturation. 
The semimature DC phenotype is characterized by tolerance towards subsequent stimulation via TLR2 or TLR4 in terms of TNF-α and IL-12p70 but not IL-6 secretion, and by a low positive expression of costimulatory molecules such as CD40, CD80 and CD86 [4,5]. Herein we investigated whether CD40 ligation, as a TLR-independent DC activation signal, can overcome the semimature DC phenotype and induce activation and maturation of semimature DC. By stimulation of immature DC with B. vulgatus or LPS lo we induced semimature DC, and by stimulation with E. coli or LPS hi DC maturation was induced. Secretion of IL-12p40 (Figure 1A), IL-12p70 (Figure 1B) and IL-6 (Figure 1C) was determined by ELISA. CD40 ligation of semimature DC led to secretion of IL-12p40 but not of the bioactive heterodimer IL-12p70. In contrast, CD40 ligation of mature DC resulted in significantly enhanced secretion of IL-12p70 as compared to cells treated with the anti-CD40 mAb isotype control. Mature DC revealed a high spontaneous production of IL-12p40 which was only slightly enhanced by CD40 ligation (Figure 1A, B). Additionally, CD40 ligation of semimature DC resulted in increased levels of IL-6 compared to immature DC. In mature DC, which also showed a high spontaneous secretion rate of IL-6, CD40 ligation only led to a slight further increase of IL-6 production (Figure 1C).

Figure 1. Cytokine secretion of differentially primed BMDC in response to secondary CD40 ligation. Wildtype BMDC were stimulated with B. vulgatus mpk (MOI 1) or E. coli mpk (MOI 1) and LPS at low (1 ng/ml) or high concentration (1 μg/ml), respectively, for 24 hours. Unstimulated DC (PBS) served as a control. Subsequently, these DC were washed twice and were re-stimulated with an agonistic anti-CD40 mAb (5 μg/ml) for 48 hours (black bars). As a control, DC were further incubated in medium only (white bars) or in the presence of the isotype control (grey bars). The concentrations of IL-12p40 (A), IL-12p70 (B) and IL-6 (C) were determined by ELISA. The bars represent the mean values of three independent experiments, each performed in duplicate, + SD. * p < 0.05, ** p < 0.01, *** p < 0.001.

Next, we investigated whether CD40 ligation can overcome the low positive expression of DC activation markers on semimature DC. Therefore, we analysed the expression of CD80 (Figure 2A), CD86 (Figure 2B) and CD40 (Figure 2C) on immature, mature and semimature DC upon CD40 ligation in comparison to mock cells or cells treated with the anti-CD40 mAb isotype control by FACS. The MHC class II expression on immature, semimature and mature DC was slightly increased upon CD40 ligation; these changes, however, proved not to be statistically significant (data not shown). As the anti-CD40 antibodies used for ligation assays and the anti-CD40 antibodies used for FACS analysis might compete for binding of CD40, we used the J558L/CD40L cell line to analyze the influence of CD40 ligation on the expression of CD40 itself on DC. In B. vulgatus treated semimature DC we found an increase of CD40 expression upon CD40 ligation (B. vulgatus 26.8% ± 7.4% to 55.0% ± 0.1%). In LPS lo treated semimature DC we observed a similar effect; however, the increase in CD40 expression was statistically not significant (57.8% ± 6.6% to 63.5% ± 8.9%). Interestingly, CD40 ligation of semimature DC did not raise CD40 expression to levels as high as on mature DC (E. coli 77.1% ± 4.2%; LPS hi 75.2% ± 4.0%) (Figure 2C).

Figure 2. Expression of CD80, CD86 and CD40 on differentially primed BMDC after secondary CD40 ligation. Wildtype BMDC were stimulated with B. vulgatus mpk (MOI 1) or E. coli mpk (MOI 1) and LPS at low (1 ng/ml) or high concentration (1 μg/ml), respectively, for 24 hours. Unstimulated DCs (PBS) served as a control. Figure 2A and 2B: the pre-treated DC were washed twice and were re-stimulated with an agonistic anti-CD40 mAb (5 μg/ml) for 48 hours. As a control, DCs were further incubated in medium only (mock) or in the presence of the isotype control. The expression levels of CD80 (A) and CD86 (B) were measured by FACS analysis. Figure 2C: the pre-treated DC were washed twice and were re-stimulated with CD40L by co-culture with J558L/CD40L transfectants or were cultured in medium only as a control (− J558L/CD40L). After 48 hours, the expression of CD40 was determined by FACS analysis. Each histogram is representative of three independent experiments. The data are mean values of the percentages of the positive cell populations determined in these three experiments, ± SD. * p < 0.05, ** p < 0.01, *** p < 0.001; # p < 0.05 compared to the respective control (PBS).

In DC semimaturation CD40L induced p38 phosphorylation is inhibited

To analyze phosphorylation of the MAP kinase p38 in response to CD40 ligation, immature, semimature and mature DC were activated by CD40 ligation and pp38 levels were determined by Western blotting. CD40 ligation of immature and mature DC resulted in phosphorylation of p38, whereas in semimature DC the CD40L-induced phosphorylation of p38 was reduced (Figure 3). To investigate the biological relevance of p38 MAP kinase activation we treated immature, semimature and mature DC with the p38 inhibitor SB202190 prior to CD40 ligation. Levels of IL-12p40, IL-12p70 and IL-6 were determined in cell culture supernatants by ELISA (Figure 4A-C). Inhibition of p38 had no influence on the CD40L-induced secretion of IL-12p40 by DC, independent of the maturation state (Figure 4A). However, in mature DC the production of IL-12p70 upon CD40 ligation was partially inhibited by SB202190. Inhibition of p38 did not influence the IL-12p70 expression pattern in immature or semimature DC, as CD40 ligation did not induce any IL-12p70 secretion in these cells (Figure 4B). In mature DC, both spontaneous and CD40L-induced IL-6 secretion levels were partially reduced by inhibition of the p38 MAP kinase. In contrast, IL-6 production of immature as well as semimature DC upon CD40 ligation was not significantly affected by inhibition of p38 (Figure 4C).

In DC semimaturation ERK suppresses CD40L induced IL-12p40 production

To analyze the role of the extracellular signal-regulated kinase (ERK) we used the ERK inhibitor PD98059. Upon CD40 ligation, the inhibition of ERK resulted in a significant increase of IL-12p40 in immature and semimature DC but only a slight increase in mature DC (Figure 5A). In contrast, inhibition of ERK was not able to induce IL-12p70 production in immature and semimature DC and resulted in only slightly enhanced IL-12p70 secretion levels in mature DC (Figure 5B). The CD40L-induced IL-6 production by DC was not affected by ERK inhibition, independent of the maturation state (Figure 5C). In line with this, analysis of pERK levels upon CD40 ligation of immature, semimature and mature DC showed similar levels independent of the DC maturation state (data not shown). Taken together, our data showed that the semimature differentiation state of DC, induced by stimulation with B. vulgatus or LPS lo, can not be overcome by CD40 ligation.
Discussion In order to clarify the impact of CD40 expression on the T-cell activation capacity of semimature DC, we examined the effect of CD40 ligation on immature, semimature and mature DC. Semimature DC were induced by stimulation with either B. vulgatus or LPS at low concentration (1 ng/ml) and are characterized by a low positive expression of costimulatory molecules such as CD40, secretion of only IL-6, and non-responsiveness toward subsequent TLR activation [4,5]. In brief, we showed that CD40 ligation does not overcome DC semimaturation in terms of expression of activation surface markers and results in production of only IL-6 and IL-12p40, but not the bioactive form IL-12p70. The slightly reduced p38 phosphorylation levels in semimature DC as compared to mature DC might at least partially contribute to this effect. The expression of IL-12p40 turned out to be limited by pERK. In line with other studies [24,25], we observed that on mature DC no significant further increase in the expression levels of the already highly expressed costimulatory molecules CD40, CD80 and CD86 could be triggered upon additional stimulation by CD40 ligation. Upon CD40 ligation, immature and semimature DC expressed intermediate levels of CD40, CD80 and CD86, but did not reach the expression level of mature DC. However, the intermediate expression of costimulatory molecules was not associated with production of proinflammatory cytokines like IL-12p70. It is known that immature DC characterized by low expression levels of costimulatory molecules and lacking secretion of proinflammatory cytokines induce tolerance by promoting T-cell anergy, apoptosis or differentiation into Treg cells via antigen presentation in the absence of costimulatory signals [26][27][28][29]. Additionally, CD40 deficient DC or DC with a suppressed CD40 expression were shown to have a reduced potential to activate T-cell proliferation and polarization in Th1 or Th2 direction [30][31][32][33][34]. This effect might also contribute to the inhibited T-cell activation induced by the intermediate expression of costimulatory molecules on semimature lamina propria (lp) DC of B. vulgatus monocolonized IL-2 −/− mice [3]. On the other hand, it was shown that a high positive expression of costimulatory molecules in the absence of proinflammatory cytokine secretion can also result in the induction of tolerance rather than immunity.

Figure 3 Phosphorylation of p38 MAPK upon secondary CD40 ligation in immature, semimature and mature DC. Wildtype BMDC were stimulated with B. vulgatus mpk or LPS 1 ng/ml (LPS low) to generate semimature DC and E. coli mpk or LPS 1 μg/ml (LPS high), respectively, to generate mature DC. Immature DC were maintained by incubation in the absence of further stimuli (PBS). After 24 hours, DC were re-stimulated with anti-CD40 mAb (1 μg/ml) for 15 minutes or treated with the isotype control (Isotype). The expression of pp38 was analyzed by Western blotting. As loading control, β-actin expression was determined. The results shown are representative of three independent experiments.

Figure 4 Role of p38 MAPK on CD40L induced cytokine secretion in DC of different maturation status. Wildtype BMDC were primed with B. vulgatus mpk or LPS 1 ng/ml to obtain semimature DC and E. coli mpk or LPS 1 μg/ml, respectively, to obtain mature DC. Immature DC were maintained by incubation in the absence of further stimuli (PBS).
Subsequently, the DC were washed and treated with the p38 inhibitor SB202190 (2 μmol/l, black bars) or, as a control, with DMSO (grey bars) for 30 minutes, washed and re-stimulated with 1 μg/ml anti-CD40 mAb (+) or the isotype control (−). After 24 hours, the concentrations of IL-12p40 (A), IL-12p70 (B) and IL-6 (C) were measured in the cell culture supernatants by ELISA. The bars represent the mean values of three independent experiments, each performed in duplicate, ± SD. * p < 0.05, *** p < 0.001 for SB202190 compared to DMSO.

Figure 5 Role of ERK on CD40L induced cytokine secretion in DC of different maturation status. Wildtype BMDC were primed with B. vulgatus mpk or LPS 1 ng/ml to obtain semimature DC and E. coli mpk or LPS 1 μg/ml, respectively, to obtain mature DC. Immature DC were maintained by incubation in the absence of further stimuli (PBS). Subsequently, the DC were washed and treated with the ERK inhibitor PD98059 (50 μmol/l, black bars) or, as a control, with DMSO (grey bars) for 30 minutes, washed and re-stimulated with 1 μg/ml anti-CD40 mAb (+) or the isotype control (−). After 24 hours, the concentrations of IL-12p40 (A), IL-12p70 (B) and IL-6 (C) were measured in the cell culture supernatants by ELISA. The bars represent the mean values of three independent experiments, each performed in duplicate, ± SD. * p < 0.05 for PD98059 compared to DMSO.

The cytokine secretion pattern upon CD40 ligation differed between immature/semimature DC and mature DC. In immature and semimature DC, CD40 ligation did not result in induction of IL-12p70 secretion, in contrast to mature DC, where CD40 ligation led to increased IL-12p70 secretion. This is in line with other studies showing that TLR4 stimulation and CD40 ligation synergize in inducing IL-12p70 secretion [25,39]. The additive microbial priming signals are necessary to trigger the production of the IL-12p35 subunit [40], which was shown not to be induced by exclusive CD40 ligation [41,42]. Additionally, these accessory stimuli have the potential to augment the CD40 expression on antigen presenting cells (APC) [43][44][45], which results in a more effective CD40 ligation. However, DC primed with Bacteroides vulgatus as a microbial stimulus do not secrete IL-12p70 upon CD40 ligation. This might be one mechanism accounting for the tolerogenic effects of B. vulgatus in maintenance of intestinal homeostasis [2,3]. As Porphyromonas gingivalis, which is phylogenetically closely related to B. vulgatus, signals mainly via TLR2 [46], TLR2 might also be the main receptor for recognition of B. vulgatus. In turn, TLR2 activation is reported to result in transcription of the p40 but not the p35 subunit of IL-12p70 [1,47]. This might account for the induction of IL-12p40 but not p70 upon stimulation of B. vulgatus primed DC via CD40 ligation. The production of IL-12p40 in the absence of the p35 subunit might result in the formation of IL-12p40 homodimers, which are known to act as potent antagonists at the IL-12p70 receptor [48][49][50]. Additionally, in IL-12p40 transgenic mice Th1 responses are significantly reduced, suggesting that p40 also functions as an IL-12 antagonist in vivo [51]. Upon CD40 ligation, semimature DC produced significantly enhanced levels of IL-6 but not TNF-α (data not shown) or IL-12p70. This is in line with our previous studies showing a crucial role for IL-6 in induction of DC semimaturation and tolerance [4,5,52].
This is interesting as the secretion of IL-6 upon CD40 ligation by semimature DC might help to sustain the semimature differentiation state and influence the T-cell activation pattern. IL-6 plays an important role in T-cell differentiation through two independent molecular mechanisms. First, IL-6 stimulation of T-cells leads to an upregulation of nuclear factor of activated T cells (NFAT) [53], a transcription factor regulating IL-4 transcription [54], resulting in IL-4 expression and thereby promotion of Th2 polarized T cell differentiation [55]. Second, IL-6 upregulates the expression of suppressor of cytokine signaling (SOCS) 1 in CD4+ cells, which inhibits IFN-γ signaling and thus Th1 differentiation [56]. The presence of IL-6 may shift the Th1/Th2 balance towards Th2 [55]. CD40 ligation of DC is known to result in phosphorylation of MAP kinases such as p38 and ERK [57,58], and the ratio between pp38 and pERK is thought to play a crucial role in directing the cytokine secretion pattern of DC towards pro- or anti-inflammatory host responses [59][60][61]. CD40 ligation of mature DC resulted in phosphorylation of p38, and inhibition of pp38 using the inhibitor SB202190 partially reduced IL-12p70 and IL-6, but not IL-12p40, levels. Therefore, in mature DC pp38 might contribute to positive regulation of the p35 subunit of IL-12p70 [62]. This is in line with others showing that pp38 is important for production of IL-12p70 [61,63]. Additionally, pp38 is known to increase the stability of IL-6, TNF-α and IL-8 mRNA [22,23,64], which might result in increased secretion of these cytokines. Furthermore, pp38 is involved in NFκB activation via the mitogen- and stress-activated protein kinase (MSK) 1 [65,66]. In contrast, CD40L induced IL-12p40 secretion from mature DC has been shown to be independent of p38 phosphorylation, but dependent on the NFκB inducing kinase (NIK) [67]. As we observed only a slight reduction of p38 phosphorylation in semimature DC, we hypothesize that reduced p38 phosphorylation due to DC semimaturation is only one of many factors that, in interaction with others, shape the cytokine secretion pattern of semimature dendritic cells in response to secondary CD40 stimulation and thus their reduced pro-inflammatory capability [3][4][5]. The slight differences in the MAP kinase phosphorylation pattern in response to CD40 ligation might be based on differences in CD40 expression of immature, semimature or mature DC. A strong CD40 signal is known to preferentially activate p38, whereas weak CD40 signals are thought to favour ERK phosphorylation [60]. Inhibition of pERK during CD40 ligation turned out to have no significant effect on cytokine secretion in mature DC. In contrast, in semimature DC phosphorylation of ERK was at least partially responsible for limiting IL-12p40 expression. This is in line with others showing similar effects [68]. However, the Western blot analysis did not reveal significant differences of pERK levels in immature, semimature or mature DC. We speculate that in semimature DC ERK activation might control IL-12p40 production and thereby contribute to the limitation of IL-12p70 production. We are aware that this is speculative and that further work has to elucidate the role of ERK in DC semimaturation. Conclusion We hypothesize that the inability of CD40 ligation to overcome DC semimaturation might contribute to the tolerogenic phenotype of semimature DC and at least partially account for maintenance of intestinal homeostasis.
Generic Switching and Non-Persistence among Medicine Users: A Combined Population-Based Questionnaire and Register Study Background Generic substitution means that one medicinal product is replaced by another product containing the same active substance. It is strictly regulated with respect to its bioequivalence, and all products must have undergone appropriate studies. Although generic substitution is widely implemented, it still remains to be answered how generic switch influences persistence to long-term treatment, and if it is modified by patients' concerns about medicine and views on generic medicine. This study focuses on users of antidepressants and antiepileptics, and their experience of generic switching. Methods The study was an observational cohort study. By use of a prescription database, we identified patients who had redeemed prescriptions on generically substitutable drugs, and a questionnaire was mailed to them. We analyzed predictors of discontinuation in relation to generic switch and patients' attitudes towards generic medicines and concerns about their medicine. Results Patients who experience their first-time switch of a specific drug were at higher risk of non-persistence, Hazard Ratio 2.98, 95% CI (1.81;4.89) versus those who have never switched, and 35.7% became non-persistent during the first year of follow-up. Generic switching did not influence persistence considerably in those having previous experience with generic switching of the specific drug. Stratified analyses on users of antidepressants and antiepileptics underpin the results, showing higher risk of non-persistence for first-time switchers for both drug categories. Conclusion In conclusion, patients who are first-time switchers of a specific drug were at higher risk of non-persistence compared to never switchers and those having experienced previous generic switching. Background Although generic substitution is widely implemented, it still remains to be answered how the substitution influences persistence to long-term treatment and if it is modified by patients' concerns about medicine and views on generic medicine. Generic substitution means that one medicinal product is replaced by another product containing the same active substance, and generic substitution is implemented in many countries worldwide.
In some countries generic substitution only concerns a switch from a brand name drug to a generic drug, while in Denmark generic substitution includes all types of switches between drugs containing the same active substance [1,2]. It is strictly regulated with respect to its bioequivalence, and all products must have undergone appropriate studies [3]. However, generic substitution has always been accompanied by concerns about clinical equivalence in terms of safety and effectiveness, and about whether it may have important consequences for the medicine users' adherence [4,5]. Research on the subject often focuses on one shift from a manufacturer's drug to a generic drug or on incident users whose prescription is substituted at their first redemption [6][7][8]. Most of these studies did not identify significant associations between generic substitution and non-adherence [6,9,10], but one study assessing the association between generic substitution and persistence showed reduced persistence [7]. So far, studies of the effect of generic drug substitution on drug continuation have not focused on patients' overall experience of generic switches. Studies on patients' beliefs about medicine have shown associations with patients' adherence towards drug treatment. Those with stronger beliefs about medicine as being harmful and with concerns about treatment were less adherent, while patients with stronger perceptions of necessity of treatment showed higher adherence [11,12]. How these beliefs about medicine influence patients' persistence after generic switching has not been shown. Hence, we aimed to analyze, in a cohort study combining data from a register and questionnaires, predictors of discontinuation in relation to generic switch and patients' attitudes towards generic medicines and concerns about their medicine. Methods This study is part of a larger project on the consequences of generic substitution initiated by the Danish Ministry of Health, comprising qualitative interviews, questionnaires and register data [13][14][15]. Study design By use of a prescription database, we identified patients who had redeemed prescriptions on generically substitutable drugs and analyzed predictors of discontinuation in relation to generic switch and patients' attitudes towards generic medicines and concerns about their medicine. The study was an observational cohort study comprising 4000 randomly selected patients. The patients were aged 20 years or older, lived in the Region of Southern Denmark, and had in September 2008 redeemed prescriptions with general reimbursement for which generic substitution was recommended. A substitutable drug was defined as a medicinal product approved for generic substitution by the Danish regulatory authorities. In the Anatomical-Therapeutic-Chemical (ATC) system, substitutable medicines have the same 5th level code corresponding to the active substance or a combination of substances [16]. Setting Generic substitution was implemented in Denmark in 1991 and represents 68% of the Danish drug consumption [17]. Pharmacies are obliged to substitute a generic version of a drug according to a defined list, unless the general practitioner (GP) has explicitly stated that substitution should not be performed or the patient insists on having the more expensive drug. In both cases, the patients have to pay the price difference [18]. In Denmark patients use whichever pharmacy they wish to.
Drug prices are regulated every 14 days and The National Health Service only reimburses the price of the least expensive product. The pharmacy proposes the cheapest drug within a limit of 5-20 Danish kroner's (0.8-3.3 US$) price difference. A prescription always comprises the brand name, and prescription of the substance name is not allowed [17,19]. The Danish healthcare system is tax funded, providing free access to general practice, outpatient clinics and hospital care for all inhabitants irrespective of age, socioeconomic status and geographical residence. Reimbursement increases with patients' expenses for prescription medication [20]. All Danish citizens are registered with a unique personal identification number, used in all national registers, thus enabling accurate record linkage [21][22][23]. Data sources The patients were identified by means of Odense PharmacoEpidemiologic Database (OPED) [24], covering the population in the Region of Southern Denmark (1.19 million). Only prescriptions issued by GPs were included. For each patient we focused on one purchase of a generically substitutable drug (index drug) during September 2008. OPED data were used to obtain information on the patients' prescriptions on their day of inclusion (index date), as well as information on any other prescription during the preceding 12 months, including ATC code, brand name and date of purchase. We were thus able to identify drug switches likely to be due to generic substitution and the number of different drugs dispensed. A generic switch was defined as having taken place, if the patient had previously purchased the same pharmacological substance under a different name or by a different manufacturer. This distinction between drug products was based on brand name, registration holder (having the right to marketing) and importer or parallel importer. Our primary predictor was the generic switch of the index drug. A previous study from our group showed that experience with earlier generic switch within the index ATC code was clearly positive associated with the generic switch and was included in the model [14]. Thus we were able to distinguish between first-time generic switchers within the same ATC code and patients with previous experience. As a possible confounder we included the "number of different drugs" defined as the number of different ATC codes different from the index ATC code at the 5th level purchased by the patient during the 12 months prior to their index date. Number of different drugs was used as a proxy for comorbidity, as comorbidity may influence patients' persistence and the choice of switching between generically substitutable drugs. The variable "redemptions of the index drug within 1 year prior to index date" was used to illustrate patients' experience with the index ATC code 365 days before the index date. Information on residency and vital status of the cohort member was retrieved from the demographic data in OPED [24]. Study subjects and questionnaire Our cohort comprises 2000 users of antiepileptics and 2000 users of antidepressants. Details of the sample selection are described in a prior publication [14]. Patients were eligible for inclusion if they had made at least one other purchase of the same drug or one of its generic alternatives within 120 days prior to their index date [13]. For each patient we focused on the purchase of one generically substitutable drug (index drug). To prevent vulnerable patients (e. g. 
patients with severe terminal disease or dementia) from receiving the questionnaire, the GPs were asked whether it was appropriate to approach their patients. This is standard procedure when using data from OPED to approach patients. Questionnaires were mailed out in December 2008. A reminder was sent two weeks later. The questionnaire was adapted to the individual subject with reference to their specific drug (index drug) in every question and index date printed on the questionnaire. As a quality control the patient had to confirm purchase of the index drug to be included in the study. The questionnaire included scales from the "Beliefs about Medicine Questionnaire" (BMQ) and ad hoc scales developed on the basis of a literature review and a qualitative interview study, which have been reported elsewhere [13]. The BMQ was translated into Danish by means of a standardized forward-backward translation [25] and finally approved by Rob Horne, who developed the questionnaire. The BMQ is a validated and widely used psychometric instrument, which assesses patients' beliefs about medicines prescribed for personal use and beliefs about medicine in general [26]. The specific concern scale (from the BMQ) was used as measure of beliefs about the index drug. It consists of 6 items and assesses concerns about prescribed medication based on beliefs about the danger of dependence and long-term toxicity and the disruptive effects of medication, e.g. "it worries me that I have to take this medicine" [26,27]. The items in the BMQ scale were measured on a 5-point Likert response scale (strongly disagree to strongly agree). Furthermore, we constructed the scale Views on generic medicine for this questionnaire. The scale was based on 4 items concerning side effects, quality and effectiveness of generic medicine. The items were also measured on a 5-point Likert response scale (1: strongly agree to 5: strongly disagree). We analyzed the internal consistency using Cronbach's α. A value of 0.88 was found, suggesting that the scale is reliable. A person's score was calculated as the average of the non-missing scale items, if at least 60% of the scale items were answered. If less than 60% of the items were answered, the score was treated as missing. The BMQ subscale ranged between 1 and 5, and a high score meant a stronger belief in the concept described by the scale. The scale Views on generic medicine ranged from 1 to 5, and a low score meant a positive view on generic medicines. The questionnaire was pilot tested prior to the survey focusing on comprehensibility, relevance, acceptability and feasibility. Interviews were carried out with 18 medicine users purchasing their drug at community pharmacies, and the interviews were discussed in an academic setting of healthcare researchers. Data analysis Persistence represents the time over which a patient continues to fill a prescription, or the time from the initial filling of the prescription until the patient discontinues refilling of prescription [28]. In this study a subject was considered to be a medication user from the index date and for the subsequent number of days corresponding to the number of tablets of the prescription. A treatment episode was considered to have ended, if the interval between two prescriptions exceeded a period covered by the number of tablets prescribed plus a grace period of 90 days. We assume the patients as a minimum take 1 tablet per day. 
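To make the episode definition concrete, the following is a minimal sketch of how time to non-persistence could be derived from dispensing records under the rules just described. It is not the authors' analysis code (the analyses were run in Stata), the record format is a hypothetical simplification, and censoring for death or moving out of the region is omitted.

```python
from datetime import date, timedelta
from typing import List, Optional, Tuple

def time_to_non_persistence(
    fills: List[Tuple[date, int]],   # (dispensing date, number of tablets), sorted by date
    index_date: date,
    grace_days: int = 90,            # grace period used in the main analysis
    follow_up_days: int = 365,
) -> Optional[int]:
    """Days from the index date to non-persistence, or None if persistent/censored.

    Each fill covers one day per tablet (at least one tablet per day is assumed);
    the episode ends when the gap after the covered period exceeds the grace period.
    Censoring for death or moving is not handled in this simplified sketch.
    """
    end_of_follow_up = index_date + timedelta(days=follow_up_days)
    covered_until = None
    for fill_date, n_tablets in fills:
        if fill_date < index_date:
            continue
        if covered_until is not None and fill_date > covered_until + timedelta(days=grace_days):
            break  # the refill came too late: event on the day the tablets expired
        covered_until = max(covered_until or fill_date, fill_date) + timedelta(days=n_tablets)
    if covered_until is not None and covered_until < end_of_follow_up:
        return (covered_until - index_date).days
    return None  # still covered at the end of follow-up, or no fills after the index date
```

The returned day counts would then enter the survival analysis (Kaplan-Meier curves and the Cox model) as event or censoring times.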
The grace period was introduced to allow for some degree of non-adherence and for irregular dispensing due to stockpiling. We defined non-persistence as the first episode during the study period when a subject failed to present a subsequent prescription within the time window defined by the duration of the preceding prescription [29,30]. To analyze associations between generic switching and non-persistence, we used a Cox proportional hazards model to calculate Hazard Ratios (HR) and corresponding 95% confidence intervals (CI) and Kaplan-Meier curve to show time to non-persistence [28]. The analysis period was defined from the index date and 365 days ahead. Non-persistence events were registered on the day the number of tablets expired. An event was classified as such if it took place within the 365 days of follow-up or before the patient moved out of the Region or died. Patients were censored on the day of death, date of moving or at the time the analysis period ended, if an event had not taken place. If censoring occurred during the grace period, the date of censoring was set to the day that the number of tablets expired. In the Cox model we adjusted for potential confounders such as age, gender, number of different drugs, concerns about medicine and views on generics. Sensitivity analyses were made, assessing the influence of grace periods using 30 days, 60 days, 90 days and 120 days. All analyses were performed using Stata Release 11.0 (Stata-Corp, College Station, TX, USA). Ethics statement. Written informed consent was obtained by all participants for their clinical records to be used in this study. According to the Act on Biomedical Research Ethics Committee System, the project was not a biomedical research project and therefore did not need the Research Ethics Committee's approval. The anonymity of patients was strictly preserved throughout the data entry and analysis process. The study was approved by the Danish Data Protection Agency (journal number 2008-41-2364) Results A total of 2476 patients (44.1%) responded to the questionnaire and 1368 patients who used antiepileptics or antidepressants were included in the analysis (Fig. 1). During the analysis period 15 patients either moved out of the Region of Southern Denmark or died and were therefore censored. Table 1 shows the baseline characteristics according to whether the patients had experienced a generic switching stratified on drug categories. During the 365 days of follow-up 237 (17.3%) patients included in the study became nonpersistent to their treatment (Fig. 1). Table 2 shows that patients who experience their first generic switch had a higher risk of non-persistence of the index drug over time; HR 2.98, 95% CI (1.81;4.89) compared to never switchers. Generic switching did not influence persistence considerably in those having previous experience with generic switching of the specific drug. Fig. 2 shows that the time to non-persistence differed according to the patients' experience with generic switching. Among first-time switchers 35.7% became non-persistent during the first year of follow-up. In contrast, among patients who had never experienced a switch 14.2% became non-persistent. Among patients with previous experience with generic switching within the index ATC code, 15.0% became non-persistent if they switched on the index day and 15.1% if they did not switch on the index day. The Cox regression analyses were also performed stratified on drug categories, i.e. 
antidepressants and antiepileptics, both showing higher risk of non-persistence when the patients experienced their first generic switch of the index drug (Table not shown). Other potential confounding variables in the model, such as age, concerns about medicine and views on generics, had an effect on persistence. However, they did not affect our primary predictor considerably. Sensitivity analyses assessing the influence of gap length did not materially affect the association between switching patterns and non-persistence (Table 3). Discussion Summary We found that patients who were first-time switchers of a specific drug were at higher risk of non-persistence versus never switchers or multiple switchers, respectively. The stratified analyses showed higher risk of non-persistence for first-time switchers for both drug categories, i.e. antidepressants and antiepileptics. Strength and limitations This study adds to the body of knowledge about the mechanisms of non-persistence in a wide group of patients, addressing both first-time switchers and recurrent switchers. A major strength of the study was that, by means of prescription data, it focused on a single well-defined generic switch, and that the purchase was confirmed by the patient in the questionnaire. Additionally, we obtained information on previous generic switches of the same specific drug within one year. In that way we had a unique opportunity to look into patients' overall experience with generic switching of one specific drug. The prescription register data offered complete coverage of the use of reimbursed drugs by all cohort members [24]. Furthermore, we were able to combine the register data with questionnaire items on views on generic medicines and the validated concerns about medicine scale from the BMQ. The OPED prescription database did not have information on prescribed daily doses, which would have been the ideal measure for continuity calculations. However, as tablets come in all clinically relevant strengths, it is unlikely that patients take less than one tablet per day. For patients using more than one tablet per day we may have underestimated non-persistence. The choice of non-persistence rather than non-adherence [29] was made because of our interest in whether patients stay on their therapy when a generic switch has taken place. The definition of non-persistence with a 90-day grace period was based on the literature [29,31,32]. For drugs such as antiepileptics and antidepressants, missed doses may be more problematic and decrease the effectiveness of therapy compared to missed doses of other classes of drugs, e.g. antihypertensive agents, implying that a short grace period should be used [29]. However, the sensitivity analyses showed robustness of the results irrespective of the length of the grace periods, with results having the same direction and narrow confidence intervals. Regarding the questionnaire, a possible selection due to the sampling procedure cannot be ruled out. The GPs were allowed to exclude vulnerable patients and excluded 6.4%. The response rate was 44.1%, which corresponds to other questionnaire survey studies [33]. Switchers and non-switchers were quite similar among respondents and non-respondents [14,15], and the distribution of age, gender and drug group of the non-respondents was quite similar to the distribution of the sample; hence, we assume that our results are generalizable to the population of drug users.
Also, the relatively short interval between the drug purchase and receiving the questionnaire probably minimized the recall bias. Comparison with existing literature There is consistency between this study's results and previous studies comprising incident users: Ström et al. found that patients who had their medicine substituted at their first prescription refill had a higher probability of discontinuing treatment [7]. Kesselheim et al. also studied incident medication users, in this case users of anticonvulsants, and found that changes in pill color or shape due to generic substitution were associated with discontinuation [34]. The grace period employed was, however, only 5 days, which might have led to an overestimated rate of non-persistence. Studies pointing in other directions include Van Wijk et al., who assessed non-adherence among incident users of antihypertensive medicine and showed that generic substitution improved medication adherence; a possible weakness of that study was a relatively short follow-up period of 180 days [6]. Olesen et al. assessed adherence and generic substitution in an elderly population with polypharmacy by means of pill count, and the results of that study also showed that generic substitution did not affect adherence negatively [9]. However, the indirect measure of adherence, that is pill count, has been found to overestimate adherence [35]. Persistence studies often measure the duration of time from initiation to discontinuation of therapy in incident drug users or previously "treatment naïve" patients [7,34,36]. Studies evaluating incident users of therapy may report lower estimates of persistence than our study, which represents patients with at least two prescriptions, since the largest non-persistence occurs within the first year of therapy [29]. What this study specifically shows is that first-time switching is the most critical point. Experience with generic switching has been shown to influence acceptance of future generic switches positively [14]. This study shows that experience with generic switching also has a positive influence on persistence. Concerns and cautions have been raised in relation to generic substitution of antidepressants and especially antiepileptics [37,38].

Table 2 Hazard ratios between generic switch and non-persistence. Non-persistence was established as the first episode in a subject's medication history with a gap in prescription renewal that exceeded a predefined limit (number of tablets plus a grace period of 90 days). Hazard ratios are presented for the full model.

When looking at this study's two drug categories, the persistence estimates had the same direction with different point estimates, but with overlapping confidence intervals. The non-persistence estimate was higher among users of antiepileptics than among users of antidepressants. Generic substitution in the treatment of epilepsy has raised concerns at both patient and physician level. Despite the fact that anticonvulsants have narrow therapeutic indices, studies have shown that many physicians were likely to request brand antiepileptics "dispensed as written" because of concerns about breakthrough seizures [37,39]. Pechlivanoglou et al. showed that users of antidepressants were more prone to redeem brand name products [40].
It is well known that decisions about taking medication are likely to be influenced by beliefs about medicines as well as beliefs about the illness, and studies have reported associations between specific concerns about medicine and low adherence [27,41], also specifically for users of antiepileptics and antidepressants [42,43]. Results from the present study support these studies, showing that a high level of concerns was negatively associated with persistence. Surprisingly, negative views on generics were positively associated with persistence. An explanation could be that patients having negative views on generics may have thought rationally about their medicine and over time chosen to take their medicine as prescribed. However, neither of the two questionnaire scales affected our primary predictor considerably in the adjusted model. In conclusion, patients who are first-time switchers of a specific drug were at higher risk of non-persistence compared to never switchers and those having experienced previous generic switching. Implication for research and practice This study shows that experience with changes in medication due to generic substitution is of major importance and that first-time switchers need special attention, e.g. information from prescribing physicians or pharmacy professionals. It seems important to address potential changes to patients' medicine explicitly, both at physician consultations and at the pharmacies. Focusing on the name of the active substance may also be of relevance to patients, as it could enable them to navigate by medication lists issued by physicians and by the sticker emphasizing the active substance name on the drug package, which was introduced in Denmark in 2013. Hence, interventions should be developed targeting this specific event, to support physicians, pharmacists and, most importantly, patients. Supporting Information S1 Questionnaire. Views on generic medicine. The ad hoc constructed scale applied in the questionnaire: "Views on generic medicine". (DOCX)
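As an illustration of how the scale scores described in the Methods could be computed (the average over non-missing items with the at-least-60%-answered rule, and the Cronbach's α used to check internal consistency), here is a minimal sketch. It is not the authors' code (the study used Stata), and the example item values are invented.

```python
import numpy as np

def scale_score(item_responses, min_answered_fraction=0.6):
    """Mean of the non-missing 1-5 Likert items; missing if fewer than 60% were answered."""
    items = np.asarray(item_responses, dtype=float)   # np.nan marks a missing item
    answered = np.isfinite(items)
    if answered.mean() < min_answered_fraction:
        return np.nan
    return items[answered].mean()

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for an (n_persons x n_items) array of complete responses."""
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Example: a respondent answered 3 of the 4 "Views on generic medicine" items
print(scale_score([2, 3, np.nan, 4]))   # 3.0 (75% answered, so a score is computed)
```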
Broad Components in Optical Emission Lines from the Ultra-Luminous X-ray Source NGC 5408 X-1 High-resolution optical spectra of the ultraluminous X-ray source NGC 5408 X-1 show a broad component with a width of ~750 km/s in the HeII and Hbeta lines in addition to the narrow component observed in these lines and [O III]. Reanalysis of moderate-resolution spectra shows a similar broad component in the HeII line. The broad component likely originates in the ULX system itself, probably in the accretion disk. The central wavelength of the broad HeII line is shifted by 252 \pm 47 km/s between the two observations. If this shift represents motion of the compact object, then its mass is less than ~1800 M_sun. INTRODUCTION Ultraluminous X-ray sources (ULXs) are variable offnuclear X-ray sources with luminosities exceeding the Eddington luminosity of a 20 M ⊙ compact object, assuming isotropic emission (Colbert & Mushotzky 1999;Kaaret et al. 2001). Irregular variability, on time scales from seconds to years, suggests that ULXs contain accreting compact objects. Intermediate mass black holes would be required to produce the inferred luminosities, but ULXs may, instead, accrete at super Eddington rates or be beamed, mechanically or relativistically. NGC 5408 X-1 is one of the best intermediate mass black hole candidates because it powers a radio nebula requiring an extremely energetic outflow (Kaaret et al. 2003;Soria et al. 2006;Lang et al. 2007) and a photoionized nebula requiring an X-ray luminosity above 3 × 10 39 erg s −1 (Kaaret & Corbel 2009). Also, quasiperiodic X-ray oscillations at low frequencies suggest a high compact object mass (Strohmayer et al. 2007). The optical counterpart to NGC 5408 X-1 was identified by Lang et al. (2007) and optical spectra were obtained by Kaaret & Corbel (2009). The optical spectra had no absorption lines suggesting the emission is not dominated by the companion star. The observed continuum emission may arise from a nebula or reprocessing of X-rays in an accretion disk. The optical spectrum is dominated by emission lines, including forbidden lines which must be produced in a low density environment such as a nebula. Several high excitation lines were detected indicating that the nebula is X-ray photoionized. Kaaret & Corbel (2009) found that the Heii line from NGC 5408 X-1 was broader than the forbidden lines. Permitted lines produced in the high-density environment of an accretion disk can be broad, reflecting the distribution of velocities within the optical emitting re-gions of the disk. Furthermore, since the accretion disk moves with the compact object, the line velocity shifts may provide a means to constrain the compact object mass (Hutchings et al. 1987;Soria et al. 1998). To study the Heii line profile of NGC 5408 X-1 in more detail, we obtained new observations using the FORS-2 spectrograph on the European Southern Observatory Very Large Telescope (VLT) with a high resolution grism and reanalyzed our previous FORS-1 observations (Kaaret & Corbel 2009). The observations and data reduction are described in §2. The results are presented in §3 and discussed in §4. OBSERVATIONS AND ANALYSIS FORS-2 observations of NGC 5408 X-1 were obtained on 12 April 2010 using the GRIS 1200B and GRIS 1200R grisms with a slit width of 1.0 ′′ covering the spectral range 3660−5110Å and 5750−7310Å with dispersion 0.36Å pixel −1 and 0.38Å pixel −1 and spectral resolution λ/∆λ = 1420 and λ/∆λ = 2140 at the central wavelength, respectively. 
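As a rough aside, the quoted resolving powers can be translated into an approximate instrumental FWHM in wavelength and velocity units using the simple conversion below; the paper itself measures the instrumental FWHM directly from arc-lamp lines, so this is only an illustrative back-of-the-envelope check.

```python
C_KM_S = 299_792.458  # speed of light in km/s

def instrumental_fwhm(wavelength_aa: float, resolving_power: float):
    """Approximate FWHM implied by a resolving power R = lambda / delta_lambda."""
    return wavelength_aa / resolving_power, C_KM_S / resolving_power  # (Angstrom, km/s)

# e.g. near the Heii 4686 line with the GRIS 1200B grism (R ~ 1420)
print(instrumental_fwhm(4686.0, 1420.0))   # roughly (3.3 Angstrom, 210 km/s)
```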
The observation block (OB) consisted of three 849 s exposures with a 12 pixel offset along the spatial axis between successive exposures. CCD pixels were binned for readout by 2 in both the spatial and spectral dimensions. We also reanalyzed all six OBs of the previous FORS-1 observations (Kaaret & Corbel 2009), hereafter the low resolution data (LRD), taken using the GRIS 600B grism, which has a spectral resolution of λ/∆λ = 780 at the central wavelength, with three shifted exposures per OB. The average seeing for our new observations was 0.72 and 0.62 arcseconds for the blue and red spectra, respectively. The average seeing values of the six OBs of the LRD were 0.87, 0.82, 0.96, 1.28, 0.64, and 0.57 arcseconds, respectively. Data reduction was carried out using the Image Reduction and Analysis Facility (IRAF) (Tody 1993). First, we created bias and flat-field images, then applied these to correct the spectrum images. The three exposures in each OB were aligned and then averaged to eliminate bad pixels and cosmic rays using the imcombine task with the ccdclip rejection algorithm. As the continuum emission of the ULX counterpart is faint, we could not trace its spectrum. Following Kaaret & Corbel (2009), we used the bright nearby star at the 2MASS (Skrutskie et al. 2006) position α_J2000 = 14h 03m 18.97s, δ_J2000 = −41° 22′ 56.6″ as a reference trace. The trace position on the spatial axis varied less than half a pixel along the whole length of the dispersion axis. The trace for the ULX counterpart was centered on the Heii λ4686 emission line profile. The smallest possible trace width, 2 pixels corresponding to 0.5″, was used to best isolate the ULX emission from the nebular emission. Background subtraction was done with a trace close by. The HgCdHeNeA lamp and the standard star LTT7379 were used for wavelength and flux calibration. An atmospheric extinction correction was applied using the IRAF built-in Cerro Tololo Inter-American Observatory (CTIO) extinction tables. To estimate the reddening, we used the Balmer decrement Hδ/Hβ and find E(B − V) = 0.08 ± 0.03, in agreement with Kaaret & Corbel (2009). We corrected for reddening using the extinction curve from Cardelli et al. (1989). To study the kinematics, we need to characterize the instrumental resolution in order to obtain intrinsic line widths. After applying the dispersion correction to the lamp spectrum, we measured the full width at half maximum (FWHM) of several lines, excluding saturated ones, by fitting Gaussians with the IRAF splot subroutine. The instrumental FWHM was 2.24 Å and 5.08 Å for the high and low resolution data, respectively. The error on the instrumental FWHM was estimated by finding the standard deviation of the FWHM for several different lines. For the Heii, Hβ and [Oiii] emission lines (see Fig. 1 and Table 1), we first fitted the continuum with a second-order polynomial to a region around each line, excluding the line itself by visual examination. We estimated the measurement errors by calculating the root mean square deviation of the data in the same region. Then, we performed a non-linear least squares fit using the LMFIT subroutine of the Interactive Data Language version 7.0, which is based on "MRQMIN" (Teukolsky, Vetterling & Flannery 1992). We fitted the line profiles iteratively, first using one Gaussian, which converged on the narrow component, and then using a sum of two Gaussians with initial parameters adjusted to achieve convergence. All six parameters in the two-Gaussian fit were free to vary.
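The fitting procedure described above can be sketched generically as follows. This is a Python/SciPy reimplementation for illustration, not the IDL LMFIT code actually used; it assumes the continuum has already been subtracted and makes the quadrature correction for the instrumental width explicit.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(lam, a1, mu1, sig1, a2, mu2, sig2):
    """Sum of a narrow and a broad Gaussian (continuum assumed already subtracted)."""
    return (a1 * np.exp(-0.5 * ((lam - mu1) / sig1) ** 2)
            + a2 * np.exp(-0.5 * ((lam - mu2) / sig2) ** 2))

def fit_line(lam, flux, flux_err, p0):
    """Least-squares fit; parameter errors from the diagonal of the covariance matrix."""
    popt, pcov = curve_fit(two_gaussians, lam, flux, p0=p0,
                           sigma=flux_err, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))

SIG2FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))  # convert Gaussian sigma to FWHM

def intrinsic_fwhm(fwhm_meas, fwhm_inst, err_meas, err_inst):
    """Measured width treated as the quadrature sum of intrinsic and instrumental widths."""
    fwhm_int = np.sqrt(fwhm_meas**2 - fwhm_inst**2)
    err_int = np.sqrt((fwhm_meas * err_meas)**2 + (fwhm_inst * err_inst)**2) / fwhm_int
    return fwhm_int, err_int
```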
The errors on the parameters were calculated by the fitting routine such that the uncertainty of the i-th parameter derives from the square root of the corresponding diagonal element of the covariance matrix. The intrinsic line width was calculated assuming the measured line width is the quadrature sum of the intrinsic and instrumental widths, and the error on the intrinsic line width included a term for the uncertainty in the instrumental FWHM. For the Hα line, we fitted the sum of four Gaussians, because the [Nii] lines lie on the red and blue parts of the line wing. Initial fits to the Hα and red [Nii] lines provided initial values for a fit with four Gaussians. Because the blue [Nii] line has a very low signal to noise ratio, the widths of the two [Nii] lines were set equal, the wavelength offset was fixed at −35.44 Å, and the amplitude of the blue line was set to 1/3 of the red line (Osterbrock & Ferland 2006). RESULTS The fit results for the Heii line in the new, high-resolution data (HRD) are listed in Table 1; the line profile requires both a narrow and a broad component. The centroid of the broad component is shifted from the nebular component by +0.87 ± 0.26 Å in the red direction. We also searched for broad components in other lines. Hβ has a broad component with a FWHM similar to the Heii line but shifted by −0.52 ± 0.32 Å towards the blue, rather than the red. In contrast, a single Gaussian provides a good fit to the forbidden [Oiii] line and there is no evidence for a broad component, as expected if the line is emitted only from the nebula. We then fitted the Heii line profiles of the six OBs of the LRD, see Fig. 2 and Table 1. The flux variation of the overall line profiles correlates with the seeing, e.g. OB5 has the best seeing and the highest flux. We detected a broad component in the Heii line in OB3, OB5, and OB6. We did not significantly detect a broad component in OB1, OB2 and OB4. This may be due to seeing or variations in the flux of the broad component. We note that Kaaret & Corbel (2009) reported lower fluxes for Heii, [Nev], and the continuum emission for OB4 (with by far the worst seeing) as compared to the other OBs, while the other line fluxes remained relatively constant. Our new analysis suggests that this is due to changes in the seeing. If the emitting region is smaller than the 0.51″ slit used for the LRD, then poor seeing will decrease the flux through the spectrometer. If these emission components are enhanced close to the ULX system while the other line emission is uniform, then the poor seeing in OB4 would produce the observed changes in flux. Thus, there is no evidence for temporal variability of the continuum or line emission. However, the subtraction performed by Kaaret & Corbel (2009) to isolate continuum emission arising from near the ULX is still justified; the separation of components is spatial instead of temporal. The Heii line parameters are consistent between OB3, OB5, and OB6. The wavelength shifts of the Heii broad component of OB3, OB5 and OB6 relative to the narrow component are −1.82 ± 0.78 Å, −3.77 ± 1.44 Å and −4.03 ± 1.70 Å in the blue direction, instead of the red as in the HRD, and are consistent within one σ. We averaged the spectra for these three observations and fit the resulting line profile. The fit results are listed as Heii AVG in Table 1. The shift of the average line profile is −3.07 ± 0.68 Å. The Heii broad component width is consistent between the new and old data. The narrow component is wider in the old data because we do not resolve the nebular lines. The line fluxes are higher in the new data, most likely due to the wider slit.
The centroids of the narrow component are consistent, while the wavelength shift of the Heii broad component between the old and new data is ∆λ = 3.94 ± 0.73 Å. Fitting the Hβ line of the LRD, we did not get a good fit, due to the lack of spectral resolution and the low broad-to-narrow flux ratio. We could not fit the bluer Balmer lines because of their low S/N ratios. We did fit the Hα line in the new data. Although we do not obtain a good fit (χ²_ν = 4.9) because of the complicated line profile (i.e. the two [Nii] lines lie on the red and blue wing of the Hα line), we find that there is a broad component with a width of 19 Å, while the width of the nebular component is 2.7 Å. The [Nii] lines are narrow, with a typical width of 3 Å, quantitatively supporting that the forbidden, nebular lines do not have broad components. DISCUSSION Our new, high-resolution spectra show narrow nebular lines and broad components in the Heii, Hβ, and Hα lines. Our previous, moderate-resolution spectra show a broad component in the Heii line. There is no broad component in the forbidden lines. The line emitting region The broad components of both Heii and Hβ have widths of ∼750 km/s, consistent with production in the accretion disk, and are roughly Gaussian, instead of having P-Cygni profiles that would indicate an origin in a wind. Following Porter (2010), we estimate the size of the line-emitting region, R_le, by assuming the line-emitting gas is in Keplerian orbits around a compact object, thus R_le ≤ GM/v². We find R_le < 2.35 (M_BH/1500 M⊙) AU, which for a mass of 10 M⊙ would give an upper limit of 3.4 R⊙. This is consistent with an origin of the broad Heii line in the accretion disk. The broad line components are shifted relative to the narrow components. In the new data, the shifts are small compared to the line width, +56 ± 17 km/s for Heii and −33 ± 20 km/s for Hβ. These shifts are consistent only at the 3σ level, which might indicate a difference in the spatial origin of the lines. However, this is still consistent with production of both lines within the disk, since random motions within the disk and variation between the emission regions could produce shifts that are small compared to the line widths, as observed. The central wavelength of the Heii broad component shifts markedly between the old and new data, ∆λ = 3.94 ± 0.73 Å or ∆v = 252 ± 47 km/s. This shift is a substantial fraction of the line width. The shift could be due to random motion within the disk, differing viewing geometries (Roberts et al. 2010), or orbital motion of the disk (and the compact object). If the shifts in the broad component of the Heii line are due to orbital motion, then this would provide a means to determine the orbital period and would also provide a measurement of the mass function for the secondary star. Thus, a program of monitoring NGC 5408 X-1 with high-resolution optical spectroscopic observations will be important in extending our understanding of the physical nature of this system. The binary system In this section, we make some speculations based on the interpretation of the shift in the broad component of the Heii line as due to orbital motion. One can express the mass function and the compact object mass, M_x, in terms of the orbital period, P, the velocity excursion, K_x, and the companion mass, M_c, as

f(M_c) = P K_x³/(2πG) = (M_c sin i)³/(M_c + M_x)²,   (1)

M_x = M_c [ (2πG M_c sin³ i/(P K_x³))^(1/2) − 1 ],   (2)

where i is the inclination angle and G is the gravitational constant. From the shift of the Heii line quoted above, we constrain the semi-amplitude of the radial velocity K_x ≥ ∆v/2 = 126 ± 24 km/s.
Thus, if the maximum mass of the companion and the orbital period are known, then Eq. 2 leads to an upper bound on the mass of the compact object. The binary system has a visual magnitude v_0 = 22.2, which gives an upper limit on the absolute magnitude of the companion of V_0 = −6.2 at a distance of 4.8 Mpc (Karachentsev et al. 2002). Unfortunately, this places little restriction on the companion mass, as even O3V stars, with masses of 120 M⊙, are allowed. However, very high mass stars are very short lived, no more than a few million years. There is no evidence of a dense stellar association near NGC 5408 X-1, and an origin in the closest super-star cluster would require a transit time to the present location on the order of 30 Myr (Kaaret et al. 2003). Thus, the companion mass is likely significantly lower, near 20 M⊙ or less, similar to that found from studies of the stellar environments of other ULXs (Grisé et al. 2008, 2011). Figure 3 shows the upper bound on the compact object mass for donors of 120 M⊙ and 20 M⊙ as a function of orbital period. High black hole masses are excluded, except for very short periods. We note that the Heii line shift was the same in OB3 versus OB5 and OB6, taken one day apart, suggesting that the period is longer than a few days. Thus, the black hole mass is likely below ∼1800 M⊙. The more probable companion mass of 20 M⊙ or less would imply smaller black hole masses, less than 112 M⊙. As a further constraint, we note that the orbital separation should be larger than the size of the emitting region, calculated above. Assuming a circular orbit, the orbital separation of the compact object is a = [G(M_c + M_x)P²/(4π²)]^(1/3). Figure 3 also shows the orbital separation as a function of period, as well as the size of the line-emitting region, R_le, versus period. Both are calculated using the maximum black hole mass for each period, assuming a 120 M⊙ or 20 M⊙ donor. The orbital separation is greater than the upper limit on the size of the line-emitting region when the compact object mass is below 875 M⊙ for a 120 M⊙ companion and below 128 M⊙ for a 20 M⊙ companion. These masses are reduced if the inclination is lowered. These results suggest that the most probable black hole mass is at most a factor of several above the usual stellar-mass black hole range. Strohmayer (2009) proposed an orbital period of P = 115.5 ± 4.0 days for NGC 5408 X-1, based on variations in the X-ray emission. With P = 115.5 days, the mass function is f_x = 24.0 ± 13.4 M⊙, implying a lower bound on the companion mass M_c ≥ 10.6 M⊙. It is interesting to determine if this period is consistent with other constraints on the system. An orbital period of P = 115.5 ± 4.0 days would require a mean stellar density of ρ = 1.5 × 10⁻⁵ g cm⁻³ and, thus, a supergiant companion if mass transfer proceeds via Roche-lobe overflow (Strohmayer 2009; Kaaret, Simet, & Lang 2006). In particular, late F and early G supergiants have densities close to that required, although we caution that the high mass transfer rate needed to power the ULX may distort the spectral type of the star. Such stars have masses of 10-12 M⊙, consistent with the minimum mass derived from the mass function, and absolute magnitudes close to or below the upper limit quoted above. The stellar radii are large, up to 0.7 AU, but smaller than the orbital separation for this period and mass. However, a companion mass so close to the lower bound on the mass function would require a very low mass compact object.
For a companion mass of 10-12 M⊙, the compact object would have to be below 1 M⊙, which seems unlikely. For a 5 M⊙ black hole, one would need a donor of about 17 M⊙ if the system is edge-on. A higher black hole mass or a less extreme inclination would require an even higher mass companion. These high companion masses contradict the required stellar density; there is no star with both M_c ≥ 17 M⊙ and ρ ∼ 1.5 × 10⁻⁵ g cm⁻³. Thus, either the orbital period is not near 115 days or mass transfer does not proceed via Roche-lobe overflow. We note that Foster et al. (2010) have suggested that the 115 day periodicity may, instead, indicate a super-orbital period.
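As a rough numerical cross-check of the constraints discussed above, the sketch below evaluates the upper bound of Eq. (2) for an assumed edge-on orbit with K_x = 126 km/s, and the mean donor density implied by the standard period-density relation for Roche-lobe-filling stars, P_orb(hr) ≈ 10.5 (ρ/g cm⁻³)^(−1/2); the relation itself is not taken from the paper and the numbers are only indicative.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [SI]
M_SUN = 1.989e30     # solar mass [kg]
DAY = 86400.0

def mx_upper_limit(period_days, m_companion_msun, kx_kms=126.0, sin_i=1.0):
    """Upper bound on the compact object mass from Eq. (2), in solar masses."""
    p, kx, mc = period_days * DAY, kx_kms * 1e3, m_companion_msun * M_SUN
    return (np.sqrt((mc * sin_i) ** 3 * 2.0 * np.pi * G / (p * kx ** 3)) - mc) / M_SUN

def roche_lobe_density(period_days):
    """Approximate mean density [g/cm^3] of a Roche-lobe-filling donor (P_hr ~ 10.5 / sqrt(rho))."""
    return (10.5 / (period_days * 24.0)) ** 2

print(mx_upper_limit(3.0, 120.0))   # ~1.5e3 M_sun: of the order of the ~1800 M_sun quoted for short periods
print(mx_upper_limit(3.0, 20.0))    # ~90 M_sun for the more probable 20 M_sun donor
print(roche_lobe_density(115.5))    # ~1.4e-5 g/cm^3, close to the quoted 1.5e-5 g/cm^3
```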
A Neuro-Symbolic ASP Pipeline for Visual Question Answering We present a neuro-symbolic visual question answering (VQA) pipeline for CLEVR, which is a well-known dataset that consists of pictures showing scenes with objects and questions related to them. Our pipeline covers (i) training neural networks for object classification and bounding-box prediction of the CLEVR scenes, (ii) statistical analysis on the distribution of prediction values of the neural networks to determine a threshold for high-confidence predictions, and (iii) a translation of CLEVR questions and network predictions that pass confidence thresholds into logic programs so that we can compute the answers using an ASP solver. By exploiting choice rules, we consider deterministic and non-deterministic scene encodings. Our experiments show that the non-deterministic scene encoding achieves good results even if the neural networks are trained rather poorly in comparison with the deterministic approach. This is important for building robust VQA systems if network predictions are less-than perfect. Furthermore, we show that restricting non-determinism to reasonable choices allows for more efficient implementations in comparison with related neuro-symbolic approaches without loosing much accuracy. This work is under consideration for acceptance in TPLP. Introduction The goal in visual question answering (VQA) (Antol et al. 2015) is to find the answer to a question using information from a scene. A system must understand the question, extract the relevant information from the corresponding scene, and perform some kind of reasoning. Neuro-symbolic approaches are useful in this regard as they combine deep learning, which can be used for perception (e.g., object detection or natural language processing), with symbolic reasoning (Xu et al. 2018;Manhaeve et al. 2018;Yi et al. 2018;Yang et al. 2020;Basu et al. 2020;Mao et al. 2019). As the semantics of the employed reasoning formalism is known, the way in which an answer is reached is transparent. We present a neuro-symbolic VQA pipeline for the CLEVR dataset (Johnson et al. 2017) that combines deep neural networks for perception and answer-set programming (ASP) (Brewka et al. 2011) to implement the reasoning part. The system is publicly available at https://github.com/Macehil/nesy-asp-vqa-pipeline ASP offers a simple yet expressive modelling language and efficient solver technology. It is in particular attractive for this task as it allows to easily express non-determinism, preferences, and defaults. The scene encoding in the ASP program makes use of non-deterministic choice rules for the objects predicted with high confidence by the network. This means that we do not only consider the prediction with the highest score, but also reasonable alternatives with lower ones. This allows our approach to make up for mistakes made in object classification in the reasoning component as the constraints in the program exclude choices that do not lead to an answer. For illustration, assume a scene with one red cylinder and the question "What shape is the red object?". Furthermore, assume that the neural network wrongly gives the cylinder a higher score for being blue than red. The ASP constraints enforce that an answer is derived. This entails that the right choice for the colour of the object must be red, and the correct answer "cylinder" is produced in the end. 
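The red-cylinder illustration can be written down directly as a small ASP program. The sketch below embeds such a program in Python via the clingo API; it is a simplified toy encoding of this single example rather than the pipeline's actual CLEVR encoding, and all predicate names are invented for illustration.

```python
import clingo  # assumes the clingo Python package is installed

PROGRAM = r"""
% network predictions above the confidence threshold (hypothetical facts)
candidate(1, color, blue).      % highest score, but wrong
candidate(1, color, red).       % lower score, kept as an alternative
candidate(1, shape, cylinder).

% non-deterministic scene encoding: pick exactly one value per attribute
1 { has(O, A, V) : candidate(O, A, V) } 1 :- candidate(O, A, _).

% question: "What shape is the red object?"
answer(S) :- has(O, color, red), has(O, shape, S).

% an answer must be derivable; choices that yield none are discarded
some_answer :- answer(_).
:- not some_answer.

#show answer/1.
"""

def solve(program: str):
    ctl = clingo.Control(["0"])             # enumerate all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    answers = []
    with ctl.solve(yield_=True) as handle:
        for model in handle:
            answers.extend(str(atom.arguments[0])
                           for atom in model.symbols(shown=True)
                           if atom.name == "answer")
    return answers

if __name__ == "__main__":
    print(solve(PROGRAM))                   # expected output: ['cylinder']
```

In a fuller encoding, weak constraints could additionally prefer choices with higher network scores; the toy program above only demonstrates how the constraint rules out the wrong colour assignment.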
While non-determinism improves the robustness of the VQA system, the downside is that, if there are many object classes and the system is used without restriction, it can negatively impact the reasoning performance in terms of run time. The objective of this paper is to introduce a new method for restricting non-determinism that is sensitive to how well the networks have been trained, so that efficient reasoning is facilitated. While several datasets have been published to examine the strengths and weaknesses of VQA systems (Malinowski and Fritz 2014; Antol et al. 2015; Ren et al. 2015; Zhu et al. 2016; Johnson et al. 2017; Sampat et al. 2021), CLEVR is an ideal test bed for the purposes of this paper, since it is simple, well known, has detailed annotations describing the kind of reasoning each question requires, and focuses on basic object detection. We in fact omit natural language processing for the VQA tasks because this would add a further variable and is not the focus of this work, which is reasoning on top of object detection. Instead, we use functional programs, which are structured representations of the natural language questions and are already provided by CLEVR. Our VQA pipeline consists of the following stages: (1) Training neural networks for object classification and bounding-box prediction of the CLEVR scenes using the object detection framework YOLOv3 (Redmon and Farhadi 2018). (2) Statistical analysis on the distribution of prediction values of the neural networks to determine a threshold for high-confidence predictions, defined as a function of the mean and standard deviation of the distribution. (3) Translating CLEVR questions and network predictions that pass confidence thresholds into logic programs so that we can compute the answers using an ASP solver. The most closely related approaches are NeurASP (Yang et al. 2020) and DeepProbLog (Manhaeve et al. 2018), in which neural network outputs are treated as a probability distribution. This can, as our experiments confirm, become a performance bottleneck if there are many object classes. Both systems feature closed-loop reasoning, i.e., the outcome of the reasoning system can be back-propagated into the neural networks to facilitate better learning. Our pipeline is, however, uni-directional, as our goal is to explore the interplay between non-determinism and confidence thresholds regarding efficiency and robustness of the reasoning component. We leave the learning component for future work, as a scalable implementation is not trivial in our setting and is thus outside the scope of this paper. We compare NeurASP and the reasoning component of DeepProbLog with our approach on the CLEVR data. Indeed, limiting the non-determinism of neural network outputs in ASP programs to reasonable choices leads to a drastic performance improvement in terms of run time with only a small loss in accuracy and is thus important for efficient reasoning. Furthermore, our experiments show that our system performs well even if the neural networks are trained rather poorly and predictions by the network are less than perfect. This is important for robust reasoning, as even well-trained networks can be negatively affected by noise or by changes in settings such as illumination. The remainder of this paper is organised as follows. We first review ASP and CLEVR in Section 2. Our VQA pipeline using ASP and confidence thresholds is detailed in Section 3. Afterwards, we present an experimental evaluation of our approach in Section 4, discuss further relevant related work in Section 5, and conclude in Section 6. Background We next provide preliminaries on ASP and background on the CLEVR dataset.
Answer-Set Programming Answer-set programming (ASP) is a declarative problem-solving paradigm, where a problem is encoded as a logic program such that its answer sets (which are special models) correspond to the solutions of the problem and are computable using ASP solvers, e.g., from potassco.org or www.dlvsystem.com. We only briefly recall important ASP concepts and refer to the literature for more details (Brewka et al. 2011; Gebser et al. 2012). An ASP program is a finite set of rules r of the form

a_1 ∨ ... ∨ a_k :− b_1, ..., b_m, not c_1, ..., not c_n, (1)

where all a_i, b_j, c_l are first-order atoms and not is default negation; we denote by H(r) = {a_1, ..., a_k} and B(r) = B^+(r) ∪ {not c_j | c_j ∈ B^−(r)} the head and body of r, respectively, where B^+(r) = {b_1, ..., b_m} and B^−(r) = {c_1, ..., c_n}. Intuitively, r says that if all atoms in B^+(r) are true and there is no evidence that some atom in B^−(r) is true, then some atom in H(r) must be true. If m = n = 0 and k = 1, then r is a fact (with :− omitted); if k = 0, r is a constraint. An interpretation I is a set of ground (i.e., variable-free) atoms. It satisfies a ground rule r if H(r) ∩ I ≠ ∅ whenever B^+(r) ⊆ I and B^−(r) ∩ I = ∅; I is a model of a ground program P if I satisfies each r ∈ P, and I is an answer set of P if in addition no J ⊂ I is a model of the Gelfond-Lifschitz reduct of P w.r.t. I (Gelfond and Lifschitz 1991). Models and answer sets of a program P with variables are defined in terms of the grounding of P (replace each rule by its possible instances over the Herbrand universe). We will also use choice rules and weak constraints, which are of the respective forms

i {a_1; ...; a_n} j :− b_1, ..., b_m, not c_1, ..., not c_n (2)
:∼ b_1, ..., b_m, not c_1, ..., not c_n. [w, t] (3)

Informally, (2) says that when the body is satisfied, at least i and at most j atoms from {a_1, ..., a_n} must be true in an answer set I, while (3) contributes the tuple t with cost w, an integer number, to a cost function when its body is satisfied in I, rather than eliminating I; the answer set I is optimal if the total cost of all such tuples is minimal. The CLEVR Dataset CLEVR (Johnson et al. 2017) is a dataset designed to test and diagnose the reasoning capabilities of VQA systems. It consists of pictures showing scenes with objects and questions related to them; there are about ten questions per image. The dataset was created with the goal of minimising biases in the data, since some VQA systems are suspected to exploit them to find answers instead of actually reasoning about the question and scene information (Johnson et al. 2017). Each CLEVR image depicts a scene with objects in it. The objects differ by the values of their attributes, which are size (big, small), color (brown, blue, cyan, gray, green, purple, red, yellow), material (metal, rubber), and shape (cube, cylinder, sphere). Every image comes with a ground-truth scene graph describing the scene depicted in it. Figure 1 contains three images from the CLEVR validation dataset with corresponding questions. In CLEVR, questions are constructed using functional programs that represent the questions in a structured format. These programs are symbolic templates for a question that are instantiated with the corresponding values. Each such question template is mapped to one or more natural language sentences. For illustration, the question "How many large things are either cyan metallic cylinders or yellow blocks?" from Fig. 1 can be represented by the functional program shown in Fig. 2.
There, the function scene() returns the set of objects of the scene, the filter_* functions restrict a set of objects to subsets with the respective properties, union() yields the union of two sets, and count() finally returns the number of elements of a set. A detailed description of functional programs in CLEVR can be found in the dataset documentation (Johnson et al. 2017). The VQA Pipeline The architecture of our neuro-symbolic VQA pipeline that builds on object detection and ASP solving is depicted in Fig. 3. A particular VQA task, which consists of a CLEVR scene and question, is translated to an ASP program given predictions from a neural network for object detection, a functional program, and a confidence threshold; the answer to the CLEVR task is then obtained by running an ASP solver. Before going into the details of our VQA pipeline, we recapitulate the necessary stages of establishing it for the CLEVR dataset: 1. Object detection: we train neural networks for bounding-box prediction and object classification of the CLEVR scenes; 2. Confidence thresholds: we determine a threshold for network predictions that we consider to be of high confidence by statistical analysis on the distribution of prediction values of the neural networks; 3. ASP encoding: we translate CLEVR functional programs that represent questions, as well as network predictions that pass confidence thresholds, into ASP programs and use an ASP solver to compute the answers. While the VQA tasks are designed to always have a unique answer, the ASP solver may give multiple results that correspond to alternative interpretations of the scene through the object detection network. Object Detection We use YOLOv3 (Redmon and Farhadi 2018) for bounding-box prediction and object classification, adopting the convention that the object detector's output is a matrix whose rows correspond to the bounding-box predictions in the input picture. Each bounding-box prediction is a vector of the form (c_1, ..., c_n, x_1, y_1, x_2, y_2), where the pairs (x_1, y_1) and (x_2, y_2) give the top-left and bottom-right corner points of the bounding box, respectively, and c_1, ..., c_n are class confidence scores with c_i ∈ [0, 1] for 1 ≤ i ≤ n; as customary, higher confidence scores represent higher confidence of a correct prediction. Each c_i represents the score for a specific combination of the object attributes size, color, material and shape and their respective values; we call this combination the object class of position i. For any object class c, let c̄ be the list size, shape, material, color of its attribute values. For example, if c is the object class "large red metallic cylinder", then c̄ = large, cylinder, metallic, red. In total, there are n = 96 object classes in CLEVR. Every row of the prediction matrix also has its own bounding-box confidence score. The number of bounding-box predictions of the object detection system depends on the bounding-box threshold, which is a hyper-parameter used to filter out rows with a low confidence score. For example, setting this threshold to 0.5 discards all predictions with a confidence score below 0.5. Confidence Thresholds Given class confidence scores c_1, ..., c_n from an object detection prediction, we would like to focus on classifications that have reasonably high confidence and discard others with low confidence for the subsequent reasoning process.
Using a fixed threshold hardly achieves this, since it does not take the distribution of confidence scores in the application area (or the validation data for experiments) into account. Our approach solves this problem by fixing the threshold based on the mean and the standard deviation of prediction scores. More formally, we are given a list of prediction matrices X_1, ..., X_m, where each X_i is of dimension N_i × M, N_i is the number of bounding-box predictions in input image i, and each prediction is described by M features. We compute the mean µ and standard deviation σ of the maximum class confidence scores, i.e., of the values max_{1≤j≤n} c_j taken over all bounding-box predictions of all X_i. We suggest computing these values on the validation dataset used in training the object detector. Then, we define the confidence threshold θ that determines what is considered a confident class prediction as

θ = µ − α · σ. (6)

We consider class predictions as sufficiently confident if their confidence score is not lower than the mean minus α many standard deviations. The value for α in Eqn. (6) is a parameter we call the confidence rate. It must be provided and should depend on how well the network is trained: for a fixed α, the number of class predictions that pass the threshold decreases as the network gets better trained, since the standard deviation becomes smaller. For a fixed network, the number of class predictions that pass the threshold decreases if α decreases and increases otherwise. ASP Encoding To solve VQA tasks, we rely on ASP to infer the right answer given the neural network output and a confidence threshold. We outline the details in the following. Question encoding The first step of our approach is to translate the functional program representing a natural language question into an ASP fact representation. We illustrate this for the question "How many large things are either cyan metallic cylinders or yellow blocks?" from Section 2.2. The respective functional program in Fig. 2 is encoded by ASP facts such as: end(8). count(8, 7). filter_large(7, 6). union(6, 3, 5). filter_cylinder(3, 2). The structure of the functional program is encoded using indices that refer to the output (first argument) and input (remaining arguments) of the respective basic functions. Scene encoding Let X be a prediction matrix and θ be a confidence threshold as described in Section 3.2. Recall that the output of the basic CLEVR function scene() corresponds to the objects detected in the scene, which in turn correspond to the individual rows of X. For a row X_i with class confidence scores c_1, ..., c_n, the set C_i contains every object class c_j whose score is greater than or equal to θ. If no such c_j exists, then C_i contains the k classes with the highest class confidence scores, where k ∈ {1, ..., 96} is a fixed integer; intuitively, k is a fall-back parameter to ensure that some classes are selected if all scores are low. For every row X_i with bounding-box corners (x_1, y_1) and (x_2, y_2), as well as C_i = {c_1, ..., c_l}, we construct a choice rule over the corresponding object atoms (a schematic sketch is given below). Every object class with a sufficiently high confidence score will thus be considered for computing the final answer in a non-deterministic way. For every c ∈ C_i, we additionally add a weak constraint whose weight w_c is defined as min(−1000 · ln(s), 5000), where s is the class confidence score for c in X_i. This approach, which is adopted from the NeurASP implementation, ensures that object selections are penalised by a weight that corresponds to the object's class confidence score.
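As a concrete illustration (our own sketch, not the paper's exact encoding; in particular, the arity and argument order of the obj predicate are assumptions made for exposition), suppose the statistics on the validation data are µ = 0.75 and σ = 0.20 and we choose α = 1.5, so θ = 0.75 − 1.5 · 0.20 = 0.45. For a bounding box with corners (10, 20) and (50, 80) whose two top-scoring classes are "large red metal cylinder" (score 0.70) and "large red rubber cylinder" (score 0.50), both of which pass θ, the scene encoding could look as follows:

```
% Exactly one of the sufficiently confident classes is chosen for this box.
1 { obj(1, large, cylinder, metal,  red, 10, 20, 50, 80) ;
    obj(1, large, cylinder, rubber, red, 10, 20, 50, 80) } 1.

% Weak constraints penalise each candidate with w_c = min(-1000*ln(s), 5000),
% precomputed and rounded here: -1000*ln(0.70) ~ 357, -1000*ln(0.50) ~ 693.
:~ obj(1, large, cylinder, metal,  red, 10, 20, 50, 80). [357, 1, metal]
:~ obj(1, large, cylinder, rubber, red, 10, 20, 50, 80). [693, 1, rubber]
```

With this scheme, an optimal answer set prefers the higher-confidence classification (lower cost), while the alternative remains available if the preferred choice cannot be completed to an answer.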
Resulting answer sets can thus be ordered according to the total confidence of the involved object predictions. We refer to this encoding as the non-deterministic scene encoding, but we also consider the special case of a deterministic scene encoding where each C_i holds only the single object class with the highest confidence score. Encoding of the basic CLEVR functions We next present encodings for the remaining CLEVR functions; a schematic sketch of several of these rules is given at the end of this subsection. The variables T and T1 are used to indicate output and input references, respectively; I represents the object identifier. We omit arguments that are irrelevant for the particular filter functions. For counting, we use #count, the ASP aggregate function that computes the number of object identifiers referenced by variable T1. Rules for set operations: The two set operation functions in CLEVR are intersection and union, and each is encoded by corresponding ASP rules. Uniqueness constraint: The CLEVR function unique() is used to assert that there is exactly one input object, which is then propagated to the output; we encode this in ASP using a constraint to eliminate answer sets violating uniqueness and a rule for propagation. Spatial-relation rules: Several CLEVR functions determine the objects in a certain spatial relation with another object; the rule for identifying all objects that are left of a given reference is representative, and the rules for right, front, and behind are analogous. Query rules: Query functions return an attribute value of a referenced object. The rule to query for the size of an object is size(T, Size) :− query_size(T, T1), obj(T1, ..., Size, ...); the rules for colour, material, and shape look similar. Same-attribute-relation rules: Similar to the spatial-relation functions, same-attribute-relation rules select the set of objects that agree on a specified attribute with a specified reference object. We illustrate the ASP encoding for the size attribute; the ones for colour, material and shape are defined with the necessary changes. Integer-comparison rules: CLEVR supports the common relations for comparing integers like "equals", "less-than", and "greater-than". We present part of the ASP encoding for "equals": bool(T, false) :− equal_integer(T, T1, T2), not bool(T, true). Attribute-comparison rules: To check whether two objects have the same attributes, like size, colour, material, or shape, CLEVR provides attribute-comparison functions. The one for size can be represented in ASP as bool(T, true) :− equal_size(T, T1, T2), size(T1, V), size(T2, V); the others are defined analogously. In addition to the rules above, we also use rules to derive the ans/1 atom that extracts the final answer for the encoded CLEVR question from the output of the basic function at the root of the computation, together with a constraint that enforces that at least one answer is derived. Putting it all together, to find an answer to a CLEVR question, we translate the corresponding functional program into its fact representation and join it with the rules from above. Each answer set then corresponds to a CLEVR answer founded on a particular choice of scene objects. For the deterministic, resp., non-deterministic encoding, at most one, resp., multiple answer sets are possible; no answer set indicates imperfect object recognition. In case of multiple answer sets, we use answer-set optimisation over the weak constraints to determine the most plausible solution.
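To give a flavour of how such rules can look, the following clingo-style sketch is our own reconstruction under the same assumptions as above (a sel/2 predicate recording which objects a node references, and the obj/9 layout used in the scene-encoding sketch); it is meant as an illustration of the scheme rather than the paper's exact rules:

```
% Scene: the root node references every detected object.
sel(T, I) :- scene(T), obj(I, _, _, _, _, _, _, _, _).

% Filter (shown for "large"): keep only the referenced objects of size large.
sel(T, I) :- filter_large(T, T1), sel(T1, I), obj(I, large, _, _, _, _, _, _, _).

% Count: the value at node T is the number of objects referenced by node T1.
count_val(T, N) :- count(T, T1), N = #count { I : sel(T1, I) }.

% Union: node T references every object referenced by T1 or by T2.
sel(T, I) :- union(T, T1, T2), sel(T1, I).
sel(T, I) :- union(T, T1, T2), sel(T2, I).

% Unique: T1 must reference exactly one object, which is propagated to T.
:- unique(T, T1), #count { I : sel(T1, I) } != 1.
sel(T, I) :- unique(T, T1), sel(T1, I).

% Left-of: reference all objects whose bounding box lies left of the object
% referenced by T1 (compared here via the x-coordinate of the top-left corner).
sel(T, I) :- relate_left(T, T1), sel(T1, J), I != J,
             obj(I, _, _, _, _, XI, _, _, _),
             obj(J, _, _, _, _, XJ, _, _, _),
             XI < XJ.
```

Here count_val/2 and relate_left/2 are hypothetical predicate names; the question facts from the example above (e.g., union(6, 3, 5) and count(8, 7)) bind the T and T1 arguments.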
Experiments on the CLEVR Dataset Recall that the parameters of our approach are (i) the bounding-box threshold for object detection, (ii) the confidence rate α for computing the confidence threshold as a distance from the mean in terms of standard deviations, and (iii) k as a fall-back parameter for object-class selection. We experimentally evaluated our approach on the CLEVR dataset to study the effects of different parameter settings. In particular, we study • different bounding-box thresholds and training epochs for object detection, • how the deterministic scene encoding compares to the non-deterministic one, and • the runtime performance of our approach in comparison with NeurASP and ProbLog. For the non-deterministic scene encoding, we consider different settings for α and set k = 1. We restricted our experiments to a sample of 15000 CLEVR questions, as the systems NeurASP and ProbLog would exceed the memory limits on the unrestricted dataset. All experiments were carried out on an Ubuntu (20.04.3 LTS) system with a 3.60GHz Intel CPU, 16GiB of RAM, and an NVIDIA GeForce GTX 1080 GPU with 8GB of memory installed. Object-Detection Evaluation For object detection, we used an open-source implementation of YOLOv3 (Redmon and Farhadi 2018). The system was trained on 4000 CLEVR images with bounding-box annotations, as suggested in related work (Yi et al. 2018). We used models trained for 25, 50 and 200 epochs to obtain different levels of training for the neural networks. For the bounding-box thresholds, we considered two settings, namely 0.25 and 0.50. Table 1 summarises how networks of different training quality perform at detecting the objects in the CLEVR scenes. We report precision and recall, which are defined as usual in terms of true positives (TP), false positives (FP), and false negatives (FN), i.e., precision = TP/(TP + FP) and recall = TP/(TP + FN). A TP (resp., FP) is a prediction that is correct (resp., incorrect) w.r.t. the scene annotations in CLEVR. An FN is an object that exists according to the scene annotations but has no corresponding prediction. As expected, our results show that the total number of FP and FN decreases for the better trained YOLOv3 models. Naturally, a low bounding-box threshold yields more FP detections, while the number of FN decreases. Setting the bounding-box threshold to a higher value usually leads to fewer FP but also more FN. Question-Answering Evaluation We used the ASP solver clingo (v. 5.5.1) to compute answer sets. Table 2 sheds light on the impact of the training level and bounding-box thresholds of the models on question answering for the deterministic and non-deterministic scene encodings. Our system yields either correct, incorrect, or no answers to the CLEVR questions, and we report the respective rates. The non-deterministic scene encoding outperformed the deterministic approach for all settings of training epochs and bounding-box thresholds, and the rate of correct answers increases with larger α. The differences are considerable if the networks are trained rather poorly and become small or even disappear for well-trained ones. It thus seems beneficial to consider more than one prediction of the object detection system if network predictions are less than perfect. Also, lower bounding-box thresholds lead to more correct results in all cases, especially for the deterministic encoding. Hence, being too selective in the object detection is counterproductive.
Comparison with NeurASP and ProbLog The related systems NeurASP and DeepProbLog also embody the idea of non-determinism for object classifications, but they do not incorporate a mechanism to restrict object classes to ones with high confidence as in our approach; this can drag down performance considerably. We conducted different experiments to investigate the impact of limiting the number of object classes, as in our approach, on the runtimes and accuracy of the question-answering task in comparison with the aforementioned related systems. The choice rules in NeurASP always contain the 96 CLEVR object classes, whose scores come from the YOLOv3 network. In addition, we also used a setup for NeurASP where only the highest prediction is considered while the probabilities of all other atoms are set to 0. While the former setting is more similar to our approach, the latter is the one used by Yang et al. (2020) in their object detection example. Table 3 summarises the results on question-answering accuracy, and Table 4 shows the total runtime for the different systems under consideration on the 15000 questions. NeurASP outperforms our approach in terms of correct answers as it does not restrict the number of atoms for the choice rules. However, this comes at a price, as runtimes are much longer, which can be explained by the inflation of the search space due to the unrestricted choice rules. Also, the rate of incorrect answers is higher for NeurASP, while our approach will more often remain agnostic when in doubt. For the non-deterministic encoding of our approach, we observe a similar jump in runtimes for α = 2.5, as more object classes are included in the choice rules. We could not use DeepProbLog for CLEVR directly, as annotated disjunctions that depend on a variable number of objects in a scene are not supported and would require some extensions. Instead, we evaluated a translation to ProbLog, as DeepProbLog's inference component is essentially that of ProbLog (Manhaeve et al. 2018). Recall that for NeurASP and DeepProbLog, neural network outputs are interpreted as probability distributions. While NeurASP does not strictly require this and works also in our setting, ProbLog is less lenient and network outputs need to be normalised so that the sum of their scores does not exceed 1. This does, however, not change the results, as only the relative order of the object-class scores is relevant for determining the most plausible answer. For the case of the unrestricted 96 object classes, computing results on our hardware was infeasible. We thus considered only the three object classes with the highest confidence scores for every bounding-box prediction in our experiments. Results on runtimes and accuracy are therefore lower bounds for the unrestricted case. The picture for ProbLog is quite similar to that of NeurASP: while additional predictions help question answering to some extent, this comes at the cost of a considerable increase in runtime. Overall, the experiments further support our belief that non-determinism is useful for neuro-symbolic VQA systems and that suitable mechanisms to restrict it to reasonable choices allow for more efficient implementations. Further Related Work Purely deep-learning-based approaches (Yang et al. 2016; Lu et al. 2016; Jabri et al. 2016) led to significant advances in VQA. Some systems rely on attention mechanisms to focus on certain features of the image and question to retrieve the answer (Yang et al. 2016; Lu et al. 2016). Jabri et al.
(2016) achieved good results by framing a VQA task as a classification problem. Some VQA systems are, however, suspected not to learn to reason but to exploit dataset biases to find answers, as described by Johnson et al. (2017). Besides these purely data-driven attempts, there are also systems which incorporate symbolic reasoning in combination with neural-based methods for VQA (Yi et al. 2018; Basu et al. 2020; Mao et al. 2019). The system proposed by Yi et al. (2018) consists of a scene parser, which retrieves object-level scene information from images, a question parser, which creates symbolic programs from natural language questions, and a program executor that runs these programs on the abstract scene representation. Our system is akin to this one, but we use ASP for scene representation as well as question encoding, and our program executor is an ASP solver. A similar system architecture appears in the approach by Mao et al. (2019), with the difference that scene and question parsing is jointly learned from images and question-answer pairs, whereas the components of Yi et al.'s system are trained separately. This means that annotated images are not necessary for training, which makes the system more versatile. The approach of Basu et al. (2020) builds, like ours, on ASP. They use object-level scene representations and parse natural language questions to obtain rules which represent the question. The answer to a question is given by the answer set for the image-question encoding, which is combined with commonsense knowledge. However, their approach does not employ non-determinism in the scene encoding to deal with competing object classifications, as we do. Riley and Sridharan (2019) present an integrated approach combining deep learning and ASP for representing incomplete commonsense knowledge and non-monotonic reasoning that also involves learning and program induction. They apply their approach to VQA tasks with explanatory questions and are able to achieve better accuracy on small data sets than end-to-end architectures. As in our work, they use neural networks to extract features of an image. We focus, however, more narrowly on the interface between the neural network outputs and the logical rules and turn network outputs into choice rules to further improve robustness, which has not been considered by Riley and Sridharan. Conclusion We have introduced a neuro-symbolic VQA pipeline for the CLEVR dataset based on a translation from CLEVR questions in the form of functional programs to ASP rules, where we proposed a non-deterministic and a deterministic approach to encode object-level scene information. Notably, non-determinism is restricted to network predictions that pass a confidence threshold determined by statistical analysis. It takes the variance of prediction quality into account and can be adjusted by the novel confidence rate parameter α, which supports control of the amount of non-determinism, resp. disjunctive information. Our experiments confirm that, on the one hand, non-determinism is important for the robustness of the reasoning component, especially if the neural networks for object classification are not well-trained or predictions are negatively affected by other causes. On the other hand, unrestricted non-determinism as featured by related neuro-symbolic systems can pose a performance bottleneck. Our method of using a confidence threshold is a viable compromise between quality of question answering and efficiency of the reasoning component.
The insight that it makes sense to deal with uncertainty at the level of the reasoning component is in fact not restricted to ASP and therefore also provides directions to further improve related approaches. While CLEVR is well-suited for the purposes of this work, the scenes are quite simple and do not require advanced features of ASP like expressing common-sense knowledge. For future work, we intend to apply our approach also to other datasets, especially ones that do not use synthetic scenes and where object classification might be harder, resulting in increased uncertainty. There, using additional domain knowledge and representing it with ASP could come in handy. Furthermore, we plan to extend our pipeline to closed-loop reasoning, i.e., using the output of the ASP solver also in the learning process.
New Approaches to Immunotherapy for HPV Associated Cancers Cervical cancer is the second most common cancer of women worldwide and is the first cancer shown to be entirely induced by a virus, the human papillomavirus (HPV, major oncogenic genotypes HPV-16 and -18). Two recently developed prophylactic cervical cancer vaccines, using virus-like particle (VLP) technology, have the potential to prevent a large proportion of cervical cancer associated with HPV infection and to ensure long-term protection. However, prophylactic HPV vaccines do not have therapeutic effects against pre-existing HPV infections and do not prevent their progression to HPV-associated malignancy. In animal models, therapeutic vaccines for persisting HPV infection can eliminate transplantable tumors expressing HPV antigens, but are of limited efficacy in inducing rejection of skin grafts expressing the same antigens. In humans, clinical trials have reported successful immunotherapy of HPV lesions, providing hope and stimulating further interest. This review discusses possible new approaches to immunotherapy for HPV-associated cancer, based on recent advances in our knowledge of the immunobiology of HPV infection, of epithelial immunology and of immunoregulation, with a brief overview of previous and current HPV vaccine clinical trials. Introduction Cervical cancer is the first cancer recognized by the World Health Organization (WHO) to be 100% attributable to an infection. Other cancers that are attributable to HPV infection include cancers of the vulva, vagina, anus and penis. The HPV family also includes the causative agents of genital and skin warts. Humoral Immunity Adaptive immune responses to HPV infection include humoral (B cell, antibody) responses specific for viral structural (L protein) or non-structural (E protein) antigens. The L1 major structural protein of HPV assembles into particles or capsids indistinguishable from native virions. Although papillomavirus capsids have been shown to be quite immunogenic when injected into animals [15,16], the natural antibody response to L1 is weak, probably because the L1 protein is expressed in the squamous epithelium, a site that may not allow efficient immune activation. Nevertheless, those capsids are a major tool for the study of serological responses to HPV. Most studies have examined the prevalence of antibodies to HPV capsids by ELISA [17] and compared their findings with the presence of HPV DNA, detected by PCR-based testing, and with neoplasia or genital warts, detected by Papanicolaou (Pap) screening. There is considerable variability in the timing of the HPV-specific antibody response in relation to HPV infection. Nevertheless, serum IgG antibodies, which have a neutralizing potential, develop in response to infection in more than 50% of women. The kinetics of antibody development are slow (6 to 12 months), and the peak titres are low, but antibodies can persist for decades unless HPV-associated lesions resolve. Secretory IgA antibodies can also be detected in the cervical secretions of HPV-positive women with a similar time course of appearance to IgG antibodies, but with a shorter persistence time [18]. However, studies have also suggested that antibodies to HPV may persist for years after the disappearance of measurable disease, making it difficult to discriminate whether antibodies are due to past or persistent infection [19,20]. Antibodies to non-structural proteins after natural infection are rare or nonexistent.
However, relative to the presence of HPV DNA at the initial stages of infection, antibodies against HPV-16 and -18 non-structural E2, E6 and E7 proteins were detected in only half of patients tested [21]. A stronger correlation has been demonstrated between E7 antibodies and advanced cervical cancer [22][23][24]. HPV antibody assays are problematic because the levels of antibody are low, the threshold for positivity is difficult to define, and the correct antigens (with the exception of L1 VLP) are not well characterized. Spontaneous clearance of HPV infection occurs in the majority of patients, and has been largely attributed to the cellular immune response rather than the humoral immune response, as further described below and in Figure 1. CD4+ and CD8+ T-cells- High-risk HPV infections can progress to high-grade CIN due to impaired T-cell immunity (Fig. 1). HPV early proteins are required early in viral infection and therefore may serve as a useful vaccine target in HPV-infected individuals, aimed at prevention or therapy of premalignant lesions. Since the E6 and E7 oncoproteins of HPV are expressed in precancerous lesions, such proteins might be potential tumor-specific targets for immunotherapy. The E5 protein of HPV-16 is known to downregulate MHC class I expression on infected cells and could be targeted to allow re-expression of MHC and recognition of infected cells by CD8+ T cells [25]. Figure 1: Proposed model for the association between HPV-16-specific T-cell immunity and the development of disease (adapted from [26]). Thick arrows represent the fate of the majority of HPV-16-infected individuals, in contrast to thin arrows that represent the fate of the minority of HPV-16-infected individuals. The dotted arrow indicates that some cases of spontaneous regression can occur, probably by the induction of T-cells and/or cells from the microenvironment, which leads to the destruction of infected tissues or tumor mass. The dashed box represents the immunological mechanism that is likely involved. There are several studies providing information about CD4+ and CD8+ cytotoxic T-cell responses to HPV-16 early proteins [27]. Using fusion proteins, panels of overlapping peptides, virus-like particles or tetramer technology, CD4+ T cell responses to HPV-16 E2 [28], E4 [29,30], E5 [31], E6 [29,32] and E7 [33] have been demonstrated in both patient and healthy control populations without clear consequences for resolution of HPV infection. HPV-16 E2-, E6- and E7-specific CTLs can also be detected in patients with previous or ongoing HPV infections [34,35]. Moreover, T-cells can also be found that recognize late proteins, such as HPV-16 L1 [36], although responses to L1 are not thought to be important in viral clearance once HPV infections are established. Overall, HPV-specific T-cells detected in patients with HPV infection are not anergic but functionally active, as has been demonstrated using a variety of restimulation protocols such as gamma-interferon (IFN-γ) release by ELISPOT and chromium-51 release in cytotoxicity assays [34,35]. To highlight this, a recent study described the first clinical success for therapeutic vaccination of vulvar intraepithelial neoplasia 3 (VIN3) patients with a vaccine comprising HPV-16 E6 and E7 synthetic long overlapping peptides (HPV16 SLP). This vaccine induced a CD4+ and CD8+ T cell response with IFN-γ production and led to a durable and complete regression of HPV-induced lesions in half of the patients [37].
NKT-Natural killer T (NKT) cells are a heterogeneous group of T cells that recognize self or microbial lipid antigens presented by CD1d molecules. CD1d is a major histocompatibility complex (MHC) I-like glycoprotein present on the surface of not only antigen-presenting cells but also intestinal epithelial cells, keratinocytes and reproductive tract epithelial cells [38]. The most widely studied ligand of NKT cells is α-galactosylceramide (α-GalCer) [38]. A recent study has used α-GalCer as an adjuvant with DNA vaccination against HPV-16 oncoprotein E7 to generate high numbers of E7-specific CD8+ T-cells protective against the transplantable TC-1 (HPV-16+) tumour model in mice [39]. Another NKT ligand, β-galactosylceramide, has also been shown to be protective against the same tumour [40]. These data thus highlight the potential of NKT ligands as therapeutic agents for HPV-associated cancers. MHC cell surface downregulation is one of the mechanisms employed by a variety of tumours to evade immune detection. Cell surface CD1d, an MHC class Ib molecule, is down-regulated in HPV-related lesions and in HPV-negative cervical cancer cell lines stably transfected with HPV-6 E5 and HPV-16 E5, thus linking decreased CD1d expression in the presence of HPV infection with evasion of NKT cells [41]. On the other hand, studies in a mouse model expressing HPV-16 E7 oncoprotein in keratinocytes have shown that IFN-γ produced by invariant NKT cells in the skin provides protection against E7 transgenic skin graft rejection [42]. A small study on a North Indian population has also shown an association of the IFN-γ +874 polymorphism with an increased risk of cervical cancer in patients at stages III + IV. However, this study did not include all clustered polymorphism sites of IFN-γ and was not free from selection bias [43]. Based on the literature, NKT cells may play two different roles, depending on the type of HPV lesions. In established HPV-associated cancerous lesions, activation of NKT cells may provide therapeutic value, while inhibition of NKT cell activity or recruitment in precancerous lesions may allow a protective immune response. Treg-Regulatory T cells (Treg) are CD25hi Foxp3+ CD4+ T cells that have been implicated in the failure of the immune system to control the development of numerous cancers both in humans and in mice [44,45]. Therefore, a high frequency of Treg in human HPV-associated cancer could counteract the host immune response and thus influence therapeutic strategies. Indeed, human high-grade CIN lesions (CIN3) and cervical carcinomas have been shown to contain higher numbers of infiltrating lymphocytes and FoxP3+ Treg compared to colon carcinomas, skin melanomas, and bronchial carcinomas [46][47][48]. Moreover, mucosal enrichment of Treg cells was associated with a diminished cellular immunity in the cervical mucosa, and both seemed to contribute to the development of high-grade CIN 2 and 3 lesions [49]. Notably, Treg may also be induced by therapeutic vaccines. In a study comparing patient groups with small or large HPV-16-positive vulvar lesions, larger lesions were found to contain higher frequencies of vaccine-induced HPV-16-specific Foxp3+ Treg cells, correlating with early treatment failure [50]. A similar result has also been observed in a mouse model [51]. Moreover, it has been shown that repeated vaccination can increase the presence of regulatory T cells [52].
This suggests that the combination of therapeutic vaccination with the depletion of Treg, or at least the abrogation of their suppressive activity, should improve cancer therapy. The in vivo depletion of Treg has already proved effective in allowing the rejection of established tumours in mice and humans [53][54][55][56][57][58]. In an animal model of HPV-associated cancer, the therapeutic depletion or inactivation of Treg has been shown to induce a strong intratumoral invasion of CD8+ T cells and complete eradication of HPV-16 E6/E7-expressing tumor cells in 70% of treated animals [48]. Moreover, combining T-cell therapeutic vaccination and Treg depletion can lead to the complete eradication of HPV-expressing tumors in mice [59]. In patients with condylomata acuminata, Treg depletion using cyclophosphamide ameliorates the immune milieu of the lesion site, leading to the elimination of remnant viruses and helping to prevent recurrence after laser therapy [60]. As a consequence, the role of Treg should be scrutinized within HPV vaccination strategies. The temporary removal of Treg from patients should be considered as a means to improve the benefits of therapeutic vaccination targeting HPV-associated lesions. Myeloid Cells Macrophages. Macrophages are one of the three types of phagocytes in the immune system and are distributed widely in the body tissues, where they play a critical part in innate immunity. Macrophages are found in many areas of lymph nodes, particularly in the marginal sinus and in the medullary cords. Here they can actively phagocytose pathogens and antigens, and so prevent them from entering the blood. Activated macrophages undergo changes that greatly increase their antigen-presenting function and anti-pathogen effectiveness, and amplify the immune response. Regarding their role in HPV infection, studies in women infected with HPV have shown a positive correlation between lesion grade, the number of infiltrating macrophages and IL-10 expression by these cells [61][62][63]. However, macrophages have also been reported to react against E6 and E7. Macrophages can kill HPV-16 E6- but not E7-expressing tumor cells through tumor necrosis factor (TNF)-α- and nitric oxide (NO)-dependent mechanisms [64,65]. It has also been shown that HPV-16 E7-expressing, but not E6-expressing, NIH-3T3 cells were susceptible to activated macrophages, and that the ability of E7 to cause transformation was required to induce susceptibility of the cells to activated macrophages [66]. Thus, the role of macrophages in HPV infection is still unclear. The tumor microenvironment has been shown to influence tumor immune privilege, and this has become a new field of research in tumor immunology. Macrophages are enriched in several types of human cancer, including breast, ovarian, non-small cell lung cancer, and Hodgkin's lymphoma, and their presence correlates with a poor clinical outcome. Tumor-associated macrophages (TAM) have therefore been identified as regulators of tumor development. As an example, TAM have been shown to play a major role in colorectal cancer by recruiting Treg to the tumor mass, thus favouring an immunosuppressive microenvironment that leads to tumor growth [67]. TAM can also act by suppressing tumor-infiltrating T-cells through several mechanisms, as seen for example under hypoxic conditions [68]. TAM have been shown to be recruited in situ through macrophage colony-stimulating factor (CSF-1), whose over-expression correlates with poor prognosis in breast cancer [69].
Therefore, macrophages can have opposing roles within the immune response against tumorigenesis and infection, and their role in high-risk HPV infection is yet to be investigated. Myeloid-derived suppressor cells. Myeloid-derived suppressor cells (MDSC) are a heterogeneous population of cells that have been associated with cancer, inflammation and infection, and have been shown to suppress T-cell responses. A regulatory role for MDSC, as well as regulatory T cells, has been highlighted in animal models using transplantable tumour cells expressing the E7 protein, TC-1. One recent study found reduced MDSC and Treg numbers in both the spleen and the transplantable tumour itself through the use of a tri-therapy combination HPV vaccine, leading to the restoration of a potent E7-specific CD8 T-cell response and the control of tumour growth [70]. Another study using the same tumour line, but an E7 DNA vaccine instead, showed a reduction in MDSC numbers but not in Treg cells in the tumour microenvironment, which was sufficient to control tumour growth. Adding Imiquimod to the E7 DNA vaccine further reduced MDSC numbers and activated a CD8 response, improving the anti-tumour response mediated by E7-specific CD8 T cells, macrophages and NK1.1+ cells [71]. Although no association with HPV-associated cancers has been demonstrated in humans, MDSC have been observed at increased prevalence in the peripheral blood and tumor microenvironment of patients with head and neck squamous cell carcinomas [72] and pancreatic cancer [73]. By further elucidating the mechanisms of MDSC recruitment and maintenance in the tumor environment in mice and humans, new vaccine strategies may be developed to reverse the suppression of anti-tumour immunity. Mast cells. Mast cells (MC) are localized at body sites that interface with the environment, such as the skin and mucous membranes, which are also the site of HPV infection. Mast cells are highly specialized innate immune effector cells that contain secretory granules in which large amounts of proteases are stored in complexes with serglycin proteoglycans [74]. The presence of mast cells along with other immune cells has been shown in CIN2/3 lesions [75], though their main role in HPV pathogenesis is difficult to determine due to their "tunable" function. They can indeed act as pro-inflammatory cells through the recruitment of innate and adaptive immune cells, or as immuno-suppressive cells through the production of the immunosuppressive cytokine IL-10 [76]. Burns [77], tape stripping, tumors [78], allergy, parasitic and virus (HIV) infections have each been shown to recruit mast cells into the skin. A major point linking mast cells to cancer is that mast cells accumulate in the stroma surrounding certain tumors, especially mammary adenocarcinoma, where they have been shown to synthesize and secrete potent angiogenic cytokines, such as vascular endothelial growth factor (VEGF). These molecules facilitate tumor vascularisation by a direct angiogenic effect and by stimulating the stroma and inflammatory cells of the tumor microenvironment [79,80]. In relation to their 'tunable' function, mast cells in the local environment can also be detrimental for the tumours themselves, secreting immune mediators such as IL-1, IL-4, IL-5, IL-6 and TNF-α that can induce apoptosis of tumor cells and recruit inflammatory cells [80]. As is the case for macrophages, the role of mast cells in the immune response to HPV infection is still largely unknown. Dendritic cells.
Dendritic cells (DC) are potent antigen-presenting cells (APC) that play a fundamental role in the induction and regulation of innate and adaptive immune responses against microbial pathogens. DC in humans and mice can be broadly categorised into two major populations: plasmacytoid DC (pDC) and conventional DC (cDC), which can be further divided into migratory DC (Langerhans cells (LC) and interstitial DC) residing in peripheral tissues and lymphoid tissue-resident DC. In humans, pDC are present in cervical cancer lesions, primarily in the stroma underlying the tumor rather than the dysplastic epithelium, and produce the anti-viral cytokine IFN-α in response to HPV VLP [81,82]. These studies suggest pDC play an important role in the natural immune response against HPV, although their role in cervical cancer development remains unclear. Epithelial LC are the model migratory DC, initiating immune responses to infecting pathogens by capturing Ag and delivering it to the T cell areas of the draining lymph nodes. Several studies have documented reduced numbers of LC in human HPV-associated cervical lesions [83][84][85][86], suggesting the depletion of epithelial LC during HPV infection may lead to prolonged infection and possibly oncogenesis. Furthermore, LC incubated with HPV VLP fail to up-regulate surface activation markers or initiate an HPV-specific immune response, suggesting that HPV avoids immune recognition through poor stimulation of LC [87,88]. A unique population of interstitial DC has been recently defined in the cervical stroma, with elevated numbers of these cells in human cervical cancer relative to normal cervix. This stromal DC subset is distinct from LC and expresses the immunosuppressive factors IL-10, TGF-β and indoleamine 2,3-dioxygenase (IDO) [63,75,89]. However, there are several additional mechanisms by which stromal inflammatory cells could contribute to tumor escape. For example, PD-L1 (B7-H1), expressed on multiple cell types including APC, and PD-L2 (B7-DC), selectively expressed by DC and macrophages, are both expressed in human cervical cancers [90]. DC are regarded as the master regulators ('conductors of the orchestra') of the immune response. They have been utilized in numerous vaccination protocols to boost antigen presentation and T-cell costimulation and thus augment antigen-specific T-cell responses. However, their multiplicity of phenotypes and functions makes it difficult to know which population(s) of APC should be targeted. VLP Virus-like particles (VLP) are a useful tool for development of vaccines and are often used in studies to identify viral assembly proteins. They resemble the native virus immunologically but are non-infectious as they do not contain viral genetic material. Two prophylactic HPV vaccines (Cervarix and Gardasil) that are currently in use are based on the L1 major capsid protein. The HPV L1 open reading frame encodes a 55-kDa protein that efficiently self-assembles into viral capsomeres and empty capsids when expressed in eukaryotic cells [91]. In papillomavirus VLP vaccine production, the L1 gene is cloned and amplified using L1-specific primers. The amplified L1 segment is then inserted into an appropriate intermediate expression vector and used to generate recombinant yeast [92], vaccinia [91] or baculovirus [93] systems. These are then purified and used to express L1 in eukaryotic cells [91]. L1 self-assembles to form VLP and does not require L2 or other non-structural proteins.
The resulting VLPs are immunologically similar to the native virion. These are then finally combined with alum-based adjuvants to form the basis of prophylactic vaccines. Cervarix: HPV-16 and -18 VLP are produced in the Trichoplusia ni Rix4446 cell substrate using a baculovirus expression vector system and formulated with the adjuvant system AS04, which is composed of 3-O-desacyl-4′-monophosphoryl lipid A (MPL) and aluminum hydroxide salt. The vaccine is administered intramuscularly in a three-dose schedule (months 0, 1 and 6). These prophylactic vaccines are licensed for use to prevent HPV infection and anogenital cancers in females and males aged 9-45 in many countries, and are administered to teenage girls as part of the routine immunization schedule in some. In addition, the quadrivalent vaccine is licensed for use to prevent genital warts in some developed countries. The use of these vaccines in developing countries is limited by cost-related issues. Thus, provision of prophylactic HPV vaccines to a wider population through development of biosimilar vaccines is under consideration in developing countries. As long as women are sexually active they remain at risk of cervical HPV infection. Therefore, it is essential that HPV vaccines provide long-lasting protection. Clinical efficacy has been observed up to 6.4 years for the bivalent vaccine [94] and up to 8.5 years for the quadrivalent vaccine [95,96]. Since most HPV infections are silent, it will be many years before we know for certain about the duration of protection and efficacy provided by HPV vaccines. A mathematical modeling study based on three different statistical models and follow-up data from more than 300 vaccinated women has predicted that anti-HPV-16 and -18 antibody levels with the bivalent vaccine will persist for at least 20 years [94]. In addition to showing 100% efficacy in preventing pre-cancerous lesions, HPV vaccines also provide some cross-protection against other HPV types (60% and 78% efficacy in preventing incident infections with types HPV-31 and -45, respectively) (reviewed in [97]). Based on phase III clinical trial results [98], the Advisory Committee on Immunization Practices (ACIP) states that women should be advised that the vaccine will not have any therapeutic effect on existing infection or disease and that they should continue to receive routine cervical cancer screening [99]. Who and When To Vaccinate To allow meaningful approaches to vaccine development, both prophylactic and therapeutic, it is important to identify the age at which the population is infected, the extent of exposure to human papillomavirus within the general population, how long the infection persists, and whether the immune response to HPV is consistent throughout life. A key remaining question is how the virus is first transmitted and whether sexual transmission is the only route for high-risk HPV. Perinatal transmission. The detection of human papillomaviruses, including HPV-16 and -18, in neonates has been described in about 50% of cases where the mother is infected at the time of delivery. In two early studies, HPV-16 and -18 DNA were detected in buccal and genital swabs in more than 70% of infants at 24 hours post-delivery and persisted in more than 80% of the infected infants at 6 weeks through to 6 months of age [100,101]. Other studies also provide evidence that HPV DNA could be found in the nasopharyngeal aspirate fluids and oral cavity of neonates [101][102][103][104].
The mode of delivery at birth, vaginal versus caesarean, may play a role, as a significantly higher rate of HPV-16/-18 infection was found at birth when infants were delivered vaginally [105]. Ten years later, two studies analysed the type-specific HPV concordance in infected mothers, the placenta and the newborn or the infected mother and the cord blood in about 60 to 70 cases, but they only found a low incidence of placental infection and an even lower incidence of transplacental transmission [106,107]. One of the largest studies of newborn HPV infections to date, and the first to use sequencing methods to evaluate whether vertical transmission of HPV from infected parents occurs, showed a lack of concordance for HPV type between mother and child or father and child [108]. These data support the rarity of perinatal high-risk HPV transmission and suggest other potential sources of exposure or contamination. Altogether, there is no obvious evidence for the transmission of high-risk HPV-16 and -18 to the genital tract at birth. Although it is common to detect HPV-16/-18 DNA in the oral cavity of newborns, it does not persist or replicate, and the only disease attributable to vertical transmission is recurrent respiratory papillomatosis (RRP), which is caused by HPV types -6 and -11 and occurs in about 1 in 1,000 live births [95]. Adolescence. If the only possible route of genital HPV infection is sexual transmission, virgin girls and young women should be non-infected. On the other hand, if non-sexual transmission can occur, some virgins might have genital HPV-specific antibodies, and this would increase the cut-off and reduce test sensitivity if virgin females were used to provide a reference standard for antibody testing. Therefore, it would be informative to determine the antibody status in pre-pubertal children. Here, the results from different studies are conflicting, as it is crucial to define the antibody detection methodology and what constitutes a "positive" and a "negative" control for an assay. One large study in Edinburgh examined more than 1,000 serum samples from a cohort of 11-13-year-old virgin schoolgirls for the presence of antibodies to HPV-1, -2, and -16 VLP. The study reported that 7% of study subjects had antibodies to HPV-16 VLP, 52% to HPV-1 and 38% to HPV-2. In contrast, a Swedish study showed that virgin teenage girls were not seropositive in an HPV-16 VLP-based ELISA, while 14% of girls with early sexual experience were found to be HPV seropositive and positive for HPV-16 DNA, leading to the conclusion that non-sexually transmitted infections are rare or nonexistent among adolescent girls [109]. Another study conducted in South Africa with children aged between 1 and 12 years found that 4.5% of sera tested were positive for antibodies to HPV-16, with a prevalence decreasing with age. This could indicate vertical transmission of HPV infection, but HPV DNA from children and parents was not tested to confirm or invalidate this conclusion [110]. Altogether, these data show the difficulty of distinguishing an HPV antibody-positive serum from a negative one. This makes interpretation of data on early HPV infection problematic. However, the balance of evidence suggests that there are only very few, if any, HPV infections acquired before the onset of sexual activity. Adult age. The prevalence of high-risk HPV in the general population is well known.
Women acquire HPV infection soon after onset of sexual activity [111] and more than 50% of adult women are HPV-16 seropositive, a percentage that increases with the number of partners and with sexual behaviour. Moreover, studies on older women (>65 years old) indicate an impairment in host immunologic responses, with decreased lymphoproliferative responses associated with persistent HPV infection [112,113]. Taken together, the available data lead to the conclusion that the HPV prophylactic vaccines currently available should ideally be administered before the onset of sexual activity, prior to exposure to HPV infection. Men. Cancers of the penis, anus, and oropharynx in males can be due to high-risk HPV infection, although less than 25% of HPV-related cancers occur in males. However, HPV infections and related non-malignant diseases are common in males [114,115]. The rate of genital HPV infection in males, and the probability that a sexually active male will acquire a new genital HPV infection, are equivalent to the rates in females. Increasing numbers of sexual partners and preference for male partners are each associated with an elevated risk of HPV acquisition [116]. However, there are differences between the sexes in the immune response to genital HPV infection: seroprevalence in men is lower than in women (7.9% vs. 17.9%, respectively), with lower titres of antibodies [114,117]. A recent study enrolled more than 4,000 healthy boys and men between 16 and 26 years of age from 18 countries in a randomized, placebo-controlled, double-blind trial, and demonstrated that the quadrivalent HPV vaccine used as a prophylactic vaccination can reduce the incidence of some HPV-related infections [118]. There are two potential advantages to vaccinating males: the first is the direct benefit for those immunised and the second, indirect but no less important, is that this can improve protection of females by reducing virus transmission. Despite the success of preventive HPV vaccines, such vaccines are unlikely to reduce the global burden of HPV-associated cancers in the next few years due to their high cost and limited availability in developing countries, where there is a high incidence of cervical cancer. Moreover, existing preventive HPV vaccines do not generate therapeutic effects. Therefore, it is worthwhile to consider alternative strategies to treat HPV-associated premalignancy and malignancy. When To Vaccinate with a Therapeutic Vaccine? The Sooner, the Better… Thymic involution with age. With age, the immune system undergoes dramatic changes. From fetal to neonatal periods, naïve T-cells are extensively produced and migrate from the thymus to secondary lymphoid organs in the periphery, where they accumulate and display a broad and polyclonal repertoire. From youth to adulthood, thymic export is balanced by cell death at the periphery, leading to an equilibrium in the overall number of cells. With age, the thymus involutes and numbers of newly generated naïve T-cells gradually fall. Together with this thymic involution, an immune decline, termed immunosenescence, progressively alters B- and T-cell functions. In parallel, an accumulation of antigen-experienced peripheral T-cells is observed in both the CD4+ and CD8+ compartments, likely due to the accumulation of memory responses following antigen activation and to homeostatic proliferation to maintain T-cell levels [119]. It is reported that the TCR repertoire of elderly persons (age > 75 years) is severely contracted.
In fact, the TCR repertoire of elderly persons in both the CD4+ and CD8+ compartments is at least 100-fold less diverse than that of younger individuals (age 20-35) [119,120]. As a consequence, it seems that a therapeutic vaccine should be administered early enough during adulthood to achieve maximum efficacy. Critical window of time. Another fact arguing in favour of early therapeutic vaccination comes from the analysis of vaccination strategies against transplantable tumours in mice. Indeed, it has been shown in mice that a vaccine is more potent and will generate a strong anti-cancer effector response when it is administered within a certain time frame [121]. In humans, in HPV-16-positive vulvar lesions, it has been shown that patients with a smaller lesion and a shorter history of disease were significantly more responsive to therapeutic vaccines than patients with larger lesions and a longer history of disease [50]. This suggests that the earlier a therapeutic vaccine is administered after the appearance of cancer, the greater the probability of a positive clinical response. The side effects: Treg cells. We have previously discussed in Section 2 the deleterious role of Treg in cancer. Moreover, it has been described in mice that, at the very time of tumour emergence, self-specific Treg were activated early and briskly by self-antigens expressed by tumours, driving a secondary-type immune response that is in essence more rapid and efficient than the primary-type response of naive effector T-cells specific for tumour neoantigens [55]. This mechanism of 'déjà vu' explains an old paradigm of cancer immunology, namely that preventive immunization is more effective than therapeutic immunization. Therefore, here again, it seems that early vaccination after the detection of cancer might be the preferred option to promote a maximum anti-tumour response. In that case, an improved effort in the screening of cervical cancers or skin cancers induced by HPV infection is also key to the success of immunotherapy for HPV-associated cancers.
Therapeutic HPV Vaccines
Current treatments for HPV-associated lesions rely primarily on excision or ablation of the infected lesion. Ablative therapies are effective when the disease is localized, as in the case of CIN, but the possibility of recurrence remains, as treatment does not always eradicate the underlying HPV infection. A successful immunotherapy might therefore be a preferred mode of treatment because it can target all HPV-associated lesions irrespective of their location. Ideally, it would also induce long-lasting immunity, thus preventing recurrence. Therapeutic cancer vaccines are intended to treat an existing cancer by enhancing the naturally occurring immune response to the cancer. Unlike prophylactic vaccines, which protect against HPV infection by generating neutralizing antibodies, therapeutic vaccines would likely require induction of antigen-specific T-cells for clearance of HPV-associated lesions (reviewed in [122]). Most of the therapeutic vaccines developed to date for HPV-associated disease have failed at the clinical stage, despite promising results in preclinical animal models. One possible reason for failure is that these vaccines have generally been tested in advanced-stage cancer patients, where the chances of success are poor.
Nevertheless, we have to mention here some of the recent and successful therapies against HPV-associated cancers, in HPV-16-associated vulval intraepithelial neoplasia [37,123] and in HPV-16-associated cervical cancer [124]. They will be further discussed in each section of this chapter. The therapeutic vaccine should also be directed against the right target. The capsid proteins L1 and L2 are no longer expressed after the integration of HPV DNA into the host cell DNA, rendering them inappropriate as therapeutic targets. In contrast, the two oncoproteins E6 and E7 are expressed throughout the viral life cycle and are required for continued tumor growth. The E6 oncoprotein degrades the tumor suppressor p53 via direct binding to the ubiquitin ligase E6AP and contributes to tumorigenesis [125,126]. The E7 oncoprotein binds and induces proteasome-mediated degradation of the retinoblastoma family of proteins (pRB, p107 and p130), which are required for processes such as cell cycle progression, DNA repair, apoptosis, senescence and differentiation [126,127]. Hence, the E6 and E7 oncoproteins appear to be good candidates for HPV vaccination strategies. However, the proteins E1 and E2 are expressed early in the course of an HPV infection, before the integration of the viral genome into the host DNA. E1 functions as the viral helicase and E2 as a regulatory protein, and both play a role in viral replication. In line with our hypothesis that the vaccine should be delivered as soon as possible after infection by HPV (as discussed in Section 3.3), E1 and E2 may be the best targets for a therapeutic vaccine. Indeed, vaccination against E1 and E2 has already been shown to induce protection in dogs [128] and rabbits [129]. In humans, both natural and vaccine-induced T-cell responses against E1 and E2 have been reported in patients with persistent cervical neoplasia [130]. Together with a good target, HPV vaccines need to be delivered correctly. In the following sections, we will discuss various strategies as candidates for delivery of therapeutic HPV vaccines.
Live Vector-Based Vaccines
Live vector-based vaccines are highly efficient in delivering relevant antigens or DNA encoding antigens of interest. These vaccines have advantages, including the possibility of choosing a suitable vector from a wide range of options to deliver antigens. Moreover, replication and spreading of live vectors in the host results in potent immune responses. However, using live vectors poses safety concerns in clinical applications. Additionally, neutralizing antibodies to the live vector, generated upon vaccination, may limit the efficiency of repeated immunizations with the same vector. The most common types of live vectors used for vaccination are bacterial and viral vectors.
Bacterial Vectors
The two most promising bacterial vectors for therapeutic HPV vaccines are Listeria and Salmonella. Listeria is a gram-positive intracellular bacterium that occasionally causes disease in humans. Listeria monocytogenes (LM) has the ability to replicate in the cytosol of APCs after escaping from the phagosome by secreting a factor called listeriolysin O (LLO). This unique feature allows peptide antigens derived from LM to be processed and presented via both MHC class I and class II pathways, resulting in potent CD4+ and CD8+ T-cell-mediated immune responses. Preclinical models. Listeria-based vaccines targeting E7 have been shown to cause regression of solid implanted tumors in HPV-16 E6/E7 transgenic mice [131].
These vaccines can also inhibit the growth of thyroid tumors in E6/E7 transgenic mice [132]. Intravaginal immunization with live attenuated Salmonella enterica serovar Typhimurium expressing HPV-16 antigens induced transient inflammatory responses in the genital mucosa and conferred protection against subcutaneously implanted HPV-16 tumors [133]. Clinical Models. The first clinical Listeria-based vaccine, using the HPV-16 E7 antigen fused to a fragment of LLO, was found to be well tolerated and did not cause any notable side effects in end-stage cervical cancer patients [134]. However, the trial was a non-controlled study and therefore no inference can be made regarding differences in overall survival. Further data and trials are needed to confirm the efficacy of Listeria-based vaccines against HPV infection. Although a promising candidate for many cancer vaccines, Salmonella has yet to enter clinical trials for a therapeutic HPV vaccine.
Viral Vectors
Many recombinant viral vectors (such as adenoviruses, fowlpox viruses, vaccinia viruses, vesicular stomatitis viruses and alphaviruses) have been used for therapeutic HPV vaccine development due to their high infection efficiency and expression of antigens in the infected cells (reviewed in [135]). Preclinical models. A replication-deficient adenovirus encoding a fusion protein composed of calreticulin, which is known to enhance MHC class I expression on the cell surface, and the E7 antigen (CRT/E7) has been shown to generate a potent and protective cellular response to E7 in an established tumour model in mice [136]. This study thus suggested the therapeutic potential of viral vectors. Clinical Models. Vaccinia virus is another viral vector that has been tested in both preclinical and clinical trials [135]. Phase I/II clinical trials with a recombinant vaccinia virus expressing an HPV-16/-18 E6/E7 fusion protein (TA-HPV) have been shown to induce therapeutic effects in patients with early- or late-stage cervical cancer, vaginal intraepithelial neoplasia (VAIN) [137] and VIN (reviewed in [135]). Furthermore, depending on the dose and schedule, antigen-specific immune responses in E6/E7 transgenic mice can be augmented by co-expression of IL-12 in a Semliki Forest virus (SFV) vector, an alphavirus, suggesting that viral vectors can be further modified to enhance their potency [138]. In a recent phase II clinical trial, subcutaneous injections of the TG4001 vaccine, consisting of an attenuated recombinant vaccinia virus containing the sequences for modified HPV-16 E6 and E7 and the human IL-2 gene, induced the regression of CIN2/3 lesions in seven of 10 patients [139]. A second, randomized placebo-controlled phase II trial (with a sample size of 200 individuals) is currently ongoing for this vaccine. Vaccinia virus is therefore a promising vaccine candidate against HPV infection. Viral vector vaccines are moreover safer than DNA vaccines (which will be discussed later in Section 4.3), as they contain only a small fraction of pathogenic genes [135], and they are currently used in both preclinical models and clinical trials.
Protein and Peptide-Based Vaccines
Protein and peptide-based vaccines are the most popular form of HPV therapeutic vaccines. Vaccination with HPV antigenic peptides involves the uptake and presentation of the peptide antigen in association with MHC molecules by DC. The polymorphic nature of MHC molecules in genetically diverse populations makes it difficult to identify one immunogenic epitope which would cover all individuals.
However, the use of overlapping, long peptide vaccines based on HPV E6/E7 antigens has been effective in generating antigen-specific T-cell responses. This limitation of peptide vaccines is reduced when the entire protein is used. Since protein antigens can be processed by the patient's DC, which contain the relevant human leukocyte antigens (HLA), vaccines based on protein antigens can evade the limitation of MHC specificity associated with peptide vaccines. Adjuvants and fusion with immunostimulatory molecules, such as IL-2, are often used to overcome the poor immunogenicity associated with protein and peptide-based vaccines. Several protein and peptide vaccines against HPV E6 and/or E7 have been successfully tested in preclinical and clinical models. Preclinical models. In an E7-expressing TC-1 tumor model, mice were vaccinated subcutaneously with an HPV-16 E7 peptide, together with a pan HLA-DR epitope (PADRE) peptide and the TLR3 ligand poly(I:C) as adjuvant. The PADRE peptide and poly(I:C) were used to enhance the activation of CD4+ T helper cells and dendritic cells, respectively, which altogether generated anti-tumor effects against TC-1. This therapeutic effect was further enhanced when the vaccine was administered into the tumour mass itself, generating a higher frequency of E7-specific CD8+ T-cells and leading to better survival [140]. Like peptides, proteins have been tested as candidates for therapeutic HPV vaccines. To enhance the efficacy of HPV protein-based vaccines, adjuvants such as liposome-polycationic-DNA (LPD) [141] and a saponin-based adjuvant [142] have been successfully used in mouse models. Similarly, fusion of the HPV-16 E7 protein with Bordetella pertussis adenylyl cyclase (CyaA), which targets proteins to APC [143], or with Pseudomonas aeruginosa exotoxin A, which facilitates their translocation to enhance MHC class I presentation [144], has led to improved CTL responses. Clinical models. Several trials have demonstrated the successful use of peptide/protein-based vaccines against HPV. A phase I trial involving overlapping HPV-16 E6 and E7 peptides with Montanide ISA51 adjuvant in end-stage cervical cancer patients and VIN grade III patients elicited broad IFN-γ-associated T-cell responses [145]. In a similar study, vaccination with long synthetic peptides of E6 and E7 of HPV-16 was effective in the majority (79%) of grade 3 VIN patients [37]. Regarding protein-based vaccines, an HPV fusion protein composed of HPV-6 L2 and E7 (TA-GW) has been shown to be effective in generating antigen-specific T-cell responses in 24 patients with genital warts [146]. A vaccine formulation with ISCOMATRIX adjuvant and the HPV-16 E6/E7 fusion protein significantly enhanced E6- and E7-specific CD8+ T cell responses in patients compared with placebo controls [142,147]. Another fusion protein vaccine, composed of HPV-16 E7 and a Mycobacterium bovis-derived heat shock protein (HSP) known to enhance CTL responses, was used in patients with high-grade anal intraepithelial neoplasia (AIN) [148] and showed clinical responses in 13 out of 38 CIN3 patients [149]. Finally, one other successful example is the TA-CIN vaccine (Tissue Antigen - Cervical Intraepithelial Neoplasia), which is a fusion of the HPV-16 viral proteins L2, E6 and E7. This vaccine is under license from Xenova Research Ltd. (Cambridge, UK) for the treatment of HPV-16-associated genital diseases.
A recent phase II trial investigating the use of an imiquimod/TA-CIN vaccine in patients with VIN demonstrated significant infiltration of CD4 and CD8 T-cells and complete lesion regression in 63% of patients [123]. Peptide/protein-based vaccines against HPV have been shown to be safe and well tolerated by patients, with no obvious signs of toxicity other than occasional flu-like symptoms. Overall, they show good immunogenicity and have led to successful results.
DNA Vaccines
DNA-based vaccines are another candidate for HPV vaccines. They have been extensively studied in preclinical models in order to optimize cell targeting, DNA uptake and processing, and presentation by MHC molecules. Preclinical models. Strategies to deliver DNA directly into DC may vary. A gene gun can efficiently deliver DNA to Langerhans cells in the epidermis, which then express the antigens, become mature and migrate to the draining lymph nodes, where they prime naïve T cells. This method has been shown to elicit HPV antigen-specific T cell immunity and antibody responses [150]. As with protein-based vaccines, DNA encoding HPV antigens can be fused with sequences encoding molecules such as the FMS-like tyrosine kinase 3 (Flt3) ligand [151] and HSP [152] that are capable of targeting the antigens to DC. Furthermore, DNA constructs encoding a fusion protein that links the antigenic peptide to MHC class I and β2-microglobulin [153] have been made to enhance antigen presentation by MHC class I molecules. In a similar fashion, DNA constructs encoding a fusion protein that links the antigenic peptide to an endoplasmic reticulum signal peptide and a sorting signal for transmembrane proteins [154] have been made to enhance antigen presentation by MHC class II molecules. In a recent study, a DNA vaccine encoding herpes simplex virus type 1 (HSV-1) glycoprotein D genetically fused to the human HPV-16 oncoproteins E5, E6, and E7 induced antigen-specific CD8+ T-cell responses and conferred preventive resistance to transplantable murine TC-1 tumor cells [155]. Co-administration of DNA vectors encoding a co-stimulatory signal, such as GM-CSF or IL-12, has been shown to enhance the therapeutic antitumor effects [155]. In another study, a DNA vaccine encoding calreticulin (CRT) linked to human HPV-16 E7 (CRT/E7) showed increased intercellular uptake and processing of the DNA. Combined with the TLR7 agonist imiquimod, this vaccine generated E7-specific antitumor effects and prolonged survival in treated mice, probably by also decreasing the number of myeloid-derived suppressor cells (MDSC) in the tumor microenvironment of tumor-bearing mice [71]. The efficacy of this CRT/E7 vaccine can be further increased by co-administration of the demethylating agent 5-aza-2'-deoxycytidine (DAC), which decreases DNA methylation and thus relieves gene silencing [156]. Clinical models. Several DNA vaccines have been tested in clinical trials for CIN2/3 patients (reviewed in [135]). A recent phase I trial tested HPV-16 E7 DNA linked with M. tuberculosis HSP70 in CIN2/3 patients. Fifteen CIN2/3 patients were given three different doses of vaccine (three each at 0.5 mg and 1 mg, nine at 3 mg). The vaccine led to complete regression of the lesions in three of nine patients treated with the highest dose of the vaccine. E7-specific CD8+ T cell responses were detected in patients treated with the vaccine at 1 mg and 3 mg [135,157]. DNA-based vaccines are stable and easy to produce.
They can lead to sustained cellular antigen expression compared to protein-based vaccines, which makes them a strong candidate for therapeutic HPV vaccines. However, they have limited potency to invoke innate defence mechanisms because they lack the intrinsic ability to amplify and spread in vivo. Their use has been shown to be well tolerated by patients and quite safe, with good immunogenicity. Nevertheless, more studies are needed to further improve the efficiency, safety, and cost of potential DNA vaccines.
RNA Vaccines
RNA replicons are vaccines based on RNA viruses. They have the ability to self-replicate in the infected host cell. Therefore, they can sustain cellular antigen expression and, as a result, can produce more antigenic protein than conventional DNA vaccines. However, unlike DNA vaccines, RNA replicons do not carry a risk of integration into the host genome and cellular transformation. RNA replicons are customized to lack the viral structural genes; thus, the vaccines may be repeatedly administered in patients without the generation of neutralizing antibodies against viral capsid proteins. A DNA-launched RNA replicon vaccine, called a "suicidal vaccine", has been developed to increase the stability of RNA replicons. In this vaccine, the suicidal DNA is transcribed into RNA replicons, and cells taking up the suicidal DNA vector eventually die by apoptosis, thereby minimizing the concerns associated with potential integration of vaccine DNA into the host genome and cell transformation. However, the expression of inserted genes in these vectors is transient, and it can shorten the functional lifespan of DC if the DNA is targeted to DC, thereby reducing their effectiveness in stimulating the immune system. In one preclinical model, HPV-16 E7 was fused to the anti-apoptotic protein BCL-xL in a suicidal DNA vector, pSCA1, to enhance the survival of APC [158]. This vector generated more E7-specific CD8+ cells and better anti-tumor immune responses than the pSCA1 DNA containing the E7 gene alone [158]. In another study, vaccination of mice with DNA-launched replicons of the Kunjin (KUN) flavivirus, which does not induce apoptosis, encoding E7 epitopes generated CTL responses and protected mice against challenge with an E7-expressing epithelial tumor [159]. RNA replicons thus show promising results in preclinical models of HPV infection, but have not yet been investigated in clinical trials.
Tumor Cell-Based Vaccines
Tumor cell-based vaccines have the advantage of covering a broad spectrum of tumor antigens. Tumor cells can be isolated from patients and manipulated ex vivo to express immunostimulatory proteins, such as IL-2, IL-12 and GM-CSF, to enhance their immunogenicity (for review, see [160]). However, there is a potential concern of introducing new cancers into patients. In a recent preclinical study in mice, forced expression in tumors of LIGHT, a ligand for the lymphotoxin-beta receptor, resulted in increased expression of IFN-γ and chemoattractant cytokines such as IL-1α, MIG, and MIP-2. This correlated with an increased frequency of tumor-infiltrating CD8+ T cells and eradication of large, well-established tumors [161]. These vaccines are mainly used when tumor antigens have not been identified; however, they may not be relevant in the case of HPV, as the tumor antigens associated with HPV infection are largely known, and this could be the reason why these vaccines have not yet been tested in clinical trials for HPV-associated cancers.
Dendritic Cell-Based Vaccine
DC vaccines are another way to enhance T-cell-mediated immunity against HPV-associated lesions. Preclinical models. DC-based vaccines are a preferred choice for vaccine development, and a number of methods have been used, including the use of different vectors, pulsing of DC with proteins and peptides, or transfecting DC with DNA or RNA (see above and reviewed in [135]). In a TC-1 murine tumor model, vaccination with E7-presenting DC transfected with an anti-apoptotic siRNA targeting Bim was shown to generate E7-specific CD8+ T cells and decrease tumor growth [162]. Clinical Models. In humans, autologous DCs were pulsed with HPV-16 or HPV-18 E7 recombinant proteins, and E7-specific CD8+ T cell responses were observed in four out of 11 late-stage cervical cancer patients [163]. In another clinical study, stage IB or IIA cervical cancer patients were vaccinated with autologous DC pulsed with recombinant HPV-16/-18 E7 antigens and keyhole limpet hemocyanin (KLH), an immunological carrier protein. This vaccine generated E7-specific T cell responses in 8 out of 10 patients and antibody responses in all patients [164]. DC vaccines are patient-specific: clinicians harvest DC from the patient, load them with tumor antigens such as E6 or E7, and then inject these DC back into the patient, where they elicit potent antigen-specific antitumor immune responses. The success of Provenge®, a DC vaccine incorporating prostatic acid phosphatase, in patients with advanced prostate cancer has generated strong interest in DC-based vaccines [165,166]. However, DC-based vaccines have serious limitations. Because these vaccines cannot be produced at a large scale, they are labour-intensive and expensive. Nevertheless, DC-based vaccines have been tested in patients with HPV-associated cervical cancer by successfully transducing genes coding for E6 and E7 into DC.
Combinational Approaches
In Section 4 we discussed the different formulations that can potentially be used to create a therapeutic vaccine. In the examples we cited, some of the studies already used combinational approaches, by adding an adjuvant or a co-stimulatory signal to the vaccine itself. In this section, we highlight this point further. Prime-boost vaccination strategies use several of the available therapeutic vaccines in combination to induce higher levels of tumor-specific immune response. Some of these have been evaluated in clinical trials for therapeutic HPV vaccines. In one trial, in which high-grade VIN patients were primed with TA-HPV and boosted with TA-CIN, nine patients developed HPV-16-specific T cell responses and three out of 10 patients showed a significant reduction in the size of the lesion [137]. Therapeutic HPV vaccines may potentially be combined with other therapeutic methods, such as radiotherapy and chemotherapy, to enhance the clinical outcome. In an experimental model, treatment with low-dose radiotherapy made TC-1 tumor cells more susceptible to lysis by E7-specific CD8+ T cells and enhanced antitumor effects in tumor-bearing mice [167]. In a similar fashion, the combination of 5,6-dimethylxanthenone-4-acetic acid (DMXAA), a vascular disrupting agent, with E7 DNA vaccination generated potent antitumor immune responses in the splenocytes of tumor-bearing mice [168]. Therapeutic HPV vaccines may also be combined with co-stimulatory signal delivery or cytokine adjuvants. The use of CTLA-4 antibodies has been shown to be effective in murine TC-1 tumor models [169].
More recently, peritumoral administration of IL-12-producing tumor vaccines enhanced the effect of the cytostatic chemotherapeutic agent gemcitabine, which was correlated with high production of IFN-γ by splenocytes [170]. Another strategy to improve HPV vaccines is to combine them with the depletion of regulatory cells such as Treg, MDSC, or NKT cells, as discussed in Section 2. However, in a recent study, depletion of Treg did not enhance the immune response induced by an SFV-based HPV E6/E7 vaccine against murine tumors, suggesting that the SFVeE6/7 vaccine may not require additional immune interventions [171]. Thus, as has already been described for many other cancer types, combinational approaches might be the gold-standard strategy for generating a successful anti-HPV therapeutic vaccine.
HPV Immunotherapy, the Future
Although preventive HPV vaccines are now available for use, the high prevalence of HPV-associated malignancies worldwide suggests a potential benefit from developing therapeutic HPV vaccines. New leads, new targets and new research directions have to be investigated to maximize the chances of finding a way towards therapeutic vaccines. Future Challenges and Resources? The HPV-derived VLP obtained by the self-assembly of the viral L1 capsid protein have been generated in yeast and insect cell lines. However, the development of contained plant systems (e.g., plant cell suspensions, hairy root cultures, microalgae) provides a powerful alternative for the production of recombinant therapeutic molecules, such as IgG, IL-12 or IFNs. Plant cell suspensions can be derived from tobacco, rice, soybean, tomato, alfalfa and carrot plants; hairy root cultures are generated from the interaction between Agrobacterium rhizogenes, a Gram-negative soil bacterium, and a host plant; and microalgae are photosynthetic microorganisms found in wet environments [172]. The advantages offered by these systems are their low-cost and safe production, their ability to generate post-translational modifications and to synthesize correctly folded or assembled protein multimers, as well as the alleviation of ethical issues in generating high-grade pharmaceuticals. On the other hand, some parameters still need to be improved, as the protein yield is quite low (0.01-0.2 g/L) compared to mammalian or other eukaryotic systems (1-3 g/L or 0.5-5 g/L, respectively), and some post-translational modifications need to be adapted [173]. There is recent evidence of the successful use of plant-produced pharmaceuticals by Biolex Therapeutics (www.biolex.com): the results of a phase II clinical trial of Locteron, an alpha-IFN produced in duckweed, for the treatment of patients with chronic hepatitis C were published in March 2011 [174]. In relation to vaccine development, the very first veterinary vaccine derived from tobacco cell culture was engineered by Dow AgroSciences (http://www.dowagro.com/), obtaining US Department of Agriculture (USDA) approval in 2006. Although not yet available commercially, this novel product represents the first approval of a plant-derived vaccine.
Conclusions
A successful HPV therapeutic vaccine should engage both the innate and adaptive arms of immunity. The ideal vaccine should therefore generate high numbers of efficient tumor-specific and cytotoxic effector T-cells and promote inflammation. Together, this would prevent persistent HPV infection from progressing towards cervical cancer or even eradicate HPV.
Moreover, within the tumor microenvironment, and as already proven in other cancers, there are many cells and factors that may inhibit T-cell effector function and hinder the success of effective immunotherapy, such as Treg cells and immunosuppressive cytokines like IL-10 and TGF-β. Similarly, IFN-γ released by iNKT cells has been shown to be immunosuppressive in E7-expressing skin. Therefore, transient depletion of Treg, blocking of IL-10 and TGF-β, and fine control of NKT-derived IFN-γ in the tumor microenvironment may enhance therapeutic HPV vaccine potency. More studies and knowledge are required to determine the role of these cells, as well as of MDSC, mast cells, macrophages and other regulatory components of the innate immune system, which compose the tumor microenvironment and could play a role in HPV-associated cancers (summarized in Figure 2).
Figure 2. Therapeutic strategies against HPV-infected lesions. The tumor microenvironment is composed of cells of the adaptive immune system (such as CD4 and CD8 T-cells, Treg) and cells of the innate immune system (such as DC, NKT, macrophages) and potentially other cells (mast cells? MDSCs?) that could have a role in the response to HPV. Soluble factors including the regulatory cytokines IL-10, TGF-β or IFN-γ may also be involved. This figure provides an overview of the different strategies that can be employed to generate therapeutic effects against HPV-infected epithelial lesions, which include live vectors (viral/bacterial), proteins or peptides, nucleic acids (DNA/RNA) or VLPs, together with the use of adjuvants such as TLR agonists or cytokines. Overall, the ideal vaccine would activate effector killer T-cells while silencing regulatory factors. The generation of memory cells that can mount a faster and stronger immune response would prevent reinfection by HPV and cancer relapse (inset).
A potent therapeutic vaccine will most likely require the combination of current delivery systems (VLP, live vector, protein and DNA/RNA, plant-derived pharmaceuticals) associated with conventional therapeutic approaches (chemotherapy/radiotherapy, targeted depletion). With the increasing discovery of new drugs, the development of new adjuvants, and a better understanding of tumor biology, we will have more opportunities to develop improved therapeutics against HPV-associated cancers. Successful clinical trials for therapeutic HPV vaccines based on positive preclinical data have now been published [37,123], showing that curing HPV-associated lesions is feasible.
An Artist Management Practicum: Teaching Artist Management in the Twenty-First Century
Modern-day artist management is one of the most challenging aspects of the music industry to teach in academic institutions. This paper provides a framework for teaching artist management through a series of weekly assignments focused on various real-world scenarios and solutions as student teams virtually manage an active artist in the marketplace. These assignments are designed to allow each team to effectively assess the stage of the artist's career, evaluate the marketplace, and plan successful management strategies for the artist. The paper also identifies benchmarks of achievement based on six stages of an artist's career which help student teams identify successful artist strategies and establish goals for their artists. Students conclude the term by constructing a strategic plan to assist their acts in progressing to the next stages of their careers.
Introduction
There are several overarching challenges to teaching artist management in an academic setting. Unlike accounting (CPA), law (J.D.), or many other professions, there is no certification or degree required to act as a manager on behalf of an artist. Therefore, anyone can be an artist manager. However, managers play perhaps the most crucial role in the music industry, because they quarterback all components of an artist's career. Artist managers oversee intellectual property, analyze revenue, engage in marketing, and develop strategic initiatives which are unique to each artist. Accordingly, an academic course on artist management must incorporate copyright, publishing, marketing, finance, law, accounting, touring, songwriting, production, A&R, and almost every other course taught in music business programs. These components are almost always aligned with the specific needs and goals of the manager's artist and will largely depend on the artist's "status" or "stage of career". For example, an emerging artist will have a different set of goals, revenue structure, marketing considerations, and strategy than an established or superstar act. It is a manager's job to appreciate this distinction and foster the growth of the artist from one career stage to the next. Accordingly, creating an understanding of the artist's needs at each stage of the career must be at the crux of teaching artist management in an academic setting. A student who can effectively assess the strengths and weaknesses of an artist at each career stage will be primed to develop a successful and comprehensive strategic plan for that artist.
Previous Research
Mapping the Landscape: "5 Stages of Artist Development" (Next Big Sound)
Next Big Sound (NBS) provides analytics for the music industry to assess the popularity of musicians across social networks, streaming services, and radio. In 2013 NBS conducted a research study entitled Mapping the Landscape: "5 Stages of Artist Development," which was retitled in 2016 as The Taxonomy of Artists (Buli 2016). The study focused on establishing a data set which sourced social media, sales, chart position, television appearances, and record label affiliation to create a benchmark of "career milestones" within the stages of an artist's career. The study carved out five career stages: undiscovered, developing, mid-level, mainstream, and mega (Figure 1). According to Digital Music News, the NBS study determined that 91% of artists were in the undiscovered category (Ulloa 2014) (Figure 2).
Therefore, one of the biggest challenges facing the majority of artist managers is to grow an artist from the undiscovered stage to the developing stage. Consequently, my artist management course focuses on developing a strategic plan for these undiscovered acts to help them progress into the next stage of their careers.
Quantitative and Qualitative Assessment Metrics: Stage of Artist's Career (Terry Tompkins)
Since 2013, the music industry has experienced seismic changes in revenue sources, marketing, and rights management, thereby necessitating an update to NBS's "Mapping the Landscape" study to include current resources and benchmarks of achievement. Additionally, the Next Big Sound study had limited qualitative data as part of its research. Lastly, Next Big Sound's data points have been compromised due to a recent acquisition by Pandora, thereby limiting the sources available for establishing current metrics. In an effort to update the NBS study, I developed a new set of criteria to assess quantitative and qualitative components for the stages of an artist's career. My research outlines six sets of criteria: three quantitative categories (touring, streaming, and social media) and three qualitative categories (artist professional team, record label/music publisher, and brand partnerships). Figure 3 outlines one example of a strategic consideration within the "stages of career". The chart gives an overview of a strategic approach to developing or engaging with fans based on the artist's stage. A manager working with an undiscovered act will likely need to grow fans beyond family/friends and develop "real fans", perhaps through radio airplay or streaming. A developing act might look to grow fans in another region of the country, perhaps through touring or social media advertising in undeveloped regions. A mid-level act might consider monetizing its fans through direct-to-fan selling, perhaps by launching a crowdsourcing campaign or merchandise bundling via its online web store.
Teaching Artist Management in the 21st Century
The following section provides detailed pedagogical insights into the artist management class. It includes a course overview, syllabus excerpt, weekly project assignments, in-class presentations, final paper, course reviews, and concludes with a description of future research. Students work in teams of two to virtually manage an act they mutually select at the beginning of the semester. The course is centered around a series of weekly projects assigned to each team which assist the students in understanding, assessing, and planning successful management strategies.
MUSB 104 Course Syllabus (excerpt): Artist Management in the Music Industry
Assessment and Strategy: Assessment and strategy are the keys to effective artist management. An effective strategic plan is the by-product of deep-seated assessment to recognize an artist's strengths and weaknesses, surveying of the marketplace, insight into an artist's fans, and implementation of a plan for the artist to monetize the industry. The weekly projects for this course are centered around five key building blocks of assessment and strategy in artist management.
Assignments
Each week throughout the semester, student teams complete a specific project for the artist they are virtually managing. These weekly assignments are supported by a detailed set of guidelines for the student teams and build on the prior week's subject.
I have highlighted below the weekly project assignments for weeks one through four, each of which teaches prospective artist managers a critical skill for building the proper foundation for their artists.
Week 1: A&R Submission Assignment
The weekly assignments begin with the artist discovery process through an A&R assignment. Identifying talent is an often-overlooked aspect of artist management. However, artist managers generally work on commission (15% to 20% of artist earnings), and identifying emerging talent is essential to the earning potential of a young manager. Students are directed to The Deli, a nationally syndicated music blog with regional publications in ten markets: Austin, Chicago, Los Angeles, Nashville, New England, New York City, Philadelphia, Portland, San Francisco, and Toronto. The Deli provides a valued resource for students to seamlessly and expeditiously find an act to manage virtually for the duration of the semester; in light of the time constraints of academia, The Deli is used to circumvent the often-lengthy A&R process and allow students to select an artist they would like to virtually manage. Students choose from one of ten cities (Figure 4). Students customize a search for artists in each city based on popularity/fame range and genre/sub-genre. Their search results lead to a filtered list of acts and links to online artist properties (Figure 5). This assignment provides a crash course in teamwork, collaboration, and compromise for the newly formed management company. The new management team presents the artist to the class, including music from the artist and an elevator pitch stating why the team wants to manage the artist they chose. Because the largest percentage of artists fall into the "undiscovered" category (91%), I provide additional criteria for the A&R submission to ensure the teams' selected artists have potential to move to the next stage of their career. There is no restriction on genre or on whether the artist is a group or solo performer. However, the artists must have traction with an engaged fan base to move forward with their career. Therefore, each artist's audience must fit within certain required parameters. Social media following is the main qualifier for audience size and engagement. The artist must have a minimum of 1,000 and a maximum of 5,000 followers, with at least 3% reacting to posts online (engagement). For example, if an artist has 1,000 followers, it needs to have an average of 30 likes, shares, or comments on Instagram or Facebook to qualify as an engaged audience (a simple programmatic check of these criteria is sketched after the Week 3 assignment below). The Deli's Fame range consisting of "Emerging Artist" and "Mostly Unknown Artists" is a useful tool which aligns with these social media benchmarks. The artist must also not be represented by a manager or a label, to ensure the students' final strategic plan will be realized exclusively by the student management team.
Summary of Artist Requirements/Qualifications:
• Any musical genre, solo or group
• Minimum 1,000 Facebook or Instagram likes/followers (maximum 5,000)
• 3% fan engagement on Facebook/Instagram (likes, shares, comments)
• Not represented or affiliated with an artist manager or record label
Week 2: Artist Assessment Assignment
The artist assessment assignment begins with the student teams learning about the history of the artist. How long has the artist been established? Have the releases been streaming or physical/digital downloads? How many shows does the artist play each year and what is the touring radius for these shows?
Is the artist engaging with fans on Facebook or Instagram? Does the artist maintain an email list or newsletter? Is the artist self-producing or working with outside producers? Through watching YouTube videos, students assess stage presence, musicianship, and group compatibility. Are the artist's online assets (photos, videos, website, etc.) in line with the musical brand? How many followers does the artist have on social media, and how many monthly listeners on streaming platforms? What are the overall strengths and weaknesses of the artist? Sourcing all of this feedback, students create a summary analysis of the artist.
Example of Student Work Week 2: Summary Analysis Artist Assessment - Artist #2
• The possibility of mainstream success for XXX is very real. They have a clear direction regarding songwriting that melds with their overall vibe and imagery.
• The relatability factor in their music is a major selling point. Social and streaming are growing, fans are invested, engagement is high.
• An increased budget through a label partner and collaborating with an outside producer could elevate their songs and commercial appeal to reach the masses.
Artist Assessment Collaboration with David Newgarden/Manage This!
In Spring 2019 the MUSB 104 class collaborated with artist manager David Newgarden, founder/owner of Manage This! (Guided by Voices, Yoko Ono, Sean Lennon, Tift Merritt), to perform an artist assessment for two of his artists: Surfer Blood and The Lennon/Claypool Delirium, featuring Sean Lennon and Les Claypool of Primus. Student teams chose one of the two acts and wrote an artist assessment about the act. David visited the class and listened to the student assessments of his artists. During this discussion, students were able to get direct feedback from him about their external assessment of his artists. This proved to be an invaluable resource for the students, validating some of their efforts and providing additional insight into the world of artist management. Some of the students' takeaways provided interesting insight which may not have been uncovered through virtual management alone.
Student Observations: David Newgarden/Manage This! Artist Assessment - Takeaways
• Lines between personal and professional can't be blended
• It's important to establish trust with your client
• You don't earn the same percent commission from every client
• Artist management approach is case by case; each client has different needs
• Managing an artist that you don't believe in will end up not working out well
Week 3: Identify Audience/Fan Profile Assignment
The third week's assignment directs the student teams to create a profile of the artist's fans. Categories for the fan profiles are derived from Ari Herstand's book How To Make It in the New Music Business: Practical Tips on Building a Loyal Following and Making a Living as a Musician (Herstand 2016). Herstand describes "20 Things" that an artist needs to know about their fans, including demographics, geography (countries, cities), technological preferences, product preferences, entertainment consumption, and food preferences. Understanding an artist's fan interests helps student management teams identify the best means to reach their targeted audience through various marketing and promotional efforts.
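Returning to the Week 1 qualification screen described above, the follower-count and engagement thresholds lend themselves to a quick programmatic check. The sketch below is a minimal illustration and is not part of the course materials; the function name, field names, and example figures are assumptions.

```python
def qualifies_for_selection(followers, avg_reactions_per_post,
                            min_followers=1_000, max_followers=5_000,
                            min_engagement=0.03):
    """Check the Week 1 screening criteria: follower range and >= 3% engagement.

    avg_reactions_per_post is the average number of likes, shares, and
    comments per Facebook/Instagram post.
    """
    if not (min_followers <= followers <= max_followers):
        return False
    engagement_rate = avg_reactions_per_post / followers
    return engagement_rate >= min_engagement


# Example from the text: an artist with 1,000 followers needs roughly
# 30 reactions per post to qualify as an engaged audience.
print(qualifies_for_selection(1_000, 30))   # True  (3.0% engagement)
print(qualifies_for_selection(4_000, 60))   # False (1.5% engagement)
```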
Ari Herstand "20 Things" Fan Profile (Herstand 2016): "20 Things" Student Guidelines
Week 4: Comparable Artists/Six Stages Assignment
Using the "Artist Stage of Career" guidelines (Tompkins 2019), students cite three comparable artists aligned with their act at each stage of the artist's career. These acts have advanced further into the various stages (Developing, Mid-Level, Established, Superstar, and Heritage) of their careers, providing valuable insight for the student-managed undiscovered artist. After collecting social, streaming, touring, and sales certifications for the acts within each stage, students compute the average for each metric (Facebook, Instagram, Spotify, etc.). Each team will cite a total of fifteen comparable artists: 3 developing acts, 3 mid-level acts, 3 established acts, 3 superstar acts, and 3 heritage acts. Teams are directed to the following resources for research:
• Facebook/Instagram likes: on Facebook and Instagram, go to the "Home" tab and find "likes" or "followers"
• Spotify Monthly Listeners and Followers: go to Spotify, search for the artist, click "About", and find monthly listeners and followers
The student submission suggests that artists in Stage 4 (Established) have over 750,000 social media followers and 5.5 million monthly listeners on Spotify, perform in large theaters, and have achieved at least one RIAA-certified gold record. I compile all of the student research and build a chart establishing benchmarks for each stage of the artist's career. Figure 6 is a summary of the metrics for the assignment from one semester of my class. As stated earlier, the six stages are a key learning/teaching resource for students in this course. The comparable artists in future career stages assist the teams in identifying proper marketing channels, revenue sources, branding/record label/publishing partners, touring, and other important strategic considerations for their undiscovered artists. Additionally, the metrics for each stage provide a set of benchmarks for managers to consider when progressing through the stages of the artist's career. In 2019, I had an opportunity to validate the student data from the "Six Stages" with an A&R scouting platform called Instrumental. Instrumental is an online music discovery scouting tool that uses data science to help A&R reps learn about artists building a buzz on streaming music platforms. Instrumental's A&R platform sources Spotify playlists to determine if an artist is gaining traction within Spotify's algorithm. Its research indicates that when an artist reaches 40,000 followers on Spotify, the platform's algorithm begins to trigger playlist activity for that artist. Essentially, when an act reaches this level of followers, the artist is moving to the next career stage. My student research on the six stages suggests that an artist who breaks out of the Undiscovered Stage (Stage 1) into the Developing Stage (Stage 2) has garnered an average of 46,340 followers on Spotify. This statistic is closely aligned with Instrumental's threshold of 40,000 Spotify followers for buzzing acts. One music industry application of the six stages research is to create a chart encompassing social, streaming, touring, and sales metrics for each stage of an artist's career. These charts are supported by benchmarks established by the metrics in each of the six stages. This new set of charts will provide deeper insight for artists progressing through each career stage as it relates to other acts at their current level.
Since the goal for many artists and industry types has been to reach number one on the charts, this new data set and chart system could provide an opportunity for artists to reach number one during several stages of their career. Perhaps this type of chart will be more relevant to artists in the DIY digital age.
Figure 6. Six Stages Summary.
Week 5-9 Assignments
The remaining weekly project assignments include raising venture capital, identifying brand partnerships, placement of music in film and television, developing a touring strategy, and forecasting record label partners. These assignments may alternate from year to year based on changes in the marketplace.
In-Class Presentation and Final Paper
The final two components of the course are an in-class presentation and a final paper. These assignments bring the semester projects into a manageable framework to pitch an artist for representation.
In-Class Presentation Description and Criteria
The in-class presentation comprises all of the weekly assignments. Student teams highlight the most important takeaways from the weekly assignments during a fifteen-minute presentation. The teams present their research to the class, which performs the role of the artist.
1. Management Company - profile of management team
2. Artist Assessment - strengths and weaknesses
3. Fan Profile - 20 things about artist fans
4. Crowdfunding - platform, projected funds, top experiences, and offerings
5. Sponsorship and Endorsements - local and national sponsorship and endorsement partners
6. Synch Targets - placement of music with supervisors, brands, television shows, and films
7. Booking Agent and Touring Acts - routing, venues, agent, tour packages
8. Record Label - targeted record label partners
9. Strategic Plan - brief summary of the three most important marketing platforms for the artist to develop and monetize fans
Final Paper Description
The final paper comprises four components: marketing, expenses, revenue, and roster. The team assembles a marketing campaign for its artist detailing a plan for the three most critical channels to build and monetize the artist's potential audience. The next two components of the final paper prepare the management team to consider the prospect of running the management company as a stand-alone business. The team projects the necessary expenses to run its company as well as the revenue generated by the artist once the artist has reached stage two of its career. Finally, the team builds an artist management roster while determining the number of acts they need to represent at the developing artist stage (Stage 2) to make a living as an artist manager.
• $50,000 to $300,000 (opening and end credits)
iii. Independent Films
• Documentaries: $500 to $2,500 (film festival submissions)
• Theatrical release: $2,500-$7,500
iv. Television
• $0 to $50,000 (depends on usage and type of broadcast: cable, pay TV, network, etc.)
v. Broadcast TV Commercials
• $5,000 to several million (depends on sponsor type and scope: local, regional, or national placement)
vi. Video Games
• $5,000 to $10,000
e. Sponsorship
i. Tour and event sponsorship
• $3,000 maximum per sponsor
4. Artist Management Roster and Commissions
a. Compute artist income, manager commissions, and manager expenses
b. How much income and how many acts from Stage 2 of their career are necessary to maintain a full-time management company with a partner?
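Item 4 above lends itself to a simple break-even calculation. The sketch below is only an illustration of the arithmetic: the 15-20% commission range comes from the Week 1 discussion, while the per-act income and company expense figures are placeholder assumptions that a student team would replace with its own projections.

```python
import math

def acts_needed(annual_income_per_act, commission_rate, annual_company_expenses):
    """Return the number of Stage 2 acts whose combined manager commissions
    cover the management company's annual expenses."""
    commission_per_act = annual_income_per_act * commission_rate
    return math.ceil(annual_company_expenses / commission_per_act)

# Placeholder figures: each Stage 2 act grosses $150,000/year, the two-partner
# company needs $120,000/year to operate, and the commission is 15-20%.
for rate in (0.15, 0.20):
    print(rate, acts_needed(150_000, rate, 120_000))
# 0.15 -> 6 acts, 0.20 -> 4 acts
```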
MUSB 104 Course Syllabus (excerpt): Assessment and Evaluation
In this course, students will learn to:
• Understand and implement key methods in artist management
• Think critically through assessment and strategic planning
• Build, grow, and maintain fans through marketing and branding initiatives
• Implement a plan for an artist's long-term growth
• Build a business as an entrepreneur
Student Evaluations and Feedback
Overall, student evaluations and feedback have been extremely positive during the years I have been teaching this course, scoring in the top percentile within the college. The practicum, "learning while doing" nature of this course reinforces many aspects of the music industry in an active learning environment. Students take ownership of the projects they create and invest heavily in the course over the duration of the term. Based on a student evaluation rating system where 1.0 is "excellent" and 3.0 is "poor", this course has a three-year average rating of 1.1 in the "Instructor" and "Course" survey items.
Future Considerations
I am in the process of developing a marketplace simulation, a game designed to develop knowledge of the music industry through real-world, practicum-based engagement. Players assume the role of an artist manager who signs an artist to their management company to compete in a virtual market against other players/artists. Essentially, the game picks up where the management course leaves off: players release music and develop and launch marketing initiatives to compete in an online marketplace. Each decision the manager makes is impacted by conditions in the marketplace. The manager can play against the computer or against other players in the marketplace. The winner is determined by chart position, revenue, profit, market share, and progression to the next stage of the artist's career.
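Since the simulation is still in development, no scoring rule has been published; purely as an illustration, the five winner criteria above could be folded into a single weighted score along the following lines (the weights, normalisation constants, and function name are all assumptions).

```python
def player_score(chart_position, revenue, profit, market_share, stage_reached,
                 weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """Combine the five winner criteria into one score in [0, 1].

    chart_position: 1 is best; normalised against a 100-position chart.
    market_share: fraction of the virtual market (0-1).
    stage_reached: artist career stage 1-6, normalised to 0-1.
    """
    w_chart, w_rev, w_profit, w_share, w_stage = weights
    chart_term = (101 - chart_position) / 100           # higher is better
    revenue_term = min(revenue / 1_000_000, 1.0)          # cap at $1M for scaling
    profit_term = min(max(profit, 0) / 250_000, 1.0)
    stage_term = (stage_reached - 1) / 5
    return (w_chart * chart_term + w_rev * revenue_term +
            w_profit * profit_term + w_share * market_share +
            w_stage * stage_term)

# Example: chart position 12, $400k revenue, $80k profit, 15% share, Stage 3.
print(round(player_score(12, 400_000, 80_000, 0.15, 3), 3))  # 0.432
```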
Structural dynamics probed by X-ray pulses from synchrotrons and XFELs
This review focuses on how short X-ray pulses from synchrotrons and XFELs can be used to track light-induced structural changes in molecular complexes and proteins via the pump–probe method. The upgrade of the European Synchrotron Radiation Facility to a diffraction-limited storage ring, based on the seven-bend achromat lattice, and how it might boost future pump–probe experiments are described. We discuss some of the first X-ray experiments to achieve 100 ps time resolution, including the dissociation and in-cage recombination of diatomic molecules, as probed by wide-angle X-ray scattering, and the 3D filming of ligand transport in myoglobin, as probed by Laue diffraction. Finally, the use of femtosecond XFEL pulses to investigate primary chemical reactions, bond breakage and bond formation, isomerisation and electron transfer is discussed.
Introduction
In order to gain a deeper understanding of how physical, chemical and biological processes work at the atomic level, it is important to know how the structure evolves as a function of time. Whereas static structures can often be determined with laboratory X-ray sources, structural characterization of short-lived intermediates requires short, high-intensity X-ray pulses from synchrotrons or X-ray Free Electron Lasers (XFELs). Examples of the time and length scales that have been studied with short X-ray and laser pulses are shown in Figure 1. These processes span 18 orders of magnitude in time, from attoseconds to seconds. The time scale of a given process is intimately linked to the length scale. For example, conformational changes in proteins evolve on the microsecond to millisecond time scale, whereas structural changes in small molecules, bond breakage/formation, isomerisation and electron transfer, span from femtoseconds to nanoseconds. The primary time scale in chemistry is governed by the vibrational period of bonded atoms, with higher-Z atoms having longer vibrational periods [1]. For example, the oscillation periods in the ground states of H2 and I2 are 10 fs and 156 fs, respectively. Structural changes in molecules propagate at the speed of sound, typically 1000 m/s, which corresponds to 100 fs/Å. The femtosecond time scale became accessible with the advent of ultrafast lasers in the 1980s and spawned the field of femtochemistry. For filming molecular reactions by optical absorption spectroscopy and electron diffraction, the Nobel Prize in Chemistry was awarded to Zewail in 1999 [2,3]. This work was made possible by the development of chirped-pulse amplification, in which a weak femtosecond optical pulse is stretched before amplification to high energy levels and then recompressed back to its original pulse duration. Prior to this innovation, for which Gerard Mourou from Ecole Polytechnique in France was awarded the Nobel Prize in Physics in 2019, the pulse energy achievable with ultrashort pulses was severely limited by nonlinear optical processes that would otherwise destroy the gain medium. The ability to amplify femtosecond pulses to high energy levels is critically important for laser/X-ray experiments, which often require high pulse energies at specific wavelengths to produce a detectable signal [4]. The time resolution at synchrotrons and XFELs is ultimately limited by the X-ray pulse length, which is 100 ps for synchrotrons and 10-100 fs for XFELs.
The longer duration of synchrotron pulses is a consequence of the spread in energy in the electron bunch that arises from the random emission of radiation in the bending magnets of the ring. In linear accelerators, the electrons are accelerated without emission and the electron bunch can therefore be very small in space and short in time. The signal from short pulses is usually not resolved by detectors, and the fastest time resolution can only be obtained by the pump-probe method. In a laser/X-ray pump-probe experiment, the system is initiated rapidly and uniformly by a short laser pulse, which triggers a structural or electronic change, and a delayed X-ray pulse probes the evolution. By varying the delay, the process is probed or filmed by a series of snapshots that can be stitched together into a movie. The instrumental time resolution is the convolution of the X-ray and laser pulse lengths and their relative jitter. The pump-probe principle is shown in Figure 2. The present manuscript is organised as follows. First, we briefly describe the European Synchrotron Radiation Facility (ESRF) in Grenoble, where the first picosecond X-ray experiments were performed in 1994. This is followed by a short presentation of the Extremely Bright Source (EBS) upgrade of the ESRF to a nearly diffraction-limited source, which was completed in January 2020. The pulse intensity and spectral bandwidth of the EBS beam and the potential for new experiments will also be discussed. Then we present some unpublished work from early pump-probe experiments at the ESRF examining the dissociation and recombination dynamics of I2 in liquid CCl4 and the dissociation of CO from myoglobin studied by Laue diffraction, which provided the first 3D movie of a protein at work. The manuscript will finally mention a few XFEL experiments with femtosecond resolution.
Figure 2. Pump and probe principle. The pump pulse (red) triggers a structural change that is probed by a delayed X-ray pulse (blue). When the sample is refreshed between pulses, the diffraction pattern arising from single pump-probe pairs can be accumulated on an area detector to improve the signal-to-noise ratio. In femtosecond serial crystallography, the flux is so high that diffraction patterns from single probe pulses can be indexed. The technique is freed from radiation damage since diffraction is faster than radiation damage.
Pump-probe experiments at ESRF
The European Synchrotron Radiation Facility (ESRF) was the first large synchrotron to produce hard X-rays from undulators. It is a 6.0 GeV ring with a circumference of 844 m. Together with its sister facilities in the USA and Japan, the APS at Argonne and SPring-8 near Kyoto, these third-generation synchrotrons have revolutionised X-ray science in fields from nuclear physics to cultural heritage. This is due to their wide energy range, high radiation intensity, coherence and short pulses. The ESRF has 47 beamlines, where 750 user experiments are conducted per year. Beam time is allocated in peer-review competition. The ESRF is shown in Figure 3. The beamlines are highly specialised, with unique optics, sample environments and detectors. The facility was upgraded in 2014 with 6 long beamlines with nanometre focusing. The extended experimental hall for these beamlines appears in the photo with the reddish roof. In spite of the success of third-generation synchrotrons, the large horizontal emittance is an obstacle to making smaller and more coherent beams.
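For the pump-probe scheme described above, a commonly used rule of thumb, assuming approximately Gaussian pulse profiles and timing jitter, is that the individual widths add in quadrature:

\Delta t_{\mathrm{instr}} \approx \sqrt{\Delta t_{\mathrm{X}}^{2} + \Delta t_{\mathrm{laser}}^{2} + \Delta t_{\mathrm{jitter}}^{2}}

With a 100 ps X-ray pulse, the 1.2 ps laser pulse used at ID09 and a timing jitter of a few picoseconds (an assumed figure), the X-ray pulse length dominates and the instrument response remains essentially 100 ps; at an XFEL, where all three terms can be in the femtosecond range, each contribution matters.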
The ESRF storage ring was upgraded in 2019 to become a diffraction-limited storage ring, the Extremely Bright Source (EBS). In the EBS design, the maximum curvature in the bending sections in the ring is reduced by the use of 7 closely spaced bending magnets rather than by two strongly deflecting electromagnets in classical designs. The extended bend of the orbit reduces the energy loss to synchrotron radiation making the electron beam more monochromatic, which reduces the spatial dispersion everywhere in the ring. The current of the electron beam is kept constant at 200 mA by frequent top-ups, typically every 10 min at present. The cross section of the electron beam is 76 × 16 µm 2 (H × V) in the insertion device sections (FWHM). The brilliance of the photon source has increased by 30-100 depending on the photon energy. The EBS principle is shown in Figure 4 together with the U17 undulator spectra. The EBS pulse length is unchanged, for the time being at least, at 100 ps (FWHM). For more details the reader is referred to the articles by Pantaleo Raimondi, the ESRF accelerator director, who designed the lattice with his colleagues [5,6]. As for the technology for pump-probe experiments at synchrotrons, the first issue to consider is that the repetition rate of wavelength tuneable lasers is much lower that the X-ray pulse frequency. The present picosecond laser on beamline ID09 runs at 1 kHz compared to the synchrotron producing 5.7 MHz pulses in the 16-bunch mode. In practice, the X-ray frequency is lowered by mechanical choppers, which allow to isolate single pulses from special diluted filling modes of the ring. The ID09 choppers reduce the average intensity (ph/s) on the sample by a factor 5700. As a result, it is important to use the intense pink beam (∼100 W) whenever possible. The intensity of the radiation increases with the magnetic field acting on the electrons and to maximise the field, the magnets in the U17 undulator are inside the vacuum of the storage ring. The bunch current is limited to 10 mA/bunch in 4-bunch mode due to the reduction in lifetime at higher currents. The maximum flux from the U17 undulator is 1×10 10 ph at 15 keV for a 10 mA bunch with fully opened slits. At this setting the relative bandwidth δE/E is 4.0% which can be used in small angle scattering experiments. If the slits are set to accept the central cone only, the bandwidth is reduced to 2.0% with 1 × 10 9 ph/pulse. One advantage of reducing the pulse frequency to 1 kHz is that a liquid sample can be exchanged between pulses in a flow cell so that irreversible processes can be studied [7]. The lower frequency also protects the sample from the damage of the full beam. In practice, the white beam at ID09 is first chopped by a pre-chopper, the so called heatload chopper, into 36 µs pulses at 1 kHz. These macro pulses are then chopped by a high-speed chopper (HSC) in front of the sample. The rotor in the HSC is a flat triangle with two slits at the tips of one of the three edges. The HSC opens for 265 ns, short enough for isolating a single pulse from the 4-bunch, 16-bunch and hybrid mode. The isolation of a single pulse from the 16-bunch mode is shown in Figure 5. The details of the chopper system are described by Cammarata et al. [8]. A typical sample environment for pump-probe liquid experiments is shown in Figure 6. The 1.2 ps laser pulse impinges on the sample 15°above the collimator pipe and the delayed 100 ps X-ray pulse is guided to the sample inside a pipe to reduce air scattering. 
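Before turning to detection, the pulse-selection arithmetic described above can be summarised in a few lines. The numbers (16-bunch repetition rate, 1 kHz chopped rate, photons per pulse) are those quoted in the text; the snippet is only illustrative.

```python
# Minimal sketch of the pulse-selection arithmetic at ID09.
ring_rate_16bunch = 5.68e6      # pulses per second delivered by the ring in 16-bunch mode
chopped_rate = 1.0e3            # pulses per second reaching the sample after the choppers

reduction = ring_rate_16bunch / chopped_rate
print(f"Average intensity reduced by a factor of ~{reduction:.0f}")   # ~5700

ph_per_pulse_open = 1e10        # U17, 10 mA bunch, slits fully open (dE/E ~ 4%)
ph_per_pulse_core = 1e9         # central cone only (dE/E ~ 2%)
print(f"Flux on sample: {ph_per_pulse_open * chopped_rate:.1e} ph/s (open slits)")
print(f"Flux on sample: {ph_per_pulse_core * chopped_rate:.1e} ph/s (central cone)")
```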
The diffracted signal is recorded by a CCD detector (Rayonix MX170-HS). As mentioned, the pump-pulse frequency for liquid experiments is 1 kHz, whereas experiments with solid samples run at lower frequencies due to the heat load from the laser. In liquid experiments, the detector is exposed for 1-10 s before readout, accumulating 1000-10000 X-ray pulses in the image. To extract the laser-induced change, the experiment is first done without the laser or, better, at a negative delay, to keep the average sample temperature constant. After azimuthal integration and scaling of the resulting 1D curves at high q near the edge of the detector, the difference curves are calculated. (Figure 6 caption: (a) Flow cell used for wide-angle scattering on ID09/ESRF; the liquid solution is injected into a capillary and exposed to a 1.2 ps laser pulse followed by a delayed 100 ps X-ray pulse, illustrated as a 30 mm long needle in the collimator. (b) Scattering patterns from non-excited and excited solutions; the signal from the solute is superimposed on a large solvent background in a ratio of ∼1:1000 in most cases. (c) Difference patterns for three time delays in the iodine experiment in liquid CCl₄ that is described below.) Only a fraction of the solutes (or unit cells in a crystal) is excited due to the limited laser penetration or the finite pulse energy that the liquid can tolerate. The shape of the difference curve dS(q) is independent of the degree of excitation, since the contribution from non-excited solutes (unit cells) cancels out in the difference. This approximation, however, breaks down in the case of multiphoton or sequential absorption, in which case the laser fluence has to be reduced. The spectra from the ESRF and the EBS are compared in Figure 4b. The spectra are measured through a small 0.5 × 0.5 mm² primary slit 27 m from the source. The gain in intensity from the EBS is a factor of 10. Additionally, the horizontal source size is 60 µm (H), a 50% reduction compared with the old synchrotron. The total intensity with fully opened slits is the same for the new and old lattice. The line shape of the EBS fundamental is freed from the low-energy pedestal, a characteristic synchrotron feature caused by off-axis radiation from the more divergent electron beam. The beam is focused by a Pd-coated toroidal mirror to Ø25 µm. The incidence angle is 2.48 mrad and the mirror cut-off 24 keV. Synchrotron beams are very stable, unlike XFEL beams, for which the position, intensity and spectrum have to be measured pulse-by-pulse and sorted later. Photolysis of small molecules in solution Historically, the first ultrafast photo-triggered reaction was a study of the dissociation and recombination dynamics of I₂ in liquid CCl₄ by K. Eisenthal and his colleagues at Bell Labs in 1974 [9]. They found that most of the dissociated I atoms are captured by the liquid cage and that these atoms recombine in 120 ps while heating the solvent. 15% of the dissociated atoms were found to escape the cage and recombine in microseconds via bimolecular diffusion. The potential energy curve for I₂ is shown in Figure 7 for the ground and lower-energy states of interest. At large atom-atom separations, the force in the X potential is attractive and drives the atoms closer together towards the potential minimum. At shorter distances, the potential is repulsive. The minimum is the equilibrium bond length of the molecule. A classical and quantum description of diatomic molecules is given by Slater [1].
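Returning to the data reduction described above, a minimal sketch of the ON/OFF difference-curve extraction is given below, assuming the azimuthal integration has already been done. The synthetic curves and the 2% intensity drift are invented for illustration; only the high-q scaling range (8-10 Å⁻¹) is taken from the text.

```python
import numpy as np

def difference_curve(q, s_on, s_off, q_scale=(8.0, 10.0)):
    """Scale the laser-OFF curve to the laser-ON curve at high q and subtract.

    In the 8-10 1/A range the scattering is dominated by (quasi-)gas-like terms,
    so ON and OFF should agree there up to a multiplicative factor caused by
    beam-intensity drifts.
    """
    mask = (q >= q_scale[0]) & (q <= q_scale[1])
    scale = np.trapz(s_on[mask], q[mask]) / np.trapz(s_off[mask], q[mask])
    return s_on - scale * s_off

# Illustrative use with synthetic azimuthally integrated 1D curves
q = np.linspace(0.5, 10.0, 500)
s_off = 1e3 * np.exp(-0.1 * q)                 # stand-in for the solvent-dominated pattern
s_on = 1.02 * (s_off + 0.5 * np.sin(2.7 * q))  # small solute signal plus a 2% intensity drift
dS = difference_curve(q, s_on, s_off)
```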
Slater's treatment [1] details the parameters of the Morse potential, the oscillation frequency and the amplitude as a function of energy. The energy of the molecule is quantized in discrete vibrational levels, but at ambient temperature the ground state is essentially 100% occupied. For gas-phase I₂, the bond length is 2.666 Å in the ground state. The oscillation frequency is 6.2 × 10¹² Hz and the vibrational amplitude 0.05 Å [10]. In Figure 7b, the vibrational relaxation from the dissociation energy to the ground state is simulated assuming a time constant of 100 ps. The amplitude of the oscillation becomes smaller as I₂* returns to the ground state. The solvent, through collisions, dissipates the excess heat. The Eisenthal experiment was repeated with X-rays at the ESRF by Plech et al. [11]. In the I₂:CCl₄ experiment, the heavy solvent molecules slow down the recombination to 140 ps, which can be resolved with 100 ps X-ray pulses from a synchrotron. The first experiment did not resolve the contraction of I₂* versus time; rather, the recombination was inferred from the heat deposition in the solvent from the cooling of I₂*(X). In a follow-up experiment by Lee and his colleagues in 2013, laser slicing was used to push the time resolution below 100 ps [12]. Slicing takes advantage of the short 1.2 ps laser pulse and the low timing jitter between the pump and probe. By collecting time delays in steps of 10 ps, from −150 to 150 ps, the shape of the dS(q, t) curves could be fitted against a model of the recombination process. The shape of the dS(q, t) curves is consistent with the exponential cooling decay in a Morse potential. One important observation from the first iodine studies with X-rays is that the difference curves dS(q, t) have two principal components: the signal from the changes in solute/cage structure, the goal of the experiment, and a thermal signal from the change in temperature, pressure and density of the solvent. The excitation produces molecules in high-energy states, and the return to the ground state is accompanied by heat dissipation in the solvent. The effect has a unique X-ray signature dS_s(q), which is solvent specific. Since it is impossible to determine two signals from one measurement, the solvent signal has to be measured separately. That can be done by diluting dye molecules that absorb at the wavelength of the experiment. Alternatively, the heat can be generated by exciting the solvent molecules directly with near-infrared light (1000-2000 nm), usually via overtones of vibrational modes. Most of the common solvents have been characterised thermally by the dye method by Kjaer and his colleagues in 2013 [13], and the near-infrared method was applied by Cammarata and his colleagues [14]. (Figure 7 caption: (a) Potential curves of I₂ for the ground and lower-energy states. The dissociated atoms collide with the solvent cage in ∼300 fs. Recombination in the cage is the dominant process in CCl₄, with 85% of molecules recombining geminately and 15% escaping the cage. Three pathways have been identified, α, β and γ, corresponding to direct vibrational cooling along the X potential to the ground state, the formation of the A/A′ triplet state (S = 1), and cage escape. (b) Schematic presentation of X-state vibrational cooling: the bond is re-formed at the dissociation energy of 1.52 eV in a large-amplitude vibrational state, and it relaxes to the ground state while losing energy to the solvent via cage collisions.)
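The Morse-potential picture sketched in Figure 7 can be reproduced with a few lines of code. The bond length and vibrational frequency of I₂ are the values quoted above; the well depth is taken as the 1.52 eV dissociation energy mentioned in the caption, so the resulting Morse width parameter (~1.8 Å⁻¹) should be read as illustrative rather than spectroscopic-grade.

```python
import numpy as np

EV = 1.602176634e-19          # J
AMU = 1.66053907e-27          # kg

D_e = 1.52 * EV               # well depth, taken from the quoted dissociation energy
r_e = 2.666                   # equilibrium bond length (Angstrom)
mu = 126.904 / 2 * AMU        # reduced mass of I2
omega = 2 * np.pi * 6.2e12    # angular vibrational frequency (rad/s)

# For V(r) = D_e * (1 - exp(-a (r - r_e)))**2 the harmonic frequency is omega = a*sqrt(2 D_e/mu)
a = omega * np.sqrt(mu / (2 * D_e)) * 1e-10   # Morse width parameter in 1/Angstrom
print(f"a = {a:.2f} 1/A")                     # ~1.8 1/A

def morse(r):
    """Morse potential in eV as a function of bond length r in Angstrom."""
    return (D_e / EV) * (1.0 - np.exp(-a * (r - r_e)))**2

r = np.linspace(2.2, 5.0, 200)
V = morse(r)   # can be plotted to reproduce the shape of the X-state curve
```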
When solvent hydrodynamics is included in the analysis, the time dependence of the temperature and density can be determined independently from the low-q part of dS(q), which serves as a check of the overall consistency of the model. The temperature and pressure versus time for I₂ in CCl₄ are shown in Figure 8 as an example. The temperature jump at 1 ps is from the first collisions of I atoms with the cage. The temperature is not defined in the early out-of-equilibrium states; it is calculated here from the average energy uptake of the solvent. After 1 ps, the temperature rises from the cooling of I₂*(X), as illustrated in Figure 7b. After 200 ps, the slope change is due to the recombination of the 2.7 ns A/A′ state. After 10 ns, the solvent expands, accompanied by a drop in temperature. The expansion proceeds at the speed of sound until the pressure returns to ambient in about 1 µs. The pressure versus time profile is shown in Figure 8b. The theory of heat dissipation is described in more detail in the work by Wulff et al. [15]. Thus, time-resolved X-ray scattering adds information on the reaction mechanism by being sensitive to all the constituents in the sample. The structural sensitivity of X-rays to atom-atom distances in molecules is illustrated in Figure 9. The diatomic molecule is exposed to a monochromatic plane wave. The scattering from the two atoms produces spherical secondary waves that interfere. From the intensity profile on the detector, the change in the atom-atom distance can be deduced, even for a random ensemble of molecules. The principle is the same as in Young's two-slit experiment. The averaging over all orientations produces the softly modulated pattern shown in Figure 9b. Calculating the scattering from isolated gas molecules is the first step in understanding the scattering curves during a chemical transformation. The gas scattering is given by the Debye function [16]: $S(q)=\sum_{i}\sum_{j}f_i(q)\,f_j(q)\,\frac{\sin(qr_{ij})}{qr_{ij}}$. Here f_i(q) and f_j(q) are the atomic form factors of atoms i and j, and r_ij is the distance between them. The form factors are the sine-Fourier transform of the electron density of the respective atoms. f(q) is approximately a Gaussian function with a half-width of 2π/r, where r is the atomic radius. The expression applies to an ensemble of gas molecules randomly oriented in space. It should be noted that when X-rays are used to measure positions in a molecule, it is the position of the full electron density that is probed, unlike with neutrons, which probe the nuclear positions. The Fourier inversion of the X-ray scattering to real space therefore produces peaks and valleys of finite width set by the size of the atoms. The Debye functions S(q) and dS(q) from structural changes in I₂ are shown in Figure 10, and the experimental curve dS(q, 460 ps) for I₂ in CCl₄ is shown in Figure 11. The solvent background is ∼1000 times greater than the solute signal due to the low solute concentration (mM). It is particularly important to measure S(q, t) well in the high-q range 8-10 Å⁻¹ for the scaling of laser-ON and laser-OFF images. The scaling is based on the fact that, in that range, S(q, t) can be calculated by the Debye function for the excited solutes and solvent. Expressed differently, the liquid appears as a collection of gas molecules at high q. In the dS(q, 460 ps) curve for the I₂:CCl₄ solution, the red curve is the gas-phase part, as explained in Figure 10b.
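For a homonuclear diatomic such as I₂, the Debye function reduces to a one-line expression, which makes the origin of the dS(q) oscillations easy to see. In the sketch below the iodine form factor is replaced by the crude Gaussian approximation mentioned above; real analyses use tabulated form factors.

```python
import numpy as np

def debye_two_atoms(q, r_ij, f):
    """Debye scattering for a homonuclear diatomic: S(q) = 2 f^2 [1 + sin(q r)/(q r)]."""
    x = q * r_ij
    return 2.0 * f**2 * (1.0 + np.sinc(x / np.pi))   # np.sinc(y) = sin(pi y)/(pi y)

q = np.linspace(0.01, 10.0, 1000)

# Crude Gaussian stand-in for the iodine form factor, following the remark above that
# f(q) is roughly Gaussian with a half-width of ~2*pi/r_atom.
Z_I, r_atom = 53.0, 1.4                          # electrons, approximate atomic radius (A)
f_I = Z_I * np.exp(-(q * r_atom / (2 * np.pi))**2)

S_X = debye_two_atoms(q, 2.67, f_I)              # ground state X
S_A = debye_two_atoms(q, 3.05, f_I)              # A/A' state, elongated bond
dS = S_A - S_X                                   # gas-phase difference signal for X -> A/A'
```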
In the generalisation of the Debye equation to a solution, the structure of the liquid solute/solvent mixture is expressed by statistical atom-atom distribution functions g_ij(r) that represent the fluctuating structure in a liquid. The g_ij(r) functions have two parts: the sharp and well-defined intramolecular part, and a broad extramolecular part at larger r that describes the bulk solvent and the solute cage. The scattering is calculated from the g_ij(r) functions from MD via $S(q)=\sum_{i,j}f_i(q)\,f_j(q)\left[N_i\,\delta_{ij}+\frac{N_iN_j}{V}\int_0^{\infty}4\pi r^2\,[g_{ij}(r)-1]\,\frac{\sin(qr)}{qr}\,{\rm d}r\right]$. The expression is a generalisation of the Zernike-Prins formula for monoatomic liquids [17] and molecular liquids, as described in the book by Hansen and McDonald [18]. N_i is the number of atoms of kind i, V the volume and δ_ij the Kronecker delta, with δ_ii = 1 and δ_ij = 0 for i ≠ j. For a time-resolved experiment, the starting solute structures are calculated by density functional theory (DFT), including point charges on the atoms that are important for the solvent interaction. The next step is to perform an MD simulation with the DFT candidate structures for the ground and excited states. MD calculations assume thermal equilibrium, so only quasi-stationary structures can be approximated in this way. MD provides the g_ij(r) functions, including the cage. The Zernike-Prins equation is then used to calculate the change in scattering dS(q). A more intuitive presentation of the structural changes is obtained by the sine-Fourier transform $\Delta S[r,t]\propto\int_0^{\infty}\frac{q\,\Delta S(q,t)\,\sin(qr)}{\big[\sum_i f_i(q)\big]^2}\,{\rm d}q$. The denominator in the integral is the sharpening function, which partially corrects for the broadening from the atomic form factors that, as mentioned, probe the size of the atoms, unlike the g_ij functions, which measure the positions of the nuclei. The notation ∆S[r], i.e. with square brackets, distinguishes it from the sister function ∆S(q) from which it is derived. ∆S[r] is an X-ray-biased measure of the change in the radial electron density around an average excited atom. High-Z atoms are amplified in X-ray scattering, unlike for neutrons. The g_ij(r) functions for the I₂/CCl₄ solution were calculated with the MD software Moldy using 512 rigid CCl₄ molecules and one I₂ molecule. The g_ij(r) functions that probe the cage for I₂(X) are shown in Figure 12(a). The cage radius is given by g(I-Cl), since I is surrounded by Cl atoms in CCl₄. The first peak in g(I-Cl) is at 3.93 Å. The first coordination shell is at 5.10 Å, as defined by the first peak in g(I-C). The change in the cage structure of the reaction products is examined in Figure 11b. The g(I-Cl) distributions for the X and A/A′ states are nearly identical, in position and amplitude, as expected given the modest bond elongation of ∼0.38 Å in the A/A′ state. In contrast, the I₂(X) → 2I and I₂(A/A′) → 2I transitions lead to a 27% increase in the Cl population around I. The number of I-Cl pairs increases after dissociation, as Cl fills the space vacated by I. The real-space change ∆S[r] is shown in Figure 13 for the gas- and solution-phase transitions. In the simple gas-phase transition I₂(X) → I₂(A/A′), the bond expands from 2.67 to 3.05 Å. That gives a negative peak for the depletion of the ground state and a positive creation peak. The change in cage structure follows that trend, i.e. the cage radius is slightly larger for A/A′. The gas-phase reaction I₂ → 2I, with the I atoms infinitely apart, has a single depletion peak at the I₂(X) bond length. In solution, the entering Cl atom produces a positive peak at 3.9 Å.
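Numerically, the transformation from ∆S(q) to ∆S[r] is a straightforward sine-Fourier integral. The sketch below leaves the sharpening function as an optional argument (for example a squared sum of form factors) and uses an arbitrary normalisation, since the exact convention is not reproduced here.

```python
import numpy as np

def delta_S_r(q, dS_q, r, sharpening=None):
    """Numerical sine-Fourier transform of q*dS(q)*sin(qr), optionally divided by a
    sharpening function of q. The normalisation is arbitrary in this sketch.
    """
    if sharpening is None:
        sharpening = np.ones_like(q)
    integrand = q[None, :] * dS_q[None, :] * np.sin(np.outer(r, q)) / sharpening[None, :]
    return np.trapz(integrand, q, axis=1)

# Illustrative use with the I2 difference curve from the previous sketch:
# r = np.linspace(1.0, 8.0, 300)
# dS_r = delta_S_r(q, dS, r, sharpening=f_I**2)
```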
In summary, time-resolved wide-angle scattering with synchrotron and XFEL radiation is a powerful method for structural studies of molecular reactions in solution. The X-rays probe all pairs of atoms, and that provides precious information about the structural dynamics. When the X-ray data are taken over a wide q range, the excited solute structures and the hydrodynamic parameters of the solvent medium can be determined from models combining DFT and MD. Protein dynamics in solution Many proteins cannot be crystallised, and time-resolved wide-angle scattering in solution offers a way to study large-amplitude conformational changes. The low protein concentration (a few mM or less) is a challenge, and the large size, about a thousand times larger than a small molecule, complicates the analysis. Recent TR-WAXS data have demonstrated that medium- and large-scale conformational changes in some photosensitive proteins can be resolved and checked against model-predicted scattering patterns. The TR-WAXS method for proteins was pioneered by Marco Cammarata and his colleagues on human haemoglobin (Hb), a tetrameric protein with two identical αβ dimers [19]. HbCO in solution is known to have two quaternary structures, a ligated stable R (relaxed) state and an unligated stable T (tense) state. The tertiary and quaternary changes of HbCO, initiated by a ns green laser pulse, were probed by TR-WAXS [20,21]. The analysis used the allosteric kinetic model for Hb. It was found that the R-T transition takes 1-3 µs, which is shorter than observed by optical spectroscopy. In Figure 14a, the gas-phase scattering from the crystal structures of HbCO and Hb (deoxyHb) is shown together with myoglobin and a water molecule. In Figure 14b, the relative change from the transition HbCO → Hb is calculated for a 1 mM concentration. Note the good signal-to-background ratio between 0.1 and 1 Å⁻¹ due to the weak water scattering in that q range. The structures of the proteins and water are shown in Figure 14c. (Figure 14 caption: Calculated Debye scattering for haemoglobin (HbCO and Hb), myoglobin (Mb) and water. The calculations were performed with CRYSOL using the crystal structures adapted to the solution phase. The protein signal is much stronger than the water background in the low-q limit. (b) Relative change of the protein signal to the water background for the R-to-T transition (HbCO → Hb) for an excited-state concentration of 1 mM. (c) Snapshots of the molecular structures used in the simulations [18].) The optically induced tertiary relaxation of myoglobin and the refolding of cytochrome c were also studied with TR-SAXS/WAXS. The advantage of TR-SAXS/WAXS over time-resolved X-ray protein crystallography is that it can probe irreversible reactions and large-scale conformational changes that cannot take place within a crystal [22][23][24]. Although the scattering patterns from proteins in solution contain structural information, the information is insufficient to reconstruct the structure in atomic detail. In this respect the use of structures from X-ray crystallography and NMR as a starting point is promising, and the development of a more advanced analysis is in progress. For more information on these techniques, the reader is referred to the recent articles by Bjorling et al. [25] and Ravishankar et al. [26]. Filming a protein at work by Laue diffraction Myoglobin (Mb) is a ligand-binding heme protein whose structure was the first to be solved by X-rays in 1958 [27,28].
Its Fe atom reversibly binds small ligands such as O₂, CO and NO, which are readily photodissociated from the heme. The structural changes triggered by ligand photolysis were first filmed with near-atomic resolution at the ESRF in 1996 via time-resolved Laue diffraction by Keith Moffat (University of Chicago), Michael Wulff (ESRF) and their co-workers [29]. Diffraction images were generated by single 100 ps X-ray pulses following photolysis of MbCO. The structure of MbCO is shown in Figure 15a, and a Laue pattern from a monoclinic crystal (P21) with a linear size of 100-200 µm is shown in Figure 15b. The work was done using monoclinic crystals (P21) with a linear size of 100-200 µm, as shown in Figure 16a. The packing of the unit cells is shown in Figure 16b for the hexagonal lattice. The latter shows the arrangement of the unit cells and the important space between them that is filled with water. The surrounding water allows the protein in the crystalline state to undergo modest structural changes relatively free of lattice constraints. The photolysis of a protein crystal is delicate and should be done without damaging the crystal, while still exciting enough unit cells to give a detectable signal. For example, the absorption gradient of the laser beam in the crystal has to be small to avoid thermal bending and thus broadening of the diffraction spots. The unit-cell concentration is high in the monoclinic structure (49.3 mM), so the laser wavelength has to be chosen judiciously to penetrate the crystal. The MbCO absorption spectrum has three features, the Soret band at 420 nm and two weaker bands at 550 and 585 nm, the Q bands α and β. The penetration depth is 1.5 µm on the Soret band and 15 µm on α and β. However, by exciting on the shoulder of the β band at 625 nm, where dissociation still works, the absorption length is 420 µm, a good match to the crystal dimensions in the experiment [30,31]. The fraction of unit cells photolyzed by a 1 mJ pulse of 0.5 mm diameter was ∼20-30%. The crystals were mounted in capillaries in a CO atmosphere, and 16-32 images from single pulses were accumulated on the detector before readout. The crystals were rotated in steps of 3°, from 0° to 180°, to fully sample reciprocal space. In some cases, the crystals would be damaged after some time, then replaced and the data merged later. The diffraction pattern is sensitive to changes inside the unit cells. When the non-excited starting structure is known from the PDB database, the measured intensity changes dI(hkl) make it possible to determine the change in electron density via the Fourier difference method [31]. The first experiment used the spectrum from a wiggler (W70) covering 7-28 keV. The wiggler was replaced in 2000 by the narrow-band undulator U17, which increased the SNR due to a much lower diffuse background from the water in the protein crystals. A second spin-off of the narrow band is the lower number of spatial overlaps in the images, from the well-defined relation between the scattering angle and the d-spacing (Bragg's law) provided by the narrow 5% bandwidth spectrum at 15 keV. From the measurements of 50,000 intensity changes ∆I(hkl), the Fourier difference maps were derived, as shown in Figure 17. The difference density is superimposed on the CO-ligated initial structure in white: red volumes are due to loss of density, blue is from a gain. CO is seen to move to the solvent via two interstitial cavities. Initially, it is trapped in a small cavity next to Fe for 5 ns.
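Before following the CO migration further, the ~20-30% photolysis fraction quoted above can be rationalised with a simple Beer-Lambert estimate. Pulse energy, beam diameter, absorption length and unit-cell concentration are the values given in the text; the crystal thickness and the photolysis quantum yield are assumptions introduced only for this illustration.

```python
import numpy as np

h, c, N_A = 6.626e-34, 2.998e8, 6.022e23

pulse_energy = 1e-3            # J
wavelength = 625e-9            # m
beam_diameter = 0.5e-3         # m
abs_length = 420e-6            # m, penetration depth at 625 nm
thickness = 150e-6             # m, assumed crystal thickness (100-200 um in the text)
conc = 49.3                    # mol/m^3 (= 49.3 mM unit-cell concentration)
quantum_yield = 0.25           # assumed photolysis yield per absorbed photon

n_photons = pulse_energy / (h * c / wavelength)
area = np.pi * (beam_diameter / 2)**2
n_molecules = conc * area * thickness * N_A
absorbed = n_photons * (1.0 - np.exp(-thickness / abs_length))

print(f"photolysed fraction ~ {quantum_yield * absorbed / n_molecules:.0%}")  # ~25-30%
```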
The CO hole is partially filled by a shift in position of the distal His-64, which blocks geminate recombination. The new CO position also pushes the Ile-107 residue slightly. Note the tilt of the heme plane and the Fe motion out of the plane. The doming is from the change in coordination of Fe from 6 to 5 after dissociation. Note that red and blue volumes are side-by-side, consistent with small rigid translocations. In the 30 ns map, the first cavity is empty and CO is not resolved. In the 300 ns map, CO reappears on the proximal side in a cavity that is known from studies of Xe gas pockets in myoglobin under pressure. The Fe heme doming persists in the absence of the Fe-CO bond. CO diffuses to the solvent and returns to Fe via random diffusive motion on the ms time scale. The first Laue work on sperm whale myoglobin was followed by studies on mutants led by Anfinrud [32] and Brunori [33]. As the CO is on the distal side of the heme at early time delays, the protein function is strongly influenced by the amino acid side chains around that site. The L29F mutant of MbCO, where leucine Leu29 is substituted by phenylalanine, exhibits 1000 times faster dynamics [32]. (Figure 17 caption: The difference density is superimposed on the initial state; red represents loss of density, blue a gain in density. CO is captured in the "docking site" on the distal side of the heme. Fe has moved 0.2 Å out of the heme plane in response to the change in coordination from 6 to 5. Note how the distal and proximal histidines move in response to the new CO position; that structural change blocks CO geminate recombination to Fe. (c) At 30 ns the docking-site population is decreasing. (d) After 300 ns, CO is accumulating in a pocket on the proximal side, from where it diffuses into the solvent. After 1 ms, CO returns to the binding site; that pathway is not observed due to the loss of synchronization in the reverse reaction. The maps are rendered with the software O 7.0 (Alwyn Jones, Uppsala University).) XFEL experiments The Linac Coherent Light Source (LCLS) at SLAC in Stanford was the first hard X-ray FEL facility to open, in 2009. The LCLS was followed by SACLA at SPring-8 in Japan in 2011, SwissFEL in Villigen (CH) in 2016 and the European EuXFEL in Schenefeld (D) in 2017. In XFELs, bunches of electrons are accelerated to 5-15 GeV in a linear accelerator and injected into long undulators. After a certain point in the undulator, the X-ray field induces a density modulation in the bunch, the SASE effect, which amplifies the intensity by orders of magnitude. The electrons in a micro bunch emit in phase, like super electrons. The intensity is then proportional to N_e² (electrons per micro bunch) × N_p² (number of undulator periods). The energy range is currently limited to 1-16 keV, but higher energies will become available at the EuXFEL in the near future. For a review of the SASE principle, the reader is referred to the article by Margaritondo et al. [34]. XFEL pulses are 10³ to 10⁴ times shorter than synchrotron pulses, i.e. 10-100 fs, and the intensity can reach 1 × 10¹² photons/pulse in a 0.1% BW at 12 keV. The beam is less stable due to the stochastic nature of SASE, and it is important to record the beam parameters, the timing jitter in particular, for every pulse. To exploit the short XFEL pulses, the laser/X-ray delays have to be sorted and averaged after the experiment. Femtosecond pulses are perfect for filming bond breakage and bond formation in chemical reactions, isomerization, electron transfer reactions, coherent wave packet motion etc.
Diffraction patterns from micro and nano-sized proteins can be acquired with single XFEL pulses. The number of diffraction spots is large enough for indexing, i.e. determining the orientation of randomly oriented crystals, and the pulse is so short that the diffraction can be recorded before the crystal is destroyed by the Coulomb explosion as described by Neutze et al. [35]. The term "diffraction before destruction" is the principle behind Serial Femtosecond Crystallography (SFX). The crystals are injected in the XFEL beam from a jet and thousands to millions of crystals are exposed randomly. By merging the scaled intensity data from thousands of images, the structure can be determined. There are two major advantages of SFX: very small crystals are easier to produce and the structure can be determined at room temperature rather than at cryogenic temperature, where the mobility of the protein is greatly reduced. The reader is referred to the review by Chapman et al. for more details [36]. The SFX technique is applicable to pump-probe work on light sensitive proteins as well. Schlichting and her co-workers studied the helix dynamic following photo dissociation of CO from myoglobin MbCO at the LCLS using SFX at 6.8 keV with a resolution of 1.8 Å [37]. The study revealed that the C, F and H helices move away from the heme whereas the E and A helices move towards it in less than 500 fs, confirming the results previously obtained with TR-WAXS measurements at the LCLS on MbCO [38]. One of the first scattering experiments probing a femtosecond chemical reaction in solution was performed by Hyotcherl Ihee from KAIST in Korea in collaboration with Shin-ichi Adachi, KEK, using the SACLA XFEL at SPring8 [39]. They studied the formation of a gold trimer [Au(CN) − 2 ] 3 . In the ground state, the Au atoms in three molecules are weakly bonded by van der Waals interactions. Upon photo activation, an Au electron is excited to a bonding orbital producing a covalent Au-Au bond with a linear geometry with a lifetime of 500 fs. The Au bonds shorten in a second 1.6 ps step. Finally this linear conformation combines with a free Au(CN) − 2 in 3 ns to form a tetramer. The reaction is shown in Figure 18. The first X-ray spectroscopy experiment from an XFEL was reported by Henrik Lemke and Marco Cammarata and their co-workers in 2013 [40]. They performed a XANES study of the spin-cross-over complex [Fe(bpy) 3 ]Cl 2 in a 50 mM aqueous solution using 100 fs pulses from the LCLS. The position of the Fe absorption edge depends on the Fe-N distance from which they deduce that the switch from the low-spin (LS) to the high-spin (HS) state takes 160 fs. The HS state subsequently decays to the LS state in 650 ps. The experiment was done with fluorescence detection and the XFEL white beam was monochromatized with a diamond monochromator. The K-edge was scanned over 45 eV, the spectral width of the white beam. The main challenge was timing drifts, which could be up to 100 fs per hour. That problem was later solved by time stamping the X-ray pulse followed by sorting the actual delay [41], which allows to exploit the full potential of the short XFEL pulses. In spite of the high intensity and short pulse from XFELs, synchrotrons will remain important for slower dynamics, from 100 ps to seconds, due to the higher beam stability, wider energy range and easier accessibility. It is also important that users have enough time to get to know the beamline and optimise the experimental parameters. 
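The "time stamping plus sorting" step mentioned above amounts to re-binning the shots by their measured, rather than nominal, delay. A minimal sketch is given below; the bin width, jitter level and per-shot signal are placeholders, and real pipelines additionally filter on the intensity and spectrum of each SASE pulse.

```python
import numpy as np

def rebin_by_delay(nominal_delay, jitter, signal, bin_width=25e-15):
    """Sort shot-by-shot data into time bins using the measured per-pulse jitter.

    nominal_delay, jitter and bin_width are in seconds; signal holds one row of
    data per shot. This only sketches the time-stamping/sorting step.
    """
    true_delay = nominal_delay + jitter
    bins = np.round(true_delay / bin_width) * bin_width
    out = {}
    for b in np.unique(bins):
        out[b] = signal[bins == b].mean(axis=0)   # average all shots falling in this bin
    return out

# Illustrative use: 10,000 shots at a nominal 500 fs delay with ~80 fs RMS jitter
rng = np.random.default_rng(0)
shots = rng.normal(size=(10_000, 1))               # stand-in for per-shot signals
jitter = rng.normal(0.0, 80e-15, size=10_000)      # as measured by a timing tool
binned = rebin_by_delay(np.full(10_000, 500e-15), jitter, shots)
```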
The beam parameters for a synchrotron and an XFEL are compared in Table 1 for ID09 at the ESRF and FXE at the EuXFEL. The pulse structure at the EuXFEL consists of 10 Hz macro-pulse trains, each containing up to 2700 sub-pulses separated by 220 ns. The Large Pixel Detector at the EuXFEL and the excitation laser can be synchronised to this time structure. It is challenging, however, to exchange the sample in the 220 ns dark period between pulses within a train, which is often necessary since the sample might be destroyed by the pulses.
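The pulse-structure numbers above translate into the following simple arithmetic; the values are those quoted in the text.

```python
# Pulse-structure arithmetic for the EuXFEL numbers quoted above (10 Hz trains,
# up to 2700 pulses per train, 220 ns spacing); purely illustrative.
trains_per_s = 10
pulses_per_train = 2700
spacing = 220e-9                               # s

print(pulses_per_train * trains_per_s)         # up to 27,000 pulses per second
print(1.0 / spacing / 1e6, "MHz intra-train")  # ~4.5 MHz repetition rate inside a train
print(pulses_per_train * spacing * 1e3, "ms")  # each train lasts ~0.6 ms
```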
Magnetar Engines in Fast Blue Optical Transients and Their Connections with SLSNe, SNe Ic-BL, and lGRBs We fit the multi-band lightcurves of 40 fast blue optical transients (FBOTs) with the magnetar engine model. The mass of the FBOT ejecta and the initial spin period and polar magnetic field of the FBOT magnetars are respectively constrained to $M_{\rm{ej}}=0.18^{+0.52}_{-0.13}\,M_\odot$, $P_{\rm{i}}=9.4^{+8.1}_{-3.9}\,{\rm{ms}}$, and $B_{\rm p}=7^{+16}_{-5}\times10^{14}\,{\rm{G}}$. The wide distribution of the value of $B_{\rm p}$ spreads the parameter ranges of the magnetars from superluminous supernovae (SLSNe) to broad-line Type Ic supernovae (SNe Ic-BL; some are observed to be associated with long-duration gamma-ray bursts), which are also suggested to be driven by magnetars. Combining FBOTs with the other transients, we find a strong universal anti-correlation, $P_{\rm{i}}\propto{M_{\rm{ej}}^{-0.45}}$, indicating that they could share a common origin. To be specific, it is suspected that all of these transients originate from the collapse of extremely stripped stars in close binary systems, but with different progenitor masses. As a result, FBOTs distinguish themselves by their small ejecta masses, with an upper limit of ${\sim}1\,M_\odot$, which leads to an observational separation in the rise time of the lightcurves at $\sim12\,{\rm d}$. In addition, the FBOTs together with SLSNe can be separated from SNe Ic-BL by an empirical line in the $M_{\rm peak}-t_{\rm rise}$ plane corresponding to an energy requirement of a mass of $^{56}$Ni of $\sim0.3M_{\rm ej}$, where $M_{\rm peak}$ is the peak absolute magnitude of the transients and $t_{\rm rise}$ is the rise time. INTRODUCTION In the past decade, several unique, fast-evolving and luminous transients have been discovered, thanks to the improved cadence and technology of wide-field surveys. These transients are usually quite blue (g − r ≲ −0.2) and luminous at peak (with an absolute magnitude of −16 ≳ M_peak ≳ −23), and their lightcurves show a fast rise and decline with a duration shorter than about ten days. They are hence named fast blue optical transients (FBOTs; e.g., Drout et al. 2014; Inserra 2019). Since Drout et al. (2014) reported a sample of FBOTs identified from a search within the Pan-STARRS1 Medium Deep Survey (PS1-MDS) data, observations of ∼100 FBOT candidates have been presented (e.g., Arcavi et al. 2016; Whitesides et al. 2017; Pursiainen et al. 2018; Tampo et al. 2020; Ho et al. 2019, 2020, 2021). The event rate density of FBOTs is ∼1-10% of that of local core-collapse supernovae (SNe; Drout et al. 2014; Pursiainen et al. 2018; Ho et al. 2021). The progenitor and energy source of FBOTs are still very unclear. Two different classes of models have been proposed in the literature to explain the observational properties of FBOTs. The first class broadly contains binary neutron star (BNS), binary white dwarf (BWD) or NS-WD mergers (e.g., Yu et al. 2013, 2015, 2019b; Zenati et al. 2019); accretion-induced collapse (AIC) of a WD (e.g., Kasliwal et al. 2010; Brooks et al. 2017; Yu et al. 2015, 2019a); SN explosions of ultra-stripped progenitor stars (e.g., Tauris et al. 2013, 2015, 2017; Suwa et al. 2015; Hotokezaka et al. 2017; De et al. 2018; Sawada et al. 2022), including electron-capture SNe (e.g., Moriya & Eldridge 2016; Mor et al. 2022); common envelope jets SNe (Soker et al. 2019; Soker 2022); and tidal disruption of a star by a NS or a black hole (e.g., Liu et al. 2018; Perley et al.
2019; Kremer et al. 2021; Metzger 2022). The common feature of this class of models is that the fast evolution of FBOTs is attributed to a small ejecta mass, and the luminous brightness of FBOTs is attributed to additional energy injection from a central engine source besides the radioactive decay power of $^{56}$Ni (e.g., Drout et al. 2014; Pursiainen et al. 2018). The extra energy source could be a spinning-down NS or an accreting black hole (i.e., in the tidal disruption models). The second class of models invokes shock breakouts from a dense stellar wind (e.g., Chevalier & Irwin 2011; Ginzburg & Balberg 2012; Drout et al. 2014); interaction between the ejecta from a massive star and dense circumstellar material (CSM; e.g., Rest et al. 2018; Fox & Smith 2019; Leung et al. 2020; Xiang et al. 2021; Pellegrino et al. 2022); and jet-cocoon interaction and emission (Gottlieb et al. 2022). FBOTs in this class of models are attributed to the breakout of the accumulated energy in the shock. By fitting the bolometric light curves of FBOTs with the CSM interaction plus $^{56}$Ni decay model, Xiang et al. (2021) and Pellegrino et al. (2022) found that, in order to account for the rapid and luminous light curves, the mass-loss rates of the progenitors should be up to $\sim1\,M_\odot\,{\rm yr^{-1}}$, which is, however, inconsistent with the limits obtained from the radio observations of FBOTs. Recent studies revealed that the hosts of FBOTs are exclusively star-forming galaxies (Drout et al. 2014; Pursiainen et al. 2018; Pellegrino et al. 2022), whose star-formation rates and metallicities are consistent with those of extreme stripped-envelope explosions (Wiseman et al. 2020), including hydrogen-poor Type Ic superluminous SNe (SLSNe; e.g., Lunnan et al. 2014; Chen et al. 2017), broad-lined Type Ic SNe (SNe Ic-BL; e.g., Arcavi et al. 2010), and long-duration gamma-ray bursts (lGRBs; e.g., Krühler et al. 2015; Perley et al. 2016). Furthermore, it is worth noticing that these extreme stripped-envelope explosions are widely believed to harbor a long-lived millisecond magnetar (Dai & Lu 1998; Wheeler et al. 2000; Zhang & Mészáros 2001; Yu et al. 2010; Kasen & Bildsten 2010; Woosley 2010; Piro & Ott 2011; Inserra et al. 2013; Zhang 2018), which can lose its rotational energy via spin-down processes to provide an additional energy injection for the explosion. Therefore, in view of the similarity between the host galaxies of FBOTs and those of SLSNe, SNe Ic-BL, and lGRBs, it would be reasonable to suspect that FBOTs are also powered by a magnetar engine. Indeed, the existence of such an engine can provide a good explanation for the lightcurves of some FBOTs (Yu et al. 2015; Hotokezaka et al. 2017; Rest et al. 2018; Margutti et al. 2019; Wang et al. 2019; Sawada et al. 2022). In addition, the benchmark FBOT event AT2018cow, at a nearby luminosity distance of ≈60 Mpc (Prentice et al. 2018; Perley et al. 2019), provided an opportunity for broad-band observations from radio to γ-rays (Rivera Sandoval et al. 2018; Ho et al. 2019; Margutti et al. 2019; Huang et al. 2019). In particular, the radio observations revealed a dense magnetized environment around AT2018cow, which plausibly supports the existence of a newly formed magnetar (Mohan et al. 2020). Based on the above considerations, in this paper, we collect a large number of FBOTs from the literature and fit their light curves within the framework of the magnetar engine model.
The obtained parameters are further compared with those of SLSNe and SNe Ic-BL associated/unassociated with lGRBs. Previously, Yu et al. (2017) suggested a unified scenario to connect SLSNe and SNe Ic-BL. Therefore, in this paper, we will investigate whether such a unified understanding can be extended to the FBOT phenomena, which could provide a key clue to the physical origin of the FBOTs. Sample Selection The criteria for our sample selection are as follows: (1) a reported rise time above half-maximum of $t_{\rm rise}\lesssim10$ d; (2) a spectroscopic redshift measurement from the host galaxy spectral features; (3) published lightcurves observed in at least two filters; and (4) at least some data available close to the peak. Our FBOT sample contains 22 robust cases whose rises were recorded in survey projects. For these events, sufficient data on the rise and decline phases of the lightcurve pose a strict constraint on the model parameters, especially the ejecta mass. The sample also includes 18 events without any detection during the rise phase of the lightcurve. Due to the lack of observational epochs before the peak, the model parameters of these events depend on the rise time we set. Magnetar Engine Model As usual, the spin-down luminosity of a magnetar can be generally expressed according to the luminosity of magnetic dipole radiation as $L_{\rm sd}(t)=L_{\rm sd,i}\,(1+t/t_{\rm sd})^{-2}$, where $L_{\rm sd,i}=10^{47}\,{\rm erg\,s^{-1}}\,P_{\rm i,-3}^{-4}B_{\rm p,14}^{2}$ is the initial value of the luminosity, $t_{\rm sd}\simeq2\times10^{5}\,{\rm s}\,P_{\rm i,-3}^{2}B_{\rm p,14}^{-2}$ is the spin-down timescale, and $P_{\rm i}$ and $B_{\rm p}$ are the initial spin period and polar magnetic field strength of the magnetar, respectively. The total rotational energy of the magnetar can be written as $E_{\rm rot}=L_{\rm sd,i}t_{\rm sd}=2\times10^{52}\,{\rm erg}\,P_{\rm i,-3}^{-2}$. Here the conventional notation $Q_x=Q/10^{x}$ is adopted in cgs units. We adopt the common analytic solution derived by Arnett (1982) to calculate the bolometric luminosity of an FBOT powered by a magnetar as $L_{\rm bol}(t)=e^{-t^{2}/t_{\rm diff}^{2}}\int_{0}^{t}\frac{2L_{\rm sd}(t')\,t'}{t_{\rm diff}^{2}}\,e^{t'^{2}/t_{\rm diff}^{2}}\,{\rm d}t'\,\big(1-e^{-At^{-2}}\big)$, where $t_{\rm diff}$ is the photon diffusion timescale of the FBOT ejecta and $A$ is the leakage parameter. For an ejecta of mass $M_{\rm ej}$ and velocity $v_{\rm ej}$, the diffusion timescale is given by $t_{\rm diff}=(3\kappa M_{\rm ej}/4\pi v_{\rm ej}c)^{1/2}$, where $\kappa$ is the optical opacity. Here the dynamical evolution of the ejecta is ignored. The kinetic energy of the ejecta is assumed to be directly determined by the rotational energy of the magnetar, so that the ejecta velocity can be estimated as $v_{\rm ej}\simeq(2E_{\rm rot}/M_{\rm ej})^{1/2}$. This assumption is viable as long as $t_{\rm sd}\lesssim t_{\rm diff}$ and the initial value of the kinetic energy is not much higher than $10^{50}$ erg. By considering that the energy injected into the ejecta could be in the form of high-energy photons, we write the leakage parameter of the ejecta as $A=3\kappa_\gamma M_{\rm ej}/4\pi v_{\rm ej}^{2}$, where $\kappa_\gamma$ is the opacity for gamma-rays. The frequency dependence of $\kappa_\gamma$ is ignored for simplicity. Finally, in order to calculate the monochromatic luminosity of the FBOT emission, we define a photosphere temperature as $T_{\rm ph}=\max\big[\big(L_{\rm bol}/4\pi\sigma_{\rm SB}v_{\rm ph}^{2}t^{2}\big)^{1/4},\,T_{\rm floor}\big]$, with the Stefan-Boltzmann constant $\sigma_{\rm SB}$, floor temperature $T_{\rm floor}$, and photospheric velocity $v_{\rm ph}\simeq v_{\rm ej}$ (which is a standard approximation in the literature). Lightcurve Fitting By adopting a Markov Chain Monte Carlo method, we use the magnetar engine model described in Section 2.2 with the emcee package (Foreman-Mackey et al. 2013) to fit the multi-band lightcurves of the collected FBOTs. For the Milky Way extinction, we take values from the dust maps of Schlafly & Finkbeiner (2011) and fix $R_V=3.1$.
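Before turning to the treatment of host extinction and the fitting priors, the model just described can be condensed into a short numerical sketch. The spin-down input, Arnett-type diffusion integral, gamma-ray trapping factor and floor-temperature photosphere follow the expressions above; the specific parameter values, opacities and floor temperature are assumptions chosen only to illustrate a typical FBOT-like light curve, not the paper's actual fitting code.

```python
import numpy as np

M_SUN, C_LIGHT, DAY = 1.989e33, 2.998e10, 86400.0   # g, cm/s, s
SIGMA_SB = 5.6704e-5                                 # erg cm^-2 s^-1 K^-4

# Illustrative parameters, roughly the median values quoted for the FBOT sample
P_i, B_p = 9e-3, 7e14                  # initial spin period (s), polar field (G)
M_ej = 0.15 * M_SUN                    # ejecta mass (g)
kappa, kappa_g = 0.1, 0.03             # optical / gamma-ray opacities (cm^2/g), assumed
T_floor = 4000.0                       # K, assumed

L_sd_i = 1e47 * (P_i / 1e-3) ** -4 * (B_p / 1e14) ** 2       # erg/s
t_sd = 2e5 * (P_i / 1e-3) ** 2 * (B_p / 1e14) ** -2          # s
E_rot = L_sd_i * t_sd                                         # ~2e52 (P_i/ms)^-2 erg
v_ej = np.sqrt(2.0 * E_rot / M_ej)                            # cm/s
t_diff = np.sqrt(3.0 * kappa * M_ej / (4.0 * np.pi * v_ej * C_LIGHT))
A = 3.0 * kappa_g * M_ej / (4.0 * np.pi * v_ej ** 2)          # gamma-ray leakage parameter

t = np.linspace(1e3, 60.0 * DAY, 4000)
L_sd = L_sd_i / (1.0 + t / t_sd) ** 2                         # dipole spin-down input

# Arnett-type diffusion of the injected power with a (1 - exp(-A/t^2)) trapping factor
x = (t / t_diff) ** 2
integrand = L_sd * (t / t_diff ** 2) * np.exp(x)
cum = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
L_bol = 2.0 * np.exp(-x) * cum * (1.0 - np.exp(-A / t ** 2))

# Blackbody photosphere with a floor temperature, as described above
T_ph = np.maximum((L_bol / (4.0 * np.pi * SIGMA_SB * (v_ej * t) ** 2)) ** 0.25, T_floor)
```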
Because the extinction of the host galaxy is unknown, we set $A_V$ as a free parameter, with a uniform prior between 0 and 0.5 magnitudes. There are 8 free parameters: the ejecta mass $M_{\rm ej}$, initial spin period $P_{\rm i}$, magnetic field strength $B_{\rm p}$, opacity $\kappa$, opacity to high-energy photons $\kappa_\gamma$, floor temperature $T_{\rm floor}$, host extinction $A_V$, and the time of explosion relative to the first observed data point, $t_{\rm shift}$. Ho et al. (2021) recently reported 22 FBOTs with spectroscopic observations, most of which were classified as Type Ib/Ibn/IIb SNe or hybrid IIn/Ibn SNe. Furthermore, a fraction of FBOTs were found to be Type Ic SNe (e.g., Drout et al. 2013; De et al. 2018). Although a major fraction of FBOTs lack spectroscopic classifications, we assume that these collected FBOTs could plausibly contain a large amount of helium, carbon or oxygen. Thus, the prior of $\kappa$ for FBOTs is preferentially set in the range 0.05-0.2 cm² g⁻¹, which is suitable for scattering in ionized helium, carbon or oxygen. For those events without any detection before the peak, we note that the upper limit of the prior for $t_{\rm shift}$ is defined as the time between the pre-explosion non-detection and the first detection. The priors of these fitting parameters are listed in Table 1. For each lightcurve fit, we run the code in parallel using 12 nodes with at least 10,000 iterations, where the first 100 iterations are used to burn in the ensemble. We list the fitting results for the derived model parameters in Table 2, while the detailed fits to the multi-band lightcurves of each event are shown in Appendix A. Generally, the FBOT lightcurves can be fitted well by the magnetar engine model. For example, we present the posteriors of the fitting parameters for DES16C3gin in Figure 5. The most important outputs of the lightcurve modeling are the ejecta mass $M_{\rm ej}$, initial spin period $P_{\rm i}$, and magnetic field strength $B_{\rm p}$. We plot $M_{\rm ej}$ vs. $P_{\rm i}$ and $P_{\rm i}$ vs. $B_{\rm p}$ in Figures 1 and 2, respectively. The ejecta masses for most of the FBOTs we collected are in the range of $\sim0.002-1\,M_\odot$, with a median value of $\sim0.11\,M_\odot$. For the magnetars, the initial spin periods are centered at $\sim9.1^{+9.3}_{-4.4}$ ms. The magnetic field strengths, for which the median value is $\sim25\,B_{\rm c}$, have a wide distribution mostly between $\sim B_{\rm c}$ and $\sim200\,B_{\rm c}$. Here, $B_{\rm c}=m_{\rm e}^{2}c^{3}/(q\hbar)=4.4\times10^{13}$ G represents the Landau critical magnetic field, defined by the electron mass $m_{\rm e}$, electron charge $q$ and reduced Planck constant $\hbar$. Corresponding to our sample selection criterion of $t_{\rm rise}\lesssim10$ d, the upper limit of the ejecta masses of FBOTs can be set to be around $1\,M_\odot$, which hints that FBOTs could have the following types of origins involving the formation of a rapidly rotating magnetar: (I) BNS mergers producing massive NS remnants (i.e., the mergernova model; Yu et al. 2013), (II) mergers of a NS and a WD (Zenati et al. 2019), (III) AICs of WDs, including both single- and double-degenerate cases (Yu et al. 2019a,b), and (IV) SN explosions of ultra-stripped stars (i.e., ultra-stripped SNe; e.g., Tauris et al. 2015; Hotokezaka et al. 2017; Sawada et al. 2022). For Case I, since the derived masses here are generally higher than the masses that can be produced by BNS mergers (e.g., Radice et al. 2018), the mergernova model could be ruled out for most FBOTs.
Nevertheless, the model could still account for some special sources such as PS1-12bb, which have the fastest evolution and relatively low luminosities that are consistent with the prediction of the mergernova model. Furthermore, if a larger opacity, as can be caused by lanthanides, is taken into account, then more events could be classified as mergernova candidates, since their ejecta masses become smaller than $0.01\,M_\odot$. In any case, the relatively low event rate of BNS mergers means that they can only account for a very small fraction of the observed FBOTs. In comparison, the relatively wide range of the ejecta masses of FBOTs most favors the ultra-stripped SN model in close binaries, although it is unclear how newborn NSs formed via this channel can have initial spin periods in the range $P_{\rm i}\sim2-40$ ms. We infer that the compact companion in a close binary can increase the angular momentum of the ultra-stripped star through the tidal torque, possibly resulting in the rapid rotation of the NS. In addition, for the FBOTs with ejecta masses around $\sim0.1\,M_\odot$, the WD-related models still cannot be ruled out, and they have some advantages in explaining the multi-wavelength features of some FBOTs. Connection with SLSNe and SNe Ic-BL It has been widely suggested that SLSNe and SNe Ic-BL associated/unassociated with lGRBs, at least a good fraction of them, are also driven by millisecond magnetars (e.g., Kasen & Bildsten 2010; Lü & Zhang 2014; Mazzali et al. 2014; Metzger et al. 2015; Kashiyama et al. 2016; Yu et al. 2017; Liu et al. 2017; Nicholl et al. 2017; Lü et al. 2018). Therefore, it is necessary and interesting to investigate the possible connections and differences between FBOTs and these explosion phenomena. As shown in Figure 1, the combination of the four different types of transients shows a clear universal anti-correlation between ejecta mass and initial spin period, as $P_{\rm i}\propto M_{\rm ej}^{-0.45}$ (Equation 4), with a Pearson correlation coefficient $\rho=-0.84$, which is consistent with the result found by Yu et al. (2017) for the SLSN sample alone. This anti-correlation strongly indicates that these explosion phenomena may share a common origin. It can also be seen that the clearest criterion defining FBOTs could be their small ejecta masses, which generally correspond to relatively large initial spin periods because of the strong anti-correlation. Therefore, on the one hand, FBOTs very likely originate from stellar collapses, just with progenitors much lighter and much more stripped than those of SLSNe and SNe Ic-BL. On the other hand, the $M_{\rm ej}-P_{\rm i}$ anti-correlation indicates that more massive progenitors have larger angular momenta. In Section 3.1, we suspect that the FBOT progenitors could be ultra-stripped stars in close binary systems, which can be spun up by their compact companions. Following this consideration, the $M_{\rm ej}-P_{\rm i}$ anti-correlation could be a natural result of the interaction between the progenitor and the compact companion (see also Blanchard et al. 2020; Fuller & Lu 2022; Hu et al. 2022). If this hypothesis is true, then it is expected that the progenitors of SLSNe and SNe Ic-BL can also be substantially influenced by a compact companion. Yu et al. (2017) found that the primary difference between SLSNe and lGRBs could be the magnetic field strengths of their magnetar engines. Specifically, SLSNe have $B_{\rm c}\lesssim B_{\rm p}\lesssim10B_{\rm c}$, while lGRBs have $B_{\rm p}\gtrsim10B_{\rm c}$. Therefore, in Yu et al.
(2017), it was suspected that the ultra-high magnetic fields can play a crucial role in launching a relativistic jet to produce GRB emission. Here, however, it is found that the surface magnetic fields of more than half of the FBOTs can be higher than $10B_{\rm c}$, but no GRB has been detected in association with FBOTs. One possibility is that the GRB emission associated with these FBOTs is highly beamed and the emission beam largely deviates from the line of sight. This, however, is disfavored by the difference between the event rate densities of FBOTs and lGRBs. Therefore, a more promising explanation is that these FBOT magnetars intrinsically cannot produce GRB emission, even though their magnetic fields satisfy $B_{\rm p}>10B_{\rm c}$. The probable reason is that the FBOT magnetars rotate too slowly to provide sufficiently large energy for a relativistic jet. Additionally, in view of the small masses of the FBOT ejecta, the possible fallback accretion onto the magnetar is also potentially weak and thus cannot help to launch the jet. Finally, in view of the significant similarity between FBOTs and SLSNe, it would be reasonable to regard them as a unified phenomenon with different progenitor masses. From this view, the separation between FBOTs and SLSNe is empirical but does not imply fundamentally different physics. For example, the FBOTs DES16E1bir ($M_{\rm ej}\sim0.9\,M_\odot$) and SNLS06D1hc ($M_{\rm ej}\sim1\,M_\odot$) in our sample could in fact be classified as SLSNe. In any case, by combining the FBOT and SLSN data, we can find a weak correlation between $P_{\rm i}$ and $B_{\rm p}$, as presented in Figure 2. Such a correlation could also exist in the lGRB data, but with a shift in $B_{\rm p}$. This indicates that GRB magnetars have magnetic fields statistically higher than those of SLSN and FBOT magnetars for the same initial spin period $P_{\rm i}$. Shape of Lightcurves According to the parameter values constrained from the fits, we can calculate the peak absolute magnitude (or peak luminosity), the rise and decline timescales above half-maximum luminosity of the FBOTs, and their 1σ uncertainties, which are also listed in Table 2. These parameters determine the basic shape of the lightcurves of the transients and can be measured directly from observational data, and they are therefore very useful for classifying the transients. As presented in the left panel of Figure 3, it seems reasonable to set the boundary between FBOTs and SLSNe at $t_{\rm rise}\sim10$ d, where the data are relatively sparse, and we adopt this as a sample selection criterion. Strictly speaking, it cannot be ruled out that the ambiguous gap between FBOTs and SLSNe is just a result of selection effects and that the distribution of these two phenomenological types of explosion could in fact be intrinsically continuous. Generally speaking, FBOTs together with SLSNe can be separated from SNe Ic-BL (including GRB-SNe) by a separation line (Equation 5) corresponding to a nickel mass of $M_{\rm Ni}=0.3M_{\rm ej}$, where $\tilde{t}_{\rm rise}=t_{\rm rise}/{\rm d}$ and the numerical coefficients read $a_1=0.083$, $b_1=5.3$, $c_1=14.94$, $a_2=0.0089$, $b_2=5.2$, and $c_2=12.63$. This separation line is plotted using the $M_{\rm ej}-E_{\rm K}$ relationship derived from the $M_{\rm ej}-P_{\rm i}$ relationship, i.e., Equation (4), by assuming that all the kinetic energy of the ejecta comes from the rotational energy of the magnetar. It is commonly believed that it is very difficult for the mass of the $^{56}$Ni synthesized during core-collapse SNe to reach a few tens of percent of the total mass of the SN ejecta (e.g., Suwa et al. 2015; Saito et al. 2022).
Based on radiation transport calculations, Ertl et al. (2020) found that current models employing standard assumptions about the explosions and nucleosynthesis predict radioactive-decay-powered light curves that are less luminous than commonly observed SNe Ib and Ic. Thus, both FBOTs and SLSNe with magnitudes above the line of Equation (5) cannot be primarily powered by the radioactive decay of $^{56}$Ni, and an engine power is required. Nevertheless, some outliers still exist in our sample, e.g., DES14S2anq and DES14S2plb. In comparison, the peak luminosity of SNe Ic-BL is relatively lower, which reduces the energy requirement and, in principle, makes the radioactive power model viable. Nevertheless, considering the continuous transition between the different phenomena, it is still natural to suggest that the emission of a fraction of SNe Ic-BL, including GRB-SNe, is also partly powered by the magnetar engine, although the majority of the spin-down energy of the magnetar has been converted into the kinetic energy of the SN ejecta (e.g., Lin et al. 2021; Zhang et al. 2022). As analyzed in Yu et al. (2015, 2017), the emission fraction of the spin-down energy is primarily determined by the relationship between the timescales $t_{\rm sd}$ and $t_{\rm diff}$, which is basically reflected by the ratio of the lightcurve rise to decline times. As shown in the right panel of Figure 3, the FBOT and SLSN data can be well fitted by the line $t_{\rm decl}\approx1.8t_{\rm rise}$, which corresponds to $t_{\rm sd}=t_{\rm diff}$. This is a reason why we can regard FBOTs and SLSNe as a unified phenomenon. In comparison, the SNe Ic-BL data are clearly in the $t_{\rm sd}<t_{\rm diff}$ region, as expected. For GRB-SNe, although they are usually classified as SNe Ic-BL, their distribution in the $t_{\rm rise}-t_{\rm decl}$ plane is actually more diffuse than that of normal SNe Ic-BL. CONCLUSION In this paper, we perform a systematic analysis of the multi-band lightcurves of 40 FBOTs using the magnetar engine model, and most of them are fitted well. The explosion and magnetar parameters are thus well constrained. It is found that the median values with 1σ ranges of the ejecta mass and the initial spin period are $M_{\rm ej}=0.11^{+0.22}_{-0.09}\,M_\odot$ and $P_{\rm i}=9.1^{+9.3}_{-4.4}$ ms. The magnetic field strengths $B_{\rm p}$ are mostly between $\sim B_{\rm c}$ and $\sim200B_{\rm c}$, with a median value of $\sim25B_{\rm c}$. Here, please keep in mind that the value of $M_{\rm ej}$ is somewhat dependent on the adopted ejecta velocity. If FBOT explosions are initially as explosive as, or even more explosive than, normal SNe Ib/c, the value of $M_{\rm ej}$ would be systematically increased by a factor of $\sim2$ so that an appropriate diffusion timescale can be kept. Given that the star-formation rates and metallicities of the FBOT hosts are consistent with those of SLSNe and SNe Ic-BL including GRB-SNe (Wiseman et al. 2020), we compare the derived parameters of the FBOTs with those of the other types of extreme stripped-envelope explosions, which are potentially driven by magnetar engines too. Consequently, we find a strong continuous anti-correlation between $M_{\rm ej}$ and $P_{\rm i}$ for FBOTs, SLSNe, GRB-SNe and SNe Ic-BL, as $P_{\rm i}\propto M_{\rm ej}^{-0.41}$. A clear criterion to define FBOTs is their small ejecta masses, with an upper limit of $\sim1\,M_\odot$, which is around the lower limit of the masses of the other explosion phenomena. Furthermore, the magnetic field strengths of the FBOT magnetars span from the $B_{\rm c}\lesssim B_{\rm p}\lesssim10B_{\rm c}$ range of SLSN magnetars to the $B_{\rm p}\gtrsim10B_{\rm c}$ range of lGRB magnetars.
These connections indicate that most FBOTs may share a common origin with SLSNe, lGRBs and normal SNe Ic-BL. Since the progenitors of FBOTs likely have low masses, we suspect that most FBOTs originate from the collapse of ultra-stripped stars in close binary systems. However, mergernovae and WD-related models are still not ruled out, and they could give natural explanations for some special outliers. From the distributions of $t_{\rm rise}$ vs. $M_{\rm peak}$ for these different types of explosions, we find that the FBOT and SLSN data can be separated by a criterion of $t_{\rm rise}\sim10$ d, while FBOTs together with SLSNe can be separated from GRB-SNe and normal SNe Ic-BL by the line corresponding to $M_{\rm Ni}=0.3M_{\rm ej}$. These criteria can be used to classify FBOTs, SLSNe and SNe Ic-BL observationally. ACKNOWLEDGMENTS We thank the anonymous reviewer for helpful comments and feedback. We thank Sheng Yang, Ying Qin, Rui-Chong Hu, and Noam Soker for helpful comments, and H.-J. Lü, M. R. Drout and M. Pursiainen for sharing their data. This work is supported by the National SKA program of China (2020SKA0120300), the National Key R&D Program of China (2021YFA0718500), and the National Natural Science Foundation of China (Grant No. 11833003). A. FBOT SAMPLES AND FITTING RESULTS The observed data and fitted lightcurves for the FBOTs collected in our sample are presented in Figure 4.
Interaction between therapeutic interventions for Alzheimer’s disease and physiological Aβ clearance mechanisms Most therapeutic agents are designed to target a molecule or pathway without consideration of the mechanisms involved in the physiological turnover or removal of that target. In light of this and in particular for Alzheimer’s disease, a number of therapeutic interventions are presently being developed/investigated which target the amyloid-β peptide (Aβ). However, the literature has not adequately considered which Aβ physiological clearance pathways are necessary and sufficient for the effective action of these therapeutics. In this review, we evaluate the therapeutic strategies targeting Aβ presently in clinical development, discuss the possible interaction of these treatments with pathways that under normal physiological conditions are responsible for the turnover of Aβ and highlight possible caveats. We consider immunization strategies primarily reliant on a peripheral sink mechanism of action, small molecules that are reliant on entry into the CNS and thus degradation pathways within the brain, as well as lifestyle interventions that affect vascular, parenchymal and peripheral degradation pathways. We propose that effective development of Alzheimer’s disease therapeutic strategies targeting Aβ peptide will require consideration of the age- and disease-specific changes to endogenous Aβ clearance mechanisms in order to elicit maximal efficacy. Introduction Alzheimer's disease (AD) is characterized in part by the accumulation of the amyloidbeta peptide (Aβ) within the brain parenchyma leading to cellular injury and ultimately death, as well as along blood vessels resulting in vascular dysfunction (Querfurth and LaFerla, 2010). It is suggested that the imbalance between Aβ production and clearance in aging drives Alzheimer's disease progression in late onset Alzheimer's disease (Hardy and Selkoe, 2002). In light of this, many clinical trials have been initiated over the last 20 years that have targeted removal or inhibition of Aβ production with very limited to no success. 1 The failure of these trials has been attributed to targeting Aβ too late in the disease process, after the now recognized prodromal phase of Aβ accumulation in the absence of clinical symptoms (Sperling et al., 2011). Although these treatment strategies show high success rates in rodent models with amyloid precursor protein (APP) overexpression, a major confound is the lack of physiological deficits from aging and long term amyloid burden in these models, which are present in AD patients. Subsequently, endogenous Aβ degradation is diminished more prominently in AD than in disease models, a fact which is often overlooked in the development of treatments. This is a major contributing factor to the lack of clinical efficacy of Aβ specific therapies (Tanzi et al., 2004). We present three classes of AD treatments, and discuss how they rely on, and interact with, three physiological pathways of Aβ clearance. Firstly, we examine Aβ immunization strategies, and their dependence on the peripheral sink. Then we discuss the effectiveness of small molecule therapies targeting APP cleavage in the context of reduced Aβ degradation in the brain parenchyma. Finally, we present an array of lifestyle interventions in the clinic for AD, and discuss how these preserve vascular health, and how this may ultimately enhance interstitial fluid (ISF) and soluble Aβ drainage mechanisms. 
We survey therapeutic strategies that are presently in clinical development and describe the Aβ degradation pathways that are necessary for drug efficacy while highlighting potential disease-specific confounds that may contribute to previously failed trials. Immunization Strategies Currently, there are both passive and active immunization strategies being developed for Alzheimer's disease. These antibodies target various components of the AD pathology, such as Aβ and tau, but we will focus on those targeting Aβ (Table 1). The most prominent area of immunization research has largely been focused on the development of antibodies against Aβ. One passive immunization antibody currently undergoing clinical trials is solanezumab [LY2064230]. It is the humanized IgG1 analog of the murine m266.2 antibody, an anti-Aβ monoclonal antibody (Samadi and Sultzer, 2011). In vitro studies have shown solanezumab to have a strong affinity (Kd on the order of 10−12 M) towards the middle region of Aβ13-28; it thus acts primarily via a peripheral sink mechanism, although binding by other physiological buffers and endogenous Aβ-binding proteins may also occur (DeMattos et al., 2001). The ''peripheral sink'' hypothesis is based on the premise that antibodies minimally traverse the blood-brain barrier (BBB) and thus clearance of Aβ from the brain relies on antibodies binding to Aβ within the bloodstream. Antibodies directed towards Aβ shift the balance of Aβ from the brain and the surrounding vasculature, leading to an efflux of Aβ into the periphery (DeMattos et al., 2001). However, in AD, this ''peripheral sink'' may be compromised, as Aβ efflux mechanisms may be less efficient or countered by influx of Aβ transcytosis into the brain (Kurz and Perneczky, 2011). Rodent studies showed that m266 treatment rapidly increased plasma Aβ40/42, which directly correlated with the amount of brain Aβ burden pre-treatment. These data support the idea that m266 acted as a ''peripheral sink'' to directly facilitate the efflux of Aβ from the brain (DeMattos et al., 2002). Another advantage of solanezumab is the selective binding to soluble Aβ, which greatly decreased the incidence of vasogenic edema and microhaemorrhages that were associated with earlier antibodies (Racke et al., 2005), termed amyloid-related imaging abnormalities (ARIA) in human trials (Sperling et al., 2011). Phase 2 trials conducted in 52 mild-to-moderate AD patients in a double-blind, placebo-controlled manner demonstrated a dose-dependent increase in plasma Aβ40 and Aβ42 as well as an increase in unbound Aβ42 in the CSF. These results suggested that solanezumab bound soluble Aβ, which thereby disrupted the equilibrium between soluble and insoluble Aβ within the CNS, resulting in a reduction in the deposited Aβ burden (Farlow et al., 2012). Two Phase 3 trials, EXPEDITION-1 and -2, conducted in patients with mild-to-moderate AD, did not show statistically significant improvement in cognition or activities of daily living as measured by the Alzheimer's Disease Assessment Scale (ADAS-Cog 11 and 12) or the Alzheimer's Disease Cooperative Study Activities of Daily Living scale (ADCS-ADL); however, there were significant differences in secondary outcome measures (Doody et al., 2014).
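The ''peripheral sink'' mechanism invoked above can be made concrete with a toy two-compartment model. This is only an illustrative sketch, not a model used in the cited studies, and every rate constant below is a hypothetical placeholder; it simply shows how adding an antibody binding capacity in plasma lowers free plasma Aβ, so that less re-enters the brain and the steady-state brain pool falls.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sink_model(t, y, k_prod, k_efflux, k_influx, k_deg, k_bind):
    """Toy peripheral-sink model (all parameters hypothetical).
    A_b: free brain Abeta; A_p: free plasma Abeta; C: antibody-bound plasma Abeta."""
    A_b, A_p, C = y
    dA_b = k_prod - k_efflux * A_b + k_influx * A_p                       # brain: production, efflux, re-entry
    dA_p = k_efflux * A_b - k_influx * A_p - k_deg * A_p - k_bind * A_p   # plasma: exchange, degradation, binding
    dC   = k_bind * A_p                                                   # antibody-bound pool (accumulates)
    return [dA_b, dA_p, dC]

def steady_brain_abeta(k_bind):
    """Integrate to (near) steady state and return the free brain Abeta level."""
    sol = solve_ivp(sink_model, [0.0, 500.0], [1.0, 0.1, 0.0],
                    args=(1.0, 0.5, 0.2, 0.3, k_bind), rtol=1e-8)
    return sol.y[0, -1]

# Without antibody (k_bind = 0) vs. with antibody binding in plasma (k_bind = 2):
print(steady_brain_abeta(0.0), steady_brain_abeta(2.0))
```

In this sketch, increasing the plasma binding rate lowers free plasma Aβ and therefore the influx term, reducing the steady-state brain pool, which is consistent with the qualitative picture described above. It also makes the caveat explicit: if efflux (k_efflux) is itself reduced, as proposed for AD, the benefit of the plasma sink shrinks accordingly.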
Currently, there is a third Phase 3 trial in 2,100 mild AD patients who have elevated levels of Aβ plaques (EXPEDITION-3, ClinicalTrials.gov identifier: NCT01900665), a Phase 2/3 trial to test solanezumab in carriers of the APP, presenilin-1 and presenilin-2 Alzheimer's gene mutations (DIAN Study, ClinicalTrials.gov identifier: NCT01760005), and a Phase 2 study in seniors deemed to be at high risk for AD who have amyloid positive PET scans (A4 Study, ClinicalTrials.gov identifier: NCT02008357). Early studies, using passive immunization in AD patients, were often associated with microhemorrhages and vasogenic edema (Racke et al., 2005). In an attempt to avoid these side-effects, a humanized anti-Aβ monoclonal antibody with an IgG4 backbone called crenezumab (MABT5102A) was created (Adolfsson et al., 2012). Crenezumab is similar to solanezumab in that both target the midsection of Aβ; however, the IgG4 isotype reduces the risk of Fcγ receptormediated overactivation of microglial cells potentially leading to deleterious proinflammatory responses, while maintaining effective Aβ phagocytosis and clearance (Adolfsson et al., 2012). Furthermore, crenezumab recognizes both soluble Aβ oligomers and multiple Aβ aggregates that are present in AD brains (Adolfsson et al., 2012). In vitro experiments have shown that crenezumab both neutralized and protected neurons against toxic Aβ oligomers (Adolfsson et al., 2012). Phase 1 trials have shown to be extremely promising, as a dose-dependent increase in total plasma Aβ levels was observed to serum crenezumab concentrations demonstrating significant target interactions (Adolfsson et al., 2012). As well, no trial participants developed vasogenic edema, demonstrating the safety of high dose treatment (Adolfsson et al., 2012). Currently, there are several Phase 2 trials with crenezumab being tested as a method of prevention for carriers of the PSEN1 E280A mutations, (Alzheimer's Prevention Initiative, ClinicalTrials.gov identifier: NCT01998841), on brain amyloid burden in mild to moderate AD (BLAZE Study, ClinicalTrials.gov identifier: NCT01397578) and a long-term safety extension study (ClinicalTrials.gov identifier: NCT01723826). There has also been some success in terms of active Aβ immunotherapies for treatment of AD. Active Aβ immunotherapies are potentially more cost-effective and longlasting compared to passive Aβ immunotherapies, which require recurring antibody infusions. Studies have shown an increase in cross-reactive, potentially protective Aβ autoantibodies as a result of active immunotherapy treatment in vervets, which are much lower in AD patients when compared to healthy, age-matched individuals (Weksler et al., 2002;Britschgi et al., 2009). However, active Aβ immunotherapies often require the use of a strong adjuvant for antibody production that may be detrimental to elderly patients who already exhibit an above average proinflammatory cytokine levels (Michaud et al., 2013b). CAD106 is one such active immunization strategy that is currently in clinical trials. CAD106 was designed with multiple Aβ1-6 coupled to a virus-like Qβ particle, to avoid activation of inflammatory T cells (Wiessner et al., 2011). In APP transgenic mouse studies, CAD106 administration generated Aβ-specific antibodies without activation of Aβ-specific T cells (Wiessner et al., 2011). In prevention trials, CAD106 immunization significantly reduced plaque formation in APP24 mice, however in treatment trials of advanced plaque formation, efficacy was reduced. 
CAD106 treatment had no effect on levels of vascular Aβ and proinflammatory cytokines, and did not increase microhemorrhages. CAD106 treatment in rhesus monkeys, which share similar Aβ sequences to humans, showed a dose dependent increase in antibody production and Phase 1 trials in humans have found similar results (Wiessner et al., 2011;Winblad et al., 2012). Phase 1 trials showed that CAD106 did not have any adverse effects related to the treatment and a significant portion of the patients treated (67% in cohort 1 and 82% in cohort 2) developed Aβ antibody response that met the responder threshold (Winblad et al., 2012). Phase 2 trials were designed to establish antibody responses and tolerability to various doses, different regions of injection, and different doses of adjuvant. Partial results from Phase 2 findings have reported that longterm exposure to high amounts of CAD106 did not have any additional safety findings (Graf et al., 2014;ClinicalTrials.gov identifier: NCT01097096). ACI-24 is another active immunization method under developed (Nicolau et al., 2002). ACI-24 is an Aβ1-15 peptide that is bound to liposomal surfaces through two palmitoylated lysine residues, forming a tandem at each end of the peptide (Muhs et al., 2007). The Aβ1-15 sequences were chosen as it retained the B cell epitope of Aβ, but lacked the T cell activation epitope (Monsonego et al., 2001(Monsonego et al., , 2003. ACI-24 treatment restored memory deficits in mice, and produced mainly isotopes IgG, IgG2b, and IgG3, whereby the first two IgGs are related to noninflammatory Th2 response and the latter is a T cellindependent IgG subclass (Gavin et al., 1998;Muhs et al., 2007). Double transgenic mice treated with ACI-24 showed an improvement in memory function which correlated with an increase in IgG antibodies (Sigurdsson et al., 2004;Muhs et al., 2007). Furthermore, treatment resulted in significant reductions in both insoluble Aβ40 and Aβ42, as well as soluble Aβ42, with slight decreases in soluble Aβ40. This effect was observed without additional microglial activation, astrogliosis, or proinflammatory cytokine production (Muhs et al., 2007). Currently, there is a Phase 1/2 trial to examine the safety, tolerability, immunogenicity and efficacy of ACI-24 in mild-tomoderate AD patients (EudraCT Number: 2008-006257-40). Efflux Pathways for Clearance of Aβ Since only 0.1% of all peripherally administered or in vivo generated antibodies cross the BBB, then the efflux of Aβ from the brain and subsequent degradation pathways in the periphery will be required for effective passive or active immunization strategies targeting Aβ (Banks et al., 2002;Morgan, 2011). The major efflux pathway for Aβ across the BBB is via the low density lipoprotein receptor-related protein -1 (LRP-1; Kanekiyo and Bu, 2014). LRP-1 is a large multi-functional receptor that regulates endocytosis of multiple ligands directly or indirectly through interaction with other ligands, such as Apolipoprotein E (ApoE), α2-macroglobulin, or other receptor associated proteins many of which have been implicated in AD pathogenesis (Liu et al., 2013;Kanekiyo and Bu, 2014). LRP-1 is abundantly expressed on neurons, glia and vascular cells within the brain and thus, is ideally located as a mechanism for Aβ clearance. For the present argument, LRP-1 is expressed on the microvascular including capillaries, venules and arterioles (Sagare et al., 2013). 
Furthermore, in AD patients and a mouse model of cerebrovascular amyloid angiopathy (CAA), LRP-1 staining is greatly reduced on vessels and is co-localized to amyloid plaques (Shibata et al., 2000;Deanne et al., 2004;Donahue et al., 2006). LRP-1 expression is also reduced in an age-dependent manner on microvasculature adding further to the potential deficits in Aβ efflux from the brain . As mentioned, LRP-1 transports other ligands such as ApoE, the major lipid/protein chaperone in the brain which is known to bind directly to Aβ. There are 3 ApoE isoforms, ApoEε4 representing a risk factor for AD and having the lowest affinity for Aβ, ApoEε2 which represents a protective factor with a high binding affinity, and ApoEε3 which has an affinity between the two (Liu et al., 2013). Thus the differential affinity of the ApoE isoforms for Aβ binding could have further deleterious implications for the clearance of Aβ. Studies on aging and AD have suggested that there is an increased permeability of the BBB with age and AD progression, as a result of oxidative stress and vascular changes, suggesting that passive diffusion of Aβ across the BBB might also contribute to Aβ efflux (Kalaria and Hedera, 1995;Marques et al., 2013). Under these conditions, a role for perivascular macrophages, pericyte or endothelial cell uptake, and degradation of Aβ after diffusion may also contribute (Verbeek et al., 2000;Hawkes and McLaurin, 2009). However, it has been proposed that macrophage function decreases with aging and pericytes are extremely sensitive to Aβ-induced toxicity, therefore the balance between these processes must be considered (Sengillo et al., 2013). Lastly, the receptor for advanced glycation end-products (RAGE) plays an important role in AD by contributing to Aβ-induced neuronal dysfunction, microglial activation, and a key role in Aβ transcytosis into the brain (Yan et al., 1996(Yan et al., , 2012Deane et al., 2003;Origlia et al., 2008Origlia et al., , 2010. In AD, Aβ can act as a ligand for RAGE and subsequently stimulate the upregulation of RAGE via a positive feed-back mechanism (Bierhaus et al., 2005). With RAGE-induced influx of Aβ into the brain, RAGE activity may negate the already compromised effects of Aβ efflux pathways, resulting in no change to the overall brain Aβ levels (Kurz and Perneczky, 2011). An oral inhibitor of RAGE, TPP488 (PF-04494700) has been shown to block these interactions. In vitro studies showed that TPP488 inhibited soluble RAGE binding to RAGE ligands, but more importantly in this context, to Aβ42. 2 Phase 2 trials in people with mild-to-moderate AD showed that TPP488 was well tolerated in subjects that received either a low 10 mg or a high 20 mg dose (Sabbagh et al., 2011; ClinicalTrials.gov identifiers: NCT00566397, NCT00141661). However, results were inconclusive with respect to plasma Aβ levels and inflammatory markers, and there were no significant differences in cognitive and functional measures (Sabbagh et al., 2011). Recently, a Phase 3 trial of TPP488 in mild-to-moderate AD patients starting in 2014 was announced. The combination use of promoting Aβ clearance from the brain and blocking re-entry may provide a more powerful treatment then either strategy alone. Small Molecule Inhibitors Currently, small-molecule inhibitors are being developed to inhibit various processes involved in Aβ plaque formation in AD. One class of inhibitors that are presently under examination by various studies are β-site APP Cleaving Enzyme (BACE) inhibitors. 
As β-secretase plays a major role in the production of Aβ peptides from APP, BACE inhibitors may reduce the amount of Aβ that is produced, allowing for endogenous Aβ clearance mechanisms to function more effectively. However, for these small-molecule inhibitors to work, they must cross both the BBB and enter the appropriate compartments within neurons (Vassar et al., 1999;Gabathuler, 2010). Therefore, in addition to target efficacy, BACE, these therapeutics must be developed with the appropriate molecular weight and charge to overcome these challenges (Pardridge, 2007). Specific BACE inhibitors are discussed below and summarized in Table 1. MK8931, a small molecule inhibitor of BACE 1 and BACE2, has shown in Phase 1 trials that single doses of up to 500 mg were well tolerated and met with reductions in CSF Aβ of up to 92% in healthy individuals (Forman et al., 2012). Furthermore, MK8931 has shown to have a relatively long half-life, 20 h, which is ideal for single daily dosing paradigms (Forman et al., 2012;Stone et al., 2013). Currently, Phase 2/3 trials are examining the safety and efficacy of MK-8931 at daily dosages of 12 and 40 mg, as well as long-term treatment effects on ADAS-cog and ADCS-ADL scores (EPOCH, ClinicalTrials.gov identifier: NCT01739348 and APECS, NCT01953601). AZD3293 (LY3314814) is an oral, brain permeable BACE 1 inhibitor, currently being developed by AstraZeneca and Eli Lilly (Haeberlein et al., 2013). Mouse and guinea pig studies have shown AZD3293 treatment resulted in a dose-and timedependent reduction in the amount of Aβ40/42 and soluble β-APP in the brain, CSF and plasma (Haeberlein et al., 2013). Results from quantitative analyses of soluble α-APP and β-APP in healthy volunteers showed that AZD3293 treatment resulted in a dose-dependent decrease in the amount of soluble β-APP in the CSF, while soluble α-APP showed a similar broadly dose-dependent increase (Höglund et al., 2014). Phase 1 trials of AZD 3293 have recently been completed. A Phase 2/3 trial is currently recruiting, and will examine the safety and efficacy of the AZD 3293 over 2 years of treatment in early AD (ClinicalTrials.gov identifier: NCT02245737). Clinical Dementia Rating-Sum of Boxes (CDR-SOB) will be used as the primary outcome measure, with ADAS-COG and ADCS-ADL as secondary outcome measures, as well as other imaging and clinical markers. Another oral BACE inhibitor, VTP-37948, currently in development by Vitae Pharmaceuticals, has shown good brain penetrance and reduced CSF Aβ levels by up to 80% in preclinical studies. 3 At present, VTP-37948 is in Phase 1 trials with healthy individuals to examine safety and tolerability, as well as the pharmacokinetics and pharmacodynamics of the drug. E2609 is a BACE1 inhibitor that is being developed by Eisai Ltd. It has been shown to significantly inhibit Aβ40/42 production in both the CSF and plasma of cynomolgus monkeys after oral dosing (Lucas et al., 2012). Partial results from Phase 1 studies have shown E2609 was well tolerated across all dosage treatments (up to 800 mg), and had a prolonged effect in reducing plasma Aβ40/42 after a single dosage in healthy individuals . Currently, trials have been completed on patients with MCI and those with evidence of Aβ pathology, and a dose-finding Phase 2 trial is underway in patients with MCI and mild AD (ClinicalTrials.gov identifier: NCT02322021). They also examined the safety and pharmacology of the E2609 across Japanese and Caucasian populations. 
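To illustrate why the ~20 h half-life quoted above for MK-8931 suits once-daily dosing, the following one-compartment, first-order elimination calculation shows the modest accumulation and substantial trough exposure obtained with a 24 h dosing interval. This is a textbook approximation, not a model of the published MK-8931 pharmacokinetics.

```python
import numpy as np

def accumulation_ratio(t_half_h, tau_h):
    """Steady-state accumulation ratio R = 1 / (1 - exp(-k*tau)) for repeated dosing
    with interval tau, assuming first-order elimination (k = ln2 / t_half)."""
    k = np.log(2.0) / t_half_h
    return 1.0 / (1.0 - np.exp(-k * tau_h))

def trough_fraction_remaining(t_half_h, tau_h):
    """Fraction of a dose still present at the end of one dosing interval."""
    return np.exp(-np.log(2.0) / t_half_h * tau_h)

# 20 h half-life with once-daily (24 h) dosing, as quoted for MK-8931:
print(accumulation_ratio(20.0, 24.0))          # ~1.8-fold accumulation at steady state
print(trough_fraction_remaining(20.0, 24.0))   # ~44% of a dose remains at trough
```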
Further studies on BACE inhibitors must be conducted to determine efficacy in target engagement as well as potential off-target deleterious side-effects. Post-mortem analyses of AD brains demonstrated increased BBB permeability in brain regions, such as the hippocampus, which may aid BACE inhibitors to reach their targets (Montagne et al., 2015). However, >90% of AD patients exhibit CAA, amyloid deposition within the vasculature of the central nervous system, resulting in hypoperfusion as well as a physical barrier to influx of BACE inhibitors (Revesz et al., 2002; Thal et al., 2008). BACE cleaves many substrates that play important roles in the nervous system, and thus inhibition of BACE may also lead to deleterious effects throughout the body. One example is neuregulin-1, a peptide that is essential for heart and nervous system development as well as the maintenance of muscle spindles (Britsch, 2007). Other BACE1 substrates, seizure protein 6, L1, CHL1 and contactin-2, are important neural cell adhesion molecules that are crucial for guidance and maintaining neural circuits (Kuhn et al., 2012; Zhou et al., 2012). Furthermore, BACE1 knockout mice demonstrate problems in axon targeting (although this may represent a developmental issue), in the re-programming of neural circuits, and in adult neurogenesis (Rajapaksha et al., 2011). Therefore, the consequences of total BACE inhibition on the critical function of these other substrates need to be examined further, as the effects of these BACE inhibitors on Aβ offer a promising therapeutic route. An alternate approach may be less prone to such deleterious effects. Intraparenchymal Degradation Pathways In order for small molecule therapies to be effective in AD, they not only need to exhibit high CNS bioavailability but also utilize endogenous parenchymal Aβ catabolism pathways. With regard to the BACE1 inhibitors described above, these inhibitors will decrease new Aβ production and thus potentially prevent further neuronal and vascular damage. However, at the time of treatment most AD patients will have a pre-existing Aβ load within the CNS that may require catabolism for full recovery. The endogenous parenchymal catabolic pathways for Aβ are regulated by a number of degrading enzymes in the extracellular space, secreted chaperones that monitor proteostasis, as well as glial cell uptake and degradation by lysosomal, autophagic and proteasomal pathways (Guénette, 2003; Tanzi et al., 2004; Lai and McLaurin, 2012; Wyatt et al., 2012). Enzymatic degradation of Aβ peptides is accomplished by a number of enzymes including, but not limited to, neprilysin (NEP), insulin degrading enzyme (IDE), angiotensin converting enzyme (ACE) and various matrix metalloproteinases (MMPs). NEP is the most extensively studied, and has been shown to degrade both Aβ40 and Aβ42 in vitro and in vivo (Iwata et al., 2000, 2001). Furthermore, recent studies in aged mice and AD patients have shown decreased levels of NEP in the hippocampus and temporal gyrus, regions with a high amyloid load in AD (Yasojima et al., 2001; Iwata et al., 2002). However, the degradation of Aβ is complicated and cannot be accomplished by a single enzyme, as the enzymes' Km values and the Aβ aggregation state will play a role. IDE has also been shown to have Aβ-degrading activity, as IDE-deficient mice have increased brain Aβ levels and IDE overexpression in an AD mouse model decreases Aβ levels (Leissring et al., 2003).
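The role of Km values and competing substrates noted above can be made explicit with the standard competitive Michaelis-Menten rate law; the expression below is generic enzyme kinetics, with symbols that are not values taken from the cited studies.

```latex
v_{\mathrm{A\beta}} \;=\; \frac{V_{\max}\,[\mathrm{A\beta}]}
{\,K_{m}^{\mathrm{A\beta}}\!\left(1 + \dfrac{[S_{\mathrm{comp}}]}{K_{m}^{\mathrm{comp}}}\right) + [\mathrm{A\beta}]\,}
```

A competing substrate S_comp raises the effective Km for Aβ, so the Aβ degradation rate falls even though Vmax is unchanged; for IDE this is exactly the situation described next, with insulin as the competitor.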
Furthermore, insulin competes with Aβ for IDE degradation and thus in Diabetes Mellitus, Aβ levels are increased within the CNS (Qiu and Folstein, 2006). The role of MMPs, ACE, endothelin-converting enzyme and others all contribute to Aβ catabolism however the precise role for each enzyme is not fully elucidated. A recent hypothesis suggests that a family of secreted chaperones, which exist in the extracellular space, patrol the brain for misfolded proteins and aid in clearance (Wyatt et al., 2012). The chaperones relating to Aβ clearance that have been identified are clusterin (also referred to as ApoJ), α2-macroglobulin and ApoE. As mentioned above, all three chaperones are co-receptors for LRP-1 and thus aid in Aβ catabolism via uptake by neurons, glia or vascular cells. In AD, astrocytes and microglia are the immune effectors of the CNS and thus play a role in injury resolution via limiting effects of toxic Aβ species (Guénette, 2003). Astrocytes become activated and surround amyloid plaques in an attempt to limit the damage to surrounding neuropil (Akiyama et al., 2000). Examination of human AD brain demonstrated the presence of N-truncated Aβ within astrocytes and more specifically Aβ was detected in lysosomal granules of astrocytes, thus suggesting phagocytosis and degradation (Funato et al., 1998;Thal et al., 1999;Nagele et al., 2003). In support of the pathological findings, adult mouse astrocytes have been shown to degrade Aβ deposits in brain sections, and phagocytose extracellular Aβ (Wyss-Coray et al., 2003;Koistinaho et al., 2004;Mandrekar et al., 2009). Although astrocyte uptake and degradation is less efficient then resident microglial cells, this pathway may contribute to Aβ clearance under treatment strategies. Microglial cells are the resident phagocytes of the CNS and play a significant role in Aβ clearance (reviewed in Lai and McLaurin, 2012). Although microglial cells in vitro readily phagocytose and degrade soluble and fibrillary Aβ, there is some controversy regarding efficacy under the pathological conditions present in AD brains (Lee and Landreth, 2010). Similar to astrocytes, microglia surround amyloid plaques, and electron microscopy studies have suggested the intracellular presence of Aβ in endosomal compartments (Frackowiak et al., 1992). Furthermore, Aβ can be detected within lysosomal compartments of non-plaque associated microglial cells after treatment with an anti-aggregant compound, 1-fluoro-scylloinositol (Hawkes et al., 2012). Thus, although some literature suggests that accumulation of Aβ and amyloid plaques may be the result of immuno-incompetent microglia in AD, the presence of small molecules therapies, such as immunotherapy, curcumin and scyllo-inositol, that boost phagocytosis support a role for microglia in therapeutic interventions (Wilcock et al., 2004;McLaurin et al., 2006;Yanagisawa et al., 2010). Lifestyle Interventions In recent years, AD research has begun to focus on changes that can be made in daily living and activity which potentially decrease risk and delay symptomatic expression of AD. During the International Conference on Nutrition and the Brain a compilation of the presented data led to the identification of 7 changes to be integrated into daily living, six involving diet and the seventh recommending exercise (Barnard et al., 2014). 
Exercise and diet alterations have been associated with improvements in symptomatic and pathophysiological AD outcomes in both animal models and humans (Luchsinger et al., 2002;Stranahan et al., 2012;Hawkes et al., 2015;Lim et al., 2015;Lin et al., 2015). Regular, controlled diet and exercise are likely protective against AD pathogenicity through the maintenance of cardiovascular and cerebrovascular health. As mentioned above, the vasculature plays a crucial role in the clearance of Aβ across the BBB because of receptors (i.e., LRP-1, RAGE) expressed on the plasma membranes of capillary, arteriole, and small venule cells (Deane et al., 2003;Sagare et al., 2013). It has recently become apparent that regular and healthy pulsation of blood vessels promote a convective bulk flow of the brain's parenchymal ISF, which acts to clear toxic solutes such as Aβ Weller et al., 2008;Iliff et al., 2012). Lifestyle interventions, including exercise, diet and sleep, may have direct beneficial effects on Aβ clearance, as well as act indirectly through enhancing vascular health. Exercise There are >20 ongoing and recently completed clinical trials assessing the efficacy of different exercise interventions on AD symptoms and pathology. 4 Thoroughly detailing these trials is beyond the scope of this review, however a few points will be noted. Firstly, exercise has had beneficial effects on many cognitive assessments (de Andrade et al., 2013;Winchester et al., 2013;Okonkwo et al., 2014). Secondly, AD interventional benefits are seen in a multitude of exercises ranging from walking to high-intensity aerobic physical activity (Venturelli et al., 2011;Nascimento et al., 2012;Vidoni et al., 2012;Hoffmann et al., 2013;Suttanon et al., 2013;Winchester et al., 2013;Arcoverde et al., 2014;Okonkwo et al., 2014). Finally, regular exercise over years is correlated with significantly slowed Aβ deposition over time, perhaps in part due to enhanced Aβ clearance (Okonkwo et al., 2014). The beneficial effects of aerobic exercise on brain health have been well studied in recent years, with a focus on enhanced neurogenesis, increased levels of neurotrophic factors such as brain-derived neurotrophic factor (BDNF), and reduced risk for AD (Voss et al., 2013). Lin et al. (2015) demonstrated that exercise is associated with improved Aβ clearance mechanisms by upregulation of LRP-1 protein levels whereas RAGE was unchanged (Lin et al., 2015). Interestingly, one study in the Tg2576 AD mouse model demonstrated reduced soluble Aβ, but unchanged total Aβ, as a result of exercise (Nichol et al., 2008). This further suggests that Aβ clearance mechanisms specifically are upregulated with exercise. BDNF signaling in the brainstem of mice has recently been shown to increase the excitability of parasympathetic cholinergic neurons which, by way of the vagal nerve, act to lower resting heart rate (Wan et al., 2014). This provides molecular evidence for a mechanism by which aerobic exercise can reinforce vascular health (Wan et al., 2014;Mattson, 2015). Since the structurally and functionally impaired cerebrovasculature in AD dampens blood vessel-dependent Aβ clearance pathways, physical activities which can upregulate BDNF signaling and provide enhanced amyloid removal to the periphery, would be beneficial in slowing disease progression (Dorr et al., 2012;Lai et al., 2015). 
Diet Healthy diet maintenance is important in the modulation of AD symptoms and pathology, assisting endogenous mechanisms including neurogenesis, antioxidant protection, and Aβ clearance (Aliev et al., 2013;Maruszak et al., 2014). Type 2 Diabetes Mellitus (T2DM) is an acquired disease caused by hyperglycaemia and potentially insulin resistance. The main risk factors for the incidence and prevalence of T2DM are obesity, primarily from high fat diets, and age (Barbagallo and Dominguez, 2014;Centers for Disease Control and Prevention, 2014). Studies have shown that T2DM patients are at an increased risk of developing dementia, and conversely, 80% of AD patients have T2DM (Janson et al., 2004;Biessels et al., 2006). In a recent study by Hawkes et al. (2015) mice that were subject to a high fat diet during gestation and early life exhibited deficits in perivascular clearance of Aβ, an effect which was exacerbated when the high fat diet was lifelong. Also, vascular deposits of Aβ in aged human cases of hyperlipidemia were significantly greater when compared to aged-matched people with normal lipid levels post-mortem (Hawkes et al., 2015). Dietary plans have been well researched in AD. The main three are the Mediterranean, ketogenic, and caloric restriction diets (Aliev et al., 2013;Maruszak et al., 2014). The Mediterranean diet involves replacement of meat products, especially red meat, with plant-based alternatives, and primarily olive and fish oils for additional fats (Yannakoulia et al., 2015). Clinical studies on the effect of this diet on cognition in MCI/AD are inconsistent, with some showing benefits and others not (Olsson et al., 2015;Yannakoulia et al., 2015). Contrary to the Mediterranean diet, ketogenic and caloric restriction diets involve a significant reduction in food intake (Maruszak et al., 2014;Paoli et al., 2014). The ketogenic diet aims to create a state of fasting within the body (Paoli et al., 2014). This reduces metabolic induced stresses, including damage from reactive oxidative species and pathogenic mitochondrial biogenesis (Paoli et al., 2014). Ketogenic diets may also decrease the production of advanced glycation end products, which accumulate on Aβ plaques, potentially assisting in one of the aforementioned clearance cascades by decreasing reuptake of Aβ by RAGE (Deane et al., 2003;Srikanth et al., 2011;Paoli et al., 2014). Caloric restriction on the other hand, is achieved by a moderate decrease in overall intake of calories (Maruszak et al., 2014). Caloric restriction has been associated with a reduced risk of AD and memory improvements in the elderly, and has been a beneficial intervention in mouse models of AD (Lee et al., 2000(Lee et al., , 2002Luchsinger et al., 2002;Gustafson et al., 2003;Wu et al., 2008;Witte et al., 2009). This diet has also been demonstrated to decrease Aβ pathology through enhancing non-amyloidogenic APP cleavage by α-secretase and increasing clearance of Aβ through upregulated IDE levels (Farris et al., 2004;Wang et al., 2005;Tang and Chua, 2008). There are two current clinical trials on the effectiveness of caloric restriction in MCI/AD. Sleep Sleep and circadian rhythm disturbances are common in aging and AD patients (Floyd et al., 2000;Cipriani et al., 2014;Zelinski et al., 2014). 
Circadian rhythms are controlled by the suprachiasmatic nucleus (SCN) in the hypothalamus, which acts like a biological clock in its management of many physiological functions (Reppert and Weaver, 2001;Coogan et al., 2013;Videnovic et al., 2014;Zelinski et al., 2014). Mouse models of AD exhibit circadian rhythm alterations suggesting a link to Aβ (Sterniczuk et al., 2010;Baño Otalora et al., 2012). Even selfreported disruptions in sleep indicate a 33% increased risk of dementia and a 51% increased risk of AD (Benedict et al., 2014). Higher brain amyloid burden and lower CSF Aβ levels were observed in sleep deprived and narcoleptic patients without AD (Spira et al., 2013;Liguori et al., 2014). AD animal models in which sleep is deprived show similar trends of increased amyloid load, as well as increased memory dysfunction (Kang et al., 2009;Rothman et al., 2013;Di Meco et al., 2014). A study in cognitively normal middle-aged men measured CSF biomarkers of AD by an intrathecal catheter (Ooms et al., 2014;AWAKE study, ClinicalTrials.gov identifier: NCT01194713). A 6% decrease in Aβ42 levels were observed following an unrestricted sleep, however no change was observed in participants who remained awake (Ooms et al., 2014). The benefits of sleep on soluble Aβ levels in the brain may have been underestimated due to measurement of only spinal CSF (Ooms et al., 2014). There is also enhanced clearance by the glymphatic system during sleep, leading to additional Aβ efflux into the blood and cervical lymph nodes (Szentistványi et al., 1984;Iliff et al., 2012;Xie et al., 2013). Using in vivo twophoton microscopy in sleeping and in anesthetized mice, Xie et al. (2013) measured a 60% increase in the brain parenchyma, associated with an increase in the rate of exchange of CSF-ISF. These results suggest that regular sleep may serve to aid in the clearance of toxic solutes, such as Aβ, from the extracellular space in the brain, and normalization of sleep patterns may serve as beneficial lifestyle intervention in the treatment of AD (Xie et al., 2013). There are 7 active clinical trials for sleep interventions in AD, two of which are directly concerned with sleep apnea. Sleep apnea causes pauses or disruptions in breathing during sleep, and is correlated with cognitive dysfunction in AD patients (Janssens et al., 2000;National Heart, Lung, and Blood Institute, 2012). It is potentially associated with AD through mechanisms such as cellular oxidative stress, hypoxia, and sleep disturbances (Pan and Kastin, 2014). The two current clinical trials for sleep apnea and AD are assessing the effectiveness of a continuous positive airway pressure (CPAP) device on measurements including cognition, quality of daily living and CSF levels of Aβ42 (AZAP, ClinicalTrials.gov identifier: NCT01400542 and SNAP, ClinicalTrials.gov identifier: NCT01962779). The use of CPAP has effectively improved sleeping conditions in patients with mild-to-moderate AD, an effect that was maintained for 3 weeks (Cooke et al., 2009). Sleep apnea leads to impaired vascular health, with increased arterial stiffness and blood pressure, decreased cerebral blood flow, and thickened carotid intima-media (Daulatzai, 2012;Ciccone et al., 2014). The tunica intima and tunica media are the innermost perivascular layers along cerebral arteries, and the tunica media is the area within which the majority of ISF flows on its way out of the brain during perivascular clearance Weller et al., 2008Weller et al., , 2010. 
Therefore if sleep disturbances are thickening this area, the rate at which bulk flow of ISF travels would theoretically be decreased. This, in conjunction with the impaired arterial health in sleep apnea, would potentially diminish the perivascular clearance of Aβ, leading to increased CAA and amyloid in the brain parenchyma (Daulatzai, 2012). Nighttime wakefulness also leads to a decrease in glymphatic Aβ clearance because of the inability for parenchymal expansion, which normally speeds the rate of CSF-ISF exchange during sleep (Xie et al., 2013). Both glymphatic and perivascular clearance mechanisms are further diminished by the impaired vasculature present in sleep disorders due to irregular blood vessel pulsations Iliff et al., 2013b). Vascular Health Hypertension, increased body mass index, abnormal glucose regulation, hyperlipidemia, and hypercholesterolemia are all vascular related risk factors for AD and cognitive decline (Kivipelto and Solomon, 2008;Reynolds et al., 2010;Tolppanen et al., 2012;Liu et al., 2014;Deckers et al., 2015). AD patients analyzed for the relationships between cardiovascular risk factors and cognition showed lower scores on the MMSE and ADAS-COG in hypertensive patients compared to those with normal blood pressure, and lower scores on the MMSE and Clinical Dementia Rating in patients with hyperlipidemia compared to those without (Lobanova and Qureshi, 2014). Such findings suggest vascular injury may play a role in cognitive dysfunction in AD. Vasoactive drugs therefore might have the potential to treat aspects of AD, including deficits in cognition. Two current vasoactive drugs in development for AD are nilvadipine and cilostazol (Ikeda, 1999;Nimmrich and Eckert, 2013). Nilvadipine is a dihydropyridine that blocks calcium channels and prevents cognitive decline in patients with MCI (Hanyu et al., 2007;Nimmrich and Eckert, 2013). Although intervention with Nilvadipine leads to decreased hypertension, its main effect on cognition is postulated to be through neuroprotection, potentially by decreased calcium-mediated excitotoxicity of neurons (Takakura et al., 1992;Hanyu et al., 2007;Nimmrich and Eckert, 2013). A Phase 3 placebo-controlled trial of nilvadipine in mild-moderate AD is underway (NILVAD, ClinicalTrials.gov identifier: NCT02017340). Cilostazol has multiple beneficial effects on the vasculature, antiplatelet activity, and is an inhibitor of the cAMP and cGMP regulator phosphodiesterase type 3 (PDE3; Ikeda, 1999;Saito and Ihara, 2014). Studies of cilostazol in human dementia/AD patients showed some benefit on slowing cognitive decline and increasing cerebral perfusion (Sakurai et al., 2013;Ihara et al., 2014). Patients with mild dementia, but not moderate to severe dementia, maintained higher MMSE scores when treated with cilostazol in conjunction with the acetylcholinesterase inhibitor donepezil, compared to donepezil alone CASID study, ClinicalTrials.gov identifier: NCT01409564). Slower decline in ADAS-COG scoring was also seen in another study, along with increased regional cerebral blood flow to the right anterior cingulate lobe (Sakurai et al., 2013). Cilostazol is theorized to slow AD progression by promoting perivascular clearance of ISF, which contains soluble Aβ, in part due to it vasodilation and regulation of blood vessel pulsations Han et al., 2013;Saito and Ihara, 2014). Additionally, PDE3 expression is increased in arterial cells with Aβ deposition, primarily in smooth muscle cells (Maki et al., 2014). 
Smooth muscle cells are present within the tunica media of the perivascular space along leptomeningeal arteries, a pathway through which ISF, including soluble Aβ, is cleared from the brain (Weller, 2005;Kwee and Kwee, 2007;Carare et al., 2008;Weller et al., 2008). In a mouse model of CAA, cilostazol protected against vascular and cognitive deficits, and decreased Aβ deposits, potentially because of enhanced perivascular clearance (Maki et al., 2014). This suggests beneficial effects of cilostazol in AD on perivascular clearance of ISF, and on a vascular protective signaling cascade involving inhibition of PDE3 and increased cAMP and cGMP activity. Perivascular Clearance There are two proposed mechanisms by which drainage of the brain's extracellular fluid, also referred to as ISF, occur Iliff et al., 2012). The first involves a bulk flow of ISF and solutes within arterial perivascular spaces, across the BBB, where they enter cervical lymph nodes (Szentistványi et al., 1984;Carare et al., 2008). Perivascular spaces, or Virchow-Robin spaces, are small areas around blood vessel walls, including the smooth muscle cells which line arteries, that are in continuity with the subarachnoid space from the point where blood vessels penetrate into the brain parenchyma (Weller, 2005;Kwee and Kwee, 2007;Weller et al., 2008). The pia mater sheaths the perivascular space along leptomeningeal arteries until they branch off into smaller arterioles and capillaries (Weller, 2005;Kwee and Kwee, 2007;Carare et al., 2008). According to one theory, flow of ISF occurs along the basement membranes of capillaries and arterioles until it converges upon leptomeningeal arteries, where the fluid continues to drain in the perivascular tunica media, the space between the arterial smooth muscle cells, and in the tunica adventitia, the basement membranes of the smooth muscle cells Weller et al., 2008Weller et al., , 2010. There is further convergence with leptomeningeal and major cerebral arteries before the ISF drains completely out of the brain into the cervical lymph nodes (Szentistványi et al., 1984;Weller et al., 2010). This theory implies that flow of ISF occurs retrogradely only along capillaries, arterioles, and arteries, not along venules or veins (Figure 1; Carare et al., 2008;Arbel-Ornath et al., 2013). Evidence for perivascular clearance has been obtained by experiments involving radio-or fluorescent labeled tracers. Furthermore, multi-photon imaging and mathematical models have provided strong support (Szentistványi et al., 1984;Schley et al., 2006;Carare et al., 2008;Wang and Olbricht, 2011;Arbel-Ornath et al., 2013). ISF drainage was theorized to be by a convective bulk flow mechanism by assessing the clearance of multiple radiolabeled compounds of differing molecular weights. These compounds cleared at the same rate, confirming that ISF did not drain by diffusion, in which case the smaller molecules would have moved faster (Cserr et al., 1981;Abbott, 2004;Carare et al., 2008). Using multi-photon imaging through a surgically dissected cranial window in mice, Arbel-Ornath et al. (2013) observed active perivascular clearance of ISF in vivo. Consistent with previous conclusions, the fluorescent tracer flowed out of the brain along capillaries and arteries but not veins (Arbel-Ornath et al., 2013). Perivascular clearance is reliant on proper vasculature flow, as it is non-existent after cardiac arrest and is diminished with decreased perfusion Arbel-Ornath et al., 2013). 
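The inference above, that equal clearance rates across molecular weights indicate bulk flow rather than diffusion, can be quantified with the Stokes-Einstein relation: diffusive transport times scale with molecular size, whereas advective (bulk-flow) times do not. The temperature, viscosity, path length, and flow speed used below are illustrative assumptions, not measured values from the cited experiments.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J/K]

def diffusion_time(radius_nm, L_mm, T=310.0, eta=0.7e-3):
    """Time to diffuse a distance L: t ~ L^2 / (2 D), with Stokes-Einstein
    D = k_B T / (6 pi eta r). Larger molecules -> smaller D -> longer times."""
    D = K_B * T / (6.0 * np.pi * eta * radius_nm * 1e-9)   # [m^2/s]
    L = L_mm * 1e-3
    return L**2 / (2.0 * D)                                 # [s]

def advection_time(L_mm, v_um_per_s):
    """Time for bulk flow over L: t = L / v, independent of molecular size."""
    return (L_mm * 1e-3) / (v_um_per_s * 1e-6)              # [s]

# A small tracer (~1 nm) vs. a larger one (~10 nm) over 1 mm:
print(diffusion_time(1.0, 1.0) / 60.0, diffusion_time(10.0, 1.0) / 60.0)  # minutes, ~10x apart
print(advection_time(1.0, 1.0) / 60.0)                                    # minutes, size-independent
```

Under these assumptions the diffusion time differs by an order of magnitude between the two tracers while the advection time is identical, which is why equal observed clearance rates point to convective bulk flow.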
Theoretical models propose that the pulsating motion of blood flowing into the brain acts to push ISF in the opposing direction, thereby explaining why the draining of ISF requires functioning vasculature, and why fluid is not cleared along veins (Schley et al., 2006;Carare et al., 2008;Wang and Olbricht, 2011). Although Aβ self-aggregates into fibers and forms plaques, there is still a significant portion of soluble species of Aβ that is released to the extracellular space (Selkoe, 2001). Since convective bulk flow of ISF drains solutes, perivascular drainage FIGURE 1 | Perivascular and glymphatic drainage of brain interstitial fluid (ISF). These mechanisms drain brain ISF and parenchymal solutes (e.g., Aβ) to the periphery. Perivascular clearance involves ISF efflux along capillaries, arterioles, and arteries within the perivascular space. Glymphatics involve CSF influx and CSF/ISF efflux within the perivascular space along arteries and veins, respectively. Color Legend, red: arteries/arterioles/capillaries; blue: veins/venules; light blue: perivascular space; pink dotted line: ISF efflux; yellow dotted line: CSF influx; purple dotted line: CSF/ISF efflux. represents one of the major Aβ clearance mechanisms and could be impaired in AD . The APP/PS1 AD mouse model showed a 60% increased retention of fluorescent tracer, compared to wild type mice (Arbel-Ornath et al., 2013). The presence of Aβ plaques on arteries cause increased tortuosity and decreased calibre which in turn disrupts and lengthens blood flow transit times (Dorr et al., 2012). This irregularity in the vasculature is likely interrupting the pulsating motion which drives perivascular clearance. Also, it was found that fluorescent conjugated tracers and Aβ40 when injected are present where plaques would form in AD (Hawkes et al., 2013. The hippocampus of non-transgenic mice showed an aging-related decrease in perivascular clearance of injected Aβ40, suggesting the importance of this clearance mechanism to AD pathogenesis (Hawkes et al., 2013). Together, these conclusions support a positive feedback loop for CAA and AD, where impaired perivascular ISF drainage leads to Aβ deposition along the arteries, which causes weakened vasculature functioning, thereby allowing increased levels of Aβ to remain in the brain parenchyma, more specifically in the hippocampus Hawkes et al., 2013). Glymphatics The glymphatic system is another ISF drainage system recently proposed by Maiken Nedergaard's group (Iliff et al., 2012;Iliff and Nedergaard, 2013). Glymphatics involve the para-arterial, unidirectional flow of CSF from the cisterna magna of the subarachnoid space, along penetrating leptomeningeal arteries, towards the pituitary and pineal recesses of the 3rd ventricle, and into the brain within the perivascular (Virchow-Robin) space (Iliff et al., 2012(Iliff et al., , 2013a. CSF enters the brain parenchyma and mixes with ISF in a process reliant on aquaporin-4 (AQP4) channels present on the perivascular endfeet of astrocytes. The extracellular fluid composed of CSF and ISF is proposed to exit the parenchyma by drainage out of the brain within the venous perivascular space of large calibre veins only, where it then flows into the cervical lymph nodes, or absorbs into the blood across arachnoid villi on dural sinuses (Figure 1; Szentistványi et al., 1984;Iliff et al., 2012). 
Similar to perivascular clearance, glymphatic flow is driven by the pulsation of blood vessels; however, the former is by bidirectional flow along the vasculature, and the latter is unidirectional (Hadaczek et al., 2006;Schley et al., 2006;Wang and Olbricht, 2011;Iliff et al., 2013b). The glymphatic system was observed in vivo in mice after an intrathecal injected contrast agent was followed by magnetic resonance imaging, and through a closed cranial window with two-photon laser scanning microscopy (Iliff et al., 2012(Iliff et al., , 2013a. These results were reproduced using clinically-relevant levels of intrathecal injections in rats and mice (Yang et al., 2013). In confirmation that arterial pulsations drive the unidirectional flow of CSF into the parenchyma, unilateral ligation of the mouse carotid artery decreased blood flow and pulsality as well as flow of the injected tracers. Subsequently, treatment with dobutamine to increase blood flow and pulsality, increased tracer movement (Iliff et al., 2013b). Similar to perivascular clearance, soluble Aβ is cleared from the brain parenchyma by glymphatic drainage Iliff et al., 2012). Interestingly, the rates of CSF-ISF exchange significantly decrease with age, suggesting an agerelated decline in the glymphatic clearance of toxic solutes, such as Aβ (Kress et al., 2014). Unlike perivascular clearance, the physics of the overall drainage system are affected by molecular size, because larger tracers were slower to enter the parenchyma following subarachnoid injection, suggesting a reliance on AQP4 channels for CSF-ISF exchange (Iliff et al., 2012). In healthy conditions, the expression of AQP4 is highly polarized to astrocytic endfeet (Iliff and Nedergaard, 2013). However, with neuroinflammation as well as with age, this expression profile changes, with decreased AQP4 at the perivascular endfeet and increased levels in the soma. This may explain the impairment in glymphatic flow with age, suggesting another mechanism by which Aβ clearance may be diminished in AD (Iliff and Nedergaard, 2013;Kress et al., 2014). Additionally, both the perivascular and the glymphatic clearance systems are impaired following an ischaemic stroke in mice (Arbel-Ornath et al., 2013;Gaberel et al., 2014). Because of a close clinical relationship with stroke, this provides further evidence to why drainage of Aβ may be impaired in AD (Viswanathan et al., 2009). In addition, late-onset AD is thought to relate to impaired cerebral clearance of Aβ (Mawuenyega et al., 2010). This is in contrast to overproduction which leads to amyloid accumulation from mutations affecting APP processing (e.g., mutations in APP or presenilin complex), resulting in early-onset Autosomal Dominant AD or trisomy 21, which predisposes to early-onset dementia in Down's syndrome. In this context, collagenosis of the deep penetrating venular system could be predicted to affect the glymphatic clearance of amyloid and other toxic proteins along the perivascular venular pathways. Collagenosis causes wall thickening and stenosis of the deep venular system and correlates with confluent periventricular white matter hyperintensities (pvWMH), which is seen in elderly controls and Alzheimer's patients on Magnetic Resonance images (Moody et al., 1995;Black et al., 2009). PvWMH are associated with hypertension and other vascular risk factors, older age and possible genetic factors. 
Recent work from our group suggests the correlation of pvWMH is strongest with significant stenosis of the medium to large venules. We hypothesize that this leads to BBB leakage and perivenous edema, which further disrupts clearance mechanisms for amyloid and other toxins along the perivascular pathways (Black et al., 2009;Gao et al., 2012). It is of note that pvWMH are also associated with CAA and visible as microbleeds, which would be consistent with disruption of the perivascular pathway, thereby exacerbating deposition of periarteriole Aβ (Pettersen et al., 2008). CAA and AD associated arterial damage, as described above, would decrease the functioning of both perivascular and glymphatic drainage by impairing arterial pulsations Dorr et al., 2012;Iliff et al., 2013b). Despite little Aβ deposition, the structural and functional integrity of venules and veins also deteriorate in AD, which may lead to further impairments in amyloid clearance (Revesz et al., 2003;Weller et al., 2009;Lai et al., 2015). The presence of atherosclerosis is common in patients with AD, causing further irregularities in vascularization including the thickening of the perivascular space, which likely decreases vascular associated Aβ clearance (Ross, 1993;Frink, 2002;Roher et al., 2003). One of the major hallmarks of perivascular clearance is that soluble Aβ deposits along capillaries and arteries as ISF drains (Hawkes et al., 2013. Since para-venous drainage out of the brain is proposed to occur in the glymphatic system, it is surprising that Aβ deposits are not abundantly seen along venules and veins (Revesz et al., 2003;Weller et al., 2009;Iliff et al., 2012). There are a couple of explanations for this observation. First, it is possible that the lack of smooth muscle cells on veins provide less substrates for Aβ to deposit on Weller et al. (2008). Secondly, peripheral monocytes have been reported to enter the lumen of veins, but not arteries, and clear soluble Aβ (Michaud et al., 2013a). Healthy lifestyle interventions play an important role in the maintenance of the preceding Aβ clearance mechanisms. As discussed above, consistent exercise, diet and sleep contribute to proper structural and functional integrity of the vasculature, maintaining regular blood vessel pulsations, which are the driving force behind both the perivascular and glymphatic clearance mechanisms Iliff et al., 2013b). Together these conclusions stress the importance of these lifestyle interventions in the maintenance of vasculature health, and subsequently in delaying the progression of AD. Conclusion and Future Perspectives In this review article, we have summarized three different therapeutic approaches that rely on various combinations of physiological Aβ clearance mechanisms. We have highlighted some caveats to these approaches and attempted to highlight mechanisms that might function in synergy to remove toxic Aβ from the CNS as a disease-modifying therapy. The function of the Aβ clearance mechanisms, efflux at the BBB, catabolism within the CNS or drainage via either the perivascular drainage pathway or glymphatics, must be considered when designing new therapeutics and interpreting results from ongoing clinical trials, including the potential exacerbation of clearance mechanisms from small vessel arteriolar, capillary and venular disease. 
We propose that clinical trial failure may not be the sole result of an ineffective drug candidate or wrong patient population but may represent a lack of endogenous clearance mechanisms needed to support drug effects. Although all cases of AD are characterized pathologically in the same manner, the cause of disease and co-morbidities that contribute to disease progression vary extensively between patients. In light of this, multi-modal therapeutic approaches may need to be considered, with an eye on personalized medicine to account for the variability in presentation of AD patients, including reliable quantification of subtypes of small vessel disease as well as patterns of gray and white matter atrophy, which is overlooked by many currently popular automatic pipelines. Combination therapies are already in practice for various diseases and thus may also be necessary for AD. Furthermore, the treatment strategies may need to vary depending on disease state or age of the individual at presentation, with the goal of treating the disease or delaying the onset.
Finite-temperature effects on interacting bosonic 1D systems in disordered lattices We analyze the finite-temperature effects on the phase diagram describing the insulating properties of interacting 1D bosons in a quasi-periodic lattice. We examine thermal effects by comparing experimental results to exact diagonalization for small-sized systems and to density-matrix renormalization group (DMRG) computations. At weak interactions, we find short thermal correlation lengths, indicating a substantial impact of temperature on the system coherence. Conversely, at strong interactions, the obtained thermal correlation lengths are significantly larger than the localization length, and the quantum nature of the T=0 Bose glass phase is preserved up to a crossover temperature that depends on the disorder strength. Furthermore, in the absence of disorder, we show how quasi-exact finite-T DMRG computations, compared to experimental results, can be employed to estimate the temperature, which is not directly accessible in the experiment. I. INTRODUCTION Thanks to their ability to simulate condensed-matter systems, ultracold atoms in disordered optical potentials are known to be very effective and versatile platforms. The appeal of such systems, already highlighted in the observation of Anderson localization [1-3] for vanishing interactions, is increasing in the research activity on many-body quantum physics. For several decades, a large effort has been made to investigate the combined effect of disorder and interaction on the insulating properties of one-dimensional (1D) bosonic systems, both theoretically and experimentally. From a theoretical viewpoint, the T = 0 phase diagram describing the superfluid-insulator transitions has been studied for both random disorder [4-6] and quasi-periodic lattices [7-11]. The quasi-periodic lattice displays behaviors that are qualitatively and quantitatively different from those of a true random disorder. Yet, the occurrence of localization makes it a remarkable testbed for studying Bose-glass physics. On the experimental side, the disorder-interaction phase diagram has been examined [12-14] and, in the recent study of Ref. [15], measurements of momentum distribution, transport and excitation spectra showed a finite-T reentrant insulator resembling the one predicted by theory. In this context, however, the question of the effect of finite temperature is still open [16], and a direct link between the T = 0 theory and the experiment is still missing. In particular, whether and to what extent the T = 0 quantum phases persist at the low but finite experimental temperatures still has to be understood. Upon increasing the temperature in a clean (i.e., non-disordered) system, the quantum Mott domains progressively shrink, vanishing at the "melting" temperature k_B T ≈ 0.2U, with U being the Mott energy gap [17]. In the presence of disorder, no theoretical predictions are so far available. In this article, starting from the recent experimental study [15], we analyze the coherence properties of the system. By comparing the experimental finite-T data with a phenomenological approach based on DMRG calculations [18-20] for our inhomogeneous system at T = 0, we provide a qualitative estimation of the coherence loss induced by temperature throughout the disorder-interaction diagram. In this framework, the coherence loss is quantified in terms of a phenomenological parameter, the effective thermal correlation length.
Furthermore, a rigorous analysis of the temperature dependence of the correlation length is provided by exact diagonalization of the Hamiltonian for the case of small homogeneous systems. A reduction of the correlation length above a disorder-dependent characteristic temperature can be interpreted as a crossover from a quantum to a normal phase. In the regime of strong interactions, the exact diagonalization method -which well reproduces the melting temperature for the clean commensurate Mott insulator -is found to apply also to the disordered case, thus providing a crossover temperature for the incommensurate Bose glass phase. Complementarily, we show how to estimate the temperature of the experimental system by comparison of the measured momentum distribution with quasi-exact theoretical results, obtained with a finite-T DMRG method [21][22][23][24]. Up to now it was possible to determine the temperature of a 1D quasi-condensate in the presence of the trap alone [25]. By using the DMRG simulations, it is also possible to determine temperatures of quasi-1D systems in the presence of lattice potentials. For the present experiment we estimate the temperature in the superfluid regime without disorder. Problems can arise in the analysis of insulating experimental systems as these are not necessarily in thermal equilibrium. Attempts of temperature measurements for such systems are reported as well, highlighting the difficulties also caused by the coexistence of different phases in the considered inhomogeneous system. The exposition of this work is organized as follows. Sec. II describes the experimental setup and methods. In Sec. III, we explain the theoretical methods employed in the subsequent sections to analyze the finite-T effects on the quantum phases of the system. After recalling the main experimental results reported in Ref. [15], Sec. IV presents a phenomenological approach based on T = 0 DMRG calculations that captures thermal effects and introduces an effective thermal correlation length. The effect of the system inhomogeneity is analyzed as well. In Sec. V, we perform exact diagonalization for small homogeneous systems. For weak interactions, this provides the T -dependence of the correlation length for the superfluid and weakly interacting Bose glass while, for strong interactions, it provides the crossover temperature for the existence of the quantum phases, the Mott insulator and the strongly interacting Bose glass. Measurements of the system entropy support the latter results. In Sec. VI, we use finite-T DMRG calculations for an ab initio thermometry in a clean system. In particular, experimental temperatures are estimated by comparing the experimental momentum distributions with quasi-exact DMRG calculations. In Sec. VII, entropy measurements throughout the full disorder-interaction diagram are also provided. Finally, the conclusions are reported in Sec. VIII. II. EXPERIMENTAL METHODS Starting from a 3D Bose-Einstein condensate (BEC) with N tot 35 000 atoms of 39 K, a strong horizontal 2D optical lattice (with depth of 30 recoil energies) is ramped up such that an array of independent potential tubes directed along the z-axis is created. This forms a set of about 500 quasi-1D systems, as depicted in Fig. 1. Additionally, a quasi-periodic lattice along the z-direction is then ramped up, yielding a set of disordered quasi-1D systems [3,12]. 
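As an illustration of the quasi-periodic potential just described, the following sketch evaluates how the secondary lattice shifts the depths of the wells of the main lattice from site to site. Only the wavelengths are taken from the text; the secondary-lattice depth s2 is an illustrative placeholder rather than the experimental calibration.

```python
import numpy as np

# Site-energy modulation produced by superimposing the two incommensurate
# lattices described above.  s2 (secondary-lattice depth, in its own recoil
# units) is a placeholder value, not the experimental calibration.
lambda1, lambda2 = 1064e-9, 859.6e-9        # main and secondary wavelengths (m)
s2 = 2.0                                    # secondary-lattice depth (placeholder)
delta = lambda1 / lambda2                   # incommensurate ratio

# At the minima of the main lattice, z_j = j * lambda1 / 2, the secondary
# lattice shifts the well depths quasi-periodically from site to site.
j = np.arange(50)
shift = s2 * np.sin(np.pi * delta * j) ** 2  # = s2 * (1 - cos(2*pi*delta*j)) / 2

# This site-to-site modulation is what the quasi-disorder term of the
# lattice model (proportional to cos(2*pi*delta*i) n_i) encodes.
print("site-energy spread:", shift.max() - shift.min(), "(same units as s2)")
```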
Such systems are described by the disordered Bose-Hubbard Hamiltonian [8,10]

H = −J Σ_i (b†_i b_{i+1} + h.c.) + ∆ Σ_i cos(2πδi) n_i + (U/2) Σ_i n_i(n_i − 1) + α Σ_i (i − i_0)² n_i, (1)

where b†_i, b_i, and n_i = b†_i b_i are the creation, annihilation, and number operators at site i. The Hamiltonian is characterized by three main energy scales: the tunneling energy J, the quasi-disorder strength ∆, and the interaction energy U.

FIG. 1. Experimental setup. Two horizontal optical lattices provide a tight confinement forming an array of 1D vertical potential tubes for the 39K atoms with tunable interaction energy U. The vertical quasi-periodic potential is formed by superimposing two incommensurate optical lattices: the main lattice (λ1 = 1064 nm), which is related to the tunneling energy J, and the secondary one (λ2 = 859.6 nm), which is related to the disorder amplitude ∆. The harmonic trapping confinement makes the 1D systems inhomogeneous.

The tunneling rate J/h ≈ 110 Hz is set by the depth of the primary lattice with spacing d = λ1/2 = 0.532 µm. ∆ can be suitably varied by changing the depth of a weaker secondary lattice, superimposed on the primary one and having an incommensurate wavelength λ2 such that the ratio δ = λ1/λ2 = 1.238... is far from a simple fraction and mimics the potential that would be created by a truly irrational number. U can be easily controlled as well thanks to a broad Feshbach resonance [27], which allows the inter-particle scattering length a_s to be changed from about zero to large positive values. Finally, the fourth term of the Hamiltonian, which is characterized by the parameter α ≈ 0.26J, represents the harmonic trapping potential, centered around lattice site i_0. Depending on the value of U, the mean site occupancy can range from n = 2 to n = 8. More details on the experimental apparatus and procedures are given in Ref. [15]. Theoretical phase diagrams for the model (1) were obtained by numerical computation and analytical arguments [7][8][9][10] for the ideal case of zero temperature and no trapping potential. However, due to experimental constraints, the 1D quasi-condensates we actually produce are at low but finite temperatures (of the order of a few J, thus below the characteristic degeneracy temperature T_D ≈ 8J/k_B [28]). Moreover, the unavoidable trapping confinement used in the experiment makes the system inhomogeneous and limits its size. As a result, in the experimental system, different phases coexist and the sharp quantum phase transitions predicted in the thermodynamic limit are actually replaced by broad crossovers. The analysis of the next sections is mainly based on the momentum distribution P(k). Experimentally, P(k) is obtained by releasing the atomic cloud from the trapping potential and letting it expand freely for 16 ms before acquiring an absorption image. From the root-mean-square (rms) width of P(k) we get information about the coherence of the system.

III. THEORETICAL METHODS

A. Averaged momentum distribution

DMRG calculations, as described in subsections III C and III E, give access to the density profiles in the 1D tubes and to the single-particle correlation functions

g_ij = ⟨b†_i b_j⟩_T, (2)

where ⟨···⟩_T denotes the quantum-mechanical expectation value in thermal equilibrium. The corresponding momentum distributions are computed according to

P(k) ∝ |W(k)|² \overline{Σ_{i,j} e^{ik(i−j)d} g_ij}, (3)

where W(k) is the Fourier transform of the numerically computed Wannier function. For quasi-momenta k in the first Brillouin zone, W(k) can be approximated very well by an inverse parabola. The notation (···)‾ indicates the average over all tubes in the setup.
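To make the model and the observables concrete, the minimal sketch below builds Hamiltonian (1) for a small homogeneous chain (α = 0), diagonalizes it exactly, and evaluates the correlations (2) and the momentum distribution (3) in the ground state. The system size, parameter values, and the omission of the Wannier envelope |W(k)|² are illustrative simplifications; they are not the settings used in the experiment or in the DMRG computations.

```python
import numpy as np
from itertools import product

# Minimal exact-diagonalization sketch of Hamiltonian (1) for a small,
# homogeneous chain (alpha = 0).  All parameter values are illustrative.
L, N = 6, 3                       # lattice sites, bosons
J, U, Delta = 1.0, 2.0, 3.0
delta = 1.238                     # incommensurate wavelength ratio

# Fock basis: all occupation vectors with sum(n) == N
basis = [s for s in product(range(N + 1), repeat=L) if sum(s) == N]
index = {s: a for a, s in enumerate(basis)}
D = len(basis)

H = np.zeros((D, D))
for a, s in enumerate(basis):
    n = np.array(s)
    # on-site terms: quasi-periodic potential and interaction
    H[a, a] += np.sum(Delta * np.cos(2 * np.pi * delta * np.arange(L)) * n
                      + 0.5 * U * n * (n - 1))
    # hopping terms -J (b+_i b_{i+1} + h.c.)
    for i in range(L - 1):
        if s[i + 1] > 0:                          # b+_i b_{i+1}
            t = list(s); t[i] += 1; t[i + 1] -= 1
            amp = -J * np.sqrt((s[i] + 1) * s[i + 1])
            b = index[tuple(t)]
            H[b, a] += amp
            H[a, b] += amp                        # Hermitian conjugate

E, V = np.linalg.eigh(H)
gs = V[:, 0]                                      # ground state (T = 0)
print("ground-state energy:", E[0])

# one-body correlations g_ij = <b+_i b_j>, Eq. (2), in the ground state
g = np.zeros((L, L))
for a, s in enumerate(basis):
    for i in range(L):
        g[i, i] += gs[a] ** 2 * s[i]              # diagonal: site occupation
        for j in range(L):
            if i != j and s[j] > 0:
                t = list(s); t[i] += 1; t[j] -= 1
                b = index[tuple(t)]
                g[i, j] += gs[b] * gs[a] * np.sqrt((s[i] + 1) * s[j])

# momentum distribution, Eq. (3), with the Wannier factor |W(k)|^2 omitted
k = np.linspace(-np.pi, np.pi, 101)               # in units of 1/d
sites = np.arange(L)
P = np.array([np.sum(np.cos(q * (sites[:, None] - sites[None, :])) * g) for q in k])
P /= P[len(k) // 2]                               # normalize to P(k = 0) = 1
```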
B. Distribution of particles among tubes

There are several assumptions made in modeling the experimental setups. As numerical calculations and most theoretical analyses are better suited for studying lattice models, one has to derive the lattice model from the continuous Hamiltonian corresponding to the optical lattices setup. For our system, this issue is discussed in Refs. [8,10]. The experiment comprises a collection of 1D tubes modeled by Hamiltonian (1). Due to the transverse component of the harmonic trapping potential, these tubes contain different numbers of particles. The total number of particles N_tot is known with an uncertainty of 15% and the distribution of particles among the tubes is also not exactly known. In the theoretical analysis, we consider two different distributions, that we call Thomas-Fermi (TF) distribution and grand-canonical (GC) distribution, respectively. The former basically assumes that, during the ramping of the lattice potentials, particles are not redistributed among the tubes. The latter rather assumes that the system evolves until it has reached its equilibrium state and particles have correspondingly redistributed between the tubes. For the Thomas-Fermi approximation, the distribution of particles among the tubes still corresponds to the Thomas-Fermi distribution of the anisotropic 3D BEC before the ramping of the lattice potentials. Integrating the Thomas-Fermi profile along the z-direction gives a continuous 2D transverse density profile of the form

N(r_⊥) ∝ (1 − r_⊥²/R_r²)^{3/2},

where R_r = √(2µ/mω_r²) and µ = (ℏω̄/2)(15 N_tot a_s/ā)^{2/5}. Here ω_r and ω̄ are the radial and mean optical trap frequencies before the loading of the tubes, and ā = √(ℏ/mω̄) is the associated harmonic length. Inserting the experimental parameters, we obtain the relation R_r ≈ 1.9 N_tot^{1/5}. In addition, we consider the grand-canonical approach, which is well suited for calculations done with finite-T DMRG. This is also useful in the classical limit (J = 0) for which the grand partition function naturally factorizes. We choose a global chemical potential µ such that the expectation value of the total number of particles is N_tot. As the different tubes are independent of each other, the effective chemical potential µ_ν of tube ν is determined by µ and by the transverse component of the harmonic trapping potential such that

µ_ν = µ − (1/2) m ω_r² r_⊥,ν²,

where r_⊥,ν is the transverse 2D position of tube ν. Physically, this assumes that particles are redistributed between tubes when the lattice potentials are ramped up. In order to determine µ for a given total number of particles N_tot = Σ_ν N(µ_ν), we rely on data for the number of atoms N(µ_ν) in a tube for a given chemical potential of the tube. N(µ_ν) is computed numerically with finite-T DMRG or in the classical limit of the model. Contrary to the TF approach, N(µ_ν) here depends on the temperature and on all parameters of the model, in particular the interaction. As in the experiment, theoretical expectation values are averaged over all tubes. Typical N(µ_ν) relations for the trapped system are shown in Fig. 2a for the values of interaction and temperature that will be used later. The corresponding distribution of the atom numbers in the tubes is given in Fig. 2b, showing that, in comparison to the TF approximation, the GC approach favors tubes with lower fillings.
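A compact way to see the difference between the two hypotheses is to generate both tube-filling distributions for a toy N(µ) relation. In the sketch below, the trap curvature, the tube grid, and the monotonic N(µ) function are placeholders; in the actual analysis N(µ) is obtained from finite-T DMRG or from the classical (J = 0) limit of the model.

```python
import numpy as np
from scipy.optimize import brentq

# Toy comparison of the two tube-filling hypotheses of Sec. III B.
# Energies are in units of J, tube positions in units of the intertube
# spacing; all numbers are illustrative placeholders.
N_tot = 35_000
ix = np.arange(-40, 41)
X, Y = np.meshgrid(ix, ix)
r2 = (X**2 + Y**2).ravel()              # squared transverse position of each tube

# --- Thomas-Fermi hypothesis: atoms frozen to the initial 3D profile ---
def tf_distribution(R_r=25.0):
    w = np.clip(1.0 - r2 / R_r**2, 0.0, None) ** 1.5   # (1 - r^2/R_r^2)^(3/2)
    return N_tot * w / w.sum()

# --- grand-canonical hypothesis: one global chemical potential ---------
kappa = 0.01                            # transverse trap curvature (placeholder)

def n_of_mu(mu):
    """Placeholder monotonic N(mu) relation for a single tube."""
    return 5.0 * np.clip(mu, 0.0, None)

def gc_distribution(mu_global):
    mu_nu = mu_global - kappa * r2      # mu_nu = mu - (1/2) m w_r^2 r_nu^2
    return n_of_mu(mu_nu)

# fix the global mu so that the expected total atom number is N_tot
mu_star = brentq(lambda mu: gc_distribution(mu).sum() - N_tot, 0.0, 50.0)
tf, gc = tf_distribution(), gc_distribution(mu_star)
print(f"global mu = {mu_star:.2f} J")
print(f"most filled tube: TF = {tf.max():.1f} atoms, GC = {gc.max():.1f} atoms")
print(f"tubes with more than one atom: TF = {(tf > 1).sum()}, GC = {(gc > 1).sum()}")
```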
For the typical parameters of the experiment and range of temperatures found hereafter, the modification of P(k) due to a change of N_tot by ±15% is less relevant than the modification obtained by changing the assumption about the tube atom number distribution (TF or GC). Consequently, unless stated differently, we use N_tot = 35 000 in the following.

C. Phenomenological finite-T approach based on T = 0 DMRG

For a single tube, standard DMRG [18][19][20] calculations provide accurate T = 0 results for the momentum distribution. As the analysis of the full U-∆ diagram requires computations for 94 points, a systematic scan of the temperature for each point using finite-T DMRG represents a numerical challenge. In the case of the 2D Bose-Hubbard model without disorder, such an ab initio fit of the data was carried out using quantum Monte-Carlo [29]. In Ref. [15] and in Sec. IV, we pursue a phenomenological approach to capture finite-temperature effects. Since temperature is expected to induce an exponential decay of the correlations g_ij at long distances |i − j|, the idea is to first do DMRG calculations at T = 0, which are computationally cheap, and to then multiply the obtained correlators g_ij(T = 0) by e^{−|i−j|/ξ_T}. The parameter ξ_T, in the following called effective thermal correlation length, is left as the only free parameter to fit the finite-T experimental data. Specifically, we introduce the modified correlations

g̃_ij = C g_ij(T = 0) e^{−|i−j|/ξ_T}. (4)

The normalization factor C is chosen such that the corresponding momentum distribution P(k) obeys P(k = 0) = 1. In the superfluid regime, this approach is motivated by Luttinger liquid theory [30]. In this theory the correlation function behaves as

g_ij ∝ [ (1/ξ_T) / sinh(|i − j|/ξ_T) ]^{1/(2K)}, (5)

which interpolates between a power-law behavior when |i − j| ≪ ξ_T and an exponential behavior when |i − j| ≳ ξ_T. Here K is the dimensionless Luttinger parameter, which is of order one in our case. This formula is expected to be valid in the low-temperature regime with a thermal correlation length behaving as ξ_T^{−1} ∝ k_B T/(ℏu), where u is the sound velocity. In the Luttinger liquid result (5), the exponential tail at finite T is expected to depend on the particle density n/d. Hence, for inhomogeneous systems, one should rather have a site-dependent ξ_T, also varying from tube to tube. However, for the sake of simplicity, for each point in the diagram, we use a single ξ_T for all tubes and all sites. Of course, this approach is not exact and its validity depends on the temperature regime and the considered phase. It can be tested quickly on small homogeneous systems using exact diagonalization. Such a comparison shows that the phenomenological ansatz provides a sensible fit of the exact finite-T data for the range of temperatures relevant for the experiment, i.e., T ∼ J/k_B. The validity of the approach for the trapped system is discussed further in Sec. VI.

D. Exact diagonalization for homogeneous systems

For small homogeneous systems (α = 0), we use full diagonalization of the Hamiltonian (1) to obtain real-space correlations g_ij at finite temperatures. Such correlation functions typically show an exponential decay that we fit using points with relative distance ∆z ≤ 4d, 5d to obtain the total correlation length ξ(T). We use systems with various densities and sizes. Depending on the density, the system size L ranges from 8d to 13d. Because of finite-size effects, the results are useful as long as ξ(T) is sufficiently below the system size.
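The fitting procedure of Sec. III C can be summarized in a few lines. In the sketch below, the T = 0 correlations are a toy power-law stand-in for the DMRG output, and the "experimental" target is generated from the ansatz itself with a known ξ_T, so the fit simply recovers that value; with real data, the target would be the measured P(k).

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the phenomenological ansatz (4): T = 0 correlations damped by
# exp(-|i-j|/xi_T), with xi_T fitted to a target momentum distribution.
L = 60
idx = np.arange(L)
dist = np.abs(idx[:, None] - idx[None, :])
g0 = 1.0 / (1.0 + dist) ** 0.25          # toy superfluid-like T = 0 correlator
k = np.linspace(-np.pi, np.pi, 201)      # quasi-momenta, units of 1/d

def momentum_distribution(k, xi_T):
    """P(k) from the damped correlations, normalized to P(k=0) = 1."""
    g = g0 * np.exp(-dist / xi_T)
    phase = np.exp(1j * k[:, None, None] * (idx[:, None] - idx[None, :]))
    P = np.einsum('kij,ij->k', phase, g).real
    return P / P[len(k) // 2]            # index len(k)//2 corresponds to k = 0

# synthetic target with xi_T = 1.5 d, standing in for experimental data
P_exp = momentum_distribution(k, 1.5)

xi_fit, cov = curve_fit(momentum_distribution, k, P_exp, p0=[3.0])
print(f"fitted xi_T = {xi_fit[0]:.2f} d (expected 1.5 d)")
```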
E. Quasi-exact finite-T DMRG computations

Zero-temperature DMRG computations [18][19][20], as employed in the approach described above, variationally optimize a certain ansatz for the many-body quantum state, so-called matrix product states. While this only covers pure states, it can be extended to directly describe thermal states [21][22][23][24]. To this purpose, one computes a so-called purification of the thermal density matrix ρ_β = e^{−β(H−µN)}, where β = 1/k_B T. Specifically, if the system is described by a Hilbert space H, a purification |ρ_β⟩ of the density matrix is a pure state from an enlarged Hilbert space H ⊗ H_aux such that ρ_β = Tr_aux |ρ_β⟩⟨ρ_β|, i.e., such that the density matrix is obtained by tracing out the auxiliary Hilbert space H_aux from the projector |ρ_β⟩⟨ρ_β|. As the purification |ρ_β⟩ is a pure many-body state, we can make a matrix product ansatz for it and deal with it in the framework of DMRG. Noting that it is simple to write down a purification for the infinite-temperature state ρ_0 = 1, one can start the computation at infinite temperature and use imaginary-time evolution to obtain finite-T purifications. Expectation values of any observable A can then be evaluated as

⟨A⟩_T = ⟨ρ_β|A|ρ_β⟩ / ⟨ρ_β|ρ_β⟩.

IV. PHENOMENOLOGICAL ANALYSIS OF THE U-∆ COHERENCE DIAGRAM

An overview of the insulating properties of our system is provided by measurements of the momentum distribution P(k) [15]. Obtained by interpolating 94 sets of measurements, Fig. 3 shows the rms width Γ of P(k) as a function of the interaction strength U and the disorder strength ∆.

FIG. 3. Increasing ∆ induces the transition from the superfluid (SF) to the Anderson insulator (AI). In the absence of disorder (∆ = 0), increasing U leads to the superfluid-Mott insulator (MI) transition. For increasing ∆ at large interaction, according to T = 0 DMRG calculations, MI domains exist only to the right of the dashed line (i.e., U > 2∆ for large U), where they coexist with SF or Bose glass (BG) domains, respectively below and above ∆ = 2J. The diagram has been generated on the basis of 94 data points (crosses). Standard deviations of Γ are between 2% and 5%. Data taken from Ref. [15].

The plot is representative of the phase changes occurring in the system. At small disorder and interaction values, where the system is superfluid, P(k) is narrow (blue zone). At larger disorder and interaction values, P(k) progressively broadens (green, yellow, and red zones), meaning that the system is becoming more and more insulating. In particular, along the ∆ = 0 line, the diagram is consistent with the progressive formation of a Mott insulator, which, in our inhomogeneous system, coexists with a superfluid fraction. For increasing ∆ along the U = 0 line, an Anderson insulator forms above the critical value ∆ = 2J predicted by the Aubry-André model [3,31]. For finite U and ∆, we observe a reentrant insulating regime extending from small U and ∆ > 2J to large U, which surrounds a superfluid regime at moderate disorder and interaction. This shape is similar to that of the Bose glass phase found in theoretical studies of the U-∆ diagram for homogeneous systems at T = 0 [4,10,32]. The coexistence of different phases due to the trapping potential can be observed clearly in density profiles, which can be computed numerically by DMRG. For example, Fig. 4 gives the calculated density profiles for T = 0 in tubes with N = 20, 55, 96 atoms in the strong-interaction regime.
For these strong interactions and in the absence of disorder (top), the profiles show the typical wedding-cake structure, where the commensurate Mott domains (integer n) are separated by incommensurate superfluid regions (non-integer n). Adding disorder (bottom), the Mott regions progressively shrink and the smooth density profiles of the incommensurate regions become strongly irregular, as expected in the case of a Bose glass. Note that the dashed line in Fig. 3 delimits the region of the diagram where Mott-insulating domains appear at zero temperature. These domains are quantitatively defined by the condition that, in the T = 0 DMRG density profiles for the three representative tubes with N = 20, 55, 96 atoms, there are at least three consecutive sites with integer filling. The challenge of the investigation of the experimental diagram, and of its comparison with the ideal theoretical case, lies in the inhomogeneity and in the finite temperature, especially as the temperature is not directly accessible in the experiment. In the following, we first compare the experimental finite-T diagram with DMRG calculations reproducing our inhomogeneous system at T = 0. Subsequently, a phenomenological extension of the T = 0 results to finite temperatures provides a more quantitative understanding of the temperature-induced coherence loss.

FIG. 5. Theoretical rms width Γ of the momentum distribution P(k) at T = 0, averaged over all tubes. The diagram is built from 94 data points as in the experimental diagram in Fig. 3. For a few representative points, P(k) is also shown at the side of the diagram: the theoretical result for T = 0 (blue, dot-dashed) is compared to the experimental finite-T data (black, solid). Data taken from Ref. [15].

Figure 5 shows the full U-∆ coherence diagram at T = 0 in terms of the rms width Γ of P(k), together with a few distributions P(k) at representative points. The data are based on the TF hypothesis for the distribution of particles among tubes. Indeed, using the GC hypothesis would require computing all N(µ) curves across the diagram, which is rather expensive numerically. In contrast to the typical phase diagrams for homogeneous systems [10], here only crossovers between regimes occur, as different phases can coexist due to the inhomogeneity of the system. Still, Fig. 5 shows the same three main regions occurring in the experimental diagram; in particular, the strongly-correlated regime for large interaction strengths with a reentrance of the localization. However, the different ranges of the color scales reveal the quantitative difference between the theoretical T = 0 results and the experimental finite-T results in Fig. 3. In particular, for small U (left panels in Fig. 5), the numerical T = 0 momentum distributions (blue, dot-dashed curves) are considerably narrower than the experimental finite-T ones (black, solid curves). Conversely, for large U (right panels), the thermal broadening is much less relevant. In the following, we try to better understand and quantify this aspect using, first, the phenomenological approach.

B. Phenomenological approach and elementary interpretation of the coherence diagram

A natural source of broadening of the momentum distribution P(k) is the temperature, and we address its effect for the whole U-∆ diagram based on the phenomenological approach explained above in Sec. III C.
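To make the connection between the damping length ξ_T and the measured width Γ explicit, the short sketch below damps a toy T = 0 correlator with a few values of ξ_T and compares the resulting rms widths. The numbers are purely illustrative; in the analysis that follows, Γ is extracted from the experimental and phenomenological distributions.

```python
import numpy as np

# Thermal damping of a toy T = 0 correlator and the resulting rms width
# Gamma of P(k).  All numbers are illustrative.
L = 60
idx = np.arange(L)
dist = np.abs(idx[:, None] - idx[None, :])
g0 = 1.0 / (1.0 + dist) ** 0.25                 # toy superfluid-like correlator
k = np.linspace(-np.pi, np.pi, 401)             # quasi-momenta in units of 1/d

def rms_width(P, k):
    """Discrete rms width of P(k), treating P as an (unnormalized) density."""
    w = P / P.sum()
    mean = (w * k).sum()
    return np.sqrt((w * k**2).sum() - mean**2)

for xi_T in (8.0, 2.0, 1.0):                    # damping lengths in units of d
    g = g0 * np.exp(-dist / xi_T)
    P = np.array([(np.cos(q * (idx[:, None] - idx[None, :])) * g).sum() for q in k])
    P /= P[len(k) // 2]                          # normalize to P(k = 0) = 1
    print(f"xi_T = {xi_T:4.1f} d  ->  Gamma = {rms_width(P, k):.2f} (units of 1/d)")
```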
The phenomenological approach has the advantages of simplicity and of a direct connection to the described T = 0 results with TF distribution of atoms among tubes, yielding a first elementary interpretation for temperature effects. For each point in the diagram, we systematically fitted the experimental distribution P(k) with the phenomenological ansatz resulting from Eq. (4), leaving the effective thermal correlation length ξ_T as a single fit parameter. Some typical fits (red, dashed curves) are shown in side panels of Fig. 6. The main part of the figure shows the rms width Γ of the phenomenological momentum distribution across the whole U-∆ diagram. This should be compared to the experimental diagram in Fig. 3, employing the same color scale. The obtained Γ values are similar across the whole diagram, except for the large-U and small-∆ region, where the fits are not good. As explained in the next section, this discrepancy is due to the completely different thermal response of the coexisting superfluid and Mott-insulating components. A rough interpretation of the diagram is that the inverse total correlation length, denoted by ξ(T), is approximately given by the sum of the inverses of an intrinsic (T = 0) correlation length, denoted by ξ_0, and the thermal correlation length ξ_T. This is summarized by the formula

ξ^{−1}(T) ≈ ξ_0^{−1} + ξ_T^{−1}. (6)

The zero-temperature correlation length ξ_0 is finite in the localized Mott-insulating and Bose glass regimes. In homogeneous systems ξ_0 diverges in the superfluid regime and ξ(T) would then be identical to the effective thermal correlation length ξ_T. For our inhomogeneous systems, ξ_0 becomes large in the superfluid regime, but remains finite. We can interpret ξ_T as a quantification of the thermal broadening, which is obtained, according to Eq. (4), by convolving the theoretical zero-temperature momentum distribution P(k) of width 1/ξ_0 with a Lorentzian distribution of width 1/ξ_T. Depending on the point in the diagram, one may then separate the intrinsic zero-temperature and the thermal contributions to the observed broadening. Remember that both ξ_0 and ξ_T are effective correlation lengths appearing after averaging over many inhomogeneous tubes and are in principle not directly related to the correlation lengths for a homogeneous system, although they are expected to follow the same trends with interaction and disorder.

FIG. 6. U-∆ diagram for the rms width Γ of the phenomenological P(k) (red, dashed), obtained as the convolved momentum distribution (see text) that fits the experimental P(k) (black, solid). The thermal correlation length ξ_T is the fitting parameter that phenomenologically accounts for thermal effects according to the ansatz (4). The full diagram is generated by interpolation from the same U-∆ points as in Fig. 3. Data taken from Ref. [15].

The behavior of ξ_T as extracted from the fits is shown in Fig. 7 for the whole U-∆ diagram. For U < 10J, ξ_T is rather short, d ≲ ξ_T ≲ 2d, showing that thermal broadening is important for the superfluid and weakly interacting Bose glass regimes. Moreover, ξ_T does not strongly vary as a function of ∆. This shows that the overall increase of Γ with increasing ∆ in Fig. 6 is essentially due to a decrease of the intrinsic correlation length ξ_0.

FIG. 7. U-∆ diagram of the thermal correlation length ξ_T resulting from the phenomenological ansatz (4) by fitting it to the experimental momentum distribution P(k). Thermal effects are significantly more relevant for small U. Data taken from supplemental material of Ref. [15].

In this context it is important to note that, when increasing ∆, the localization length in the considered quasi-periodic model (1) can reach values much smaller than the lattice spacing d more rapidly than in the case of true random potentials [10]. This is favorable for the experiment, which then probes the strongly localized Bose glass regime. In the superfluid region, the thermal contribution to the broadening is clearly dominant and the observed small values of ξ_T correspond to a gas with short-range quantum coherence. Let us now discuss the large-U regime. There, the obtained ξ_T are significantly larger, suggesting that the strongly correlated phases are only weakly affected by finite-temperature effects. For large U in the Mott phase, ξ_0 can get much smaller than d. Here, the rms width is dominated by the intrinsic T = 0 width, as confirmed directly by the fits in the side panels of Fig. 6. Importantly, this shows that the observed reentrance of the localization in the experimental diagram is driven by interactions and disorder, and not by thermal effects.

V. EFFECT OF TEMPERATURE ON THE CORRELATION LENGTH FROM EXACT DIAGONALIZATION

As the effective thermal correlation lengths ξ_T in the phenomenological approach are found to be relatively short with respect to the system size, one can gain a first understanding of the temperature dependence of the correlation length ξ(T) from exact diagonalization calculations for small-sized systems, as described in Sec. III D. Let us stress that the validity of this analysis is limited to the regions of the phase diagram where the correlation length ξ(T) is sufficiently shorter than the considered system sizes L ∈ [8d, 12d].

A. Thermal broadening for weak interactions

FIG. 8. Inset: density dependence of 1/ξ(T). As shown in the main panel, the change of density n can be taken into account by a scaling factor such that, when ξ(T) is plotted versus k_B T/(J n^{1/2}), all curves overlap for k_B T ≳ 2J. Data taken from supplemental material of Ref. [15].

Fig. 8 shows the temperature dependence of the inverse correlation length ξ^{−1}(T) at U = 2J (superfluid regime) for several densities below n = 1. The data show a crossover from a low-temperature regime k_B T ≲ J to a high-temperature regime k_B T ≳ J. When U is not too large, a natural energy scale is set by the bandwidth 4J that controls this crossover. With exact diagonalization, we cannot investigate the low-temperature regime due to finite-size effects, but let us recall that, according to the Luttinger liquid field theory [30], a linear behavior ξ^{−1} ∼ k_B T/(Jd) is expected, with a prefactor that depends on density and interactions. In the opposite regime of high T, we are able to determine the correct scaling of the correlation length from exact diagonalization. In the range 2J ≲ k_B T ≲ 100J, which is also the range of experimental interest, the numerical results are very well fitted by the relation

ξ(T) ≈ d / arcsinh[k_B T/(c J n^{1/2})], (7)

with c = 2.50(5) being a fit parameter, valid for the relevant range of densities and interaction U = 2J. This formula is inspired by the one given in Ref. [33] for free fermions, ξ(T) ≈ d/arcsinh(k_B T/J). For high temperatures, ξ^{−1}(T) is thus logarithmic in T, corresponding to a "classical" limit of the lattice model, and is attributed to the finite bandwidth.
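The high-temperature fit just described can be reproduced mechanically. In the sketch below the data points are generated from the same arcsinh form that is being fitted (with c = 2.5 and n = 0.5), purely to demonstrate the fitting step; the placement of c and of the √n factor inside the argument follows the form quoted above and is an assumption of this sketch, not a statement about the exact-diagonalization data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the inverse correlation length with the arcsinh ansatz.
n = 0.5                                    # filling (illustrative)
T = np.linspace(2.0, 100.0, 40)            # k_B T in units of J

def inv_xi(T, c):
    """1/xi(T) in units of 1/d for the arcsinh ansatz."""
    return np.arcsinh(T / (c * np.sqrt(n)))

inv_xi_data = inv_xi(T, 2.5)               # self-generated stand-in data
c_fit, cov = curve_fit(inv_xi, T, inv_xi_data, p0=[1.0])
print(f"fitted c = {c_fit[0]:.2f} (generated with c = 2.5)")
print(f"1/xi at k_B T = 2J: {inv_xi(2.0, c_fit[0]):.2f} per d")
```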
We do not have a theoretical argument for the observed √n scaling; so it should be taken as an ansatz that describes the data for the given values of U and n, but not as a general formula. Additional computations performed in the presence of disorder (see Fig. 9) confirm the previous results of strong thermal effects for small U. For disordered systems, fluctuations of the local density and hence of the correlation functions are much more relevant. Thus, in small-sized systems, trying to fit the exponential decay of the real-space correlations proves to be difficult. We hence determine the inverse correlation length ξ^{−1}(T) from a Lorentzian fit of the momentum distribution P(k). ξ^{−1}(T) starts to increase at rather small T, showing that there is a non-negligible impact of thermal fluctuations already at low temperatures. This explains the short ξ_T observed in the analysis of the experimental data for weak interactions. It is however interesting to point out that, according to recent studies on transport properties of the same system, the broadening of P(k) with T is not accompanied by a change of the system mobility [15]. Further investigations of this persisting insulating behavior at finite T might establish a link with the many-body localization problem [16,35].

B. Quantum-normal crossover temperature for strong interaction

Let us now discuss the temperature dependence of the correlation length for strong interactions (U > 10J). As shown in Fig. 10, ξ(T) is only weakly dependent on T at low temperatures, while a relevant broadening sets in above a crossover temperature T_0. This effect can be seen clearly not only for the Mott phase, for which it occurs when the thermal energy becomes comparable with the energy gap U [17], but also for the gapless Bose glass. T_0 is here determined as the position of the maximum of the derivative of 1/ξ(T).

FIG. 10. Temperature dependence of the inverse correlation length, calculated by exact diagonalization for a strongly interacting system with U = 44J, disorder strength ∆ = 10J, and for both the commensurate case of a Mott insulator (n = 1) and the incommensurate case of a Bose glass (n = 0.46). The system lengths are L = 9d and L = 13d, respectively. Arrows indicate the crossover temperatures T_0 below which ξ^{−1}(T) is rather constant before starting to increase.

Fig. 11 shows the computed crossover temperature T_0 as a function of the disorder strength ∆ for a representative interaction strength and for both a commensurate and an incommensurate density. For the commensurate density and ∆ = 0, we obtain k_B T_0 = 0.23(6)U, which is comparable to the Mott insulator "melting" temperature k_B T ≈ 0.2U predicted for higher-dimensional systems [17]. As ∆ increases, T_0 decreases, which is consistent with a reduction of the gap due to the disorder. For the Bose glass (incommensurate density), the crossover temperature shows instead a linear increase with ∆, i.e., k_B T_0 ∝ ∆. This result, already observed in numerical simulations at small disorder strengths [34], can be intuitively justified with the following reasoning. The energies of the lowest levels that the fermionized bosons can occupy increase with the disorder strength. So the larger ∆, the higher the effective Fermi energy that sets the temperature scale for the existence of the quantum phase (the Bose glass). The exact diagonalization results confirm those obtained in Sec. IV with the phenomenological approach: we showed in Fig. 11 that, for sufficiently large ∆, ξ(T) is not significantly affected by the finite temperature. This is in agreement with the large ξ_T obtained phenomenologically (see Fig. 7 at large U). Finally, the fact that the crossover temperatures in the incommensurate and commensurate cases are different for small disorder and strong interaction clarifies why, in the phenomenological approach, the fit of the momentum distribution with a single ξ_T does not work properly in this regime, as previously mentioned. In particular, while the superfluid component broadens in the same way as it does for small U, the weakly-disordered Mott-insulating component for T < T_0 does not. As a consequence, considering a single thermal broadening leads to an overall overestimation of the derived Γ.

C. Experimental momentum width versus entropy

Since a procedure to determine the experimental temperature in a disordered system is not available, a direct comparison of theory and experiment is not possible. Nevertheless, we can give a first experimental indication of the consistency of the previous results by investigating the correlation length as a function of entropy, which we can measure as described below. In Fig. 12, we report the measured rms width Γ of the momentum distribution P(k) as a function of the entropy in the regime of strong interaction and finite disorder, where the Bose glass and the disordered Mott-insulating phases coexist. The measurement clearly shows the existence of a plateau at low entropy, before a broadening sets in, which nicely recalls the theoretical behavior of the inverse ξ(T) in Fig. 10. Assuming a monotonic increase of temperature with entropy, this experimental result supports the theoretical prediction that, for sufficiently large U and ∆, the T = 0 quantum phases can persist in the finite-T experiment. The entropy in the 1D tubes is estimated as follows. We first measure the initial entropy of the system in the 3D trap: in the BEC regime with T/T_c < 1, where T_c is the critical temperature for condensation in 3D, we use the relation S = 4 N k_B [ζ(4)/ζ(3)] (T/T_c)³, where ζ is the Riemann zeta function [36]. The reduced temperature T/T_c is estimated from the measured condensed fraction by taking into account the finite interaction energy. After slowly ramping the lattices up and setting the desired values of U and ∆, we again slowly ramp the lattices down, such that only the 3D trapping potential remains, and we again measure the entropy as just described. As an estimate of the entropy in the 1D tubes, we use the mean value of these initial and final entropies. Through variation of the waiting time, the amount of heating can be changed and we can hence obtain the rms width Γ for different entropies. The data in the experimental coherence diagram, Fig. 3, and the lowest-entropy point of Fig. 12 correspond to the shortest waiting times used. For example, the lowest-entropy point in Fig. 12 has the same rms width (Γ ≈ 0.42π/d) as the one obtained for the coherence diagram at U = 23.4J and ∆ = 6.6J.

VI. THERMOMETRY WITH FINITE-T DMRG

A standard procedure for obtaining the temperature of a quasi-condensate in a harmonic trap is to use the linear relation T = ℏ²n δp/(0.64 k_B m d) between the temperature T and the half width at half maximum δp of the Lorentzian function that fits the experimental momentum distribution [25,26]. However, so far, there exists no such formula for interacting lattice systems, either clean or disordered.
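For reference, the standard quasi-condensate relation quoted above can be written as a small function. Treating δp as the half width at half maximum of the Lorentzian fit expressed as a wavenumber is an assumption made here for dimensional consistency, and the example inputs are placeholders.

```python
import numpy as np

# Standard quasi-condensate thermometry relation, T = hbar^2 n dp / (0.64 k_B m d).
hbar = 1.054571817e-34        # J s
k_B = 1.380649e-23            # J / K
m_K39 = 39 * 1.66053907e-27   # mass of 39K (kg)
d = 0.532e-6                  # lattice spacing (m)

def quasicondensate_temperature(n, delta_p, m=m_K39):
    """n: filling per site; delta_p: Lorentzian HWHM as a wavenumber (1/m)."""
    return hbar**2 * n * delta_p / (0.64 * k_B * m * d)

# illustrative call: filling n = 3 and a HWHM of 0.1 pi/d (placeholder values)
print(quasicondensate_temperature(3.0, 0.1 * np.pi / d), "K")
```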
Here, we perform ab initio finite-T DMRG computations of P (k) to estimate T , both in the superfluid and Mott-insulating regimes. We note that in Ref. [15] we actually provided a rough estimate of the superfluid temperature, T 3J/k B . The value was obtained by inverting Eq. (7) for the approx-imate temperature dependence of the correlation length ξ(T ) and replacing ξ(T ) by the effective thermal correlation length ξ T obtained with the phenomenological approach. (According to Eq. (6), in the superfluid regime ξ(T ) ≈ ξ T as ξ 0 is considerably larger than ξ T ). In this simplified approach, the inhomogeneity of the system was taken into account by performing a local density approximation (LDA). The more precise finite-T DMRG analysis, described in the following, yields a superfluid temperature that is twice as large as the old estimate. As described in Sec. III E, using finite-T DMRG, we can perform ab initio calculations to obtain P (k). This allows for a proper thermometry of the system and also for testing the validity of the phenomenological approach. After quasi-exact simulation of the system for different temperatures, the resulting momentum distributions P (k) are compared with the experimental data to estimate the experimental temperature. We restrict the analysis to two points on the ∆ = 0 axis of the diagram: one for U = 3.5J, corresponding to the superfluid regime, and another one for U = 21J, which is deeply in the strong-interaction regime. Let us recall the general trends for the rms width Γ of P (k): Γ typically increases with the interaction strength U and the temperature T , and also when the number of particles N ν in a tube is decreased. As the momentum distribution is normalized to P (k = 0) = 1, low-filled tubes display flat tails while highly filled ones yield a more peaked momentum distribution. Lastly, one should keep in mind that the exact distribution of atoms among the tubes in the experiment is not known. We therefore study both the Thomas-Fermi (TF) and grand-canonical (GC) hypotheses as described in Sec. III B. A. In the superfluid regime In Fig 13 we show the results for the superfluid regime. To estimate the temperature from experimental data we compute several theoretical P (k) curves for different temperatures and select the one that best matches the experimental P (k). For the chosen interaction strength U = 3.5J, a good estimate for the temperature is found to be T = 5.3J/k B assuming the GC distribution for particles among tubes (bold orange curve in Fig. 13). The theoretical result matches the experimental data rather well, except for some oscillations in the tails that are due to correlated noise from the apparatus. Fig. 13 also shows the theoretical P (k) under the hypothesis of the TF distribution of the particles with temperatures T = 5.3J/k B and T = 8J/k B . The former is more peaked and hence less wider than the GC curve for the same temperature. The latter is the best fit of the experimental data under the TF hypothesis. With the TF hypothesis we thus obtain larger temperature estimates than with the GC one. This is consistent with the general dependence of P (k) on the particle number N and the particle number distributions. As shown in Fig. 2, the GC distribution has more particles in outer low-filled tubes and less in the higherfilled inner ones, when compared to the TF distribution. To show that thermal broadening is certainly relevant in the considered parameter regime, Fig. 
13 also shows the narrow P (k), obtained from T = 0 DMRG data for both the TF distribution and the GC one for T = 5.3J/k B . In Fig. 14 we report the rms width Γ of the momentum distribution P (k) as a function of temperature, as obtained by finite-T DMRG computations, for both the GC and TF distributions. It shows that, for temperature estimates, the knowledge about atom distribution is more important than the present 15% fluctuations in the number of atoms N tot . As the GC approach takes into account a possible redistribution of the atoms among tubes during the slow ramping of the lattice potentials, we consider it to be more realistic and reliable than the TF one, which, in a sense, freezes the particle distribution to that in the initial 3D trap. As already mentioned, the temperatures obtained with T -DMRG (T = 5.3J/k B with the GC approach and T = 8J/k B with the TF one) are higher with respect to the rough estimate (T 3J/k B ) presented in Ref. [15], where we performed exact diagonalization calculations with a LDA. Yet, the order of magnitude is the same. The finite-T DMRG approach is in principle much more reliable as it is basically approximation-free and takes into account the actual system sizes and trapping potentials. While exact diagonalization results, combined with LDA, do not take into account properly the system inhomogeneity, they can nevertheless easily provide the general trend of the correlation length with temperature. With the exact finite-T calculations, we can also test the phenomenological approach discussed for the full coherence diagram in Sec. IV. For both the TF and GC hypotheses, in Fig. 15, we show the data for T = 0 DMRG (blue) and for the phenomenological approach with ξ −1 T = 0.65d (red). The latter curves are compared to actual T = 5.3J/k B finite-T DMRG data under the GC hypothesis (black). The agreement is rather good for both the TF and GC distributions since the corre- sponding T = 0 curves for P (k) are already similar. It is interesting to note that, despite the inhomogeneity of the system, assuming a single effective thermal correlation length ξ T in the phenomenological approach [Eq. (4)] works nicely in the superfluid regime, where the rms width Γ is dominated by thermal broadening. While the phenomenological approach, based on T = 0 DMRG data and on the effective thermal correlation length ξ T , here yields the correct functional form for the thermal P (k), it does not allow to determine the temperature precisely. The temperature dependence of ξ T can be obtained rather well from exact diagonalization for homogeneous systems as long as T is not too low (cf. Sec. V A). However, its dependence on the atom distribution is not so easy to predict. So, for the phenomenological approach, the difficulty lies in the fact that very similar P (k) can be obtained with the two considered particle distributions at quite different temperatures as documented by the exact results in Fig. 13. B. In the Mott-insulating regime Similar comparisons are carried out for the stronginteraction regime with U = 21J. The data are shown in Figs. 16 and 17. For larger U values the momentum distributions P (k) for a single tube are typically wider. Yet, for such a tube with T = 0, the rms width is not a monotonous function of the number of particles because of the wedding cake structure. For instance, particles added to a Mott plateau in the bulk will eventually form a superfluid dome that will contribute with a narrower signal to the P (k) curve of the tube. 
Consequently, at low temperatures, this regime is more sensitive to the particle distribution than the superfluid one. This is already visible in the T = 0 data for the TF and GC hypotheses. Contrary to the superfluid regime, the matching of the theoretical curves (GC and TF) with the experimental one is not very convincing, since one cannot account equally well for the central dome and the tails of the momentum distribution at the same time. As a rough estimate for the temperature, we obtain T ≈ 2J/k_B under the GC hypothesis. As in the superfluid case, this is smaller than the value (4.6J/k_B) obtained under the TF hypothesis. In any case, experimental temperatures in the Mott regime are apparently lower than those in the superfluid regime. The discrepancy between theory and experiment should be mainly due to thermalization issues. In the inhomogeneous system, experimental temperatures could vary spatially, since the insulating components, which are less susceptible to heating because of the Mott gap, do not thermalize with the superfluid components [37,38]. As done in the superfluid case, we can again use finite-T DMRG to test the phenomenological approach (cf. Fig. 17). The phenomenological ansatz for the momentum distribution, corresponding to Eq. (4), is fitted to exact DMRG data for T = 2J/k_B. The effective thermal correlation lengths ξ_T are chosen to best fit the central dome of the exact curve, although this results in considerable deviations in the tails. Such deviations are however in agreement with the fact that, as already explained in Sec. V B, the commensurate component of the Mott insulator thermally broadens less than the incommensurate superfluid one, leading to an overestimation in the phenomenological broadening of the tails. An additional complication originates from the fact that finite-size systems are more sensitive to temperature. At T = 2J/k_B, the shortest Mott plateaus, such as those shown in Fig. 18, have almost completely melted, despite the fact that the aforementioned estimate T ≈ 0.2U/k_B for the melting temperature yields 4J/k_B at this interaction strength. This means that the T = 0 correlation functions, employed for the phenomenological approach, differ qualitatively from the actual finite-T correlations.

VII. EXPERIMENTAL U-∆ ENTROPY DIAGRAM

Thermometry on the basis of finite-T DMRG in principle also allows the system temperature to be determined in the presence of disorder. However, to get reliable temperature estimates, one should ensure that the experimental system is in thermal equilibrium. As discussed previously, thermalization is hampered by localization in the Mott insulator and Bose glass phases. In the absence of a straightforward thermometry procedure for the full diagram, we estimate the experimental entropy, according to the procedure described in Sec. V C, to provide an indication for the temperature changes with respect to the temperature estimates obtained for the clean case (∆ = 0). Fig. 19 shows the entropy S of the system across the U-∆ diagram. We observe that S is quite independent of ∆ and displays an overall increase towards small U, which is presumably due to a reduced adiabaticity in the preparation of the 1D systems for weak interactions.

FIG. 19. U-∆ diagram for experimental estimates of the entropy per particle, S/(N k_B). The white crosses show the data points from which the 2D diagram was generated by interpolation. Data taken from supplemental material of Ref. [15].

This result is in agreement with the fact that the temperature estimated for the Mott-insulating regime is smaller than the one found for the superfluid in Sec. VI. Moreover, the measurement suggests that an increase of disorder is likely not accompanied by an increase of temperature.

VIII. CONCLUSIONS

The behavior of quantum matter in the presence of disorder and interaction is a very complex subject, especially when one studies experimental systems which, besides being inhomogeneous due to the trap confinement, are necessarily at finite temperature. Starting from a recent study on the quantum phases observed in 1D bosonic disordered systems [15], in this paper we provided a careful examination of the effects of finite temperature. To this purpose, two different DMRG schemes have been employed: (i) a direct simulation of the thermal density matrix in the form of a matrix product purification, and (ii) a less costly phenomenological method based on DMRG ground-state data that are extended to finite temperatures by introducing an effective thermal correlation length. This analysis of our inhomogeneous system is corroborated by exact diagonalization studies for small-sized systems without trapping potential. While in the weak-interaction regime thermal effects can be rather strong, they are significantly less relevant in the strong-interaction one. There, the scaling of the correlation length with T shows a weak dependence below a crossover temperature, indicating that the strongly-correlated quantum phases predicted by the T = 0 theory can persist at the finite temperatures of our experiment. Furthermore, by using quasi-exact finite-T DMRG simulations, we provided a temperature estimate for a superfluid in a lattice, the main source of uncertainty being the actual distribution of atoms among the several quasi-1D systems in the experiment. Experimentally, a possible way to reduce this uncertainty is to use a flat-top beam shaper providing homogeneous trapped systems [39][40][41][42]. The latter modification would for example also allow for a better discrimination of the features of the Bose glass and the Mott insulator in the strong-interaction regime. In the insulating regimes, the Mott insulator and the Bose glass, experimental thermalization issues prevent precise temperature estimates. A mixture with an atomic species in a selective potential [43] working as a thermal bath could be employed to guarantee thermalization of the species under investigation. Another open question is whether the persistence of the insulating behavior for the disordered system with weak interactions could be related to the proposed many-body localization phenomenon [16,35].
Timing of Fistula Creation and the Probability of Catheter-Free Use: A Cohort Study Background: Fistula creation is recommended to avoid the use of central venous catheters for hemodialysis. The extent to which timing of fistula creation minimizes catheter use is unclear. Objective: To compare patient outcomes of 2 fistula creation strategies: fistula attempt prior to the initiation of dialysis (“predialysis”) or fistula attempt after starting dialysis (“postinitiation”). Design: Cohort study. Setting: Five Canadian dialysis programs. Patients: Patients who started hemodialysis between 2004 and 2012, who underwent fistula creation, and were tracked in the Dialysis Measurement Analysis and Reporting (DMAR) system. Measurements: Catheter-free fistula use within 1 year of hemodialysis start, probability of catheter-free fistula use during follow-up, and rates of access-related procedures. Methods: Retrospective data analysis: logistic regression; negative binomial regression. Results: Five hundred and eight patients had fistula attempts predialysis and 583 postinitiation. At 1 year, 80% of those with predialysis attempts achieved catheter-free use compared to 45% with post-initiation attempts (adjusted odds ratio [OR]preVSpost = 4.67; 95% confidence interval [CI] = 3.28-6.66). The average of all patient follow-up time spent catheter-free was 63% and 28%, respectively (probability of use per unit time, ORpreVSpost = 2.90; 95% CI = 2.18-3.85). This finding was attenuated when accounting for maturation time and when restricting the analysis to those who achieved catheter-free use. Predialysis fistula attempts were associated with lower procedure rates after dialysis initiation—1.61 procedures per person-year compared with 2.55—but had 0.65 more procedures per person prior to starting dialysis. Limitations: Observational design, unknown indication for predialysis and postinitiation fistula creation, and unknown reasons for prolonged catheter use. Conclusions: Predialysis fistula attempts were associated with a higher probability of catheter-free use and remaining catheter-free over time, and also resulted in fewer procedures compared with postinitiation attempts, which could be due to timing of attempt or patient factors. Catheter use and procedures were still common for all patients, regardless of the timing of fistula creation. Introduction Arteriovenous fistulas ("fistulas") are considered the preferred form of vascular access for hemodialysis due to lower rates of complications and mortality compared with central venous catheters ("catheters"). [1][2][3][4] The Canadian, American, and European guidelines recommend the creation of a fistula prior to the initiation of hemodialysis to minimize exposure to catheters. [5][6][7] The advantage of predialysis fistula attempts is that they allow time for the fistula to mature so the access is ready for use at the start of hemodialysis. 8 However, as many as 20% of people who have a fistula created prior to the initiation of hemodialysis will not use it due to death or lack of progression of their chronic kidney disease. 9 Many predialysis fistula attempts fail and patients start hemodialysis with a catheter or their fistula fails over time, requiring a second attempt and/or supplemental catheter use. 10 Finally, many urgent dialysis starts have no opportunity to create a predialysis fistula, and half of all fistula attempts occur after starting hemodialysis. 
11 The extent to which predialysis and postinitiation fistula creation minimizes catheter use and their attendant complications is unknown. Insufficient information is available to prepare patients for their hemodialysis experience when choosing a fistula as their vascular access modality, including fistulas created after dialysis initiation. We sought to describe the outcomes of patients who underwent predialysis fistula creation compared to those who had an attempt after the start of dialysis, with respect to the probabilities of achieving independent fistula use, remaining catheter-free over time, and the rate of accessrelated procedures. We hypothesized that a predialysis fistula attempt would be associated with more favorable outcomes, but anticipated high catheter use regardless of the timing of fistula creation. This exploration into 2 different patterns of care and their consequences (as opposed to fistula patency and functionality) may help guide clinical decision-making, inform the patient-physician conversation regarding modality choice, and help to set realistic expectations. Patient Population This study included incident hemodialysis patients between January 1, 2004, and May 31, 2012, aged 18+ years, who received at least one fistula attempt. Focusing on fistula creation strategies, we excluded patients who started on, or transitioned to, peritoneal dialysis (PD) within 6 months and those who started dialysis with an arteriovenous graft. To increase generalizability, we also excluded patients with a life expectancy of less than 1 year due to metastatic cancer or other terminal illnesses, based on a review of the patient record. Data Source We conducted a retrospective cohort study using data from the Dialysis Measurement Analysis and Reporting (DMAR) system collected while it was operational in 5 Canadian dialysis programs (Southern Alberta Renal Program, Manitoba Renal Program, Sunnybrook Health Sciences Centre, London Health Sciences Center, and The Ottawa Hospital). Detailed baseline data were captured including demographics, comorbidities, laboratory values, and predialysis care. Accessrelated procedures and changes in patient status were collected longitudinally. All information was entered by trained staff and double-reviewed by experts to ensure accuracy and consistency in coding. We followed participants from dialysis start until the first of transplant, recovery of kidney function, transfer out of the dialysis program, transfer to PD (after 6 months), death, or the end of follow-up (August 31, 2012). Predictor: Timing of Fistula Creation The main predictor of interest was timing of fistula creation. Patients were categorized based on whether their first fistula creation attempt occurred before or after initiation of dialysis ("predialysis" or "postinitiation" group). Primary Outcome: Catheter-Free Use The primary outcome for this study was catheter-free use, defined as independent use of a fistula for hemodialysis (ie, without a catheter in place). We measured catheter-free use in 2 ways: (1) whether catheter-free use of a fistula was achieved-cumulative incidence of use over time and binary use (yes/no) by 1 year in patients with 1 year of follow-up, and (2) the probability that the fistula was used catheter-free during each day of follow-up. We recorded movement of patients in and out of periods of catheter-free use by tracking catheter insertions and removals. We allowed multiple fistula attempts when calculating catheter-free use. 
For example, if a patient's predialysis fistula failed but they received a second attempt, any catheter-free use of the second fistula was included. Secondary Outcome: Access-Related Procedures We analyzed the rate of access-related procedures from the start date of dialysis. Procedures were then subcategorized as catheter-related or fistula-related. We defined access-related procedure rates as the number of procedures per person-year of follow-up from dialysis start. Because fistula patency is not assessed until after commencing dialysis, we did not include interventions occurring prior to dialysis in rate calculations. We did present the counts of those procedures separately, to study the procedural burden experienced by patients who received a predialysis fistula. The initial catheter insertions or fistula creations required for starting hemodialysis were considered predialysis procedures (ie, all patients had at least one predialysis procedure). Surgical explorations prior to fistula creation may have occurred before dialysis start in the postinitiation group. Statistical Analysis We summarized patient characteristics by the timing of fistula creation using standard methods (eg, means and percentages), as appropriate. We described the crude probability of catheter-free use and proportion of time spent catheter-free. We looked at the cumulative incidence of achieving catheter-free use treating death, transplant, recovery of kidney function, and starting PD as competing risks. In this survival analysis, we censored observations that were still event free at study end or when the study participant was transferred to another program. We used the Fine and Gray model to estimate subhazard ratios (SHRs) for initial catheter-free use. We also studied the probability of achieving catheter-free use of a fistula by 1 year after hemodialysis start in participants who were still under observation at 1 year. We used binary logistic regression to estimate odds ratio (OR) of use at 1 year. We then analyzed the probability of catheter-free use for each day of follow-up using logistic regression of repeated Bernoulli trials within subject, using robust variance methods to account for the within-subject dependencies. Crude procedure rates and counts with 95% confidence intervals (CIs) were described using Poisson regression. To obtain incidence rate ratios (IRRs), we modeled procedure rates using negative binomial regression to account for overdispersion. We adjusted all regression models for age, sex, a history of diabetes mellitus, or cardiovascular disease (including coronary artery disease, congestive heart failure, cerebrovascular disease, and peripheral vascular disease), and whether the patient started dialysis as an inpatient. We also assessed for confounding effects of body mass index (BMI), cancer, estimated glomerular filtration rate (eGFR), starting in an intensive care unit, length of predialysis care, and anatomical location of first fistula creation attempt. Sensitivity and Subgroup Analyses We conducted several subgroup and sensitivity analyses to create more comparable groups to explain why differences may have occurred. We repeated the analysis of the proportion of time spent catheter-free starting follow-up from the date of fistula attempt (postinitiation group), and again from 3 months after the attempt (both groups) to allow for fistula maturation before starting the clock. 
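The two regression models at the core of the Statistical Analysis section can be sketched as follows, using hypothetical column names for a one-row-per-patient data set (this is not the DMAR data model or its actual variable coding); the Fine and Gray and repeated-Bernoulli analyses are omitted from this sketch.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical cohort file; column names are illustrative placeholders.
df = pd.read_csv("cohort.csv")

# (1) Odds of catheter-free fistula use by 1 year, predialysis vs postinitiation,
#     adjusted for age, sex, diabetes, cardiovascular disease, and inpatient start.
logit_fit = smf.logit(
    "cathfree_1yr ~ predialysis + age + male + diabetes + cvd + inpatient_start",
    data=df,
).fit()
print(logit_fit.summary())   # exp(coef) of `predialysis` is the adjusted OR

# (2) Access-related procedure rates after dialysis start: negative binomial
#     regression with person-years of follow-up as the exposure.
nb_fit = smf.glm(
    "procedures ~ predialysis + age + male + diabetes + cvd + inpatient_start",
    data=df,
    family=sm.families.NegativeBinomial(),
    exposure=df["person_years"],
).fit()
print(nb_fit.summary())      # exp(coef) of `predialysis` is the adjusted IRR
```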
Research ethics approval and waiver of patient consent were obtained from each of the 5 participating programs.

Results

A total of 1091 patients with a fistula attempt met the criteria for inclusion in the study (Figure 1). Five hundred and eight participants had predialysis fistula attempts, while 583 had postinitiation attempts. The predialysis group was, on average, 3 years older, had a higher BMI, and had longer predialysis care (see Table 1). The postinitiation group had more than double the inpatient starts, including a higher percentage of starts in an intensive care unit. The 2 groups did not differ by sex, eGFR at dialysis start, anatomical location of their first fistula creation attempt, or the presence of diabetes mellitus, cardiovascular disease, or cancer. Median patient follow-up time was nearly 2 years and was 4 months shorter for the predialysis group. Predialysis fistula attempts occurred a median of 5 months (median = 4.7, interquartile range [IQR] = 2.3-11.7) prior to starting dialysis. Postinitiation fistula attempts occurred a median of 3 months (median = 3.3, IQR = 1.8-6.2) after starting dialysis. Reaching the end of the study period was the most common reason for termination of follow-up (60%), followed by death (25%), receipt of a kidney transplant (7%), transfer to another program (5%), switch to PD (2%), and recovery of kidney function (1%), and these proportions did not differ by group.

Catheter-Free Use

Sixty-four percent of patients achieved catheter-free use at some point during follow-up (81% of the predialysis group and 50% of the postinitiation group). Of those who achieved catheter-free use, the median time from dialysis start to use was zero months for the predialysis group (median = 0, IQR = 0-0), meaning that if fistula use was achieved, most patients started dialysis using their fistula, and 9 months (median = 9.3, IQR = 6.5-13.3) for the postinitiation group. Figure 2 shows the cumulative incidence of patients achieving catheter-free use over time. At 6, 12, and 24 months, the probabilities of use were 76%, 79%, and 82% for the predialysis group, and 10%, 37%, and 54% for the postinitiation group, respectively (SHR pre vs post = 3.08; 95% CI = 2.62-3.62). On average, the predialysis group spent 63% of their follow-up time catheter-free, compared with 28% in the postinitiation group (Figure 3). The modeled daily probability of catheter-free use was almost 3 times greater for the predialysis group (OR pre vs post = 2.90; CI = 2.18-3.85). This effect was attenuated when allowing up to 3 months for fistula maturation (OR pre vs post = 1.78; CI = 1.30-2.44), but remained significant. A similar attenuation occurred when comparing only those who achieved catheter-free use (OR pre vs post = 2.06; CI = 1.51-2.82).
The association between predialysis fistula attempts and greater proportions of time spent catheter-free was no longer significant when restricted to those who achieved catheter-free use and had at least 3 months for maturation (80% for the predialysis group and 73% for the postinitiation group; OR pre vs post = 1.10; CI = 0.74-1.62).

Figure 2 note: The x-axis is the time (in months) from starting dialysis to the first date of catheter-free fistula use. Up to 3 fistula attempts were included in the analysis. Death, transplant, recovery of kidney function, and starting peritoneal dialysis were treated as competing risks (the patient is retained in the risk set and assumed never to achieve catheter-free use). Reaching the end of the study period or being transferred to a different program was treated as a censoring event (the patient is removed from the risk set and may still achieve catheter-free use at some future time).

Access-Related Procedures

On average, the predialysis group had 1.61 procedures per person-year after the start of dialysis, while the postinitiation group had 2.55 procedures per person-year (see Table 2). A predialysis fistula attempt was associated with lower overall procedure rates (IRR pre vs post = 0.65, CI = 0.58-0.73), catheter-related procedure rates (IRR pre vs post = 0.62, CI = 0.52-0.74), and fistula-related procedure rates (IRR pre vs post = 0.68, CI = 0.60-0.77). Considering the procedures prior to dialysis, the predialysis fistula group received on average 0.65 more procedures per person than the postinitiation group (1.65 vs 1.00).

Table 2 note: Crude Poisson rates per person-year with 95% CIs. IRRs are from negative binomial regression models, adjusting for age, gender, diabetes, cardiovascular disease, and inpatient dialysis start. Crude Poisson counts per person are presented with 95% CIs. IRR = incidence rate ratio; CI = confidence interval.

Other Analyses

Restricting the cohort to those who initiated dialysis as outpatients or who had 4 months of predialysis care had no significant effect on our findings (ORs remained favorable to the predialysis group). We found no evidence of confounding for BMI, cancer, eGFR at dialysis initiation, starting in an intensive care unit, length of predialysis care, and location of the first fistula creation attempt.

Discussion

Patients who received postinitiation fistula attempts had a lower probability of catheter-free use compared with patients who had predialysis fistula attempts. After a postinitiation fistula attempt, patients had a 50% to 54% chance of achieving catheter-free use and spent less than a third of their dialysis time catheter-free. While timing of the attempt may be important, the observed differences between these 2 strategies are likely also due to a combination of patient factors and the time required for fistula maturation. The increased exposure to catheters in those with a postinitiation fistula attempt is likely the result of the delay in fistula creation and inherent differences in patients attempting a fistula prior to dialysis. The additional analyses we conducted clarify this issue to the extent possible. Across time, the likelihood of achieving catheter-free use improved, but remained lower for the postinitiation group by 20% to 30%. Allowing up to 3 months for fistula maturation conservatively increased the proportion of time spent catheter-free, suggesting only some of the overall differences can be attributed to the timing of the fistula attempt.
Restricting the analysis to only those who achieved catheter-free use also attenuated the difference between groups, suggesting that unmeasured patient factors that prevent a fistula from successfully maturing partially explain our findings. When both patient factors and timing were accounted for, the difference in the probability of catheter-free use over time was no longer significant, suggesting there may be nothing fundamentally superior about a predialysis attempt. However, we cannot rule out an effect of the timing of fistula creation. Counter to expectations, 12 restricting the analyses to those who started dialysis as outpatients or with predialysis care did not reduce the relative difference between the predialysis and postinitiation groups; therefore, our findings were not explained by the different number of urgent starts. Few research studies have used catheter-free use to measure fistula success. If the main reason to attempt fistula creation is to avoid catheter-related complications, then the proportion of time on dialysis spent without a catheter in place is an important outcome measure. 11 Yet, research in this area remains largely focused on measures such as patency rates, 13 which are arguably less patient-oriented and meaningful than catheter-free days. This study provides unique insight into the burden of catheter use with different fistula creation strategies. Our findings suggest a predialysis attempt is a superior strategy in this regard. However, if a predialysis fistula is not an option (eg, urgent starts), our study describes the limited extent to which a postinitiation attempt mitigates catheter use. Furthermore, even predialysis fistula creations had a relatively low proportion of time spent catheter-free: about two-thirds of follow-up time. When we restricted our analysis to those who achieved catheter-free use, the groups were more similar. Much of the prior literature supporting fistulas is based on comparing patients with functioning fistulas with patients with other forms of access in place. 4 It appears the outcomes of patients who are able to achieve a functioning fistula are better. Unfortunately, there is a sizable risk that a fistula will never function, and we cannot reliably identify patients in whom fistula attempts are more likely to succeed. Fistulas have been described as having advantages in terms of decreased patient morbidity and increased patient survival rates. 2,4 However, the benefits of fistulas over other forms of access have yet to be definitively established in controlled clinical trials. We believe this is a necessary step before firm conclusions can be made. 14 If the superiority of fistulas is confirmed in clinical trials, deciding on the timing of a fistula attempt is still not straightforward. Prior research shows that 20% of patients with predialysis attempts never start dialysis due to death or nonprogression of their kidney disease. 9 Simulated data are conflicting with regard to the effect of the timing of fistula creation on life expectancy, and certain populations may benefit from different creation strategies. 15,16 The potential benefits of fistulas, including our findings in terms of catheter-free use, should be considered alongside this other information when patients and providers make decisions about the choice of vascular access and timing of access creation. Our study found relatively high procedure rates, regardless of the timing of fistula creation.
Patients have identified hemodialysis access-related complications and procedures as research priorities, 17 as they often experience "unpreparedness" and "insecurity" regarding the complications they will face. 18 Being fully informed may improve patient confidence and their ability to cope. 18 Our findings suggest patients who receive a fistula can expect an average of 2 procedures per year, in addition to procedures prior to starting dialysis (ie, the fistula creation or catheter placement to initiate hemodialysis). An earlier predialysis fistula attempt may further increase the number of procedures a patient experiences before dialysis. 19 A systematic review showed that 40% of fistulas require at least one intervention within their first year, and fistula performance appears to decline over time. 10 This information can help patients set realistic expectations when they choose a particular vascular access strategy.

The primary strengths of our study are the granularity and the quality of the data provided by the DMAR system. Data collection occurred prospectively at 5 large dialysis programs, reflecting diversity in practice, and underwent review to ensure data accuracy. Information such as comorbidities was referenced from a source document for consistent definitions. Detailed information on procedure types, indications, and dates was collected, allowing for a longitudinal picture of the whole course of dialysis therapy. The primary limitation of our study is its observational design. This limitation affects any causal inferences drawn from our study (ie, superiority of predialysis fistulas), but not the description of our outcome measures (ie, time spent catheter-free). Selection bias has been shown to influence the causal results of observational studies in this patient population. 20,21 There may be differences between the type/course of renal disease, or patient characteristics, such as vessel size, for candidates who undergo a predialysis versus postinitiation fistula attempt that were not accounted for in our study design. The sensitivity and subgroup analyses we conducted attempted to parse out this bias. However, limited follow-up times may have influenced our results in ways not accounted for by the data. In addition, the fact that a smaller proportion of dialysis patients undergo fistula creation in the participating dialysis programs compared with other jurisdictions worldwide may influence the observed results; this pattern is representative of current Canadian practice, and if these centers are more selective when referring patients for a fistula attempt, the results likely present a more favorable picture. Certain clinical factors such as fistula maturation were not tracked, and our method of data collection precluded an in-depth investigation into the causes of catheter use (eg, delayed maturation, failed cannulation, or patient wishes). Nonetheless, the quality and the granularity of detail regarding vascular access collected during this time period make the data uniquely valuable for addressing certain research questions.

In conclusion, predialysis fistula attempts are associated with a higher probability of catheter-free use and fewer procedures compared with postinitiation attempts, which can likely be attributed to both timing and patient factors. However, catheter use and procedures are still common in both groups. These findings can be used to guide the discussion between patient and provider when selecting an access strategy.

Author Contributions

contributed to data analysis/interpretation; A.C.
and P.R. contributed to statistical analysis; R.R.Q. and P.R. contributed to supervision or mentorship. Each author contributed important intellectual content during article drafting or revision and accepted accountability for the overall work by ensuring that questions pertaining to the accuracy or integrity of any portion of the work are appropriately investigated and resolved.

Ethics Approval and Consent to Participate

Ethics approval was obtained separately at all participating sites.

Consent for Publication

All authors have reviewed a final version of the manuscript and have consented to publication.

Availability of Data and Materials

Data and materials cannot be made publicly available due to restrictions on their disclosure and use.

Declaration of Conflicting Interests

The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Drs Oliver and Quinn disclose that they are co-inventors of the Dialysis Measurement Analysis and Reporting system.
Culture Shock and Coping Mechanisms of International Korean Students: A Qualitative Study

International students bring academic, cultural, and economic value to universities around the world. However, adjustment can be difficult for these students as a result of culture shock, sometimes resulting in early exit from the university. In order to help inform university personnel on how to better assist international students, this study examines the interpersonal, psychological, and physiological symptoms of culture shock of three Korean international graduate students at a large public university in the southwest United States. Data were collected through three interviews and seven weekly online journals. The findings uncovered the existence of culture shock for each of the three participants to differing degrees at various times throughout the semester. In particular, the participants displayed a higher incidence of interpersonal and psychological culture shock symptoms than physiological ones, showing strong support for theories that conceptualize culture shock as individualized in nature. In addition, the data revealed that personal characteristics, family, religion, and exercise all played a role in the participants' abilities to cope with culture shock. The results of this study could help universities better understand and support international students, ensuring that the university can benefit from the unique value these students bring to campus.

Introduction

According to the Institute of International Education (2019), 1,095,299 international students were studying in the United States during the 2018/2019 school year. While international students bring unique cultural knowledge, they are also exposed to the often contrasting cultural norms of the United States and, in particular, the culture of universities. Culture is an essential variable that must be considered in the education process for international students in the United States. The "multiple demands for adjustment that individuals experience at the cognitive, behavioral, emotional, social, and physiological levels, when they relocate to another culture" (Chapdelaine & Alexitch, 2004, p. 168) are undeniably critical for the adjustment of international students and often elicit initial debilitating intrapersonal and interpersonal issues. International students from South Korea, in particular, made up 5% of the international student population in the United States in 2018/2019, making them the third-largest group behind China and India. South Korean students are an important part of the international student population, bringing academic, cultural, and economic value to the universities they attend. However, the transition from home in South Korea to universities in the United States is not always easy, with many of these students experiencing culture shock upon arrival. While a substantial amount of literature exists regarding culture shock in general, more empirical research is needed on the factors affecting newly arrived Korean students at colleges and universities in the United States. As international students from Korea constitute a large group studying in the United States, there is an urgent need for a better understanding of the variables associated with culture shock to help ease Korean students' transition to academic life in a foreign country.

Theoretical framework

Scholars typically define the phenomenon of culture shock using four conceptual models.
The first, and by far the most prominent, is the recuperation model. This model incorporates recovery both from the physical symptoms associated with this condition and from the psychological ones triggered by identity crises. Lysgaard's (1954) famous U-shaped curve is illustrative of the process, representing the transition from initial positive feelings about the host culture, to negative ones sparked by cultural dissonance and language problems, and finally a return to a "high" of cultural acceptance and adaptation. Similar to Lysgaard's (1954) U-shaped model is Gullahorn and Gullahorn's (1963) W-shaped model. In essence, Gullahorn and Gullahorn's model is a double U reflecting the stages of excitement at the prospect of returning home, followed by the re-entry shock of encountering family and friends who may not understand or appreciate the sojourner's changed identity, and finally re-integration with family, friends, and the culture as a whole, signifying a realization of the positive and negative aspects of both countries. The second model, the learning model, is based on the conception of culture shock as a learning process (Anderson, 1994). Initially, international students are held to be ignorant of their unconscious assumptions about life as well as of the norms of the host culture. In response, they must learn the sociocultural skills necessary for adjustment and integration by increasing their cultural awareness. As such, the U-shaped curve is eschewed in favor of a gradually upward sloping learning curve. The third model, the journey model, treads a middle ground between the first two models as it conceives of culture shock in linear terms as a transitional experience that is symptomatic of both recovery and learning (Adler, 1975; Anderson, 1994; Ito, 2003). This phenomenological journey conceptualization portrays the psychological adjustments that international students engage with over time. It follows a methodical progression from the periphery of the host culture to the center, as well as from rejection and unawareness to acceptance and understanding. By resolving their feelings of cognitive dissonance, these individuals develop cultural sensitivity as they move from early "ethnocentrism" to full "ethnorelativism" (Bennett, 1986). The final model, the equilibrium model, is a dynamic, mechanical, and cyclical one based on the contention that individuals suffering culture shock are in disequilibrium, and that their reactions to the host culture show a desire to return to balance (Anderson, 1994). To achieve homeostasis, international students must adequately adjust to the new cultural demands by achieving a satisfying level of functioning in terms of their behavior, environment, and frame of reference (Grove & Torbiorn, 1985). As Anderson (1994) points out, each of these models has its shortcomings, as each presents only a partial conceptualization of culture shock. While individually each may provide one or more pieces to the puzzle, their theoretical isolation from other equally valid models prevents the construction of a cohesive impression of this phenomenon that examines the social, behavioral, cognitive, emotional, and physiological aspects jointly.
This disconnection reveals a need for either a conceptualization of culture shock that embodies the many features of the existing models and fully actualizes the multi-faceted nature of this phenomenon or, more likely, a recognition of the individualized nature of culture shock, such as the one called for by Fitzpatrick (2017), that cannot be captured by a strict model.

Literature review

International students, as opposed to expatriates who move for family or work, present a unique side of culture shock in that they not only have to acculturate to the host country but also, and perhaps more importantly, adapt to academic life on a foreign campus (Alsahafi & Shin, 2017; Choi, 2006; Hwang, Martitosyan, & Moore, 2016; Mesidor & Sly, 2016; Sato & Hodge, 2009; Zhang, 2016). Inspired by her own experience as an international student, Choi (2006) found six major difficulties faced by international students: insufficient language proficiency, different cultural knowledge, mismatch between needs and the program, lack of faculty support, stress, and institutional inflexibility. Other researchers have reported similar findings, such as Sato and Hodge's (2009) four categories: language differences, academic plight, positive/negative relationships, and emerging self-awareness. Language proficiency has been found to be one of the top difficulties for international students in university settings. De Araujo's (2011) literature review found English proficiency to be the top issue for international student adjustment. While universities often require language proficiency tests for admission, Kuo (2011) found that these tests may not adequately measure international students' preparedness and that students still struggle with listening comprehension and oral expression. However, there is not full consensus that English proficiency is the largest obstacle to international student adjustment. Zhang and Goodson (2011) conducted a systematic literature review of 64 quantitative studies focused on predictive factors of cultural adjustment for international students. The review revealed that stress and social support were stronger predictors than English proficiency. Andrade's (2009) study also challenges English proficiency as a major issue affecting the academic and social adjustment of international students. The study reported that neither international students nor their professors felt English language proficiency presented major difficulties with studies, though it should be noted that the university in the study had an international population of about 50% and may have catered its instruction to accommodate this large population group. Recent studies on culture shock have continued to examine the unique challenges faced by international students but have also expanded to include the coping mechanisms employed to help overcome these challenges. Park, Lee, Choi, and Zepernick (2016) found that successful adjustment was associated with coping strategies such as changes in personal problem solving, social support, mentoring relationships, religious beliefs, and the use of campus services. The researchers also suggested that a difference may exist between married and non-married sojourners facing culture shock. Alsahafi and Shin (2017) added improving language proficiency, time management, and mixing with others to the list of coping strategies.
While some universities may be taking action to help international students with culture shock, Presbitero (2016) noted that many students do not engage with campus or medical services and suggested that universities be more proactive in encouraging participation in services that reduce the effects of culture shock. While progress has been made in understanding the coping mechanisms for culture shock, adjustment to a new culture is not guaranteed, and not all international students succeed in coping with university life. While the cultural distance between the home and host cultures may be a factor (see, for example, Mumford, 2000), Fitzpatrick (2017) suggests success in overcoming culture shock may be tied more to the individual and the context than to culture. Newsome and Cooper (2016) documented the cultural and social experiences of eighteen international graduate students studying in Britain. By the end of the study, five participants had a positive adjustment, ten had a partial adjustment, and three were unable to adjust and dropped out. Studies such as Newsome and Cooper's present an argument that universities need to better understand both the symptoms of culture shock and the coping mechanisms utilized to successfully overcome them. Despite the large number of Korean students studying abroad, only a few studies have specifically examined the cultural adjustment issues these students face in American universities. Chun and Poole (2009) found that Koreans studying in a graduate social work program in the United States experienced difficulties in five areas: academic problems, financial difficulties, cultural barriers, psychological problems, and family concerns. They further found that several physiological, psychological, and social coping strategies aided the students in managing the difficulties they faced. Lee and Carrasquillo (2006) noted gaps between the perceptions of professors in the United States and their international Korean students regarding the role of the professor, sources of knowledge, and preferred classroom strategies. Though these studies begin to illuminate both the difficulties that Korean students face and effective coping strategies for managing these difficulties, additional work is needed to understand how these factors contribute to individual experiences of culture shock. To address this gap, the current study provides an overview of and insight into the following research questions:

1. What interpersonal, psychological, and physiological symptoms of culture shock are experienced by newly arrived Korean graduate students?
2. How do newly arrived Korean graduate students cope with these symptoms of culture shock?

Context and participants

This study was conducted at a large public university in the southwest United States. As of the 2017 academic year, the student population was over 66,000, with about 68% of students identifying as white. Approximately 9% of the student population were international students. The sampling strategy used in this study was non-random and purposeful. Other than the requirement that the participants be first-semester Korean students at the university, no restrictions were placed upon involvement in the study. The three international students from Korea selected for this study were all newly enrolled students. They were all male master's degree students of roughly the same age (27, 30, and 32). Table 1 below provides a snapshot of several demographic factors related to the three participants.
Research design

The research design was a basic or generic qualitative study, which according to Merriam (1998) "seek[s] to discover and understand a phenomenon, a process, or the perspectives and worldviews of the people involved" (p. 11). The rationale for this decision stemmed from the individual nature of the questions as well as from the belief in the existence of the social construction of multiple realities. First, through individual interviews in a secure location and a systematic review of the journal entries of the three participants over a 15-week semester, the stories of these individuals were able to be told in a more comprehensive and meaning-sensitive fashion than a survey alone would allow. The experiences of these international students with culture shock were brought to life by embracing the participants' power to speak for themselves. This in-depth exploration of a select few university students, rather than a surface-level rendering of many international students, provided a more complete picture of the phenomenon for these individuals, which in turn helped to uncover "concrete universals", that is, the presence of the general in the particular (Erickson, 1986). Second, the flexibility inherent in qualitative research allowed for openness to emerging problems and/or ideas in the study. The ability to modify (or change completely) questions in interviews as well as in the L1 journals, for example, reflected the evolving nature of the research. By regularly translating and transcribing the data collected from these two instruments, as well as consistently checking with the bilingual interviewer, I was able to constantly compare the findings with my assumptions and make changes accordingly. Without this freedom to explore, valuable insights would potentially have been missed.

Data collection

Three interviews (approximately one hour each) at selected intervals during the 15-week fall semester (mid-August, mid-October, and mid-December) were conducted primarily by the first author in English with assistance from a bilingual transcriptionist/translator. The individual and informal format involved the use of a semi-structured, open- and closed-ended set of questions and probes. The interviews were digitally recorded, transcribed, and translated (when necessary). The participants were also asked to keep a weekly electronic journal in Korean over the 15-week semester that was open-ended but guided by the provision of questions and statements asking them to reflect on their experiences in the host culture. The questions and statements, however, evolved throughout the semester to capture any emerging trends or unanticipated events or experiences, such as incidents that directly or indirectly involved the participants and their adjustment to the host culture. Consequently, the culture shock-related findings gathered from the L1 journals provided the basis for questions in subsequent interviews, in which the participants were asked about their perceptions of these events or instances.

Data analysis

The data collected from the interviews and L1 journal entries were input into computer files with language inaccuracies left intact and analyzed using the software program NVivo 7. The constant comparative method of data analysis was used to keep the focal point simultaneously on description, explanation, and evaluation (Glaser & Strauss, 1967).
Data were coded based on the study's definition of culture shock as the "multiple demands for adjustment that individuals experience at the cognitive, behavioral, emotional, social, and physiological levels when they relocate to another culture" (Chapdelaine & Alexitch, 2004, p. 168). Given the inherent connections among several of these categories, however, three subcategories were created from these five reference points to provide a broad accounting of the interpersonal (including social and behavioral), psychological (including cognitive and emotional), and physiological symptoms of the participants' cross-cultural adjustments. The simultaneous and continuous reflection upon the data collected, as well as upon the researchers' perceptions, created a thick description (Geertz, 1973), thereby allowing for a thorough elucidation of the cases at hand.

Positionality

Given the first author's role in conducting interviews and analyzing data, his positionality is a critical component in interpreting the results. As an English instructor in South Korea for eight years, he had the opportunity to immerse himself in Korean culture and learn the language to an advanced proficiency level. He developed a deep appreciation for the cultures of both Korea and the United States. His firsthand knowledge of both cultures acted as a resource that enabled him to be empathetic while remaining sensitive to epistemological concerns, hierarchical language power differentials, and the inherent dangers involved in speaking for the other. With the help of a translator, he strove to provide an equal exchange between Korean and US culture. He made every effort to avoid overgeneralizations and the creation or perpetuation of stereotypes by remaining aware of his own biases and clearly delineating his etic perspective from the participants' emic ones.

What interpersonal, psychological, and physiological symptoms of culture shock are experienced by the newly arrived Korean graduate students?

All three participants exhibited a much higher incidence of interpersonal and psychological symptoms compared to physiological symptoms of culture shock. Interpersonal symptoms resulted from the participants' confusion regarding the behavioral and social norms in the United States and reflected their comparatively limited interaction with Americans. Psychological symptoms manifested through a wide range of feelings, such as stress and loneliness. Physiological symptoms, primarily of fatigue, were infrequently mentioned throughout the study. In most cases across the three categories, the stress surrounding the three participants' adjustments to the use of English and to university life provided the impetus for many of their interpersonal, psychological, and physiological symptoms.

Interpersonal symptoms

Interpersonal symptoms largely resulted from the confusion and curiosity on the part of all three of the participants regarding the behavioral and social norms in the United States. In terms of behavior, Sang noticed conduct around campus that conflicted with the behaviors he had learned as a child in Korea:

I was surprised to see people sleeping anywhere, lying down, and so on. In Korea, from childhood, we are educated to look right in front of others and act properly. When I see people reading outside on a beautiful day, it looks good.
But people sitting down anywhere, lying everywhere, eating during a lecture while making noise, putting their feet up on chairs in front of professors, or leaving classes early very loudly, don't seem to be considerate for others and they look selfish.

However, Sang reasoned later in the interview that this may be a cultural difference:

It's not an individual problem. It's the culture. I can't say it's good or bad. For a person who is educated not to act that way, I can see it's bad, but for people who never thought or learned that way, it's okay.

One social norm both Sang and Kwang seemed to struggle with was the difference between the two societies in terms of hierarchical structure. Kwang explains the difference in the following way:

Korea is the country dominated by Confucian ideas considerably even than China from which the ideas originated. Therefore, that younger pay elder respect is considered natural things. … In the US, there's no such situation, such as Confucianism, so I am surprised at the relationship between professors and students; they talk freely about whatever-textbooks, problems, and so on. It seems very free.

In commenting on how language and hierarchy are related, Sang shared:

It is strange that they call their father or professor as just "you" It is the same between children and adult. On the contrary, the Korean language has many term of respect. We think much of propriety. In fact, it's unfamiliar with me to call the kent as "kent." In Korea, if younger people call older people's name, it is very bad manner.

Both Kwang and Sang struggled with the lack of hierarchy in US society, yet both found ways to eventually appreciate the difference in culture, although to different degrees. In general agreement with the society, Kwang stated: "I think this is good merits in that they make an equal social culture having no a formal atmosphere as hierarchy." Sang took a relatively more neutral stance: "In my opinion, it is sometimes good and sometimes bad. … They meet equally each other so older person do not order to someone who younger people. They looks respect each other."

Another topic that arose from the interviews and journals was cultural indifference, in other words, the lack of "noticing" international students on the part of Americans. Sang explained his feelings about this cultural indifference as follows:

No one seems to really care. They seem more indifferent about it. Of course, scholars or other tourists might be interested in Korean and oriental culture, but common Americans don't seem to want to know about Korea. Maybe they think Korea is just a country with no importance...If people are interested in Korea or the Korean language, they would probably ask, "Are you Korean?" when I pass by. However, due to their lack of interest, even if I say I'm a Korean, I don't think people are attracted to it. That doesn't make me too happy.

While Sang took this negatively, Kwang reasoned that perhaps it is a result of the demographics of the United States: "In the States, there are so many kinds of races and people from different countries. Thus, people don't care about where they are from." For Sang and Kwang, questions regarding the behavioral and social norms in the United States frequently arose from the beginning of the semester until the end, reflecting both confusion and a desire to learn.
For Hong, however, there was a relative silence in terms of interpersonal challenges, perhaps revealing little difficulty in adjusting to the customs of the host culture but more likely reflecting his comparatively limited interaction with Americans.

Psychological symptoms

Psychologically based symptoms of culture shock were described by all three of the participants in their interviews and journal entries. Throughout the journal entries, words such as confused, anxiety, frustration, embarrassment, nervousness, surprise, and worry provided evidence that Sang, Kwang, and Hong were all experiencing psychological discomfort. While in several cases unique to the individual, the anxiety surrounding the three participants' adjustments to the university provided the impetus for many of their feelings of cognitive and emotional dissonance. Some of the strain experienced was the result of differences in the administration of classes at the university. Sang, in his fourth journal entry, asserted that he felt surprised, dumbstruck, and gloomy because he misunderstood the grading system in one of his classes. Unlike in Korea, where in general the education system deals in absolutes rather than in hypothetical situations, in this class his instructor had asked the pupils to estimate their confidence level next to each of their responses. Hong, like Sang, described challenges in dealing with the academic culture of the university. In his fourth journal, he related how he felt nervous because of his unfamiliarity with the format and level that he should expect on the first exam he was to take. While some strain was a result of class administration, a large part was a result of the struggles the participants had in using English as the medium of instruction. In a journal entry, Sang related how unhappy he was about the time required to study for his classes because of his English ability. Like Sang, Kwang, in his first journal entry, also described feeling psychological (and physiological) symptoms of culture shock due to his English ability, yet he had the conviction that things would improve: "I feel a lack of energy and enthusiasm due to only English. … I hope to return my mind from discouragement to the mind that I can do everything." In addition, Hong, in his first and fifth journal entries, related his apprehension about the role of English in his classes: "I am really worried about that...That is my big concern nowadays." Hong's writing expressed symptoms of psychological stress that were directly related to his concerns about the English language. In his midterm interview, Hong had reached a low point in his psychological adjustment to US culture. He related feeling frustrated, nervous, and pathetic because of the difficulties he was having with the English- and time-related demands under which the university had placed him. Another source of the participants' psychological discord was loneliness. In his first interview, Sang related he "would have been very lonely" if his family were not with him; instead, their presence had been a source of comfort and inspiration. Because Kwang was living apart from his wife and child for the first semester, he also wrote in his journal about having to endure feelings of loneliness. One of the biggest psychological adjustments to life in the United States was being without his family.
In the first interview, for example, he related how this separation made him feel:

At first, after I arrived here, I was very alone and very sad because I was with my family for the whole time and I became alone all of a sudden. Plus, there was nothing in the house, so it made me feel worse.

In the third interview, Kwang did not directly relate his psychological state in regard to his separation from his family, other than to say that had his family been with him, "that would have helped me a lot. … I know that her presence would be great for me."

Physiological symptoms

Physiological symptoms of culture shock were surprisingly absent in most of the interviews and journal entries, though when mentioned, the physiological symptoms were all in the form of exhaustion due to the rigors of their respective academic programs. In one of his journal entries, Sang explained his exhaustion: "Nowadays, I feel so tired because of my classes. I had been had homework and quiz which hard and complicated problems every week." Sang reiterated this line of thought during one of his interviews: "I try really hard to study, so I get physically tired and I am chased by time." Similarly, in his midterm interview, Kwang lamented the lack of rest he was able to get while studying for a master's degree: "Even the weekend is not a weekend here. Every day is the same; every day is busy. I can't rest my body, my mind. … Sometimes, I felt emotions such as angry and physically tired." Hong echoed a similar sentiment to the other two in his second interview: "I'm so tired right now because of the exam today." Sang's, Kwang's, and Hong's statements all show how the academic requirements of the university had become a source of their exhaustion. Yet despite the similarities in feelings, the physical challenges were perceived differently by each individual. Sang and Kwang believed their respective physical symptoms were merely temporary features of their lives and viewed their physiological difficulties as obstacles that could be overcome. In contrast, Hong's journal entries and interviews revealed that he seemed overwhelmed by the exhaustion caused by the university's demands. In discussing his study exhaustion, Hong divulged: "There's break time from studying. However, I don't want to spend even that small break for those things. … I have to study." Unlike the other two, Hong seemed far less able to cope with his exhaustion for fear that allowing himself free time could adversely affect his academics. In summary, the data revealed that the culture shock symptoms for each participant were individualized. For Sang, interpersonal and psychological examples of his stress in adjusting to US culture were expressed in all of his interviews and found in many of his journal entries, with the social and behavioral factors being the most common. In Kwang's case, the analysis of what he had written and said revealed numerous instances of cognitive and emotional strain. Finally, for Hong, the psychological demands of academic work in an American university were the most commonly mentioned symptoms.

How do the newly arrived Korean graduate students cope with these symptoms of culture shock?

Sang, Kwang, and Hong coped with culture shock in unique and varied ways. While many of the coping strategies came from their personal outlook, acceptance of the challenges, and resolve to push through, other outside factors such as family, religion, and exercise also played a role.
All three participants revealed through their journals and interviews that they were able to come around to understanding cultural differences by the end of the semester simply through their outlook of resolve and acceptance. Sang, for example, relied on his strength of character: "Anyway I have to face difficulties resolutely also I do not mistake again. ...Yet, this makes me livelier...I actively respond to my situation right now." In addition, Sang's words in his fourteenth journal entry and final interview, respectively, showed that he had not given up on his ability to adjust to US culture; if anything, his journal reflections exhibited his resolve to learn from his experiences: "Problems originate from me, from my lack of English ability, so I have hopes for changes in my thoughts if time passes." In Kwang's thirteenth journal entry, he remained positive about his ability to adjust despite feeling "heavy": "However, I do not frustrate even though the English problem leads me to stress." Consequently, Kwang's words revealed his capacity to cope with any future shocks he would face in the host culture. By the last interview, Hong had overcome his initial psychological discomfort with the culture in the United States. He confidently asserted at that time:

I'm already all adjusted, so there is nothing surprising anymore. … There have been surprising issues during the 4 months, but I don't find them anymore. … I think people need to have confidence because it's not too difficult to live here and anyone can do it. There should be no worries. I worried a lot in the beginning. … People live here, so there shouldn't be any problem, so no worries, take it easy. I don't even see the need for advice. … My advice is not to worry.

This utterance provided evidence of an individual who felt assured that he had emerged on the other side of his adaptation to the host culture and believed that he was cognitively and emotionally capable of surviving any future challenges. Family also emerged as an important factor in the participants' ability to cope with culture shock. In discussing some of his difficulties, Sang shared: "[My family] helped me not to be so lonely in this environment. When I realize that I am my children's dad, I am motivated to persevere through the difficulties." Sang's wife and children were important in helping him continue to cope with the cognitive and emotional stress of adapting to US culture. While Kwang did not have his family present, he realized the benefit their presence would have brought him: "[My wife] would have helped me a lot. … I know that her presence would be great for me." Hong did not specifically mention family, but the absence of interpersonal symptoms in his interviews and journal entries may have been a result of living with his mother during the first part of the semester. Finally, Kwang offered two additional strategies for coping with the psychological and physiological symptoms of culture shock: religion and exercise. Kwang's words evidenced his willingness to admit that he could not resolve his cultural struggles on his own or through recourse to his own value system. In terms of religion's role in coping with some of the psychological symptoms, particularly stress, Kwang said: "I need to overcome the stress through my faith." As for dealing with the physiological symptoms of culture shock, Kwang offered exercise as a coping mechanism: "Running is a good way to relieve my stress.
Sometimes, when I run around park close to my apartment or on running machine in fitness room, I feel I can do whatever." While much of the adjustment to academic culture in the United States came from the participants' outlook of acceptance and resolve from within, all of them also had outside factors that aided in that acceptance and resolve. For Sang, and possibly Hong, family played a role. For Kwang, religion and exercise aided him in the absence of his family.

Discussion

The findings revealed the existence of some degree of culture shock for each of the three participants at different times throughout the semester. While there were similarities among Sang, Kwang, and Hong, there were also important differences in the quantity and quality of the symptoms, implying that this phenomenon was neither a preordained nor an entirely shared experience. In effect, this variation provided strong support for the individual nature of culture shock. For Sang, interpersonal and psychological examples of his stress in adjusting to US culture were found in many of his journal entries and expressed in all of his interviews, with the social and behavioral factors being the most common. In Kwang's case, the analysis of what he had written and said revealed numerous instances of cognitive and emotional strain. Finally, for Hong, the psychological demands of adjustment to US culture were the most commonly mentioned symptoms. The present study did not confirm the appropriateness of using models to represent culture shock. Instead, it provided support for the idea that issues related to this phenomenon are individualized and thus not uniform in terms of severity or timing. In addition, the comparatively different experiences of the three participants with culture shock, as well as their use of sources of stress as opportunities for change and growth, were in agreement with Choi (2006) and Park et al. (2016). This study supports findings, such as those of the literature review by de Araujo (2011), that English language proficiency is one of the major difficulties faced by international students. The influence of English on the respective adjustments of the three participants to US culture was noteworthy and constant throughout the study. The self-professed difficulties that Sang, Kwang, and Hong faced in using this language in all spheres of their lives proved to be the most daunting of the challenges they faced. Their relative lack of proficiency with English affected them both inside and outside of the university classroom and severely curtailed their opportunities for establishing any real sense of connection with the host culture. Based on the number of comments throughout the study devoted to the role of English in the participants' respective adjustments to the culture in the United States, this factor played an instrumental role in all facets of their lives. In particular, as this language was the sole means by which content was delivered in all of the classrooms on campus, and this environment was where the three participants spent the majority of their time, concerns about their English proficiency in this arena were the most often mentioned. For international students such as Sang, Kwang, and Hong, the university's general lack of any provision or accommodation for the needs of second language learners placed them at a disadvantage in relation to native speakers and thereby engendered culture shock-related symptoms.
The anxiety produced by their English proficiency level increased throughout the study, implying that their ability to adjust to US culture had been impeded by their language ability, thus perpetuating feelings of culture shock. The participants shared some of the same coping strategies as those in Park et al. (2016) and Alsahafi and Shin (2017). The impact of the participants' personal outlooks on lessening the effects of culture shock was substantial. The attitudes and personalities of Sang, Hong, and Kwang, to varying degrees, evidenced a strong sense of resiliency in the face of cultural challenges as well as optimism that things would improve over time. In addition, the willingness not only to understand cultural differences but to learn from them characterized many of the participants' interview responses and journal entries. The marital status of two of the participants was shown to be an influential factor in coping. Sang's wife and children were both a resource and the source of motivation driving his desire to succeed. For Kwang, despite the anxiety he felt because of the semester-long separation from his wife and child, the future reunion with his family was a source of strength for him when coping with culture shock. This finding supports the suggestion made by Park et al. (2016) that marital status may contribute to the coping mechanisms utilized to handle culture shock. The influential connection between religiosity and coping with culture shock was supported by the data collected from Kwang. The influence of this factor was evident when viewed from a spiritual perspective. The sense of connection to a higher power instilled in Kwang the confidence and faith that the adjustment to life in the United States would be accomplished. Strategies that appeared to be absent included campus services and opportunities to improve language proficiency. Presbitero's (2016) assertion that international students do not engage in campus services possibly held true for the participants in this study. There was near silence throughout the study on the part of the participants regarding the use of campus services for support. As for improving language proficiency, the participants found that the overwhelming nature of their studies and the subsequent psychological stress prevented them from engaging in experiences that would improve their English. One additional strategy that appeared in this study but is not included in Park et al.'s (2016) or Alsahafi and Shin's (2017) lists of coping strategies is exercise. This strategy helped Kwang release the psychological and physiological stress that resulted from culture shock. Although only mentioned by one participant in this study, exercise has the potential to be a viable strategy for international students overwhelmed by the stress of studying in a foreign country.

Limitations and Implications

There are several limitations to this study. First, generalizations that can be made from the data are limited due to the qualitative nature of the study and the small number of participants. An expansion of the participant pool in terms of number, demographics, and location could help to provide a more comprehensive picture of the phenomenon of culture shock. Second, the duration of this study was only one semester. By tracking individuals over several semesters, a study could offer greater insight into the effect, if any, of the length of stay on participants' abilities to cope with culture shock.
Based on the findings of this study, the practical implications for university administrators, in particular, are several. The stress associated with the English language, while present for most international students, is perhaps most acute for those beginning their academic programs. Accordingly, the following accommodations could help to alleviate this strain. While most universities have a proficiency examination, the somewhat haphazard method in which it is administered, as well as the oftentimes lax enforcement of the results, needs to be remediated. Greater attention should be paid to the reliability and validity of the testing instrument and environment, as well as to the training of the test administrators. Furthermore, if a student is identified as needing English remediation, he or she should be strongly encouraged to take classes in the community or through the university's intensive English program, request help from on-campus facilities that provide tutoring or aid in study skills, or hire a private tutor during the same semester. Though this may increase the student's workload in the short term, the increase in English proficiency may help to reduce future academic stress as the student enrolls in higher-level courses. In addition, assigning an American partner to international students during both students' first semesters could prove to be a mutually beneficial arrangement. For the American students, this system would give them valuable exposure to individuals from outside of the United States; for international students, this partnership could provide valuable linguistic and cultural assistance. Also, the entire faculty of the university, but particularly the native speakers, could be reminded of the necessity of considering the linguistic needs of international students as well as the different educational systems from which they have emerged. Lecture, group work, and frequent testing, for example, may be unfamiliar approaches to learning. As a consequence, training could be provided to faculty members on how to facilitate the interaction of international students with American students in classroom activities, validate different linguistic and cultural backgrounds in classes, and conduct group work, for example. Additionally, international students could be strongly encouraged by the faculty to ask questions and seek out the instructor for clarification on content that they do not understand. It is hoped that future studies further examine the individualized nature of culture shock (Fitzpatrick, 2017). While this study focused specifically on Korean international students, many other demographic groups comprise the international student community in the United States and are in need of study. Beyond the United States, universities around the world are attracting international students to their campuses, each with their own unique contextual factors. Future research on the culture shock of international students in non-Western universities would help further contribute to our understanding of culture shock's individualized nature.

Conclusion

This study uncovered the individual nature of the culture shock phenomenon. All of the participants were found to exhibit certain interpersonal, psychological, and physiological symptoms at various times throughout the semester that were indicative of the stress brought on by their respective adjustments to the demands of university studies in the host culture.
In conclusion, the participants in this study for the most part proved to be resilient individuals who were capable of coping with the shock of adjusting to US culture. Although they were tested at various times and in different ways throughout the study, their respective voices made it clear that the challenges they had faced were unlikely to deter them from pursuing their academic, personal, and professional goals.
Patients Recovering from Severe COVID-19 Develop a Polyfunctional Antigen-Specific CD4+ T Cell Response Specific T cells are crucial to control SARS-CoV-2 infection, avoid reinfection and confer protection after vaccination. We have studied patients with severe or moderate COVID-19 pneumonia, compared to patients who recovered from a severe or moderate infection that had occurred about 4 months before the analyses. In all these subjects, we assessed the polyfunctionality of virus-specific CD4+ and CD8+ T cells by quantifying cytokine production after in vitro stimulation with different SARS-CoV-2 peptide pools covering different proteins (M, N and S). In particular, we quantified the percentage of CD4+ and CD8+ T cells simultaneously producing interferon-γ, tumor necrosis factor, interleukin (IL)-2, IL-17, granzyme B, and expressing CD107a. Recovered patients who experienced a severe disease display high proportions of antigen-specific CD4+ T cells producing Th1 and Th17 cytokines and are characterized by polyfunctional SARS-CoV-2-specific CD4+ T cells. A similar profile was found in patients experiencing a moderate form of COVID-19 pneumonia. No main differences in polyfunctionality were observed among the CD8+ T cell compartments, even if the proportion of responding cells was higher during the infection. The identification of those functional cell subsets that might influence protection can thus help in better understanding the complexity of immune response to SARS-CoV-2. Introduction The characterization of the immune response mounted against Severe Acute Respiratory Syndrome-Coronavirus-2 (SARS-CoV-2) infection is crucial to understanding and predicting short-and long-term protection. Both innate and adaptive immunity has been well described during severe cases as well as in recovered patients [1][2][3][4][5][6][7][8][9][10] and it has been reported that an integrated response can limit COVID-19 disease severity [11]. Developing SARS-CoV-2 antigen-specific CD4+ and CD8+ T cells besides antibodies is crucial to prevent severe outcomes and protect against reinfections [11,12]. This explains, at least in part, why: (i) immunocompromised patients with reduced humoral response and deficient B cells can develop a SARS-CoV-2 specific T cell response [13]; (ii) patients experiencing mild COVID-19 can successfully control the virus thanks to a robust SARS-CoV-2 T cell response even in the absence of antibodies [11,[14][15][16][17]. SARS-CoV-2 T cell response in patients recovered from COVID-19 is multi-specific as T cells recognize several epitopes, by using a heterogenous T cell receptor (TCR) [18][19][20][21]. Functional studies using peptide pools covering most of SARS-CoV-2 encoded proteome demonstrated that T cell response to structural proteins such as the membrane (M), spike (S) or nucleocapsid (N) is co-dominant and that a significant reactivity is also developed against other targets, such as Open Reading Frames (ORFs) and nonstructural proteins (NSPs) [5,18,19]. However, whether this multi-specificity is the key to long-term protection is still uncertain. CD4+ and CD8+ T cell polyfunctionality indicate the ability of cells to simultaneously produce more than one cytokine and to exert multiple functions. This is a crucial feature in antigen-specific responses as, in some cases, the quality of the response can be more important than the quantity in conferring protection against reinfection or pathogen reactivation [22,23]. 
In this scenario, CD4+ T helper type 1 (Th1) and Th17 are fundamental in inducing CD8+ T and B cells activity and promoting a pro-inflammatory response [12,24,25]. For example, Th1 and Th17 CD4+ T and CD8+ T cells dominate the influenza A virus-specific response, so inducing both a highly inflammatory environment and viral clearance [26][27][28]. For these reasons, given the role and capability of these cells, the aim of the study is to characterize the polyfunctional profile of SARS-CoV-2-specific T cells. Moreover, we aimed to investigate possible differences in the specific response between patients experiencing and recovering from moderate or severe infection, deepening at the same time the immunogenic capacity of M, N and S SARS-CoV-2 structural proteins. Characteristics of the Patients We studied a total of 28 patients with COVID-19 pneumonia admitted into the Infectious Diseases Clinics or to the Intensive Care Unit (ICU) of the University Hospital in Modena over the period of March 2020-May 2020, and 10 healthy donors. Characteristics of patients are reported in Table 1. COVID-19 moderate and COVIDsevere presented higher levels of LDH when compared to recovered moderate and recovered severe, respectively. Regarding SARS-CoV-2-specific IgM and IgG, even if IgM were more represented among patients with moderate disease, no statistically significant differences were found between those with COVID-19 and the recovered, while HD tested negative for both assays. One patient from the COVID-19 severe group and one from the recovered severe group presented with type 2 diabetes. Recovered moderate and recovered severe were hospitalized and diagnosed with SARS-CoV-2 infection 120 ± 18 (mean ± SD) days and 128 ± 3 (mean ± SD) days, respectively, before blood withdrawal. An example of the gating strategy for the identification of cells able to exert one or more functions is reported in Figure S1. Peripheral blood mononuclear cells (PBMCs) were stimulated or not with M, N or S peptide pool, cultured and stained. PBMCs were first gated according to their physical parameters, and the aggregates were electronically removed from the analysis by using a gate designed for singlets. Living (Live/Dead, L/D-) cells and CD3+ T cells were identified. Among CD3+ cells, CD4+ and CD8+ T cell subpopulations were identified. In each subpopulation, the percentage of cells producing interferon (IFN)-γ, Tumor Necrosis Factor (TNF), Interleukin (IL)-2, IL-17, and granzyme B (GRZB), as well as expression of CD107a, was then quantified. Recovered Patients Who Experienced a Severe Disease Display High Percentage of Antigen-Specific CD4+ T Cells Producing Th1 and Th17 Cytokines Cytokine production was assessed following 16 h of in vitro stimulation with SARS-CoV-2 peptide pools covering the sequence of different proteins (N, M or S). The percentage of CD4+ and CD8+ T cells producing IFN-γ, TNF, IL-2, IL-17, and GRZB was quantified along with the percentage of cells able to express CD107a. The identification of these cytokines allows us to recognize different subsets of helper CD4+ and CD8+ T cells, such as: (i) Th1, defined as cells producing IFN-γ, TNF, IL-2; (ii) Th17 identified as cells producing IL-17; (iii) cytotoxic T cells, which are positive for GRZB and CD107a [29,30]. Individuals who recovered from a severe form of COVID-19 disease showed a higher percentage of CD4+ T cells responding to N and S compared to healthy donors (HD) (Figure 1a). 
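To make the sequential gating and background subtraction described above concrete, here is a minimal sketch in Python with pandas. It is not the authors' pipeline (which used standard flow cytometry software and SPICE); the per-event table, the boolean column names, and the thresholding step that would produce them are assumptions for illustration only.

```python
import pandas as pd

def pct_positive(events: pd.DataFrame, lineage: str, marker: str) -> float:
    """Percentage of marker-positive cells within a gated T cell lineage.

    `events` is a hypothetical per-cell table with boolean columns
    (singlet, live, CD3, CD4, CD8, IFNg, TNF, IL2, IL17, GRZB, CD107a)
    obtained after thresholding the raw fluorescence intensities.
    """
    # Sequential gating: singlets -> live (L/D-) -> CD3+ -> CD4+ or CD8+
    gated = events[events["singlet"] & events["live"] & events["CD3"] & events[lineage]]
    if len(gated) == 0:
        return 0.0
    return 100.0 * gated[marker].sum() / len(gated)

def background_subtracted(stimulated: pd.DataFrame, unstimulated: pd.DataFrame,
                          lineage: str, marker: str) -> float:
    """Stimulated frequency minus the unstimulated (background) frequency, floored at zero."""
    delta = (pct_positive(stimulated, lineage, marker)
             - pct_positive(unstimulated, lineage, marker))
    return max(delta, 0.0)

# Example (hypothetical data frames): antigen-specific IFN-gamma+ CD4+ T cells after S stimulation
# freq = background_subtracted(events_S, events_unstim, "CD4", "IFNg")
```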
Moreover, taking into consideration all the stimuli used, patients who recovered from a severe disease exhibited a higher percentage of CD4+ T cells producing IFN-γ, TNF and IL-2 compared to either HD or individuals who recovered from moderate disease (Figure 1b). This was also observed when COVID-19 patients with a moderate disease were compared to HD. Furthermore, recovered individuals who experienced a severe disease also displayed a higher percentage of CD4+ T cells producing IL-17 compared to recovered moderate, regardless of the stimulus used (Figure 1b). On the other hand, COVID-19 patients with severe infection were characterized by higher proportions of cells expressing CD107a compared to HD after M and S stimulation, indicating an enhanced cytotoxic phenotype (Figure 1b). Regarding the CD8+ T cell response, the percentage of CD8+ T cells responding to peptide pool stimulation was higher in COVID-19 patients with a moderate disease compared to either HD or recovered individuals who experienced a moderate infection. In addition, COVID-19 patients with the severe form exhibited a higher percentage of responding CD8+ T cells compared to those who recovered from a severe form (Figure 2a). Furthermore, after in vitro stimulation with M, COVID-19 severe patients displayed a higher percentage of CD8+ T cells expressing CD107a compared to individuals who recovered from severe infection (Figure 2b). Thus, antigen-specific CD8+ T cells are more abundant among COVID-19 patients and present a more pronounced cytotoxic phenotype, in line with their role in mediating clearance during viral infections [31]. Recovered Patients Who Experienced a Severe Disease Are Characterized by Polyfunctional SARS-CoV-2 Antigen-Specific CD4+ T cells In vitro stimulation with the M peptide pool induced a different polyfunctional profile between COVID-19 moderate and severe patients, and between COVID-19 severe patients and those who recovered from severe disease. Moreover, the polyfunctional response differed from that of HD in both patients with moderate COVID-19 and those who recovered from severe disease. In particular, COVID-19 moderate patients and individuals who recovered from severe disease, when compared to HD, showed higher percentages of IFN-γ+IL-2+TNF+, IFN-γ+TNF+ and IL-2+TNF+ cells within CD4+ T cells. Patients experiencing moderate COVID-19 also displayed a high percentage of IFN-γ+IL-2+ cells within CD4+ T cells. The percentage of the latter population was higher in COVID-19 severe and recovered moderate patients if compared to recovered severe patients and HD (Figure 3a). Stimulation with N induced differences in the overall polyfunctionality of CD4+ T cells between patients who recovered (moderate vs. severe) and between COVID-19 severe patients and those who recovered from severe disease. Finally, COVID-19 moderate patients and recovered patients displayed a different cytokine profile when compared to HD. Regarding the subsets of polyfunctional CD4+ T cells, individuals who recovered from the severe disease exhibited the same cytokine production as seen with M stimulation. In addition, this group of patients presented a small population of TNF+IL-17+ cells. COVID-19 moderate patients, compared to HD, also presented a high percentage of IFN-γ+IL-2+TNF+ and IL-2+TNF+ cells (Figure 3b). (Figure 1 legend: (a) percentage of CD4+ T cells responding to each peptide pool and (b) total production of each cytokine (IFN-γ, TNF, IL-2, IL-17, GRZB, CD107a) among HD (n = 10), COVID-19 moderate (n = 7), COVID-19 severe (n = 6), recovered moderate (n = 9) and recovered severe (n = 6); data are shown as individual values and mean ± standard error of the mean (SEM); Kruskal-Wallis non-parametric test corrected for multiple comparisons by controlling the False Discovery Rate (FDR), method of Benjamini and Hochberg; * q < 0.05; ** q < 0.01; *** q < 0.001; background, i.e., the value determined in unstimulated controls, was subtracted from each sample.)
Finally, after stimulation with S, individuals who recovered from disease of different severity showed different polyfunctionality, as did COVID-19 moderate patients compared with those who recovered from moderate disease. In addition, patients who recovered from severe disease displayed a distinct polyfunctional profile compared to COVID-19 severe patients and HD. Individuals who recovered from severe disease presented results almost overlapping those observed after stimulation with N and M. Moreover, they also displayed a higher percentage of TNF+IL-17+ cells within CD4+ T cells if compared to COVID-19 severe patients and HD. Regarding COVID-19 moderate patients, the cell distribution after stimulation was the same as the one measured after N stimulation (Figure 3c). For clarity, Figure 3d shows the legend of the colors and symbols of the previous Figure 3 panels. The polyfunctional profile of CD8+ T cells after in vitro stimulation with M or N was similar among the groups. Only the S peptide pool induced a slightly different profile in COVID-19 moderate patients when compared to HD (Figure 4). (Figures 3 and 4 legend: pie charts show the polyfunctional profile of SARS-CoV-2-specific CD4+ (Figure 3) and CD8+ (Figure 4) T cells among HD (n = 10), COVID-19 moderate (n = 7), COVID-19 severe (n = 6), recovered moderate (n = 9) and recovered severe (n = 6) patients; data in pie charts are represented as median values; frequencies were corrected by background subtraction as determined in non-stimulated controls using SPICE software; statistical analysis between pie charts was performed using a permutation test (* p < 0.05); pie arcs represent the total production of each cytokine.)
Discussion In this study, we describe the differences in the production of cytokines by SARS-CoV-2-specific T cells from patients with COVID-19 (severe or moderate) and in recovered individuals after in vitro stimulation with different peptide pools. Our aim was to measure not only the magnitude but also the characteristics, in qualitative terms, of such an antigen-specific response. We found that COVID-19 moderate patients develop polyfunctional CD4+ T cells compared to patients experiencing a severe infection, who in turn display a higher percentage of CD107a+ cells. Besides their helper capability, CD4+ T cells can exert cytotoxicity, and this has been described during persistent infections such as those by Epstein-Barr virus [32], cytomegalovirus [33], and Human Immunodeficiency Virus (HIV) [34]. Cytotoxic potential can be measured by detecting the expression of the degranulation marker CD107a [35]. This result is in line with other studies demonstrating that patients experiencing severe COVID-19 usually mount an impaired SARS-CoV-2-specific T cell response [11,36]. It is known that the expression of exhaustion markers such as Programmed Death-1 (PD-1) and T-cell immunoglobulin and mucin domain-3 (Tim-3) is associated with disease progression [37,38]. This might reinforce the concept that patients experiencing a more severe infection present impaired CD4+ and CD8+ T cell functionality due to an exhausted phenotype. However, whether the expression of such markers reflects functional exhaustion rather than ongoing activation is still debated [37]. During the infection, Th1 cytokines such as IFN-γ, IL-2 and TNF are essential for supporting the expansion and maturation of CD8+ T lymphocytes and B cells [12]. The loss of CD4+ Th1 cells leads to a progressive CD8+ T cell decline and dysfunction, with important implications for controlling the infection [39]. In addition, Th17 cells are responsible for the recruitment of several different cell populations at the site of the infection, inducing the inflammatory process necessary for the immediate protective response against a pathogen [24,25]. We found that, compared to patients experiencing severe COVID-19, those recovering from severe COVID-19 display SARS-CoV-2-specific, highly polyfunctional CD4+ T cells with a Th1 and Th17 phenotype. No differences were reported in the CD8+ T cell compartment, reflecting the kinetics of immune response contraction, according to which, 2 weeks after symptom onset, when circulating CD8+ T cells progressively decline, CD4+ T cells remain stable and eventually increase in the initial recovery phase (1-2 months after infection), more than immediately after infection [11,40]. T cells are able to both proliferate and secrete cytokines that in turn can influence other cell functions as well as induce cytolysis of infected cells. Polyfunctionality is the ability of cells to simultaneously perform more than one function, and it can be measured at a single-cell level by flow cytometry [41]. In CD4+ T cells, such a property is a correlate of protection against different pathogens. As an example, comparing the profile (more than the amount) of T cell cytokine production in HIV-infected individuals who control the infection with that of patients showing chronic progression of the infection revealed several key molecules involved in controlling the infection.
This approach suggested that in some cases the quality of the T cell response, not the quantity, is correlated with immune protection [23]. During cytomegalovirus (CMV) infection, the development of polyfunctional T cells correlates with a better prognosis and confers an immunological advantage against other pathogens [22]. In addition, polyfunctional CD4+ T cells represent a marker for spontaneous control of viral replication in CMVseropositive patients undergoing liver transplantation [42]. On the whole, this indicates the importance of measuring representative functions of T cells to identify and define correlates of immune protection. The identification of the most immunogenic epitopes is key to the study and understanding of cellular immune response to gain insights into virus-induced infection mechanisms. An immunogenic peptide is one that is presented by a self-major histocompatibility complex (MHC) and is able to elicit a T cell response [43]. Thus, the identification of such epitopes is also of importance in the context of future therapies. M, N and S are SARS-CoV-2 structural proteins that constitute different portions of the virus. These proteins have different interactions with the other parts of the virion, and during the infection, they interact differently and in different moments with the host cell. This may define a different level of immunogenicity for each protein. For these reasons, we deepened the SARS-CoV-2 specific response to M, N and S. Overall, in our study we observed that M, N and S induced a similar response among the categories considered, confirming their co-dominance [18,44]. We are aware that this study has a main limitation since the number of individuals that we could study is relatively small, because of the difficulties to obtain biological material from patients admitted to the hospital. However, even if we could study a relatively low number of patients, we could define the polyfunctionality profile of CD4+ and CD8+ T cells during and after SARS-CoV-2 infection in patients experiencing different severities of COVID-19. Global knowledge of the complex interaction during the cellular response to infection, as well as SARS-CoV-2-induced changes, is helping in understanding mechanisms beyond the immune response toward protective phenotype. In addition, the identification of unique cell subsets involved in immune protection could allow us to develop and use more and more sophisticated techniques that accurately measure the outcome of new therapies. Thus, the successful use of functional T cell analyses will likely help to significantly advance the field of SARS-CoV-2 therapy as well as vaccine efficacy, and hopefully, aid in reducing the global burden of the pandemic. Patients Four groups of patients were enrolled in this study, along with a group of healthy donors (HD). We enrolled 13 COVID-19 patients admitted into the Infectious Diseases Clinics or Intensive Care Unit (ICU) of the University Hospital in Modena between March and May 2020. Patients tested positive for the SARS-CoV-2 PCR test. Within this group, 7 patients (median age: 55.0 years) were classified as moderate and 6 (63.0 years) as severe, according to World Health Organization guidelines [45]. We also studied 15 COVID-19 recovered patients, enrolled during follow-up visits between June and August 2020. Within this group, 9 patients (56.0 years) were classified as moderate and 6 (56.5 years) as severe. 
COVID-19 and recovered patients were subdivided for the analysis according to disease severity. Moreover, 10 HD (49.5 years) were included in this study. HD presented neither symptoms nor a prior diagnosis of SARS-CoV-2 infection and had negative serology. Informed consent, according to the Declaration of Helsinki, was provided by each participant. All uses of human material have been approved by the local Ethical Committee (Comitato Etico dell'Area Vasta Emilia Nord, protocol number 177/2020, 11 March 2020) and by the University Hospital Committee (Direzione Sanitaria dell'Azienda Ospedaliero-Universitaria di Modena, protocol number 7531, 11 March 2020). Blood Processing Blood samples were obtained after informed consent. For COVID-19 patients, blood was obtained after diagnosis of SARS-CoV-2 infection during hospitalization. For recovered patients, blood was collected during a follow-up visit within 120-128 days after hospital admission and SARS-CoV-2 diagnosis. Up to 20 mL of blood were collected from each patient in vacuettes containing ethylenediaminetetraacetic acid (EDTA). Peripheral blood mononuclear cells (PBMCs) were isolated according to standard procedures and stored in liquid nitrogen until use [46]. Plasma was collected and stored at −80 °C until the quantification of IgM and IgG, performed according to standard methods using the SARS-CoV-2 IgM or IgG Quant Reagent Kit for use with Alinity (Abbott, Abbott Park, IL, USA). Statistical Analysis Quantitative variables were compared using the Kruskal-Wallis non-parametric test corrected for multiple comparisons by controlling the False Discovery Rate (FDR), method of Benjamini and Hochberg. Statistically significant q values are represented (* q < 0.05; ** q < 0.01; *** q < 0.001). T cell polyfunctionality was defined by using Simplified Presentation of Incredibly Complex Evaluation (SPICE) software (version 6, kindly provided by Dr. Mario Roederer, Vaccine Research Center, NIAID, NIH, Bethesda, MD, USA) [48]. Data from the total cytokine production are represented as individual values, means, and standard errors of the mean. Regarding polyfunctionality, data in pie charts are represented as median values; statistical analysis was performed using a permutation test (* p < 0.05; ** p < 0.01; *** p < 0.001). Data in graphs are reported as individual values, means and standard errors of the mean. Statistical analyses were carried out using Prism 6.0 (GraphPad Software Inc., La Jolla, CA, USA). Background was subtracted from each sample.
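As an illustration of the statistical workflow described above (Kruskal-Wallis comparison across the five groups, followed by Benjamini-Hochberg FDR correction to obtain q values), here is a minimal sketch in Python using SciPy and statsmodels. The data layout and the numbers are invented for the example; the authors performed the actual analyses with GraphPad Prism and SPICE.

```python
import numpy as np
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

# Hypothetical layout: for each readout (e.g., % IFN-g+ CD4+ T cells after S stimulation),
# one array of per-patient values for each of the five groups.
readouts = {
    "IFNg_CD4_S": [
        np.array([0.02, 0.01, 0.03]),   # HD
        np.array([0.15, 0.22, 0.09]),   # COVID-19 moderate
        np.array([0.05, 0.08, 0.04]),   # COVID-19 severe
        np.array([0.12, 0.18, 0.20]),   # recovered moderate
        np.array([0.35, 0.28, 0.40]),   # recovered severe
    ],
    # ... one entry per cytokine/stimulus combination
}

# Kruskal-Wallis test across the five groups for each readout
pvals = {name: kruskal(*groups).pvalue for name, groups in readouts.items()}

# Benjamini-Hochberg correction across the panel of readouts; the corrected
# values play the role of the q values reported in the figures (q < 0.05 significant).
names = list(pvals)
reject, qvals, _, _ = multipletests([pvals[n] for n in names], alpha=0.05, method="fdr_bh")
for name, q, sig in zip(names, qvals, reject):
    print(f"{name}: q = {q:.3f}{' *' if sig else ''}")
```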
Dry eye disease and tear film assessment through a novel non-invasive ocular surface analyzer: The OSA protocol We describe the role of the OSA as a new instrument in the study of dry eye, and we recommend a protocol for conducting the tests as well as describe the advantages and disadvantages compared with other instruments. A comparison with other ocular surface devices (Tearscope Plus, Keratograph 5M, anterior-segment optical coherence tomography, Easy Tear View-Plus, LipiView, IDRA, and LacryDiag) is presented in terms of manual or automatic procedures and objective or subjective measurements. The purpose of this study was to describe the OSA as a new non-invasive dry eye disease diagnostic device. The OSA is a device that can provide accurate, non-invasive and easy-to-use parameters to specifically interpret distinct functions of the tear film. The proposed OSA protocol orders the non-invasive dry eye disease tear film tests from those causing the least to those causing the greatest disturbance of the tear film. A complete and exhaustive OSA and OSA Plus examination protocol is presented, comprising a subjective questionnaire (Dry Eye Questionnaire 5, DEQ-5), limbal and bulbar redness classification (according to the Efron grading scale), interferometric lipid layer thickness (LLT) assessment (according to the Guillon patterns), tear meniscus height (manual or automatic), first and mean non-invasive break-up time (objective and automatic) and meibomian gland (MG) dysfunction grade and percentage (objective and automatic). The OSA and OSA Plus devices are novel and relevant dry eye disease diagnostic tools; however, the automatization and objectivity of the measurements can be increased in future software or device updates. These new non-invasive devices represent a renewal in dry eye disease diagnosis and a tendency to replace the classic invasive techniques, which offer lower reliability and reproducibility. Introduction Ocular surface pathology is a general term that includes dry eye, with involvement of the cornea, conjunctiva, eyelids, and meibomian glands (MGs). Dry eye is a group of disorders characterized by loss of tear film homeostasis, due to either lipid layer alteration owing to the MGs (evaporative dry eye) or insufficient aqueous tear production (hyposecretory dry eye), leading to tissue damage and inflammation (1). There are various techniques for measuring and diagnosing dry eye. The most common tests for this diagnosis are invasive and can yield results that differ from the natural properties of the tear, so non-invasive methods would be more appropriate (2). Ocular surface diagnostic tests for dry eye disease should combine high precision, good sensitivity and reproducibility. Among the most commonly used diagnostic devices, Placido method rings have been used in different studies as an alternative to break-up time (BUT) to avoid the use of fluorescein, although they have a weak correlation with other dry eye disease diagnostic measurements (3). It has been recommended that ocular surface measurements be performed from less invasive to more invasive (4). Such measurements include the use of a questionnaire to collect symptoms (5), evaluation of limbal and bulbar conjunctival hyperemia (6), assessment of the tear meniscus (7), study of lipid layer thickness (LLT) and pattern (8), non-invasive tear break-up time (NIBUT) (9) and infrared meibography (10). However, some of the measures used to evaluate dry eye can be influenced by the subjectivity of the examiner.
Among the non-invasive devices for dry eye measurement are Tearscope Plus R (Keeler, Windsor, United Kingdom), Polaris (bon Optic, Lübeck, Germany), EasyTear Viewplus R (EasyTear, Rovereto, Italy), Oculus Keratograph 5M R (Oculus, Arlington, WA, United States) (K5M), LipiView R interferometer (TearScience Inc., Morrisville, NC, United States), IDRA R Ocular Surface Analyzer from SBM System R (Orbassano, Torino, Italy), LacryDiag R Ocular Surface Analyzer (Quantel Medical, Cournon-d'Auvergne, France) and Ocular Surface Analyzer (OSA) from SBM System R (Orbassano, Torino, Italy) (11)(12)(13). A summary of the functionalities of the ocular surface devices is presented in Table 1. Regarding Tearscope Plus, the device is attached to the slit lamp, and the measurement is achieved through image analysis software (14). Polaris uses LED light to improve the visibility of both the lipid layer of the tear film and the tear meniscus (15). On the other hand, Oculus Keratograph introduces tear analysis software with an integrated caliper that allows capturing images for a better measurement of the height of the tear meniscus (16). Anterior segment optical coherence tomography (AS-OCT) also allows the measurement of the height of the tear meniscus through integrated software, producing a very high-quality resolution in micrometers. AS-OCT and Keratograph are two comparable methods (17). EasyTear Viewplus R is also attached to the slit lamp, and through white LED lights, it achieves analysis of the lipid layer, NIBUT and tear meniscus; with infrared LEDs, it performs meibography, and the software quantifies the image structures (18). LipiView R allows automated measurements of the lipid layer with nanometer precision. The limitation is that only values greater than 100 nm are displayed (19). IDRA R is attached to the slit lamp to perform the measurement quickly and in a fully automated manner (20). LacryDiag R uses white light in its system to capture images and infrared light for the analysis of the MGs (13). Finally, OSA R is designed to perform dry eye assessment based on the following diagnostic measurements: Dry Eye Questionnaire (DEQ-5), limbal and bulbar conjunctival redness classification, tear meniscus height, LLT interferometry, NIBUT, and meibography gland dysfunction loss percentage. In the present study, we describe the role of OSA as a new instrument in the study of dry eye, and we recommend a protocol for conducting the tests as well as describe the advantages and disadvantages compared with other instruments. Materials and equipment Questionnaire Many questionnaires to analyze and classify symptoms are entered into the software of the instruments for dry eye assessment: Ocular Surface Disease Index (OSDI) in Keratograph 5M (21), Standard Patient Evaluation of Eye Dryness Questionnaire (SPEED) in IDRA (20) and Dry Eye Questionnaire (DEQ-5) in OSA (5). On the contrary, LD (3,22), LipiView (19,20), EasyTear Viewplus, Polaris and Tearscope Plus (23, 24) have no questionnaires in their software. The sensibility and specificity are influenced not only by the number of items in each questionnaire, or the time studied but also by the capacity to classify symptoms. The OSDI is a 12-item questionnaire focusing on dry eye symptoms and their effects in the previous week. In subjects with and without dry eye disease, the OSDI has shown good specificity (0.83) and moderate sensitivity (0.60) (25). The SPEED has eight items to evaluate the frequency and severity of symptoms in the last 3 months. 
Its sensitivity and specificity values are 0.90 and 0.80, respectively (26, 27). In the DEQ-5, the symptoms in the past week are analyzed through five questions. This survey has been validated in comparison to the OSDI (Spearman correlation coefficients, r = 0.76 (28) and r = 0.65, p < 0.0001). The sensitivity is 0.71, and the specificity is 0.83 (29). Thus, any of these three questionnaires could be a good option to analyze dry eye symptoms, although the DEQ-5 might be quicker to use, given the number of items. The advantage that OSA presents with respect to other dry eye analyzers is that the questionnaire has few items and is completed quickly. However, as a disadvantage, questionnaires with a greater number of items have greater repeatability. Limbal and bulbar redness classification Regarding the limbal and bulbar redness classifications (LBRC), Keratograph 5M has software (R-Scan) to save images and objectively classify them into four degrees ranging from 0 to 3 (30). IDRA, LacryDiag and OSA use subjective procedures, given that the software only shows the image taken and the analysis must be carried out by an observer using a scale (31). The Efron grading scale, which is integrated into the software of OSA, IDRA and LacryDiag, is widely used to subjectively classify ocular redness. The Efron scale has achieved excellent reproducibility (32, 33) and is one of the more accurate scales based on fractal dimension (34). Comparing objective and subjective redness classifications, the highest reproducibility is observed when hyperemia is assessed and scored automatically (6,30). Among the rest of the ocular surface devices, Tearscope Plus, Polaris, EasyTear Viewplus and the LipiView interferometer do not offer a redness analyzer. Therefore, the ideal device should implement an automatic, objective, non-invasive LBRC assessment integrated into the same platform and software as the rest of the ocular surface parameters. The advantage that OSA presents with respect to other dry eye analyzers is that the LBRC is carried out according to the international scale established by Efron. However, as a disadvantage, the analysis of redness is subjective, whereas the Keratograph 5M includes software that performs it objectively and automatically. Lipid layer thickness There are different devices to measure the thickness of the lipid layer, most of which are based on optical interferometry, such as the OSA. These devices are Tearscope Plus, EasyTear Viewplus, Polaris, Keratograph 5M, and LipiView. The basic technology in them is the same; the measurement is performed non-invasively by observing the phenomenon of interference fringes, which allows the thickness of the lipid layer secreted by the MGs to be analyzed. With Tearscope Plus, EasyTear Viewplus and Polaris, the result obtained has a subjective and qualitative component, as the observer compares the observed image with the classification of lipid layer thickness into five categories described by Guillon (35) (amorphous structure, marbled appearance, wavy appearance, and yellow, brown, blue or reddish interference fringes). This same classification has a quantitative equivalent (from thinner to thicker: < 15 nm-not present, ∼15 nm-open meshwork, ∼30 nm-closed meshwork, ∼30/80 nm-wave, ∼80 nm-amorphous, ∼80/120 nm-color fringes, ∼120/160 nm-abnormal color) used by OSA and IDRA.
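As a quick reference for the quantitative equivalents just listed, the following sketch encodes the Guillon pattern-to-thickness mapping as a simple Python lookup; the string labels are shorthand chosen here for illustration and are not taken from the OSA or IDRA software.

```python
# Indicative lipid layer thickness (nm) for each Guillon interference pattern,
# following the quantitative equivalents quoted above; these are approximate
# literature values, not output of the device software itself.
GUILLON_LLT_NM = {
    "not present":     "< 15",
    "open meshwork":   "~15",
    "closed meshwork": "~30",
    "wave":            "30-80",
    "amorphous":       "~80",
    "color fringes":   "80-120",
    "abnormal color":  "120-160",
}

def llt_estimate(pattern: str) -> str:
    """Return the indicative thickness range (nm) for a matched Guillon pattern."""
    try:
        return GUILLON_LLT_NM[pattern.lower()]
    except KeyError as err:
        raise ValueError(f"Unknown Guillon pattern: {pattern!r}") from err

# Example: llt_estimate("wave") -> "30-80"
```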
Keratograph 5M uses four interferometric patterns instead of five 1 = open mesh (13-15 nm); 2 = closed mesh (30-50 nm); 3 = wave (50-80 nm); and 4 = color fringe (90-140 nm). In both devices, the subjectivity of the observer is influential during classification; this type of measurement is considered to be more reliable and repeatable, with less deviation in the results (36-38). Only LipiView is capable of measuring with nanometer precision (39). It is a non-invasive instrument that takes live digital images of the tear film, measures its lipid component, and assesses LLT using an interference color unit (ICU) score (usual average ≥ 75 score points). Illumination is projected over the lower third of the cornea from a color interference pattern as a result of the specular reflection at the lipid aqueous border. The detected color is related to the device and is shown as an ICU, which is equivalent to nanometers. Different publications support the reliability of the LLT measurement with LipiView, both in its value as a diagnostic element compared to other devices in which the observer intervenes and in its intra-and interobserver repeatability (19, 20,40,41). The advantage that OSA presents with respect to the rest of dry eye analyzers is that the classification of the lipid pattern of the tear film is carried out in accordance with the international scale established by Guillon. However, as disadvantages, we find that the analysis of the lipid thickness is of a qualitative nature, while LipiView presents a software that measures the thickness of the lipid layer quantitatively. Tear meniscus height Several ocular surface devices (EasyTear Viewplus, AS-OCT, Keratograph5 M, LipiView, OSA and IDRA) present the possibility of measuring tear meniscus height, and the acquisition of multiple images is performed non-invasively, as the water content can be accurately evaluated with an integrated caliper along the edge of the lower or superior eyelid. OSA Plus and IDRA are unique devices that automatically and objectively measure the tear meniscus height of the lower lid. Scientific evidence is needed to establish the repeatability and reproducibility of these devices. The works presented on tear meniscus height are scarce, but they support its repeatability, in both the one carried out in a slit lamp (42) and the one completed with Keratograph 5M, which has a significant correlation with traditional diagnostic tests for dry eye disease (43,44). Future lines of research should measure the tear meniscus volume instead of the height to estimate the aqueous layer of the tear. The advantage that OSA presents with respect to other dry eye analyzers is that the height of the tear meniscus is measured manually (with OSA) and automatically (with OSA Plus), making it an objective test. In this sense, the rest of the dry eye analyzer devices perform a manual measurement of the height of the tear meniscus. Non-invasive break-up time NIBUT is objectively measured by Keratograph 5M, OSA, IDRA and LacryDiag. These devices record the first alteration of the tear film (FNIBUT) as well as the average BUT for all points of measurement (MNIBUT). Keratograph 5M (45-48) performs the measurement automatically for 24 s, but using OSA (49), IDRA (12,50,51) and LacryDiag (13, 52), the clinician manually activates and stops video recording. Keratograph 5M has shown good repeatability and reproducibility in patients with dry eye and healthy controls (43). 
It is the most commonly utilized instrument in ocular surface studies and is used for the validation of the other devices (11,13,36,53). OSA and LacryDiag measurements of NIBUT are obtained through the detection of distortions in circular rings that are reflected in the tear film using the Placido rings accessory (13). Employing OSA Plus and IDRA, grids can be inserted into the internal cylinder of the device to project structured images onto the surface of the tear film, and the examiner can choose between manual or automatic analysis. In a validation study, IDRA showed good sensitivity and specificity values for NIBUT (12). NIBUT can be subjectively measured by Tearscope Plus, Polaris and EasyTear Viewplus. These instruments project a grid of equidistant circles of light onto the surface of the eye that are blurred by the tear film rupture. The NIBUT is taken as the time elapsed until the blur of the lines can be observed. Polaris (54), EasyTear Viewplus (55), TS (56-58) and Keratograph 5M produced similar average results relating to NIBUT in the study carried out by Bandlitz et al. (11). Because Keratograph 5M is the only device that performs the NIBUT measurement fully automatically, it is the recommended instrument for the measurement of this parameter. The advantage that OSA presents with respect to the rest of dry eye analyzers is that the measurement of the FNIBUT and MNIBUT is carried out automatically and objectively. Therefore, it is on a par with other dry eye analyzer devices such as the Keratograph 5M and the LacryDiag. Meibomian gland dysfunction Non-contact infrared meibography is a technique used to study MG dysfunction by evaluating MG dropout. The qualification of the degree of MG dropout can be determined subjectively by means of a scale or objectively through software that automatically calculates the relationship between the area of loss of MG and the total area of the eyelid (value ranging from 0 to 100%) (59). Automatic objective measures may be more useful for detecting early gland loss (60). The non-invasive instruments that can perform the study of MG dysfunction are Keratograph 5M, OSA, IDRA, EasyTear Viewplus, LacryDiag and LipiView. The analysis of meibography with EasyTear Viewplus and LipiView (20,61,62) is carried out subjectively by comparing it with a scale. In LacryDiag, the analysis is semiautomatic. The examiner manually delimits the exam area, and the software provides the percentage of MG loss (13). OSA (49) and IDRA (12, 20, 50, 51) have automatic, semiautomatic or manual procedures for analyzing the present and absent gland area and show MG loss in a classification of four degrees: 0-25, 26-50, 51-75, and 76-100%. In the manual procedure, the examiner selects the area in which the MGs are located. In addition, OSA Plus and IDRA perform automatic 3D meibography. Using Keratograph 5M, the analysis can be subjective by comparing the image obtained with a reference scale with four degrees (ranging from 0 to 3) (13, 45, 46) or semiautomatic through the ImageJ software that provides the total area analyzed and the area covered by MGs (47,60,63,64). The advantage that OSA presents with respect to the rest of dry eye analyzers is that the measurement of the MGD percentage is carried out automatically and objectively. Therefore, it represents an improvement over other dry eye analyzer devices such as the Keratograph 5M and the LacryDiag that perform manual or semi-automatic measurement using software. 
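The dropout quantification described above reduces to an area ratio followed by a binning step. The sketch below illustrates that calculation in Python, assuming the gland and eyelid areas have already been segmented (by the device software or otherwise); the pixel counts in the example are invented.

```python
def mg_loss_percentage(lost_area_px: int, total_lid_area_px: int) -> float:
    """Meibomian gland dropout as a percentage of the analyzed eyelid area (0-100%)."""
    if total_lid_area_px <= 0:
        raise ValueError("total eyelid area must be positive")
    return 100.0 * lost_area_px / total_lid_area_px

def mg_dropout_class(loss_pct: float) -> str:
    """Bin a dropout percentage into the four classes quoted above
    (0-25%, 26-50%, 51-75%, 76-100%)."""
    if not 0.0 <= loss_pct <= 100.0:
        raise ValueError("loss percentage must be within 0-100")
    if loss_pct <= 25:
        return "0-25%"
    if loss_pct <= 50:
        return "26-50%"
    if loss_pct <= 75:
        return "51-75%"
    return "76-100%"

# Example: 1800 px of gland loss over a 6000 px eyelid area -> 30% dropout -> "26-50%"
# mg_dropout_class(mg_loss_percentage(1800, 6000))
```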
The ocular surface analyzer protocol: Methods and anticipated results Non-invasive tear film analysis is performed with the Integrated Clinical Platform (ICP) within the OSA. The OSA includes a full assessment of the ocular surface through a combination of dry eye disease diagnostic tests. The test allows the quick assessment of the details of the tear film composition, including the lipid, aqueous and mucin layers, in addition to conjunctival redness classification and MG assessment. The instrument is fit in the slit lamp tonometer hall. Regarding the technical data, the image resolution is six megapixels, the acquisition mode is multishot and movie acquisition, the focus can be manual or automatic, and Placido disc and NIBUT grids are available. Furthermore, the color and sensitivity to infrared cameras are accessible, and the light source is an infrared or blue light-emitting diode (LED). An OSA device image was presented in Figure 1. The OSA protocol examination includes all available noninvasive dry eye disease tests in the device. Temperature and humidity room examination conditions must be stable during all measurements. Illumination of the room should be performed under mesopic conditions. The patient must not wear soft or rigid contact lenses at least 48 h prior to the examination. In addition, no lubricants, eyedrops or make-up should be used before the measurements. Ocular surface tests are taken in alternating fashion between both eyes. Furthermore, between OSA measurement steps, the subjects blink normally within 1 min. Prior to the next measurement, the subject blinks deliberately three full times. The order of the measurements is from minor to major tear film fluctuations in the following order. Subjective questionnaire The questionnaire included in the OSA platform is the DEQ-5 (5, 65-67). It has five questions divided into three blocks: (I) Questions about eye discomfort: (a) During a typical day in the past month, how often did you feel discomfort (from never to constantly) and (b) When your eyes feel discomfort, how intense was the feeling of discomfort at the end of the day, within 2 h of going to bed? (from never have it to very intense). (II) Questions about eye dryness: (a) During a typical day in the past month, how often did your eyes feel dry? (from never to constantly) and (b) When you felt dry, how intense was the feeling of dryness at the end of the day, within 2 h of going to bed? (from never have it to very intense). (III) Question about watery eyes: (a) During a typical day in the past month, how often did your eyes look or feel excessively watery? (from never to constantly). At the end of the questionnaire, the OSA platform summarizes the results, with scores ranging from 0 to 4 for questions I-a, II-a and III and scores ranging from 0 to 5 for questions I-b and II-b. The total possible score in this questionnaire is 22 points. Chalmers et al. (5) described mean healthy population results of 2.7 ± 3.2 points within a clinical difference to detect six points (68) (based on the variation between severity classification) (5). Limbal and bulbar redness classification The LBRC was detected within the blood vessel fluidity of the conjunctiva to evaluate the redness degree with the Efron (69) Scale (0 = normal, 1 = trace, 2 = mild, 3 = moderate and 4 = severe). For this measurement, no cone was placed on the device. A central picture must be taken to assess limbal conjunctival redness (Figure 2). 
In addition, nasal and temporal pictures must be taken to assess bulbar conjunctival redness (Figure 1). Efron (69) and Wu et al. (30) did not report mean healthy population values, although they established grade 0-1 as clinically normal. The clinical difference to detect is 0.5 grading units (68). Lipid layer thickness At this point, the quality of the tear film lipid is assessed. The LLT evaluation is performed with optical interferometry. Furthermore, the lipid layer is classified into the seven different pattern categories defined by Guillon (35). For this measurement, a plain cone is placed on the device. The patient must blink normally during an approximately 10-s video recording. Later, the video is compared with the seven reference videos to match the exact lipid layer pattern (Figure 3). Tear meniscus height The TMH test evaluates the aqueous layer quantity with a millimeter caliper (≤ 0.20 mm-abnormal and > 0.20 mm-normal). For this measurement, the plain cone is placed on the device. The picture consists of a central capture of the tear meniscus focused in the center of the green square (Figure 4). Later, the millimeter caliper is placed at the start and end of the tear meniscus, and the height is obtained. Multiple measurements can be performed, as well as nasal or temporal TMH. Mean healthy population results have been presented by several authors, including Nichols et al. Non-invasive break-up time Regarding this measurement, the tear film mucin layer quantity is assessed. The FNIBUT and MNIBUT are evaluated with a special grid cone, which measures the tear film break-up in seconds. The Placido cone is set for this test. The patient must deliberately blink two times; after this, the video recording starts and stops at the first involuntary blink. The device automatically analyzes the measurement and reports the first point of the blurred grid as the FNIBUT and the generalized tear film break-up as the MNIBUT (Figure 5). Meibomian gland dysfunction The MG dysfunction percentage is measured with an infrared non-contact camera that evaluates the upper and lower lids after everting them with a swab. For this measurement, no cone is placed on the device. MG pictures of the upper and lower eyelids must be captured inside the green square. After the capture, MG assessment can be performed automatically or manually (Figure 6). In addition, a combination of both methods can be performed with the semiautomated method, which allows the manual addition or removal of non-detected MGs. The MG dysfunction percentage can be classified as follows: ∼0%-Grade 0, < 25%-Grade 1, 26-50%-Grade 2, 51-75%-Grade 3 and > 75%-Grade 4 (72,73). The device permits generating a simulated or, with OSA Plus, a real 3D MG pattern (Figure 7). Future research lines and limitations New emerging lines of research are focused on the search for identifiers that allow ocular surface biomarkers to be recognized in a more objective, automated and minimally invasive way. To enhance the field, the development of new algorithmic calculations and the incorporation of software for data analysis, such as big data and machine learning, will allow us to recognize, detect and classify the different values, including the interrelations between them, more accurately and in an automated way with different parameters (74).
Independent and dissociated observation of the tear film, inclusion of palpebral parameters and analysis of proinflammatory factors without the need for invasive or expensive tests are potential future directions that should be explored (75,76). Future researchers should consider that the intensity of illumination produced by these instruments during their measurements can cause an increase in the blink rate and reflex tearing (77). Overall, the main limitations found are the lack of objectivity and automation in the measures conducted, the absence of correlations between existing tests and the lack of extrapolation to other similar systems. Moreover, the lack of intra- and interobserver repeatability in some of the measurement tools, due to the involvement of an observer, limits neutrality and increases bias, which impacts the validity of the results. Within the limitations of this study, research on accuracy and repeatability is needed to validate this ocular surface device. Conclusion The OSA is a device that can provide accurate, non-invasive and easy-to-use parameters to specifically interpret distinct functions of the tear film. The use of these variables and the subsequent analysis of results can generate relevant information for the management of clinical diagnoses. The OSA and OSA Plus devices are novel and relevant dry eye disease diagnostic tools; however, the automatization and objectivity of the measurements can be increased in future software or device updates. Data availability statement The original contributions presented in this study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.
Regulation of telomere metabolism by the RNA processing protein Xrn1 Abstract Telomeric DNA consists of repetitive G-rich sequences that terminate with a 3′-ended single-stranded overhang (G-tail), which is important for telomere extension by telomerase. Several proteins, including the CST complex, are necessary to maintain telomere structure and length in both yeast and mammals. Emerging evidence indicates that RNA processing factors play critical, yet poorly understood, roles in telomere metabolism. Here, we show that the lack of the RNA processing proteins Xrn1 or Rrp6 partially bypasses the requirement for the CST component Cdc13 in telomere protection by attenuating the activation of the DNA damage checkpoint. Xrn1 is necessary for checkpoint activation upon telomere uncapping because it promotes the generation of single-stranded DNA. Moreover, Xrn1 maintains telomere length by promoting the association of Cdc13 to telomeres independently of ssDNA generation and exerts this function by downregulating the transcript encoding the telomerase inhibitor Rif1. These findings reveal novel roles for RNA processing proteins in the regulation of telomere metabolism with implications for genome stability in eukaryotes. INTRODUCTION Nucleoprotein complexes called telomeres are present at the ends of linear eukaryotic chromosomes, where they ensure replication of the chromosome ends and prevent their recognition as DNA double-strand breaks (DSBs) (1,2). Telomeric DNA in most eukaryotes consists of tandem arrays of short repeated sequences which are guanine-rich in the strand running 5′-3′ from the centromere toward the chromosome end. The G-rich strand at both ends of a chromosome extends over the C-strand to form a 3′-ended single-stranded G-rich overhang (G-tail) (3,4). This G-tail is important for telomere replication, because it provides a substrate for the telomerase enzyme. Telomerase is a ribonucleoprotein complex that uses its RNA component as a template to elongate the telomere by addition of G-rich telomeric repeats to the G-tail (1,5). The telomerase-extended single-stranded DNA (ssDNA) must then be copied by the conventional replication machinery to reconstitute the double-stranded telomeric DNA. In Saccharomyces cerevisiae, single-stranded G-rich tails of 5-10 nt in length are present at telomeres throughout most of the cell cycle except in late S phase, when longer overhangs are detected (4,6,7). Removal of the last RNA primers that are generated by lagging-strand synthesis appears to match the observed overhang length (8). By contrast, the telomeric C-strands generated by leading-strand synthesis are resected by about 30-40 nt before being filled in again to leave DNA ends with a 3′ overhang of about 10 nt (8,9). This resection depends on the MRX (Mre11-Rad50-Xrs2) complex, on the exonuclease Exo1 and on the Sgs1-Dna2 helicase-nuclease complex (10)(11)(12). G-tails at both leading- and lagging-strand telomeres are covered by the CST (Cdc13-Stn1-Ten1) complex, which is an RPA-like complex that binds with high affinity and sequence specificity to the telomeric ssDNA overhangs (13). The CST complex drives the localization of telomerase to telomeres through a direct interaction between Cdc13 and the telomerase subunit Est1 (14)(15)(16)(17). MRX, in turn, ensures robust association of telomerase with telomeres by promoting the binding of the checkpoint kinase Tel1 via a specific interaction with the MRX subunit Xrs2 (18)(19)(20)(21)(22).
It remains unclear whether Tel1 facilitates telomerase association directly by phosphorylating specific targets that promote telomerase recruitment, and/or indirectly by stimulating resection of the C-strand, thus generating a ssDNA substrate for telomerase action (23)(24)(25). Interestingly, Mre11 inactivation strongly reduces the binding to telomeres of the telomerase subunits Est1 and Est2, while it has a moderate effect on Cdc13 binding (26). Further work has shown that the absence of Mre11 reduces Cdc13 binding only to the leading-strand telomere, while Cdc13 ability to bind to the lagging-strand telomere is not affected (9). This observation is consistent with the finding that Mre11 binds only to leading telomeres to generate the single-stranded overhangs (9). In addition to drive telomerase localization to telomeres, the CST complex also genetically and physically interacts with the DNA polymerase ␣/primase complex and promotes lagging strand synthesis during telomere replication (27,28). Furthermore, it prevents inappropriate generation of ssDNA at telomeric ends. Cdc13 inactivation through either the cdc13-1 temperature sensitive allele or the cdc13-td conditional degron allele results in both degradation of the 5 -terminated DNA strand and checkpoint-mediated cell cycle arrest (29)(30)(31). Similarly, temperature sensitive alleles of either the STN1 or TEN1 gene cause telomere degradation and checkpoint-dependent cell cycle arrest at the nonpermissive temperature (32)(33)(34)(35). DNA degradation in the cdc13-1 mutant depends mainly on the 5 -3 nuclease Exo1 (36,37), suggesting that CST protects telomeric DNA from Exo1 activity. There is emerging evidence that telomere metabolism is influenced by RNA processing pathways. In eukaryotes, RNA processing relies on two highly conserved pathways involving both 5 -3 and 3 -5 exoribonuclease activities (38). In particular, 5 -3 degradation is performed by the Xrn protein family, which comprises the cytoplasmic Xrn1 enzyme and the nuclear Rat1 enzyme (also known as Xrn2) (39). The 3 -5 RNA processing activity is due to the exoribonuclease Rrp6 that belongs to the nuclear exosome (40). In addition, RNA molecules are subjected to a quality control system, which is called nonsense-mediated mRNA decay (NMD) and degrades non-functional RNAs that might otherwise give rise to defective protein products (38). RNA processing proteins have been recently implicated in telomere metabolism in both yeast and mammals, although the related mechanisms are poorly understood. In particular, Xrn1 has been identified in genome-wide screenings for S. cerevisiae mutants with altered telomere length (41,42). Moreover, proteins belonging to the mammalian NMD pathway have been found to bind telomeres and to control telomere length (43,44). Similarly, the lack of the S. cerevisiae NMD proteins was shown to cause telomere shortening by increasing the amount of Stn1 and Ten1, which in turn inhibit telomerase activity by interfering with Est1-Cdc13 interaction (45)(46)(47)(48). Furthermore, both Xrn1 and the nuclear exosome control degradation of the RNA component of human telomerase (49). Finally, Rat1 and the NMD pathway control the level of a new class of noncoding RNAs called TERRA (telomeric repeat-containing RNA), which are transcribed from the subtelomeric sequences and likely regulate telomere length (50)(51)(52). Here we show that the lack of the S. 
cerevisiae RNA processing factors Xrn1 or Rrp6 suppresses the temperature sensitivity of cdc13-1 mutant cells by attenuating the activation of the DNA damage checkpoint response. In particular, Xrn1 is required to activate the checkpoint upon telomere uncapping because it promotes the generation of ssDNA. Furthermore, Xrn1 maintains telomere length independently of ssDNA generation by promoting Cdc13 association to telomeres through downregulation of the transcript encoding the telomerase inhibitor Rif1.

Southern blot analysis of telomere length. The length of HO-induced telomeres was determined as previously described (53). Briefly, yeast DNA was digested with SpeI and the resulting DNA fragments were separated by 0.8% agarose gel electrophoresis and hybridized with a 32P-labeled probe corresponding to a 500 bp ADE2 fragment. To determine the length of native telomeres, XhoI-digested yeast DNA was subjected to 0.8% agarose gel electrophoresis and hybridized with a 32P-labeled poly(GT) probe. Standard hybridization conditions were used.

ChIP and qPCR. ChIP analysis was performed as previously described (54). Quantification of immunoprecipitated DNA was achieved by quantitative real-time PCR (qPCR) on a Bio-Rad MiniOpticon apparatus. Triplicate samples in a 20 μl reaction mixture containing 10 ng of template DNA, 300 nM of each primer and 2× SsoFast EvaGreen supermix (Bio-Rad #1725201) were run in white 48-well Multiplate PCR plates (Bio-Rad #MLL4851). The qPCR program was as follows: step 1, 98°C for 2 min; step 2, 98°C for 5 s; step 3, 60°C for 10 s; step 4, return to step 2 and repeat 30 times. At the end of the cycling program, a melting program (from 65 to 95°C with a 0.5°C increment every 5 s) was run to test the specificity of each qPCR. qPCR at the HO-induced telomere was carried out by using primer pairs located 640 bp centromere-proximal to the HO cutting site on chromosome VII and at the non-telomeric ARO1 fragment of chromosome IV (CON). qPCR at native telomeres was carried out by using primer pairs located at 70 and 139 bp from the TG sequences on telomeres VI-R (right) and XV-L (left), respectively. Data are expressed as fold enrichment over the amount of CON in the immunoprecipitates after normalization to input signals for each primer set.

qRT-PCR. Total RNA was extracted from cells using the Bio-Rad Aurum total RNA mini kit. First-strand cDNA synthesis was performed with the Bio-Rad iScript cDNA Synthesis Kit. qRT-PCR was performed on a MiniOpticon Real-time PCR system (Bio-Rad) and RNA levels were quantified using the ΔΔCt method. Quantities were normalized to ACT1 RNA levels and compared to that of wild-type cells, which was set to 1. Primer sequences are available upon request.

Fluorescence microscopy. Yeast cells were grown and processed for fluorescence microscopy as described previously (55). Fluorophores were cyan fluorescent protein (CFP, clone W7) (56) and yellow fluorescent protein (YFP, clone 10C) (57). Fluorophores were visualized on a DeltaVision Elite microscope (Applied Precision, Inc.) equipped with a 100× objective lens (Olympus U-PLAN S-APO, NA 1.4), a cooled Evolve 512 EM-CCD camera (Photometrics, Japan) and an Insight solid-state illumination source (Applied Precision, Inc.). Images were acquired using softWoRx software (Applied Precision, Inc.) and processed with Volocity software (PerkinElmer).

Other techniques. Visualization of the single-stranded overhangs at native telomeres was done as previously described (6).
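The two quantification schemes described above for the ChIP-qPCR and qRT-PCR data (fold enrichment of a telomeric amplicon over the non-telomeric CON locus after input normalization, and transcript levels normalized to ACT1 and to wild type by the ΔΔCt method) reduce to a short calculation. The Python sketch below only illustrates that arithmetic; the function names and all Ct values are invented for illustration and are not taken from the study, and the base-2 exponent assumes near-perfect primer efficiency, which the melting-curve check is meant to support.

```python
# Illustrative sketch of the quantification described in the Methods.
# All Ct values are hypothetical; only the arithmetic mirrors the text:
#   ChIP: fold enrichment = (telomere IP/input) / (CON IP/input)
#   qRT-PCR: relative level = 2^-ddCt, normalized to ACT1 and to wild type (= 1)

def ip_over_input(ct_ip, ct_input):
    """Relative IP signal for one primer set, expressed as 2^-(Ct_IP - Ct_input).
    Any constant correction for the input dilution cancels in the final ratio."""
    return 2 ** -(ct_ip - ct_input)

def chip_fold_enrichment(tel_ip, tel_input, con_ip, con_input):
    """Fold enrichment of a telomeric amplicon over the non-telomeric ARO1
    control locus (CON), each normalized to its own input signal."""
    return ip_over_input(tel_ip, tel_input) / ip_over_input(con_ip, con_input)

def ddct_relative_level(ct_target, ct_act1, ct_target_wt, ct_act1_wt):
    """Relative RNA level by the ddCt method: target normalized to ACT1,
    then expressed relative to the wild-type sample (wild type = 1)."""
    dct_sample = ct_target - ct_act1
    dct_wt = ct_target_wt - ct_act1_wt
    return 2 ** -(dct_sample - dct_wt)

# Hypothetical Ct values, for illustration only.
print(chip_fold_enrichment(tel_ip=27.1, tel_input=24.0,
                           con_ip=30.3, con_input=24.2))       # ~8-fold enrichment
print(ddct_relative_level(ct_target=22.0, ct_act1=18.0,
                          ct_target_wt=23.6, ct_act1_wt=18.0))  # ~3-fold upregulation
```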
For loading control, the same gel was denatured and hybridized with the end-labeled C-rich oligonucleotide. Protein extracts to detect Rad53 were prepared by trichloroacetic acid (TCA) precipitation. Rad53 was detected using anti-Rad53 polyclonal antibodies from Abcam. Secondary antibodies were purchased from Amersham and proteins were visualized by an enhanced chemiluminescence system according to the manufacturer's instructions.

The lack of Xrn1 or Rrp6 partially suppresses the temperature sensitivity of cdc13-1 cells. Protection of telomeres from degradation depends on the CST (Cdc13-Stn1-Ten1) complex, which specifically binds to the telomeric ssDNA overhangs (13). We have previously shown that the RNA processing proteins Xrn1 and Rrp6 are required to fully activate the checkpoint kinase Mec1/ATR at intrachromosomal DSBs (58). We then asked whether Xrn1 and/or Rrp6 regulate checkpoint activation also in response to telomere uncapping. To this end, we analyzed the effect of deleting either the XRN1 or the RRP6 gene in cdc13-1 cells, which show temperature-dependent loss of telomere capping, ssDNA production, checkpoint activation and cell death (29,30). As expected, cdc13-1 cells were viable at the permissive temperature (25°C) but died at restrictive temperatures (26-30°C) (Figure 1A). Deletion of either XRN1 or RRP6 partially suppressed the temperature sensitivity of cdc13-1 cells, as it allowed cdc13-1 cells to form colonies at 26-28°C (Figure 1A). Xrn1 and Rrp6 appear to impair cell viability of cdc13-1 cells by acting in two different pathways, as xrn1Δ rrp6Δ cdc13-1 triple mutant cells formed colonies at 30°C more efficiently than both xrn1Δ cdc13-1 and rrp6Δ cdc13-1 double mutant cells (Figure 1B). Xrn1 controls cytoplasmic RNA decay, whereas RNA processing in the nucleus depends on its nuclear paralog Rat1 (61). Targeting Rat1 to the cytoplasm by deleting its nuclear localization sequence (rat1-ΔNLS) restores Xrn1-like function in RNA degradation (61), prompting us to ask whether it could also restore the Xrn1 function that causes loss of viability of cdc13-1 cells. Strikingly, cdc13-1 xrn1Δ cells expressing the rat1-ΔNLS allele on a centromeric plasmid formed colonies at 27°C much less efficiently than cdc13-1 xrn1Δ cells expressing wild-type RAT1 (Figure 1D). Thus, Xrn1 impairs viability in the presence of uncapped telomeres by controlling a cytoplasmic RNA decay pathway.

Xrn1 and Rrp6 are required to fully activate the checkpoint at uncapped telomeres. A checkpoint-dependent arrest of the metaphase-to-anaphase transition is observed in cdc13-1 cells at high temperatures (29). Failure to turn on the checkpoint allows cdc13-1 cells to form colonies at 28°C (30), indicating that checkpoint activation can partially account for the loss of viability of cdc13-1 cells. We therefore asked whether the enhanced temperature resistance of cdc13-1 xrn1Δ and cdc13-1 rrp6Δ cells might be related to defective checkpoint activation. Cell cultures were arrested in G1 with α-factor at 23°C and then released from G1 arrest at 28°C, and nuclear division was monitored at different time points. As expected, cdc13-1 cells remained arrested as large-budded cells with a single nucleus throughout the experiment (Figure 2A). Conversely, although xrn1Δ and rrp6Δ single mutant cells slowed down nuclear division compared to wild-type cells, cdc13-1 xrn1Δ and cdc13-1 rrp6Δ cells started to divide nuclei about 90 min after release (Figure 2A).
We then examined under the same conditions phosphorylation of the Rad53 checkpoint kinase that is necessary for checkpoint activation and can be detected as changes in Rad53 electrophoretic mobility. After release at 28 • C from G1 arrest, Rad53 phosphorylation was strong in cdc13-1 cells, as expected, whereas it was undetectable in cdc13-1 xrn1Δ cells and it was reduced in cdc13-1 rrp6Δ cells (Figure 2B). Taken together, these results indicate that Xrn1 and Rrp6 are required to fully activate the checkpoint in response to telomere uncapping caused by defective Cdc13. Xrn1 and Rrp6 regulate telomere capping through a mechanism that is distinct from that involving the NMD pathway In both yeast and mammals, the NMD pathway is involved in quality control of gene expression by eliminating aberrant RNAs (62). Interestingly, NMD inactivation was shown to suppress the temperature sensitivity of cdc13-1 cells by increasing the levels of the Cdc13 interacting proteins Stn1 and Ten1, which likely stabilize the CST complex at telomeres (46,47,63). These high levels of Stn1 and Ten1 are also responsible for the short telomere length phenotype of nmdΔ mutants, possibly because Stn1 and Ten1 inhibit telomerase activity by interfering with Est1-Cdc13 interaction (16,34,64,65). As 77% of the transcripts that are upregulated in nmdΔ cells are upregulated also in xrn1Δ cells (66), we asked whether Xrn1 and/or Rrp6 action at telomeres might involve the same pathway that is regulated by NMD. To this purpose, we constructed fully functional Ten1-Myc and Stn1-HA alleles to analyze the levels of Ten1 and Stn1 in xrn1Δ and rrp6Δ cells. As expected, the amounts of Ten1-Myc and Stn1-HA were greatly increased in cells lacking the NMD protein Upf2 (Figure 2C and D). By contrast, the lack of Xrn1 or Rrp6 did not change the amount of Ten1-Myc ( Figure 2C) and only very slightly increased the amount of Stn1-HA ( Figure 2D). Furthermore, Xrn1 and Rrp6 do not compensate for the absence of each other in controlling Ten1 and Stn1 levels, as the amount of Ten1-Myc ( Figure 2E) and Stn1-HA ( Figure 2F) in xrn1Δ rrp6Δ double mutant cells was similar to that in xrn1Δ and rrp6Δ single mutant cells. The presence of the Myc or HA tag at the C-terminus of Ten1 and Stn1, respectively, did not affect the possible regulation of the corresponding mRNAs by Xrn1 or Rrp6, as the suppression of the temperature sensitivity of cdc13-1 cells by XRN1 or RRP6 deletion was similar either in the presence or in the absence of the TEN1-MYC or STN1-HA allele (Supplementary Figure S1). We also analyzed the epistatic relationships between Xrn1/Rrp6 and NMD. The effect of deleting UPF2 in xrn1Δ cdc13-1 cells could not be assessed due to the poor viability of the triple mutant at 23-25 • C. Nonetheless, deletion of UPF2, which partially suppressed the temperature sensitivity of cdc13-1 cells, further improved the temperature resistance of cdc13-1 rrp6Δ double mutant cells at 32 • C compared to both cdc13-1 rrp6Δ and cdc13-1 upf2Δ cells ( Figure 2G). Altogether, these data suggest that Xrn1 and Rrp6 impair survival of cdc13-1 by acting in a pathway that is different from that involving the NMD proteins. Xrn1 is required to generate ssDNA at uncapped telomeres It is known that cell death of cdc13-1 cells at restrictive temperatures is due to generation of telomeric ssDNA that triggers checkpoint-mediated metaphase arrest (29,30). 
Hence, the improved temperature resistance of cdc13-1 xrn1Δ and cdc13-1 rrp6Δ cells might be due to a reduction of the amount of telomeric DNA that becomes single-stranded in cdc13-1 cells at restrictive temperatures. We therefore assessed the presence of ssDNA at natural chromosome ends by analyzing genomic DNA prepared from exponentially growing cells. Non-denaturing in-gel hybridization with a C-rich radiolabeled oligonucleotide showed that the amount of telomeric ssDNA after incubation of cells at 28 • C for 5 h was lower in cdc13-1 xrn1Δ double mutant cells than in cdc13-1 cells ( Figure 3A). By contrast, the level of single-stranded TG sequences showed a very similar increase in both cdc13-1 and cdc13-1 rrp6Δ mutant cells compared to wild type cells ( Figure 3A). The function of Cdc13 in telomere protection is mediated by its direct interaction with Stn1 and Ten1. In contrast to Cdc13, Stn1 inhibits telomerase action by competing with Est1 for binding to Cdc13 (64,65). As a consequence, cells lacking the Stn1 C-terminus (stn1-ΔC) display long telomeres because the Stn1-ΔC variant fails to compete with Est1 for binding to Cdc13. Furthermore, these same cells accumulate telomeric ssDNA, although the amount of this ss-DNA is not enough to impair cell viability (34,64,67). We therefore evaluated the specificity of the genetic interactions between Cdc13, Xrn1 and Rrp6 by analyzing the consequences of deleting XRN1 or RRP6 in stn1-ΔC cells. Like in cdc13-1 cells, generation of telomeric ssDNA in stn1-ΔC cells was reduced by the lack of Xrn1, but not by RRP6 deletion ( Figure 3B). Thus, Xrn1 is required to generate ss-DNA at dysfunctional telomeres, whereas Rrp6 does not, implying that the defective checkpoint response in cdc13-1 rrp6Δ cells cannot be ascribed to a reduced generation of telomeric ssDNA. The data above suggest that the lack of Xrn1 might suppress the temperature sensitivity of cdc13-1 cells by attenuating the generation of telomeric ssDNA. We then asked whether the overexpression of Exo1, which bypasses MRX requirement for intrachromosomal DSB end resection (68), decreased the maximum permissive temperature of cdc13-1 xrn1Δ cells. Strikingly, cdc13-1 xrn1Δ cells containing the EXO1 gene on a 2 plasmid were more temperaturesensitive than cdc13-1 xrn1Δ cells containing the empty vector ( Figure 3C). This finding supports the hypothesis that the lack of Xrn1 can partially bypass the requirement for CST in telomere capping because it attenuates the generation of telomeric ssDNA. Xrn1 maintains telomere length by acting as a cytoplasmic nuclease Xrn1 has been identified in genome-wide screenings for S. cerevisiae mutants that are affected in telomere length (41,42). We confirmed the requirement for Xrn1 in telomere elongation by using an inducible short telomere assay that allows the generation of a single short telomere without affecting the length of the other telomeres in the same cell (10). We used a strain that carried at the ADH4 locus on chromosome VII an internal tract of telomeric DNA sequence (81 bp TG) adjacent to an HO endonuclease recognition sequence ( Figure 4A) (10,69). Upon cleavage by HO, the fragment distal to the break is lost, and, over time, the TG side of the break is elongated by the telomerase. As shown in Figure 4B, sequence addition at the HO-derived telomere was clearly detectable after galactose addition in Figure 3. The lack of Xrn1 reduces ssDNA generation at uncapped telomeres. 
(A and B) Cell cultures exponentially growing at 23 • C were shifted to 28 • C for 5 h. Genomic DNA was digested with XhoI, and single-stranded G-tails were visualized by non-denaturing in-gel hybridization (native gel) using an end-labeled C-rich oligonucleotide as a probe. The gel was denatured and hybridized again with the same probe for loading control (denatured gel). (C) Cell cultures were grown overnight and 10-fold serial dilutions were spotted onto YEPD plates. Plates were incubated at the indicated temperatures before images were taken. wild type cells, whereas it was strongly delayed and reduced in xrn1Δ cells, confirming the requirement for Xrn1 in telomere elongation. Xrn1 controls telomere length by acting as cytoplasmic nuclease. In fact, expression of the Xrn1 nuclear paralog Rat1 lacking its nuclear localization sequence (rat1-ΔNLS) restored telomere length in xrn1Δ cells ( Figure 4C). Furthermore, telomeres in xrn1-E176G cells expressing the nuclease defective Xrn1 variant were as short as in xrn1Δ cells ( Figure 4D). In a deep transcriptome analysis of the genes that are misregulated by the lack of Xrn1, xrn1Δ cells showed ∼3fold reduction of the levels of TLC1 (58), the RNA component of the telomerase enzyme. However, a 2 plasmid overexpressing TLC1 from a galactose inducible promoter did not allow xrn1Δ cells to elongate telomeres (Supplementary Figure S2A), although wild-type and xrn1Δ cells expressed similar amount of TLC1 RNA (Supplementary Figure S2B). Thus, telomere shortening in xrn1Δ cells cannot be simply explained by the reduction of TLC1 RNA. Xrn1 promotes Cdc13 association to telomeres independently of ssDNA generation Productive association of telomerase to telomeres requires the generation of ssDNA that leads to the recruitment of Cdc13. Cdc13 in turn recruits the telomerase to telomeres by interacting with the telomerase subunit Est1 (14)(15)(16)(17). Binding of MRX to telomeres allows Tel1 recruitment that strengthens the association of telomerase to telomeres by phosphorylating unknown targets (23)(24)(25). The finding that telomere shortening in mrxΔ and tel1Δ cells can be suppressed by targeting the telomerase to telomeres through a Cdc13-Est1 protein fusion (70) suggests that MRX/Tel1 promotes Cdc13-Est1 interaction rather than Cdc13 association to telomeres. As Xrn1 was found to promote MRX association at intrachromosomal DSBs (58), we asked whether the expression of a Cdc13-Est1 fusion could restore telomere length in xrn1Δ cells. A Cdc13-Est1 fusion expressed from a singlecopy plasmid did not suppress the telomere length defect of xrn1Δ cells, although it was capable to elongate telomeres in wild type, mre11Δ and tel1Δ cells ( Figure 4E) and all cell cultures expressed similar levels of CDC13-EST1 mRNA ( Figure 4F). This finding suggests that the telomere length defect of xrn1Δ cells is not due to MRX dysfunction. The inability of the Cdc13-Est1 fusion protein to suppress the telomere length defect of xrn1Δ cells raises the possibility that Cdc13 itself cannot bind telomeres in the absence of Xrn1. As loss of telomerase is known to be accompanied by recruitment of Cdc13 and Mre11 to telomeres (71), we analyzed the generation of Cdc13 and Mre11 foci before or after loss of telomerase in wild-type and xrn1Δ cells. These cells expressed fully functional Cdc13-CFP and Mre11-YFP fusion proteins. 
As expected, telomerase removal by loss of a plasmid-borne copy of EST2 resulted in a significant increase of both Mre11-YFP and Cdc13-CFP foci in wild-type cells as early as 25-50 generations after loss of telomerase, with only a subset of them colocalizing ( Figure 5A-C). By contrast, xrn1Δ cells showed a reduction in the number of Cdc13-CFP foci ( Figure 5A and B), but not of Mre11-YFP foci ( Figure 5A and C), compared to wild-type, suggesting a requirement for Xrn1 in promoting Cdc13 association to telomeres. To investigate further this hypothesis, we analyzed the amount of Cdc13 bound at native telomeres in wild-type and xrn1Δ cells that were released into a synchronous cell cycle from a G1 arrest ( Figure 6A). Cdc13 binding to telomeres peaked in wild type cells 45 min after release, concomitantly with the completion of DNA replication, while it remained very low in xrn1Δ cells throughout the time course ( Figure 6A and B), although both cell type extracts contained similar amount of Cdc13 ( Figure 6C). Because Cdc13 binds telomeric ssDNA and the lack of Xrn1 impairs ssDNA generation at uncapped telomeres, the reduced Cdc13 association at telomeres in xrn1Δ cells might be due to defective generation of telomeric single-stranded overhangs. To investigate this issue, XhoI-cut DNA prepared at different time points after release into the cell cycle from a G1 arrest was subjected to native gel electrophoresis, followed by in-gel hybridization with a C-rich radiolabeled oligonucleotide. As shown in Figure 6D, both wild type and xrn1Δ cells showed similar amount of G-tail signals that reached their maximal levels 15-45 min after release, indicating that the lack of Xrn1 does not affect the generation of single-stranded overhangs at capped telomeres. As generation of telomeric single-stranded overhangs requires the MRX complex (10-12,72), we also analyzed Mre11 association at native telomeres. Wild-type and xrn1Δ cells released into a synchronous cell cycle from a G1 arrest showed similar amount of telomere-bound Mre11 ( Figure 6E), consistent with the finding that the lack of Xrn1 does not affect the generation of telomeric singlestranded overhangs. Altogether, these data indicate that Xrn1 promotes Cdc13 binding/association to telomeres independently of ssDNA generation. Xrn1 promotes Cdc13 association at telomeres by downregulating Rif1 level Deep transcriptome analysis showed that the RIF1 mRNA level was ∼3-fold higher in xrn1Δ cells than in wild-type (58). This mRNA upregulation caused an increase of the Rif1 protein level, as shown by western blot analysis of wild type and xrn1Δ protein extracts ( Figure 7A), prompting us to test whether this Rif1 upregulation can account for the telomere defects of xrn1Δ cells. As expected from previous findings that Rif1 has a very slight effect on the generation of telomeric ssDNA (73,74), the increased Rif1 levels did not account for the increased temperature resistance of cdc13-1 xrn1Δ cells compared to cdc13-1. In fact, although RIF1 deletion decreased the maximum permissive temperature of cdc13-1 cells (75,76), cdc13-1 rif1Δ xrn1Δ cells were more temperature-resistant than cdc13-1 rif1Δ cells ( Figure 7B), indicating that the suppression of the temperature sensitivity of cdc13-1 cells by XRN1 deletion does not require Rif1. Rif1 was originally identified as a telomere-binding protein that negatively regulates telomerase-mediated telomere elongation (77). 
Interestingly, the lack of Rif1, although causing a very slight increase of ssDNA formation, yet leads to considerably more Cdc13 binding at telomeres (74). Therefore, Rif1 might block the association/accumulation of Cdc13 at telomeres through a direct mechanism. Consistent with this hypothesis, a 2 plasmid carrying the RIF1 gene counteracted the ability of the Cdc13-Est1 fusion to elongate telomeres in wild-type cells ( Figure 7C). Thus, we investigated whether the upregulation of Rif1 in xrn1Δ cells could explain both the reduced Cdc13 binding and the telomere length defect of the same cells. As shown in Figure 7D, deletion of RIF1 totally suppressed the telomere length defect of xrn1Δ cells. Telomere length in rif1Δ xrn1Δ cells was the same as in rif1Δ cells ( Figure 7D), suggesting that Xrn1 acts in telomere length maintenance by counteracting the effects of Rif1. As telomeres were much longer in xrn1Δ rif1Δ cells than in xrn1Δ cells, we could not compare the above cell types for Cdc13 association at native telomeres. Thus, we used the strain with the 81 bp TG repeat sequence adjacent to the HO endonuclease cut site ( Figure 4A) (10), where HO induction generates an HO-derived telomere whose length is similar in both xrn1Δ and xrn1Δ rif1Δ cells. As expected (74), ChIP analysis revealed that the amount of Cdc13 associated to the HO-induced telomere was higher in rif1Δ cells than in wild-type ( Figure 7E). Furthermore, although all cell type extracts contained similar amounts of Cdc13 ( Figure 7F), the lack of Rif1 restored Cdc13 association to telomeres in xrn1Δ cells. In fact, the amount of Cdc13 bound at the HOinduced telomere in xrn1Δ rif1Δ cells was higher than in xrn1Δ cells ( Figure 7E). Altogether, these findings indicate that Xrn1 promotes Cdc13 association to telomeres by controlling Rif1 levels. DISCUSSION Here we provide evidence that the RNA processing proteins Xrn1 and Rrp6 are involved in telomere metabolism. In particular, we found that the temperature sensitivity of cdc13-1 mutant cells is partially suppressed by the lack of Rrp6 or Xrn1, as well as by Rrp6 or Xrn1 nuclease defective variants, independently of the NMD proteins. The increased temperature resistance of cdc13-1 xrn1Δ and cdc13-1 rrp6Δ cells is related to their inability to activate the checkpoint. Checkpoint activation in cdc13-1 cells is due to the accumulation at telomeres of ssDNA that turns on the checkpoint kinase Mec1. Our data indicate that the defective checkpoint response in cdc13-1 rrp6Δ double mutant cells cannot be ascribed to reduced ssDNA generation. Interestingly, Rrp6 was shown to promote the association of RPA (58) and Rad51 (78) at intrachromosomal DSBs in yeast and mammals, respectively, by an unknown mechanism. Thus, one possibility is that Rrp6 modulates directly or indirectly the association to telomeric ssDNA of protein(s) required for checkpoint activation. Chromatin samples taken at the indicated times after ␣-factor release were immunoprecipitated with anti-Myc antibodies. Coimmunoprecipitated DNA was analyzed by quantitative real-time PCR (qPCR) using primer pairs located at telomeres VI-R and XV-L and at the non-telomeric ARO1 fragment of chromosome IV (CON). Data are expressed as relative fold enrichment of VI-R and XV-L telomere signals over CON signals after normalization to input signals for each primer set. The mean values ±s.d. are represented (n = 3). (C) Western blot with anti-Myc antibodies of extracts used for the ChIP analysis shown in (B). 
(D) Genomic DNA prepared from cell samples in (A) was digested with XhoI and the single-strand telomere overhang was visualized by in-gel hybridization (native gel) using an end-labeled C-rich oligonucleotide. The same DNA samples were hybridized with a radiolabeled poly(GT) probe as loading control (denatured gel). (E) Chromatin samples taken at the indicated times after ␣-factor release were immunoprecipitated with anti-Myc antibodies. Coimmunoprecipitated DNA was analyzed by qPCR using primer pairs located at VI-R telomere. Data are expressed as in (B). XhoI-cut genomic DNA from exponentially growing cells was subjected to Southern blot analysis using a radiolabeled poly(GT) probe. (E) HO expression was induced at time zero by galactose addition to yeast strains carrying the system described in Figure 4A. Chromatin samples taken at the indicated times after HO induction were immunoprecipitated with anti-Myc antibodies and coimmunoprecipitated DNA was analyzed by qPCR using primer pairs located 640 bp centromere-proximal to the HO cutting site and at the non-telomeric ARO1 fragment of chromosome IV (CON). Data are expressed as relative fold enrichment of TG-HO over CON signal after normalization to input signals for each primer set. The mean values ±s.d. are represented (n = 3). *P < 0.05, t-test. (F) Western blot with anti-Myc antibodies of extracts used for the ChIP analysis shown in (E). By contrast and consistent with the finding that Xrn1 and Rrp6 impairs viability of cdc13-1 cells by acting in two distinct pathways, the lack of Xrn1 reduces the generation of telomeric ssDNA upon telomere uncapping. This observation, together with the finding that EXO1 overexpression decreases the maximum permissive temperature of cdc13-1 xrn1Δ cells, indicates that Xrn1 participates in checkpoint activation in response to telomere uncapping by promoting the generation of telomeric ssDNA. Interestingly, Xrn1 contributes to generate ssDNA also at intrachromosomal DSBs that are subjected to extensive resection and stimulates Mec1-dependent checkpoint activation, similarly to telomeres following Cdc13 inactivation (29,30,58). By contrast, Xrn1 does not contribute to the generation of singlestranded overhangs at capped telomeres, suggesting a role for Xrn1 in promoting resection specifically at DNA ends that elicit a DNA damage response. Because Xrn1 acts in resection as a cytoplasmic nuclease, one possibility is that the lack of Xrn1 increases the persistence of non-coding RNAs that can inhibit the action of nucleases by annealing with the ssDNA molecules that are generated following telomere uncapping. However, overproduction of the Ribonuclease H1 (Rnh1), which decreases endogenous RNA:DNA hybrids in vivo as well as TERRA levels and R loops at telomeres (79)(80)(81)(82), did not restore the temperature sensitivity in cdc13-1 xrn1Δ cells (Supplementary Figure S3). A previous deep transcriptome analysis has revealed that the amounts of the majority of mRNAs coding for DNA damage response proteins remained unchanged in xrn1Δ cells and the few genes that were misregulated are not obvious candidates (58). Therefore, further work will be required to identify the target(s) by which Xrn1 promotes ssDNA generation and checkpoint activation at uncapped telomeres. We also show that Xrn1 acts as a cytoplasmic nuclease to maintain telomere length. Strikingly, the lack of Xrn1 dramatically reduces Cdc13 association to telomeres. 
This defective Cdc13 recruitment is not due to reduced ssDNA generation, as the lack of Xrn1 does not impair ssDNA generation at capped telomeres. On the other hand, the lack of Xrn1 causes upregulation of the RIF1 mRNA and subsequent increase of the Rif1 protein level. Rif1 was shown to decrease Cdc13 association at telomeres independently of ssDNA generation (74), suggesting that the high Rif1 levels in xrn1Δ cells might explain the reduced Cdc13 binding and the telomere length defect of the same cells. Consistent with this hypothesis, we found that the lack of Rif1 completely suppresses the telomere length defect and restores Cdc13 association at telomeres in xrn1Δ cells. Altogether, these findings indicate that Xrn1 promotes Cdc13 association to telomeres and telomere elongation independently of ssDNA generation by controlling the amount of Rif1. By contrast, Rif1 is not the Xrn1 target in promoting ssDNA generation and checkpoint activation at uncapped telomeres, as the lack of Xrn1 still suppresses the temperature sensitivity of cdc13-1 rif1Δ cells. In conclusion, Xrn1 appears to have two separate functions at telomeres: (i) it facilitates the generation of ss-DNA and checkpoint activation at uncapped telomeres; (ii) it maintains telomere length independently of ssDNA generation by downregulating the amount of Rif1, which in turn counteracts Cdc13 association to telomeres. As RNA-processing factors are evolutionarily conserved and telomere protection is critical for preserving genetic stability and counteracting cancer development, our findings highlight novel mechanisms through which RNA processing proteins can preserve genome integrity.
Cell Technologies in the Stress Urinary Incontinence Correction The scientific literature of recent years contains a lot of data about using multipotent stromal cells (MSCs) for urinary incontinence correction. Despite this, the ideal treatment method for urinary incontinence has not yet been created. The cell therapy results in patients and experimental animals with incontinence have shown promising results, but the procedures require further optimization, and more research is needed to focus on the clinical phase. The MSC use appears to be a feasible, safe, and effective method of treatment for patients with urinary incontinence. However, the best mode for application of cell technology is still under study. Most clinical investigations have been performed on only a few patients and during rather short follow-up periods, which, together with an incomplete knowledge of the mechanisms of MSC action, does not make it possible for their widespread implementation. The technical details regarding the MSC application remain to be identified in more rigorous preclinical and clinical trials. Introduction Pelvic organ prolapse and stress urinary incontinence are reported in 40-50% of postmenopausal women, affecting 200 million people worldwide. Stress urinary incontinence is more common in women after vaginal delivery than in patients who have undergone a caesarean section, this suggesting a lack of complete tissue repair after vaginal delivery. Standard therapies often provide symptomatic relief, but do not target against the underlying etiology, and exhibit tremendous patient-to-patient variability of results, limited success, and procedure-related complications. More clinical trials of new treatment methods involving the required number of patients and with evaluation the long-term results of therapeutic methods for the incontinence correction should be encouraged [1][2][3][4][5][6][7]. When studying the pathogenesis of stress urinary incontinence in women, it is necessary to pay attention to age-related changes of the female urethra. The minimum and maximum indicators of the length, width, area or volume of organs and structures in the lower urinary tract can normally vary up to 2-3 times. With age, in healthy women the absolute and relative length of the urethra, the urethrovesical angle, and the inclination of the urethra do not change. Both smooth and striated muscle tissues, which are part of various departments of the female urethra, undergo atrophy during the aging process. Smooth muscle tissue is less variable with age, but striated muscle symplasts are sometimes completely absent in urethral biopsies from elderly patients. With age, the vascularization and density of the innervation decrease in the urethral structures, but the content of connective tissue in the external urethral sphincter increases. Urinary tract mobility at young women is more pronounced than at older women [8]. Over the past two decades, tissue engineering and regenerative medicine have made significant advances. Although the term "regenerative medicine" covers most areas of medicine, in fact urology is one of the most progressive. A lot of urological innovations and inventions have been studied over the past decades. Given the quality-of-life problems associated with urinary incontinence, there is a strong incentive for the development and implementation of new technologies. 
In addition, there is potential for further significant progress in regenerative medicine approaches using biomaterials, multipotent stromal cells (MSCs), or combinations thereof. All this is based on the need to replenish anatomical or physiological tissue deficiencies, reduce morbidity, and improve the long-term effectiveness of treatment. The ideal material for these purposes must meet the following criteria: provide mechanical and structural support; be biocompatible and maintain consistency with surrounding tissues; be "suitable for purpose" to meet specific application needs ranging from static support to transmission of biologically active cell signals [9]. Along with MSC therapy, the rapidly expanding field of tissue engineering has a promising future in urology clinics. The renal tissue functional biounit has been developed using cellular technologies. Urinary excretion has been successfully demonstrated by embryonic kidneys generated from MSCs. It has been shown that artificial embryonic cells derived from pluripotent mouse stromal cells give rise to living offspring. Cell therapy represents an attractive alternative for the treatment of stress urinary incontinence: Myoblast and fibroblast therapy has been used safely and effectively. In addition, stress urinary incontinence has been successfully treated clinically with MSCs derived from muscle tissue. Skeletal muscle-derived MSCs differentiated into smooth muscle cells when implanted into the corpora cavernosa in experimental models. Various types of MSCs have been investigated for use in the repair of the external sphincter and striated muscle tissue of the urethra. The use of MSCs appears to be a feasible and safe method with promising results for treatment of patients with incontinence [4,6,[10][11][12]. MSCs have the ability for self-renewal and differentiation into a number of cell types and promote the release of chemokines and the migration of cells necessary for tissue regeneration. Mesenchymal MSCs are progenitor cells with an increased ability to proliferate and differentiate and these cells are less tumorigenic than MSCs from adult tissues [13]. In connection with the above, the aim of the investigation was to make a narrative overview of the literature regarding experimental and clinical data with using cell technologies, especially MSCs, to improve the treatment of stress urinary incontinence and to facilitate the search of guidance for further research. In Vitro Data Testifying the Promise of Using MSCs for the Urinary Incontinence Correction Cellular therapy shall have the potential to offer future solutions for both the initial placement of a slings and the procedure failure treatment. Preclinical studies show that MSCs can enhance the repair of damaged tissue, either through direct integration and replacement of damaged tissue (differentiation), or through secreted factors that influence the recipient's response mechanisms (paracrine effect) [14]. Sprague-Dawley rat adipose tissue MSCs were adsorbed onto polyglycolic acid fibers, which formed a scaffold with a shape that mimics a sling complex. The results demonstrated that tissue scraping may contain MSC after 12 weeks in vitro culture under static stress. With increasing culture time, the engineered tissue showed significant improvement in biomechanical properties, including maximum load and Young's modulus, as well as mature structures of tissue collagen. 
In addition, differentiated MSCs cultured under static stress retained myoblast phenotype on polyglycolic acid scaffolds [15]. The type I collagen content during stress urinary incontinence in women is significantly reduced in the periurethral tissues at the level of the vagina and in its fibroblasts. Exosomes increased the expression of the col1a1 gene in these cells, the expression levels of TIMP-1 (endogenous or tissue metalloproteinase inhibitor) and TIMP-3 were upregulated in them with significant downregulation of MMP-1 (matrix metalloproteinase) and the level of MMP-2 expression. That is, the use of MSC exosomes increases the type I collagen content by increasing its synthesis and decreasing degradation by fibroblasts [16]. Thus, experimental studies have demonstrated that MSCs increase the content of collagen in the periurethral tissues and persist for a long time in conditions of static stress on several materials used for the manufacture of slings. This may be of relevance for future treatment modalities of stress urinary incontinence including the use of cellular technologies. Cellular Technologies in the Treatment of Patients with Urinary Incontinence The scientific literature of recent years contains a large amount of data devoted to the study of mesh structures and the possibilities of their modification using MSCs for implantation into patients for correcting tissue defects and pelvic organ prolapse. Based on the literary analysis, Maiborodin et al. [17] studied the influence of cellular technologies on the results of implantation of mesh materials used in urology. The authors conclude that the ideal implant has not yet been created. Additional studies with a longer follow-up period are needed to determine the most successful and safe methods and materials for the restoration of pathologically altered or lost tissues and the transition to clinical trials. It is also yet to come to an unambiguous understanding of the best sources of MSCs, ways for stimulation of proliferation, preservation, and delivery of these cells into the necessary tissues of the body, to thoroughly study the causes of inefficiency and the risks of developing various complications, especially in the long term. Injection therapy with formulations including MSCs has been developed as a minimally invasive alternative to the surgical treatment of stress incontinence. MSC treatment is believed to promote functional regeneration of the urethral sphincter in patients with suspected internal sphincter system insufficiency. Evidently autologous fat and muscle tissues appear to be the most suitable source of MSCs for urological applications [18]. Current published literature presents safety and efficacy data regarding adult autologous muscle-derived cell injection for urinary sphincter regeneration in 80 patients at 12-month follow-up. In these studies, no long-term adverse events were reported and patients undergoing cellular injection at higher doses revealed at least 50% reduction in stress leaks and pad weight at 12-month follow-up. All dose groups demonstrated statistically significant improvement in patient-reported incontinence-specific quality-of-life scores at 12-month follow-up. Most likely, injection of muscle-derived cells across a range of dosages are safe. Efficacy data suggest a dose-response with more patients responsive to the higher doses of these cells [19]. 
Pathology (insufficiency) of the ligamentous apparatus of the urethra is most often the main cause of stress urinary incontinence in women. The transplantation of autologous MSCs derived from adipose tissue into the periurethral region is a new treatment method for stress urinary incontinence. Ten women with symptoms of stress incontinence were injected with MSCs via a transurethral and transvaginal approach under urethroscopic observation. Urinary incontinence decreased significantly at 2, 6, and 24 weeks after injection therapy. Autologous MSC periurethral injection therefore represents a safe treatment for stress urinary incontinence that is effective in the short term; however, further studies with a longer follow-up period are needed to confirm long-term effectiveness [1]. According to the data of Zambon et al. [5], cure and relapse rates were 40% and 70%, respectively, 1 year after incontinence therapy with periurethral injection of autologous adipose or muscle MSCs. The results of a small, uncontrolled, single-center clinical study showed that injection of MSCs into the region of the urethral sphincters was effective for the correction of stress urinary incontinence. These results should be confirmed in larger cohort and controlled studies with longer follow-up that also evaluate applicability and safety.

MSC Incontinence Correction in Experimental Models

The regenerative potential of MSCs derived from the human dental pulp was evaluated in an animal model of stress urinary incontinence. To simulate stress urinary incontinence, the n. pudendus was cut in female rats, and MSCs previously differentiated toward the myogenic lineage were then injected into the striated muscle tissue of the urethra. MSCs bound to cells of myogenic lines in vitro, and 4 weeks after injection they made contact with muscle tissue cells in vivo. MSCs promoted vascularization and significant recovery of continence, and the sphincter volume was almost restored. Moreover, MSCs were found in the damaged nerve, suggesting a role in nerve repair [20]. A single injection of MSCs partially restores urethral function in an incontinence model. Single as well as repeated doses of 2 × 10⁶ MSCs given 1 h, 7 days, and 14 days after vaginal distension and crushing of the n. pudendus in rats improved the integrity of the urethra, restoring the composition of its connective tissue and neuromuscular structures. MSC treatment improved elastogenesis, prevented dysfunction of the external urethral sphincter, and restored the n. pudendus morphology [7]. Muscle MSCs expressing green fluorescent protein (GFP) were injected into the tail vein of rats 3 days after vaginal overstretching. In samples of the damaged urethra, MSCs were detected only 2 h after injection, but they did not integrate into the tissues. Nevertheless, MSCs enhanced the expression of genes associated with cell proliferation, neural growth factor and the extracellular matrix, as well as the expression of smooth and striated muscle proteins, in the injured urethra [21]. The effect of muscle- and adipose tissue-derived MSCs on the treatment of stress urinary incontinence was also investigated experimentally. These cells were isolated from rats and labeled by transfection with an enhanced GFP gene. Rats received an injection of cells into the bladder neck and, transurethrally, into the sphincter region.
Through 0, 15, 30, and 60 days after cell injection the urodynamic test showed that both types of MSCs improved urinary function in rats with internal sphincter deficiency, but the effect of MSC from muscle tissue was more pronounced. According to data of histological analysis in rats treated with MSCs, the content of myosin and α-actin of smooth muscle cells was significantly higher than in the control group after the Hanks solution injection [22]. It was not ruled out that the transplantation of mesenchymal MSCs leads to neovascularization and restoration of muscle cells in animal models of incontinence via the paracrine process [12] or through the exosome production [23,24]. It has been investigated whether local administration of exosomes derived from human adipose tissue MSCs contributes to the correction of experimental stress urinary incontinence in rats. Incontinence was modeled by cutting n. pudendus and vaginal distension. MSCs or exosomes on a Cell Counting Kit-8 matrix were inserted into the peripheral urethra. MSC exosomes can dose-dependently enhance the growth of cultured skeletal muscle and Schwann cell lines. Proteomic analysis has shown that these exosomes contain varied proteins of different signaling pathways: PI3K-Akt, Jak-STAT, and Wnt, which are associated with the regeneration and proliferation of striated muscles and nerves. After exosome injection, rats had a higher bladder capacity and urinary onset pressure, more striated muscle symplasts and peripheral nerve fibers in the urethra than untreated animals. Both urethral function and histological results in rats in the exosome-injected group were slightly better than in the MSC-injected group [23]. Urine-derived MSCs may promote myogenesis after injury of the urethral sphincter muscle (hyperextension of the rat vagina). Following the injection of exosomes from these MSCs the urodynamic parameters of the damaged sphincter have also been improved significantly, and the damaged muscle tissue was restored. The activation, proliferation, and differentiation of own MSC have been proven [24]. Laboratory studies and small sized clinical series suggest applicability of MSCs for the correction of the stress urinary incontinence. However, when summarizing these results, there are often doubts about the adequacy of the impact in modeling experimental pathology. In clinical conditions, the stress urinary incontinence in women is very often diagnosed after vaginal childbirth, which is accompanied by hormonal influences with corresponding changes in the birth canal and surrounding tissues; an initial (congenital) change in the periurethral tissues, including their ligamentous apparatus, is also possible, whereas in experimental modeling, healthy tissues are injured without taking into account their initial state and the general hormonal changes in the body. In addition, it is necessary to note the heterogeneity of recommendations for choosing the best source of MSCs. MSC Using Ineffectiveness and Its Possible Causes The data that the cell therapy effectiveness in patients with stress urinary incontinence is lower than expected have emerged recently [10,18,[25][26][27]. The main limitations of the MSC using are associated with loss of function after expansion ex vivo, poor engraftment in vivo or survival after transplantation, deposition in the injection site, as well as a lack of understanding of the exact mechanisms of action underlying therapeutic results and MSC behavior in vivo [13]. 
The results of joint transplantation of muscle-derived cells and mesenchymal MSCs into the urethra have been evaluated. The experiment was carried out on old goats that gave birth many times. The average number of cells with a luminescent label injected per animal was 29.6 ± 4.3 × 10 6 . The urethra samples were obtained at 28 or 84 days after cell transplantation. The transplanted cells were identified in all urethras on day 28, regardless of the type. The most pronounced fluorescence was noted in the co-transplant group. A distinct decrease in the luminescence intensity was observed between 28 and 84 days after all types of transplantation. Both MSCs and muscle cells have promoted striated muscle formation when co-transplanted directly into the external urethral sphincter. These events were rare in the MSC-only group. If cells were injected into the submucosa, they remained undifferentiated, usually in the form of clearly distinguishable repositories. The results showed that co-transplantation of MSCs and muscle cells is more likely to improve urethral closure than transplantation of each cell type separately [10]. The distribution of autologous cells was established during transurethral (under endoscopic control) and periurethral administration to female goats. There have been episodes of leakage of cell suspension in 19% of transurethral injections after needle withdrawal. A repository in the urethral wall was found in all animals 28 days after transplantation. The average percentage of these depots in relation to all injections performed was 68.7% and 67.0% for the groups after periurethral and transurethral injection, respectively. The repository frequency identified in the external urethral sphincter was 18.8% and 17.1%, respectively. Leakage of cell suspension, insufficient accuracy of injection of cells into the external urethral sphincter, and long-term deposition of these cells may contribute to insufficient effectiveness of cell therapy in patients with incontinence [26]. MSC therapy for stress incontinence, common in women with type 2 diabetes and obesity, has also shown low efficacy. It was found that epigenetic changes caused by prolonged exposure to a dyslipidemic environment led to abnormal global transcriptional traits of genes and microRNAs and disrupt the reparative capacity of MSCs in muscle tissue [27]. Literature about efficacy of using MSCs for the correction of stress urinary incontinence reports different results. We speculate that technical variations are largely responsible for the failures and low efficiency of cell therapy, first of all, the speed and method of introducing the cell suspension. Conclusions The cell therapy results in patients and experimental animals with incontinence have shown promising results, but the procedures require further optimization, and more research is needed to focus on the clinical phase. The MSC use appears to be a feasible, safe, and effective method of treatment for patients with urinary incontinence. However, the best mode for application of cell technology is still under study. Most clinical investigations have been performed on only a few patients and during rather short follow-up periods, which, together with an incomplete knowledge of the mechanisms of MSC action, does not make it possible for their widespread implementation. The technical details regarding the MSC application remain to be identified in more rigorous preclinical and clinical trials. 
Conflicts of Interest: The authors declare that they have no conflict of interest. The sponsors had no role in the design, execution, interpretation, or writing of the study.
A comparison of platelet count in severe preeclampsia, mild preeclampsia and normal pregnancy Background: Preeclampsia, the most common of hypertensive disorders of pregnancy is an idiopathic multisystem disorder affecting 2 – 10% of all pregnancies and together they form one member of the deadly triad, along with hemorrhage and infection that contribute greatly to the maternal morbidity and mortality rates. The identification of this clinical entity and effective management play a significant role in the outcome of pregnancy. Platelet count is emphasized to play a significant role in hemostasis mechanism of preeclampsia and the degree of thrombocytopenia increases with severity of preeclampsia. This study was conducted to find correlation of platelet count in severe preeclampsia, mild preeclampsia and normal subjects. Methods: Total 140 subjects, 70 control and 70 cases were enrolled in the study. Samples for platelet count were collected and estimation was carried out by the auto-analyzers. The statistical evaluation is done using SPSS version 22 along with Anova and student t-test. Results: The mean platelet count was significantly lower (p <0.05) in mild and severe preeclampsia than that in the normal pregnancy. Decreased platelet count in severe preeclampsia was significant compared to that in mild preeclampsia. Conclusions: The frequency of thrombocytopenia was found to be directly related with the severity of disease, so platelet count can be used as a simple and cost effective tool to monitor the progression of preeclampsia, thereby preventing complications to develop during the gestational period. INTRODUCTION Pregnancy is a physiological process but can induce hypertension in normotensive women or aggravate already existing hypertension. Preeclampsia, the most common of hypertensive disorders of pregnancy is an idiopathic multisystem disorder affecting 2-10% of all pregnancies and together they form one member of the deadly triad, along with haemorrhage and infection that contribute greatly to the maternal morbidity and mortality rates. 1 The identification of this clinical entity and effective management play a significant role in the outcome of pregnancy. Normal pregnancy is associated with impressive changes in the haemostatic mechanism to maintain placental function during pregnancy and to prevent excessive bleeding in delivery. The combined changes of increase coagulation factors and suppression of fibrinolytic activity are defined as hypercoagulable state or prothrombotic state. 2,3 It usually occurs in the last trimester of pregnancy and more commonly in primiparas. It is characterized by maternal endothelial dysfunction presenting clinically with hypertension and proteinuria, and results in hypercoagulable state and may lead to acute renal failure (ARF), pulmonary oedema and approximately 10% of woman with severe preeclampsia may developed ,hemolysis, elevated liver enzyme and low platelet count refered to as HELLP syndrome. 4 The endothelial dysfunction develops because of the formation of uteroplacental vasculature insufficient to supply adequate blood to the developing fetus resulting in fetoplacental hypoxia leading to imbalances in the releases and metabolism of prostaglandins, endothelin and nitric oxide by placental and extra placental tissue. These as well as enhanced lipid peroxidation and other undefined factors contribute to the hypertension platelet activation and systemic endothelial dysfunction. 
5 Many haemostatic abnormalities have been reported in association with the hypertensive disorders of pregnancy. Thrombocytopenia is the most common of these abnormalities. 6 The degree of thrombocytopenia increases with the severity of the disease. 7 Thrombocytopenia in preeclampsia is attributed to various causes, including increased platelet consumption due to disseminated intravascular coagulopathy and/or immune mechanisms. 8 Most studies have observed a significant decrease in platelet count during normal pregnancy. 9 This decrease is especially marked during the second and third trimesters. Thrombocytopenia can result from a decrease in platelet production or from accelerated platelet destruction. The various mechanisms of thrombocytopenia in pregnancy proposed by different workers are as follows:
• Hemodilution in late pregnancy. 10
• Decreased platelet survival time during normal pregnancy. 11
• Plasma beta-thromboglobulin and platelet factor 4 levels, both reflecting platelet activation, were significantly increased during normal pregnancy, indicating an increase in platelet activation and supporting the hypothesis that platelet turnover increases during the progression of normal pregnancy. 12
Hence, this study aimed to analyse the utility of the platelet count in pre-eclampsia, so that early detection, careful monitoring and appropriate management can prevent complications and reduce the morbidity and mortality of both mother and child.

METHODS

This comparative prospective study was conducted at J.K. Hospital, associated with L.N. Medical College and Research Centre, Bhopal, over a period of one year. Cases of preeclampsia were categorized on the basis of blood pressure according to the National High Blood Pressure Education Program (NHBPEP) (2000) criteria: 13
• Mild preeclampsia: systolic blood pressure between 140-160 mmHg, diastolic blood pressure between 90-110 mmHg and proteinuria up to 1+.
• Severe preeclampsia: systolic blood pressure >160 mmHg, diastolic blood pressure >110 mmHg, plus one or more of the following criteria: proteinuria >1+, headache, visual disturbance, upper abdominal pain, oliguria (<400 ml/24 hours), serum creatinine >1.2 mg/dl, marked elevation of serum transaminases (AST or ALT), fetal growth restriction or pulmonary edema.
The study included a total of 140 subjects: 70 normotensive pregnant women without any complication and 70 pregnant women with signs and symptoms of preeclampsia, all in the third trimester of gestation. All subjects underwent blood investigations, i.e. a complete blood cell count including platelet count, performed on EDTA-anticoagulated blood and analyzed on a Mindray automated hematology analyzer. The test was conducted within 1 hour of sample collection, with samples maintained at room temperature, to minimize variation due to sample aging. All consenting age- and gestation-matched normal pregnant women in the third trimester constituted the controls. Exclusion criteria were a history of essential hypertension, known liver disease, renal disorder, hydatidiform mole, known bleeding disorder, anticoagulant therapy, established DIC, idiopathic thrombocytopenic purpura, history of illicit drug use, any associated inflammatory disease or sepsis, and any associated malignancy. Thrombocytopenia was defined as a platelet count <150 × 10⁹/L.
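To make the categorization concrete, the sketch below encodes the blood-pressure and proteinuria cut-offs listed above together with the platelet-count bins used later in the Results (normal >150, low 100-150, very low <100, all × 10⁹/L). It is a simplified illustration rather than the study's data-handling code: the variable and function names are invented, the additional severe-range features are collapsed into a single boolean flag, and reading the severe blood-pressure criterion as "systolic >160 or diastolic >110 mmHg" is an assumption about the intended and/or.

```python
# Hypothetical helpers encoding the NHBPEP (2000) cut-offs quoted in the Methods
# and the platelet-count bins used in the Results. Not the study's analysis code;
# names and the and/or reading of the severe BP criterion are assumptions.

def classify_preeclampsia(systolic_mmHg, diastolic_mmHg, proteinuria_plus,
                          other_severe_features=False):
    """Return 'severe', 'mild' or 'unclassified' for a preeclamptic subject.

    other_severe_features stands in for any of the listed severe criteria
    (headache, visual disturbance, oliguria, raised creatinine, etc.).
    """
    severe_bp = systolic_mmHg > 160 or diastolic_mmHg > 110   # assumed 'or'
    if severe_bp and (proteinuria_plus > 1 or other_severe_features):
        return "severe"
    mild_bp = 140 <= systolic_mmHg <= 160 and 90 <= diastolic_mmHg <= 110
    if mild_bp and proteinuria_plus <= 1:
        return "mild"
    return "unclassified"

def platelet_category(count_e9_per_L):
    """Bin a platelet count (in 10^9/L): normal >150, low 100-150, very low <100.
    Thrombocytopenia corresponds to a count below 150."""
    if count_e9_per_L < 100:
        return "very low"
    if count_e9_per_L <= 150:
        return "low"
    return "normal"

if __name__ == "__main__":
    # A hypothetical subject: BP 165/112 mmHg, proteinuria 2+, platelets 96.
    print(classify_preeclampsia(165, 112, proteinuria_plus=2))  # -> severe
    print(platelet_category(96))                                # -> very low
```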
Statistical analysis The statistical package for the social sciences (SPSS) version 22 was used for analysis of the data. Analysis of variance (ANOVA) was used to compare the variables. Data are expressed as mean±standard deviation. The p-value was calculated for each parameter, and p<0.05 was considered statistically significant. Student's t-test was used to compare the platelet count in severe preeclampsia, mild preeclampsia and normal pregnancy. Bar diagrams were used for graphical representation of the data. RESULTS In the present study, we studied the platelet count in a total of 70 cases of preeclampsia in the third trimester of pregnancy, comprising 31 (44.30%) cases of mild preeclampsia and 39 (55.70%) cases of severe preeclampsia (Table 1). Similarly, 70 normotensive age- and gestation-matched pregnant women in the third trimester were studied as controls. The mean age of the cases was 25.12±3.65 years. A maximum of 68 (82.82%) cases were between 20-29 years of age. There was only one case each of mild preeclampsia and severe preeclampsia above 35 years of age, as shown in Table 1. The mean gestational age in cases was 33.55±3.93 weeks. In the subgroups of cases, the mean gestational age was 33.94±3.54 and 33.30±4.43 weeks in mild preeclampsia and severe preeclampsia respectively, as shown in Table 2. Statistically, these differences were found to be insignificant (p=0.197; p>0.05). The distribution of parity among cases and controls is shown in Table 3. The difference in parity between cases and controls was statistically insignificant (p=0.572, p>0.05). When compared among the subgroups of cases, 25 of the 39 severe preeclampsia cases were primiparous and 14 were multiparous. Thus, severe preeclampsia was found to be more common in primiparous women than mild preeclampsia, and this difference was statistically significant (p<0.05) (Table 3). In the present study, the mean platelet count in cases was 168±74.22 × 10⁹/L with a range of 24-366 × 10⁹/L, while in controls the mean platelet count was 229.61±73.27 × 10⁹/L with a range of 76-450 × 10⁹/L, as shown in Table 4. Thus, the mean platelet count was decreased in cases compared with controls, and this difference was statistically significant (p=0.000, p<0.05). The mean platelet count in the subgroups was 197.29±73.65 × 10⁹/L in mild preeclampsia and 145.25±66.96 × 10⁹/L in severe preeclampsia (Table 4, Figure 4). Thus, the platelet count was found to decrease with the progression of disease. On statistical analysis, when the mean platelet counts in the different subgroups of cases were compared with that in the controls, the decrease in platelet count in mild preeclampsia (p=0.043, p<0.05) and in severe preeclampsia (p<0.05) was significant. Comparison of the platelet count between the subgroups of cases showed that the decrease in severe preeclampsia was significant (p=0.002, p<0.05) compared with mild preeclampsia. The cases and controls were further distributed according to platelet count into three categories: normal (>150 × 10⁹/L), low (100-150 × 10⁹/L) and very low (<100 × 10⁹/L). Forty (57.14%) cases had normal platelet counts, while 17 (24.28%) cases had low platelet counts and 13 (18.57%) cases had very low platelet counts.
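As an illustration of the analysis described above, the short sketch below reproduces the same comparisons (one-way ANOVA across the three groups, followed by pairwise Student's t-tests, with p<0.05 as the significance threshold) using SciPy on made-up platelet counts; the group arrays are placeholders and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical platelet counts (x 10^9/L) for the three groups -- illustrative only.
normal = rng.normal(230, 73, 70)
mild = rng.normal(197, 74, 31)
severe = rng.normal(145, 67, 39)

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(normal, mild, severe)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pairwise Student's t-tests, as used for the subgroup comparisons.
for name, (a, b) in {
    "normal vs mild": (normal, mild),
    "normal vs severe": (normal, severe),
    "mild vs severe": (mild, severe),
}.items():
    t_stat, p_val = stats.ttest_ind(a, b)
    print(f"{name}: t={t_stat:.2f}, p={p_val:.4f}  (significant if p < 0.05)")
```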
Similarly, in the controls, 61 (87.14%) women were found to have normal platelet counts, 7 (10%) women had low platelet counts and only 2 (2.8%) women had very low platelet counts, as shown in Table 5. Of the 70 cases of preeclampsia, thrombocytopenia was seen in 30 (42.85%). There were only 9 (12.85%) cases with thrombocytopenia in the mild preeclampsia group, whereas 21 (30%) cases in the severe preeclampsia group showed thrombocytopenia. In the controls, only 9 (12.85%) women showed thrombocytopenia. DISCUSSION In the present study, the mean age of the cases was 25.12±3.65 years, and a maximum of 68 (82.82%) cases were between 20-29 years of age (Table 2). It appears that, as far as age is concerned, there is little or no difference between normal healthy pregnant women and patients with different degrees of severity of pregnancy-induced hypertension. It was clear, however, that most patients in the normal pregnant control group and most patients with pregnancy-induced hypertension were between 21 and 29 years of age. Jaleel et al and Kumar et al also found the maximum number of cases between 21-30 years of age, similar to the present findings. 14,15 The younger age of occurrence of preeclampsia reflects the earlier age of marriage and pregnancy in our country compared with western countries. The mean gestational age observed in cases in the present study was 33.55±3.93 weeks. In the subgroups of cases, the mean gestational age in mild preeclampsia and severe preeclampsia was 33.94±3.54 and 33.22±4.23 weeks respectively. Statistically, these differences were found to be insignificant (Table 3). Priyadarshini et al and Jahromi et al observed findings similar to those of the present study, and their findings were also statistically non-significant. 16,17 The present study observed that 41 (58.60%) cases and 40 (57.20%) controls were primiparous, and there was no significant difference in the parity of cases and controls (Table 3). However, within the subgroups of cases, severe preeclampsia (64.10% of cases) was more frequent in primiparous women than mild preeclampsia (51.60% of cases). Chaware et al observed that in mild preeclampsia 52% of cases were primigravidae, 28% were second gravidae and 20% were third or higher gravidae. 18 Similarly, Sameer et al observed that both mild preeclampsia and severe preeclampsia were more frequent in primiparous women, accounting for 66.03% and 65.51% of cases respectively. 19 In the present study, 67.5% of severe preeclampsia patients were primigravidae, followed by 22.5% second gravidae and 10% third or higher gravidae. The present findings agree with these authors in respect of severe preeclampsia. In the present study, the mean platelet count in cases was found to be 168±74.29 × 10⁹/L, while in controls it was 229.61±73.27 × 10⁹/L (Table 4). This decrease in platelet count in cases compared with controls was statistically significant. Chauhan et al observed a platelet count of 157.18±56.66 × 10⁹/L in cases and 222.93±97.94 × 10⁹/L in controls. 20 Meshram DP observed a platelet count of 242±62 × 10⁹/L in the control group and 160±51 × 10⁹/L in preeclampsia. 21 These authors also found significantly decreased platelet counts in preeclampsia compared with controls, similar to the present findings. In the subgroups of cases in the present study, the platelet count was 197.29±73.65 × 10⁹/L in mild preeclampsia and 145.23±66.91 × 10⁹/L in severe preeclampsia (Table 4).
The present study also found thrombocytopenia in 30 (42.85%) cases, with only 9 (12.85%) cases of thrombocytopenia among mild preeclampsia patients and 21 (30%) cases in severe preeclampsia (Table 5). Vrunda et al found thrombocytopenia in 10 (25%) cases of mild PIH and 20 (62.5%) cases of severe PIH. 22 Mohapatra et al found thrombocytopenia in 2 (6.6%) cases of mild PIH and 18 (60%) cases of preeclampsia. 23 Thus, the platelet count was found to decrease with the severity of disease. This gradual reduction in platelet counts from mild to severe preeclampsia is comparable to that reported by other authors, as shown in Table 6. On comparing the subgroups of cases, the number of thrombocytopenia cases was higher in patients with severe preeclampsia than in mild preeclampsia. Thus, the frequency of thrombocytopenia was also found to be directly related to the severity of disease. The mechanism of thrombocytopenia in preeclampsia has been variously explained as under: • It may be due to increased consumption of platelets, with increased megakaryocytic activity to compensate. Platelets adhere to areas of damaged vascular endothelium, resulting in secondary destruction of platelets (O'Brien et al). 27 • Prostacyclin is an important eicosanoid that exerts strong inhibition of platelet aggregation. The continuous availability of this eicosanoid from blood vessels keeps circulating platelets in a dispersed and disaggregated form (O'Brien et al). 27 Deprivation of prostacyclin makes the circulating platelets more vulnerable to aggregation, and removal of aggregated platelets might be responsible for the thrombocytopenia often observed in pregnancy-induced hypertension (FitzGerald et al). 28 • Platelets from severely preeclamptic patients showed less response than normal to a variety of aggregating agents, suggesting that the platelets may have undergone previous aggregation in the microcirculation (Whigham et al 1978). 29 • Recent studies have documented that increased plasma levels of sFlt1 (soluble vascular endothelial growth factor (VEGF) receptor type 1), as well as of endoglin, an endothelial cell-derived member of the transforming growth factor-β (TGF-β) receptor family (Venkatesha et al), are present in patients destined to develop preeclampsia as early as the late first trimester. 30 Increased levels of soluble fms-like tyrosine kinase-1 (sFlt1) and endoglin mRNA are present in preeclamptic placentae, suggesting that the placenta is the source of these proteins (Kita et al). 31 sFlt1 binds and neutralizes VEGF and placental growth factor (PLGF), another important VEGF family member whose levels normally increase during pregnancy, whereas endoglin blocks the binding of TGF-β to endothelial cells (Young et al). 32 These pregnancies are also associated with qualitative platelet alterations suggesting increased platelet turnover. CONCLUSION The frequency of thrombocytopenia was found to be directly related to the severity of disease, so the platelet count can be used as a simple and cost-effective tool to monitor the progression of preeclampsia, thereby helping to prevent complications during the gestational period.
Teacher Guided Training: An Efficient Framework for Knowledge Transfer The remarkable performance gains realized by large pretrained models, e.g., GPT-3, hinge on the massive amounts of data they are exposed to during training. Analogously, distilling such large models to compact models for efficient deployment also necessitates a large amount of (labeled or unlabeled) training data. In this paper, we propose the teacher-guided training (TGT) framework for training a high-quality compact model that leverages the knowledge acquired by pretrained generative models, while obviating the need to go through a large volume of data. TGT exploits the fact that the teacher has acquired a good representation of the underlying data domain, which typically corresponds to a much lower dimensional manifold than the input space. Furthermore, we can use the teacher to explore input space more efficiently through sampling or gradient-based methods; thus, making TGT especially attractive for limited data or long-tail settings. We formally capture this benefit of proposed data-domain exploration in our generalization bounds. We find that TGT can improve accuracy on several image classification benchmarks as well as a range of text classification and retrieval tasks. Introduction Recent general purpose machine learning models (e.g., BERT [Devlin et al., 2019], DALL-E [Ramesh et al., 2021], SimCLR [Chen et al., 2020a], Perceiver [Jaegle et al., 2021], GPT-3 [Brown et al., 2020]), trained on broad data at scale, have demonstrated adaptability to a diverse range of downstream tasks. Despite being trained in unsupervised (or socalled self-supervised) fashion, these models have been shown to capture highly specialized information in their internal representations such as relations between entities Heinzerling and Inui [2021] or object hierarchies from images [Weng et al., 2021]. Despite their impressive performance, the prohibitively high inference cost of such large models prevents their widespread deployment. A standard approach to reducing the inference cost while preserving performance is to train a compact (student) model via knowledge distillation [Bucilua et al., 2006, Hinton et al., 2015 from a large (teacher) model. However, existing distillation methods require a large amount of training data (labeled or unlabeled) for knowledge transfer. For each data point, the teacher must be evaluated, making the process computationally expensive Xie et al. [2020d], He et al. [2021], Sanh et al. [2019a]. This is compounded by the need to repeat the distillation process separately for every down-stream task, each with its own training set. Enabling efficient distillation is thus an important challenge. Additionally, minimizing the number of distillation samples would especially benefit low-data down-stream tasks, (e.g. those with long-tails). Another inefficiency with standard distillation approaches is that within each evaluation of the teacher, only the final layer output (aka logits) is utilized. This ignores potentially useful internal representations which can also be levered for knowledge transfer. Various extensions have been proposed in the literature along these lines (see, e.g., [Sun et al., 2020, Aguilar et al., 2020, Li et al., 2019, Sun et al., 2019 and references therein). However, despite their success, most use the teacher model in a black-box manner, and do not fully utilize the domain understanding it contains [Cho andHariharan, 2019, Stanton et al., 2021]. 
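For reference, the standard logit-only distillation described above, which TGT extends, can be written as a temperature-scaled KL term between teacher and student outputs combined with the usual supervised loss. The PyTorch sketch below is a generic illustration of that baseline rather than code from the paper; the temperature and mixing weight alpha are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard knowledge distillation: cross-entropy on hard labels plus a
    temperature-scaled KL divergence between teacher and student logits."""
    ce = F.cross_entropy(student_logits, labels)
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)  # rescale so gradients stay comparable across temperatures
    return alpha * ce + (1.0 - alpha) * kl

# Usage: loss = kd_loss(student(x), teacher(x).detach(), y)
```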
In these approaches, the teacher is used passively as the input sample distribution is fixed and does not adapt to the student model performance. Consequently, these forms of distillation do not lead to faster training of a high-performance student model. Figure 1: An overview of the proposed teacher guided training (TGT) framework. Given a learning task, the framework leverages a large teacher with a pretrained generator and labeler that exhibits high performance on the task. In particular, we assume that the generator consists of an encoder and a decoder. TGT performs three key operations during student model training: (1) Given an original training instance, by using the teacher generator, identify a novel task-relevant instance. We search for informative instances in the lower dimensional latent space, where we can propagate the gradient to. (2) Obtain (soft) labels for the original and newly generated training instance from the teacher labeler; and (3) Minimize the student training objective that depends on the original dataset and the newly generated instances and their corresponding labels produced by the teacher labeler. Note that TGT reduces to standard knowledge distillation in the absence of the generator component. In this work, we go beyond the passive application of large teacher models for training compact student models, and leverage the domain understanding captured by the teacher to generate new informative training instances that can help the compact model achieve higher accuracy with fewer samples and thus enable reduced training time. In particular, we propose the teacher guided training (TGT) framework for a more efficient transfer of knowledge from large models to a compact model. TGT relies on the fact that teacher's internal representation of data often lies in a much smaller dimensional manifold than the input dimension. Furthermore, we can use teacher to help guide training by identifying the directions where the student's current decision boundary starts to diverge from that of the teacher, e.g., via backpropagating through the teacher to identify regions of disagreement. We also give a formal justification for the TGT algorithm, showing that leveraging the internal data representation of large models enables better generalization bounds for the student model. Given n instances in a Ddimensional space the generalization gap for learning a Lipschitz decision boundary of an underlying classification task decays only as O n − 1 D [Györfi et al., 2002]. In contrast, assuming that the large model can learn a good data representation in a d-dimensional latent space, the TGT framework realizes a generalization gap of O n − 1 d + W(D, D t ) , where W(D, D t ) denotes the Wasserstein distance between the data distribution D and the distribution D t learned by the underlying generative teacher model. Typically d D, thus TGT ensures much faster convergence whenever we employ a high-quality generative teacher model. This makes TGT especially attractive for low-data or long-tail regimes. In order to realize TGT, we take advantage of the fact that most of the unsupervised pretrained models like Transformers, VAE, and GANs have two components: (1) an encoder that maps data to a latent representation, and (2) a decoder that transforms the latent representation back to the original data space. 
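To make the three operations summarized in the framework overview concrete, here is a minimal, hedged PyTorch-style sketch of one TGT training step using random perturbation in the teacher generator's latent space; `encoder`, `decoder`, `teacher`, and `student` are placeholder modules, and `sigma` and `lam` are assumed hyperparameters rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def soft_ce(student_logits, teacher_logits):
    # Distillation loss: cross-entropy against the teacher's soft labels.
    return -(F.softmax(teacher_logits, dim=-1)
             * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()

def tgt_step(x, y, student, teacher, encoder, decoder, optimizer,
             sigma=0.1, lam=1.0):
    """One teacher-guided training step (random latent perturbation variant)."""
    with torch.no_grad():
        # (1) Generate a novel task-relevant instance near x via the teacher generator.
        z = encoder(x)
        x_new = decoder(z + sigma * torch.randn_like(z))
        # (2) Obtain soft labels from the teacher labeler for both instances.
        t_orig, t_new = teacher(x), teacher(x_new)

    # (3) Minimize the combined objective: supervised loss + distillation on the
    # original instance + distillation on the teacher-generated instance.
    s_logits = student(x)
    loss = (F.cross_entropy(s_logits, y)
            + soft_ce(s_logits, t_orig)
            + lam * soft_ce(student(x_new), t_new))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```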
We utilize this latent space for the data representations learned by the teacher model to efficiently search for the regions of mismatch between the teacher and student's decision boundaries. This search can take the form of either (i) a zero-order approach involving random perturbation or (ii) a first-order method exploring along the direction of the gradient of a suitably defined distance measure between the teacher and student models. Some of these pretrained models, particularly in NLP such as T5 [Raffel et al., 2020], can also provide labels for a downstream task and act as a sole teacher. However, our approach is sufficiently general to utilize separate pretrained models for generative and discriminative (labeler) functions (cf. Fig. 1), e.g., we employ an image BiGAN as generator and an EfficientNet as labeler for an image classification task. Our main contributions are summarized as follows: 1. We introduce the TGT framework, a conceptually simple and scalable approach to distilling knowledge from a large teacher into a smaller student. TGT adaptively changes the distribution of distillation examples, yielding higher performing student models with fewer training examples. 2. We provide theoretical justifications for utilizing the latent space of the teacher generator in the TGT framework, which yields tighter generalization bounds. 3. We empirically demonstrate the superiority of TGT to existing state-of-the-art distillation approaches, also showing results on both vision and NLP tasks, unlike most previous work which is specialized to one domain. Related Work Our proposed TGT framework can be considered a form of data augmentation where data is dynamically added at points of current discrepancy between the teacher and student. Next, we provide a brief overview of how data augmentation has been used in the context of distillation. Using pseudo labels. The earliest line of work involves using consistency regularization [Sajjadi et al., 2016, Laine and Aila, 2017, Tarvainen and Valpola, 2017] to obtain pseudo labels for unlabelled data where a model is expected to make consistent predictions on an unlabeled instance and its augmented versions, cf. [Miyato et al., 2019, Xie et al., 2020a, Verma et al., 2019, Berthelot et al., 2019, Sohn et al., 2020, Zhu et al., 2021. Another approach is self-training [Xie et al., 2020d, Du et al., 2021 where a smaller teacher model is learned on the labeled data first which is then used to generate pseudo labels for a large but relevant unlabeled set. A large student model is then trained on both labeled and pseudo labeled sets. Label propagation [Iscen et al., 2019] is another direction where unlabeled instances receive pseudo labels based on neighboring labeled instances in a similarity graph constructed based on the representations from a model trained on only labeled data. Furthermore, there have been work on learning to teach [Fan et al., 2018, Raghu et al., 2021, Pham et al., 2021, where the teacher is dynamically updated so as to provided more valuable pseudo labels based on the student loss function. Such an interactive approach presents a challenging optimization problem and potentially opens up the door for borrowing techniques from reinforcement learning. In contrast, our work focuses on the setting where high-quality pretrained teacher model is fixed throughout the training. 
Here, we focus on a setting where updating the large teacher model is prohibitively costly or undesirable as such a model would potentially be used to distill many student models. Moreover, many large models like GPT-3 may only be available through API access, thus making it infeasible to update the teacher. Using pretrained models. One can use large scale pretrained class conditional generative models like BigGAN [Brock et al., 2019] or VQ-VAE2 [Razavi et al., 2019] to generate more data for augmentation. Despite evidence [Webster et al., 2019] that GANs are not memorizing training data, using them to simply augment the training dataset has limited utility when training ResNets [Ravuri and Vinyals, 2019b,a]. One possible reason might be the lack of diversity [Arora et al., 2017] in data generated by GANs, especially among high density regions [Arora et al., 2018]. In contrast, we use generative models to adaptively explore the local region of disagreement between teacher and student as opposed to blindly sampling from the generative model. This way we circumvent the excessive reliance on samples from high density regions which often have low diversity. Another line of work by Chen et al. [2020b] combines unsupervised/self-supervised pretraining (on unlabeled data) with SimCLR-based approach [Chen et al., 2020a], task-specific finetuning (on labeled data), and distillation (natural loss on labeled and distillation loss on unlabeled data). The setup considered in this work is very close to our work with two key differences: (1) We assume access to a very high-quality teacher model, which is potentially trained on a much larger labeled set, to provide pseudo labels; (2) We go beyond utilizing the given unlabeled dataset from the domain of interest, exploring the dynamic generation of domain-specific unlabeled data by leveraging the representations learned by pretrained models. Additionally, our work aims to develop a theoretical framework to identify the utility of unlabeled data instances for student training, especially the unlabeled instances generated based on teacher learned representations. Using both pseudo labels and pretrained models. The idea of combining pretrained models to generate training instances along with pseudo-labelers has been previously considered in the name of the GAL framework [He et al., 2021]. However, the GAL framework generates these new instances in an offline manner at the beginning of student training. In contrast, our proposed approach (cf. Fig. 1) generates the new informative training instances in an online fashion, aiming at improving the student performance while reducing its training time. Recently, MATE-KD [Rashid et al., 2021] also considers a setup where a generator model is used to obtain new training instances based on the current student model performance (by looking at the divergence between the student and teacher predictions). However, there are two key differences between our proposed TGT approach and the MATE-KD framework: First, their method updates the teacher so as to find adversarial examples for the students, but this can cause the generator to drift away from true data distribution. Second, they perturb in input space itself and do not leverage the latent space of the teacher, which is the crux of our method. Further details are provided in App. A. Another work worth mentioning is KDGAN [Wang et al., 2018] which leverages a GAN during distillation. 
However, it samples examples from a GAN without taking the student performance into account. We also note [Heo et al., 2019, Dong et al., 2020 that search for adversarial examples during distillation. However, their search also does not depend on student's performance, resulting in wasteful exploration of those regions of the input spaces where the student is already good. Further, unlike TGT, [Heo et al., 2019, Dong et al., 2020 perform example search in the input space which is often inefficient due to the large ambient dimension of the input space. Finally, data-free KD approaches perform knowledge distillation using only synthetically generated data [Nayak et al., 2019, Yoo et al., 2019, Chen et al., 2019. Unlike TGT, in this approach, the synthetic data distribution is updated at each epoch, but this causes the student model to lose the information over epochs and experience accuracy degradation [Binici et al., 2022]. In this framework, Micaelli and Storkey [2019] targeted generating samples that would cause maximum information gain to the student when learned, however, it also suffers from similar drawbacks as MATE-KD noted above. Teacher Guided Training We begin by formally introducing our setup in Section 3.1. We then describe our proposed TGT framework in Sec. 3.2 and present a theoretical analysis in Sec. 3.3 and Sec. 3.4. Problem setup In this paper, we focus on a multiclass classification task where given an instance x ∈ X the objective is to predict its true label y ∈ Y := [K] out of K potential classes. Let D := D X,Y denote the underlying (joint) data distribution over the instance and label spaces for the task. Moreover, we use D X and D Y |X=x to denote the marginal distribution over the instance space X and the conditional label distribution for a given instance x, respectively. A classification model f : . . , f (x) K ) takes in an input instance x and yields scores for each of the K classes. Finally, we are given a (tractable) loss function : R K × [K] → R which closely approximates model's misclassification error on an example (x, y), e.g., softmax-based cross-entropy loss. We assume access to n i.i.d. labeled samples S labeled , generated from D. Given S labeled n and a collection of allowable models F, one typically learns a model with small misclassification error by solving the following empirical risk minimization (ERM) problem: (1) Besides the standard classification setting introduced above, in our TGT setup, we further assume access to a high quality teacher model, which has: • Teacher generator. A generative component that captures D X well, e.g., a transformer, VAE, or ALI-GAN. This usually consists of an encoder Enc : X → R d and a decoder Dec : R d → X. • Teacher labeler. A classification network, denoted by h : X → R K , with good performance on the underlying classification task. In general, our framework allows for h to be either a head on top of the teacher generator or an independent large teacher classification model. Given S labeled n and such a teacher model, our objective is to learn a high-quality compact student (classification) model in F, as assessed by its misclassification error on D. Proposed approach To train a student model f ∈ F, we propose to minimize: where d : R K × R K → R is a loss function that captures the mismatch between two models f and h, andS m = {x j } j∈ [m] is introduced in subsequent passage. The first term, (f (x i ), y i ), corresponds to standard ERM problem (cf. Eq. (1)). 
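The displayed objective can plausibly be written as follows; this is a reconstruction based on the description of its three terms (a supervised term on the labeled samples, a distillation term against the teacher labeler on the same samples, and a distillation term on the teacher-generated samples), and the uniform 1/n and 1/m weighting is an assumption.

$$
\min_{f \in \mathcal{F}} \;\; \frac{1}{n}\sum_{i=1}^{n}\Big[\,\ell\big(f(x_i), y_i\big) \;+\; \ell_d\big(f(x_i), h(x_i)\big)\Big] \;+\; \frac{1}{m}\sum_{j=1}^{m} \ell_d\big(f(\tilde{x}_j), h(\tilde{x}_j)\big), \qquad \tilde{S}_m = \{\tilde{x}_j\}_{j \in [m]}.
$$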
The subsequent terms, , do not make use of labels. In particular, the second term, d (f (x i ), h(x i )), corresponds to the knowledge distillation [Bucilua et al., 2006, Hinton et al., 2015 where the teacher model provides supervision for the student model. We introduce a novel third term, Here, we want to generate additional informative samples which will help student learn faster, e.g., points where it disagrees with teacher but still lie on the data manifold. In other words, we want to findx as follows: We propose two specific approaches to generate novel samples: 1. Isotropically perturb in latent space: This can be regarded as a zero-order search in the latent space, which satisfies the constraint of remaining within the data manifold. 2. Gradient-based exploration: Run a few iterations of gradient ascent on Eq. (3) in order to find the example that diverges most with teacher. To enforce the constraint, we run the gradient ascent in the latent space of the teacher generator as opposed to performing gradient ascent in the instance space X, which might move the perturbed point out of the data manifold. For a high-quality teacher generator, the latent space should capture the data manifold well. To implement this we need to backprop all the way through the student and teacher-labeler to the teacher-decoder, as shown in Fig. 1. Mathematically, it involves the following three operations: z := Enc(x); z ← z + η∇ z d (f (Dec(z)) , h(Dec(z)));x := Dec(z). This is akin to a first-order search in the latent space. Extension to discrete data. Note that perturbing an instance from a discrete domain, e.g., text data, is not as straightforward as in a continuous space. Typically, one has to resort to expensive combinatorial search or crude approximations to perform such perturbations [Tan et al., 2020, Zang et al., 2020, Ren et al., 2019. Interestingly, our approach in Eq. (4) provides a simple alternative where one performs the perturbation in the latent space which is continuous. On the other hand, in gradient based exploration, we assume that X is a differentiable space in order to calculate necessary quantities such as ∂f (x) ∂x in Eq. (5). This assumption holds for various data such as images and point clouds but not for discrete data like text. We can, however, circumvent this limitation by implementing weight sharing between the output softmax layer of the teacher's decoder Dec and the input embedding layer of the student f (and also to teacher labeler h when an independent model is used). Now, one can bypass discrete space during the backward pass, similar to ideas behind VQ-VAE [Hafner et al., 2019]. Note that, during forward pass, we still need the discrete representation for decoding, e.g., using beam search. Finally, we address the superficial resemblance between our approach and adversarial training. In adversarial training, the goal is to learn a robust classifier, i.e., to increase margin. Towards this, for any x, one wants to enforce model agreement in its local neighborhood B r (x), i.e., f (x ) = f (x), ∀x ∈ B r (x). One needs to carefully choose small enough neighborhood by restricting r, so as to not cross the decision boundary. In contrast, we are not looking for such max-margin training which has its own issues (cf. [Nowak-Vila et al., 2021]). We simply want to encourage agreement between the teacher and student, i.e., f (x ) = h(x ), ∀x . Thus, we don't have any limitation on the size of the neighborhood to consider. 
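The three operations listed above (encode, gradient-ascend in the latent space, decode) translate directly into a short routine. The sketch below is a hedged PyTorch illustration, not the authors' implementation; the number of ascent steps and the step size eta are assumptions, and `soft_ce` denotes the same teacher-student disagreement loss used earlier.

```python
import torch

def explore_latent(x, student, teacher, encoder, decoder, soft_ce,
                   eta=0.05, steps=3):
    """First-order search for an informative instance:
    z := Enc(x);  z <- z + eta * grad_z l_d(f(Dec(z)), h(Dec(z)));  x~ := Dec(z).
    The ascent maximizes teacher-student disagreement while staying on the
    data manifold captured by the teacher generator's latent space."""
    z = encoder(x).detach().requires_grad_(True)
    for _ in range(steps):
        x_dec = decoder(z)
        disagreement = soft_ce(student(x_dec), teacher(x_dec))
        (grad,) = torch.autograd.grad(disagreement, z)
        z = (z + eta * grad).detach().requires_grad_(True)  # gradient *ascent* step
    with torch.no_grad():
        return decoder(z)
```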
As a result, we can explore much bigger regions as long as we remain on the data manifold. Value of generating samples via the latent space In this section, we formally show how leveraging the latent space can help learning. For this exposition, we assume X = R D . Furthermore, for directly learning in the input space, we assume that our function class F corresponds to all Lipschitz functions that map R D to R K . Then for any such function f ∈ F, there are existing results for generalization bound of the form [Devroye et al., 2013, Mohri et al., 2018: where R ,f (D) is true population risk of the classifier, R ,f (S n ) is empirical risk, and R n (G ,F ) is the Rademacher complexity of the induced function class G ,F , which is known in our case to be O(n −1/D ) (see App. B for more details). Any reduction in the Rademacher term would imply a smaller generalizing gap, which is our goal. In our TGT framework, we assume availability of a teacher that is able to learn a good representation for the underlying data distribution. In particular, we assume that, for x ∈ supp(D X ), we have i.e., for x, applying the decoder Dec on the latent representation of x, as produced by the encoder Enc, leads to a point Dec • Enc(x) ∈ X that approximates x with a small error. This ability of teacher generator to model the data distribution using latent representation can be used to reduce the complexity of the function class needed. Specifically, in TGT framework, we leverage the teacher decoder to restrict the function class to be a composition of the decoder function Dec and a learnable Lipschitz function operating on the latent space R d . Since d D, this leads to a function class with much lower complexity. Next, we formally capture this idea for distillation with both the original samples S n sampled from D X as well as the novel samplesS introduced by the teacher generator. In what follows, we only consider the distillation losses and ignore the first loss term (which depends on true labels). Our analysis can be easily extended to take the latter term into account (e.g., by using tools from Foster et al. [2019]). We start with the standard distillation in the following result. Theorem 3.1. Suppose a generative model with Enc and Dec satisfies the approximation guarantee in Eq. (6) for D X . Let Dec and teacher labeler h be Lipschtiz functions, and the distillation loss d satisfies Assumption C.1. Then, with probability at least 1 − δ, the following holds for any f ∈ F. Thus, we can reduce the Rademacher term from O(n −1/D ) to O(n −1/d ), which yields a significant reduction in sample complexity. However, as the teacher model is not perfect, a penalty is incurred in terms of reconstruction and prediction error. See App. C.1 for the details. Thus far, we have not leveraged the fact that we can also use the teacher to generate further samples. Accounting for using samplesS n generated from teacher generator instead, one can obtain similar generalization gap for the distillation based on the teacher generated samples: be n i.i.d. samples generated by the the TGT framework, whose distribution be denoted byD X . Further, letf n ∈ F denote the student model learned via distillation onS n , with h as the teacher model and d be the distillation loss satisfying Assumption C.1. Then, with probability at least 1 − δ, we have Motivation for gradient based exploration Results so far do not throw light on how to design optimalD X , i.e., the search mechanism in the latent space for our TGT framework. 
In this regard, we look at the variance-based generalization bounds [Maurer and Pontil, 2009]. These were previously utilized by Menon et al. [2021a] in the context of distillation. Applying this technique in our TGT approach, we would obtain a generalization bound of the form: where, and M(n) depending on the covering number for the induced function class G h d ,F (cf. Eq. (16)). Here, we note that by combining Eq. (7) with Lemma D.4 translate the bound on R h d ,f (D X ) to a bound on R ,f (D) with an additional penalty term that depends on the quality of the teacher labeler h. Note that Eq. (7) suggests a general approach to select the distributionD X that generates the training samplesS n . In order to ensure small generalization gap, we need to focus on two terms depending onD X : (1) the variance term VarD X ( d (x)); and (2) the divergence term W(D X ,D X ). We note that finding a distribution that jointly minimizes both terms is a non-trivial task. That said, in our sampling approach in Eq. (5), we control for W(D X ,D X ) by operating in the latent space of a good quality teacher generative model and minimize variance by finding points with high loss values through gradient ascent, thereby striking a balance between the two objectives. We refer to App. C.3 for more details on the bound stated in Eq. (7). Experiments We now conduct a comprehensive empirical study of our TGT framework in order to establish that TGT (i) leads to high accuracy in transferring knowledge in low data/long-tail regimes (Sec. 4.1); (ii) effectively increases sample size (Sec. 4.2); and (iii) has wide adaptability even to discrete data domains such as text classification (Sec. 4.3) and retrieval (Sec. 4.4). Long-tail image classification Setup. We evaluate TGT by training student models on three benchmark long-tail image classification datasets: ImageNet-LT [Liu et al., 2019c], SUN-LT [Patterson and Hays, 2012], Places-LT [Liu et al., 2019c] We employ off-the-shelf teacher models, in particular BigBiGAN . We report top-1 accuracy on balanced eval sets. We also state the number of model parameters and inference cost (in terms of FLOPs) for all the methods. Note that TGT leads to performance improvements over standard distillation on all three datasets, particularly for ImageNet-LT where the teacher generator models the task distribution well. TGT also often outperforms stated baselines that rely on much larger and expensive models. The teacher generator is trained on the unlabelled full version of ImageNet [Russakovsky et al., 2015]. Results. The results 1 are reported in Table 1 compared with similar sized baselines (we ignored gigantic transformer models). We see that TGT is able to effectively transfer knowledge acquired by the teacher during its training with the huge amount of data into a significantly smaller student model, which also has lower inference cost. We see that TGT considerably improves the performance across the board over standard distillation, even on Sun-LT and Places-LT whose data distribution does not exactly match to the distribution that the teacher's generator was trained with. Comparing TGT (random) (cf. Eq. (4)) and TGT (gradient-based) (cf. Eq. (5)) indicates that most of our win comes from utilizing the latent space, the form of search being of secondary importance. Thus, for all subsequent experiments we only consider TGT (random). 
Here, we note the baselines stated from the literature in Table 1 rely on specialized loss function and/or training procedures designed for the long-tail setting, whereas we do not leverage such techniques. One can pontentially combine the TGT framework with a long-tail specific loss function as opposed to employing the standard cross-entropy loss function as a way to further improve its performance. We leave this direction for future explorations. Even more interstingly, on Amazon-5 and Yelp-5, TGT with randomly initialized student, i.e., TGT (Random Init), outperfroms the standard approach of finetuning a pretrained model with one-hot labels, i.e., One-hot (Pretrained). , standard distillation (distillation), and TGT in simulated low-data regimes. We imitate a low-data regime via subsampling the Im-ageNet training set and evaluate the trained student models on the entire eval set. We employ 450k training steps for normal training and standard distillation, and 112k training steps for TGT. TGT outperforms other methods in less training steps, thus, effectively simulating an increase in the sample size. TGT in low-data regime To further showcase effectiveness of knowledge transfer via TGT, we simulate a low-data regime by varying the amount of available training data for ImageNet [Russakovsky et al., 2015] and studying its impact on student's performance. For these experiments, we use the same model architectures as in Sec. 4.1, but finetune the teacher labeler on the entire ImageNet. We then compare the performance of the student trained via TGT, with the students trained via normal training (with one-hot labels) and standard distillation. The results are shown in Fig. 2. Note that both TGT and standard distillation utilize additional training data more effectively than normal training, with TGT being the most efficient of the two. The curves show TGT is equivalent to an increase in sample size by 4x, compared to the normal training. This verifies that TGT generates informative training instances for the student. Method recall@20 recall@100 Teacher ( ] is employed to serve as a task-agnostic generator. All student models follow the architecture of DistilBERT [Sanh et al., 2019b]. TGT significantly outperforms standard training (One-hot) and teacher-label only distillation (Distillation). TGT closes the teacher-student gap by 37% at @20, 63% at @100) compared to the standard distillation. [Sanh et al., 2019b] model for the student model architecture. Both teacher networks are pretrained on a very large generic text corpus of size 160GB. The teacher labeler model is finetuned on the corresponding dataset for each task. The teacher generator is not specialized to any specific classification task. Text classification Results. The results are reported in Table 2 where we compare TGT with other data augmentation and distillation baselines. We see that TGT considerably improves the performance and beats the state-of-the-art methods MATE-KD [Rashid et al., 2021] and UDA [Xie et al., 2020a]. Also, note that by using TGT on a randomly initialized student, we can match the performance of normal finetuning (with one-hot labels) on a pretrained model on Amazon-5 and Yelp-5. We highlight that baselines such as MATE-KD [Rashid et al., 2021] always work with a pretrained student model. 
Thus, the aforementioned improvements realized by TGT with a randomly initialized student model demonstrates enormous saving in overall data and training time requirement as it eliminates the need for pretraining on a large corpus. This further establishes that TGT can enable a data-efficient knowledge transfer from the teacher to the student. Text retrieval Setup. Finally, we evaluate TGT on Natural Questions (NQ) [Kwiatkowski et al., 2019] -a text retrieval benchmark dataset. The task is to find a matching passage given a question, from a large set of candidate passages (21M). We utilize the RoBERTa-Base dual-encoder model Oguz et al. [2021] as our teacher labeler. For teacher generator, we employ BART-base . We utilize DistilBERT dual encoder model as our student architecture. We follow the standard retrieval distillation setup where the teacher labeler provides labels for all the within-batch question-to-passage pairs for the student to match. We consider three baselines: One-hot trains the student with the original one-hot training labels whereas Distillation utilizes the teacher labeler instead. In uniform negatives, for a given question-to-passage pair in NQ, we uniformly sample and label additional 2 passages from the entire passage corpus (21M). TGT instead dynamically generates 2 confusing passages for each question-passage pair with BART generator, infusing the isotropic perturbation as per Eq. (4). Results . Table 3 compares TGT with other baselines. TGT significantly improves retrieval performance, closing the teacher-student gap by 37% at recall@20 and 63% at recall@100 compared to the standard distillation. Unlike TGT, uniformly sampled random passages only partially helped (slightly on recall@20 but degrades at @100 compared to the standard distillation). A plausible explanation is that the randomly sampled passages do not provide enough relevance to the matching pair since the output space is extremely large (21M). TGT instead generates informative passages that are close to the matching pair. Conclusion and Future Directions We have introduced a simple and formally justified distillation scheme (TGT) that adaptively generates samples with the aim of closing the divergence between student and teacher predictions. Our results show it to outperform, in aggregate, existing distillation approaches. Unlike alternative methods, it is also applicable to both continuous and discrete domains, as the results on image and text data show. TGT is orthogonal to other approaches that enable efficient inference such as quantization and pruning, and combining them is an interesting avenue for future work. Another potential research direction is to employ TGT for multi-modal data which would require accommodating multiple generative models with their own latent space, raising both practical and theoretical challenges. can cause the generator to drift away from true data distribution. In contrast, we keep the pre-trained teacher-generator model fixed throughout the training process of the student. Our objective behind employing the generator model is to leverage the domain knowledge it has already acquired during its pre-training. While we do want to generate 'hard instances' for the student, we also want those instances to be relevant for the underlying task. Thus, keeping the generator fixed introduces a regularization where the training instances the student encounters do not introduce domain mismatch. 
Keeping in mind the objective of producing new informative training instances that are in-domain, we introduce perturbation in the latent space realized by the encoder of the teacher-generator model (see Figure 1). This is different from directly perturbing an original training instance in the input space itself, as done by MATE-KD. As evident from our theoretical analysis and empirical evaluation, for a fixed teacher-generator model, employing perturbation in the latent space leads to more informative data augmentation and enables good performance on both image and text domain. B Background and notation For a, b ∈ R, we use a = O(b) to denote that there exits a constant γ > 0 such that a ≤ γ · b. Given a collection of n i.i.d. random variables U n = {u 1 , . . . , u n } ⊂ U, generated from a distribution D U and a function τ : U → R, we define the empirical mean of {τ (u 1 ), . . . , τ (u n )} as For the underlying multiclass classification problem defined by the distribution D := D X×Y , we assume that the label set Y with K classes takes the form [K] := {1, . . . , K}. We use F to denote the collection of potential classification models that the learning methods is allowed to select from, namely function class or hypothesis set: which is a subset of all functions that map elements of the instance space X to the elements of R K . Given a classification loss function : R K × Y → R and a model f : X → R K and a sample S labeled n = {(x i , y i )} i∈[n] generated from D, we define the empirical risk for f ∈ F as follows. Further, we define the population risk for f ∈ F associated with data distribution D as follows. Note that, when the loss function is clear from the context, we drop from the notation and simply use R f (S labeled n ) and R f (D) to denote the the empirical and populations risks for f , respectively. Given a function class F, the loss function induces the following function class. Definition B.1 (Rademacher complexity of G ,F ). Now, given a sample S labeled n = {(x i , y i )} i∈[n] ∼ D n and a vector σ = (σ i , . . . , σ m ) ∈ {+1, −1} with n i.i.d. Bernoulli random variables, empirical Rademacher complexity R S (G ,F ) and Rademacher complexity R n (G ,F ) are defined as Let S n = {x i } i∈[n] be a set of n unlabeled samples generated from D X . Then, given a teacher model h : X → R K and a distillation loss d : Accordingly, the population (distillation) risk for f ∈ F is defined as Again, when d is clear from the context, we simply use R h f (S n ) and R h f (D) to denote the empirical and population distillation risk for f , respectively. Note that, for a (student) function class F and a teacher model h, d produces an induced function class G d ,h (F), defined as and respectively. C Deferred proofs from Section 3 C.1 Proof of Theorem 3.1 In this subsection, we present a general version of Theorem 3.1. Before that, we state the following relevant assumption on the distillation loss d . Assumption C.1. Let : R K × Y → R be a bounded loss function. For a teacher function h : X → R K , the distillation loss d takes the form h(x) y · (f (x), y). Remark C.2. Note that the cross-entropy loss d (f (x), h(x)) = − y h(x) y · log f (x) y , here, one of the most common choices for the distillation loss, indeed satisfies Assumption C.1. 2 The following results is a general version of Theorem 3.1 in the main body. Theorem C.3. Let a generator with the encoder Enc and decoder Dec ensures the approximation guarantee in Eq. (6) for D X . 
Let Dec and teacher labeler be Lipschtiz functions, F be function class of Lipschitz functions, and the distillation loss d be Lipschtiz. Then, with probability at least 1 − δ, the following holds for any f ∈ F. where L denotes the effective Lipschitz constant of the induced function class G h d ,F in Eq. (16). Additionally, if the distillation loss d satisfies Assumption C.1 with a classification loss , then Eq. (19) further implies the following. Proof. Note that where (i) follows from the definition of G h d ,F in Eq. (16) and (i) follow from the standard symmetrization argument [Devroye et al., 2013, Mohri et al., 2018. Next, we turn our focus to the empirical Rademacher complexity R Sn (G h d ,F ). Recall that S n = {x 1 , x 2 , . . . , x n } contains n i.i.d. samples generated from the distribution D X . We define another set of n pointsS It follows from our assumption on the quality of the generator (cf. Eq. (6)) that Note that where σ denote a vector with n i.i.d Bernoulli random variables. ) for some f ∈ F. Now, we can define a new function class from R d to R: Therefore, it follows from Eq. (23) and Eq. (24) that where the last two inequality follows from the definition of G h d ,F (cf. Eq. (16)) and the standard symmetrization argument [Devroye et al., 2013, Mohri et al., 2018, respectively. Now, the standard concentration results for empirical Rademacher complexity implies that, with probability at least 1 − δ, we have the following. We now focus on establishing Eq. (29). Note that, for a sampleS n = {x 1 , . . . ,x n } generated by the TGT framework, there exists {z 1 , . . . , z n } ⊂ R d such that Thus, where (i) employs Eq. (35). Thus, combining Eq. (31) and Eq. (36) gives us that Now, similar to the proof of Eq. (28), we can invoke Lemma D.3 and the concentration result for empirical Rademacher complexity to obtain the desired result in Eq. (29) from Eq. (37). Remark C.5. Note that, if the distillation loss d satisfies Assumption C.1 with a loss function , then, one can combine Theorem C.4 and Lemma D.4 to readily obtain bounds on R ,fn (D) with an additional term This term captures the quality of the teacher labeler h. Here, M(n) = sup Sn⊂X n N (1/n, G h,IS d ,F (S n ), · ∞ ), with N ( , G h,IS d ,F (S n ), · ∞ ) denoting the covering number [Devroye et al., 2013] of the set Proof. By utilizing the uniform convergence version of Bennet's inequality and uniform bound for VarS n ( IS d (x)), where VarS n ( IS d (x)) denotes the empirical variance of IS d (x) based onS n , the following holds with probability at least 1 − δ [Maurer and Pontil, 2009]. Remark C.7. Eq. (43) suggests general approach to select the distributionD X that generated the training samplesS n . In order to ensure small generalization gap, it is desirable that the variance term VarD X ( IS d (x)) is as small as possible. Note that, the distribution that minimizes this variance takes the form This looks like the lagrangian form of Eq. (3). Interestingly, TGT framework with gradientbased sampling (cf. equation 5) focuses on generating samples that maximizes the right hand side RHS of Eq. (45) by first taking a sample generated according to D X and then perturbing it in the latent space to maximize the loss d f (x), h(x) . Thus, the resulting distributionD X has pdf that aims to approximate the variance minimizing pdf in Eq. (45). 
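Following the standard importance-sampling argument this remark gestures at, the variance-minimizing sampling density referred to as Eq. (45) plausibly takes a form proportional to the data density reweighted by the teacher-student disagreement loss; the display below is a reconstruction under that reading, not a verbatim statement from the paper.

$$
p^{*}_{\widetilde{\mathcal{D}}_X}(x) \;\propto\; p_{\mathcal{D}_X}(x)\,\cdot\,\ell_d\big(f(x), h(x)\big).
$$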
Here it is worth pointing out that, since exact form of pD X (·) and p D X (·) is generally not available during the training, it's not straightforward to optimize the weighted risk introduced in Eq. (39). Remark C.8. Note that, as introduced in Section 3, TGT framework optimizes the empirical risk in Eq. (38) as opposed to minimizing Eq. (39). In this case, one can obatain a variance based bound analogous to Eq. (43) that takes the form: where, (II) denotes and M(n) depending the covering number for the induced function class G h d ,F (cf. Eq. (16)). Notably, this bound again incurs a penalty of W(D X ,D X ) which is expected to be small for our TGT based sampling distribution when we employ high-quality teacher generator. D Toolbox This section presents necessary definitions and lemmas that we utilize to establish our theoretical results presented in Sec. 3 (and restated in App. C. Definition D.1 (Wasserstein-1 metric). Let (X, ρ) be a metric space. Given two probability distributions D 1 X and D 2 X over X, Wasserstein-1 distance between D 1 X and D 2 X is defined as follows. where (i) follow by dividing and multiply by L; (ii) follows as, for any g ∈ G h d ,F is g L is 1-Lipschitz; and (iii) follows from Lemma D.2. Lemma D.4. Let the distillation loss d satisfy Assumption C.1 with a bounded loss function : R K × Y → R. Then, given a teacher h : X → R K and a student model f : where D Y |X = (D Y |X (1), . . . , D Y |X (K)) is treated as a vector in R K . Proof. Note that where (i) follow from the Cauchy-Schwarz inequality. Now the statement of Lemma D.4 follows from the assumption on the loss is bounded. E Additional experiments E.1 Long-tail image classification Please see Table 4 for Places365-LT result. Discussion is in Sec. 4.1. F Details to reproduce our empirical results Hereby we provide details to reproduce our experimental results. F.1 Long-tail image classification (Sec. 4.1) Dataset. The full balanced version of 3 datasets (ImageNet 4 , Place365 5 , SUN397 6 ) are available in tensflow-datasets (https://www.tensorflow.org/datasets/). Next to obtain the the long-tail version of the datasets, we downloaded 7 image ids from repository of "Large-Scale Long-Tailed Recognition in an Open World [Liu et al., 2019b]" according to which we subsampled the full balanced dataset. [Samuel et al., 2021]). We also state the number of model parameters and inference cost (in terms of FLOPs) for all the methods. Note that TGT leads to performance improvements over standard distillation. Note that, for Places-LT, TGT does not outperform stated baselines for the literature that rely on specialized loss function and/or training procedures designed from the long-tail setting. One reason for this could be that the BigBiGAN does not generate very informative samples for Places-LT due to distribution mismatch. That said, as discussed in Sec. 4.1, one can combine the TGT framework with a long-tail specific loss function as opposed to employing the standard cross-entropy loss function as a way to further improve its performance. We directly used teacher generator as BigBiGAN ResNet-50 checkpoint from the official repository https://github.com/deepmind/deepmind-research/tree/master/bigbigan. (We did not fine-tune it.) Student training. We start from randomly initialized MobileNetV3-0.75 model. We employed SGD optimizer with cosine schedule (peak learning rate of 0.4 and decay down to 0). We also did a linear warm-up (from 0 to peak learning rate of 0.4) for first 5 epochs. 
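As a concrete illustration of the optimizer schedule described above (SGD with cosine decay from a peak learning rate of 0.4 down to 0, preceded by a 5-epoch linear warm-up), here is a hedged PyTorch sketch; the model, momentum, total epoch count, and steps-per-epoch are placeholders rather than the exact training configuration.

```python
import math
import torch

model = torch.nn.Linear(10, 2)          # placeholder for MobileNetV3-0.75
peak_lr, warmup_epochs, total_epochs = 0.4, 5, 90
steps_per_epoch = 1000                  # placeholder
total_steps = total_epochs * steps_per_epoch
warmup_steps = warmup_epochs * steps_per_epoch

optimizer = torch.optim.SGD(model.parameters(), lr=peak_lr, momentum=0.9)

def lr_factor(step):
    # Linear warm-up from 0 to the peak LR over the first 5 epochs ...
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    # ... then cosine decay from the peak LR down to 0.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

# Inside the training loop: optimizer.step(); scheduler.step()
```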
The input image size are unfortunately different between EfficientNet-B3 model, BigBiGAN-ResNet50, and MobileNetV3-0.75 models. From original images in dataset, we use Tensorflow's bicubic resizing to obtain appropriate size image for each mode. We did a grid search over the perturbation parameters σ and η (c.f. Eq. (4) and Eq. (5)). All hyper-parameters and grid are listed in table below: F.2 TGT in low-data regime (Sec. 4.2) Dataset. We used ImageNet 9 dataset from tensflow-datasets repository (https://www. tensorflow.org/datasets/). We used in-built sub-sampling functionality available in Teacher model. For teacher labeler, we directly used trained EfficientNet-B3 model checkpoint available from "Sharpness Aware Minimization" repository 10 For teacher generator, we directly used trained BigBiGAN checkpoint from the official repository https: //github.com/deepmind/deepmind-research/tree/master/bigbigan. (We did not finetune either of the models.) Student training. We start from randomly initialized MobileNetV3-0.75 model. We employed SGD optimizer with cosine schedule (peak learning rate of 0.4 and decay down to 0). We also did a linear warm-up (from 0 to peak learning rate of 0.4) for first 5 epochs. The input image size are unfortunately different between EfficientNet-B3 model, BigBiGAN-ResNet50, and MobileNetV3-0.75 models. From original images in dataset, we use Tensorflow's bicubic resizing to obtain appropriate size image for each mode. Following standard practice in literature He et al. [2016], Jia et al. [2018], we train one-hot and standard distillation student models for 90 epochs (= 450k steps). We use 4x less steps for TGT than the simple distillation baseline, which amounts to 450k/4 = 112k steps. F.3 Text classification (Sec. 4.3) Dataset. We conduct text classification experiments on following datasets: • Leather goods are no longer a bargain in Spain, though very good quality products may still be priced lower than at home. [SEP] Leather goods are still very cheap in Spain. Leather and leather goods are no longer a bargain in Spain, though very good quality products may still be priced lower than at home and abroad. [SEP] Leather goods are still very cheap at Spain. Data label: Entail Teacher label: Neutral Then I got up as softly as I could, and felt in the dark along the left-hand wall. [SEP] The wall was wet. Then I got up as softly as I could, and walked the way I felt in the dark along the left [SEP] The wall was wet. Data label: Entails Teacher label: Entail But then this very particular island is hardly in danger of being invaded except, of course, by tourism. [SEP] This island is least likely to be invaded by tourism. But then this very particular island is not in danger of being invaded except, of course, by tourism. [SEP] The island is likely to be invaded by tourism. Data label: Contradicts Teacher label: Neutral All you need to do is just wander off the beaten path, beyond the bustling tourist zone. [SEP] There is no point going off the beaten path, there is nothing there. All you need to do is just wander off the beaten path, and youĺl be in the bustling tourist zone of the city. [SEP] There is no point going off the beaten path, there is nothing there. Data label: Entails Teacher label: Neutral The silt of the River Maeander has also stranded the once-mighty city of Miletus. [SEP] The River Maeander has been depositing silt near Miletus for nearly two millennia. The silt of the River Mae has also stranded the once-mighty city of Miletus. 
[SEP] The River Maeander has been depositing silt near Miletus for more than two decades.
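As noted in F.1 and F.2 above, the teacher labeler, teacher generator, and student expect different input resolutions, and images are matched to each model with TensorFlow's bicubic resizing. The snippet below is a minimal sketch of that step, not the authors' code; the per-model resolutions used here are illustrative assumptions rather than values taken from the paper.

```python
# Sketch: produce one bicubic-resized copy of an image per model-specific input size.
import tensorflow as tf

def resize_for_models(image, sizes=None):
    """Return a dict mapping model name -> bicubic-resized image of that model's size."""
    if sizes is None:
        # Hypothetical resolutions for illustration only.
        sizes = {"teacher_labeler": 300, "teacher_generator": 256, "student": 224}
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    return {name: tf.image.resize(image, (s, s), method="bicubic")
            for name, s in sizes.items()}

# Example with a random stand-in image (H x W x C).
dummy = tf.random.uniform((512, 512, 3))
for name, resized in resize_for_models(dummy).items():
    print(name, resized.shape)
```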
Mechanisms of Mitochondrial Malfunction in Alzheimer's Disease: New Therapeutic Hope

Mitochondria play a critical role in neuron viability or death, as they regulate energy metabolism and cell death pathways. They are essential for cellular energy metabolism, reactive oxygen species production, apoptosis, Ca2+ homeostasis, aging, and regeneration. Mitophagy and mitochondrial dynamics are thus essential processes in the quality control of mitochondria. Significant progress has been made in characterizing several fundamental features of mitochondrial biology in susceptible neurons of AD brains and the putative mechanisms underlying such changes. Mitochondrial malfunction and oxidative damage have been implicated in the etiology of AD. According to several recent articles, a continual balance of mitochondrial fusion and fission is vital for the maintenance of normal mitochondrial function. As a result, the shape and function of mitochondria are inextricably linked. This study examines evidence suggesting that mitochondrial dysfunction has a significant early impact on AD pathology. Furthermore, the dynamics and roles of mitochondria are discussed, and the link between mitochondrial malfunction and autophagy in AD is also explored. In addition, recent research on mitochondrial dynamics and mitophagy in AD is discussed, along with how these defects affect mitochondrial quality control. Furthermore, advanced therapeutic techniques and lifestyle adjustments that lead to improved management of mitochondrial dynamics are presented, thereby improving the conditions that contribute to mitochondrial dysfunction in AD.

Introduction

Alzheimer's disease (AD) is a neurological illness causing progressive cognitive and behavioral deficits. It includes the inability to form recent memories and the loss of previously essential memories. A German neuropathologist named Alois Alzheimer was the first to describe it in 1906 [1]. AD affects both declarative and nondeclarative cognition. Patients have trouble reasoning, understanding intellectual concepts, and even speaking [2,3]. Early-onset Alzheimer's disease is a rare type of Alzheimer's disease, accounting for about one to two percent of all cases. Mutations in the amyloid-beta precursor protein (APP), presenilin 1 (PS1 or PSEN1), and presenilin 2 (PS2 or PSEN2) loci cause familial AD. Early-onset familial AD is caused by mutations in these genes, which cause an oversupply of amyloid-β (Aβ40 or Aβ42). The most common form of sporadic AD is late-onset sporadic AD, which appears after the age of 65 and is associated with the APOE4 genotype [4]. A mix of genetic, environmental, and behavioral variables plays a substantial role in late-onset sporadic AD. Additional polymorphisms may potentially play a role in late-onset Alzheimer's disease [5]. Mitochondrial malfunction has been linked to the development of nearly every neurological illness, including Alzheimer's disease (AD). The relationship between mitochondrial dynamics and amyloid toxicity has been a primary emphasis in this context. According to current research, dysfunction of mitochondrial calcium homeostasis is linked to tau and other comorbidities in Alzheimer's disease. Evidence gathered from various models or experimental settings, on the other hand, has not always been consistent, which is an ongoing concern in the field. Mitochondria are strategically positioned to play a crucial role in neuronal cell survival or death as integrators of energy metabolism and cell death cascades.
There is much evidence that mitochondrial malfunction and oxidative damage play a role in Alzheimer's disease etiology. This study examines evidence suggesting that mitochondrial dysfunction plays a significant early impact on AD [6]. Researchers are looking into the link between mitochondrial malfunction and autophagy in Alzheimer's disease. Lipofuscin formation in neurons is caused by insufficient autophagy digestion of oxidatively damaged macromolecules and organelles, worsening neuronal dysfunction. Scientists are particularly interested in developing autophagy-related therapeutics since autophagy is the principal mechanism for breaking down protein complexes and malfunctioning organelles [3]. This review also deals with autophagy as a possible therapeutic target in the genesis of Alzheimer's disease. Dislocation of mitochondria can occur due to interactions with cytoskeleton elements, particularly microtubules, as well as internal mitochondrial processes. It is unclear whether these variances are due to genetic differences. Furthermore, mitochondria in Alzheimer's disease cells are perinuclear, with only a few energetic organelles in the distant processes. They would ordinarily be scattered in healthy cells and essential for synaptic activity, ion channel pumps, exocytosis, and other functions. Elevation in reactive oxidative species and losses in metabolic capabilities characterize AD neurons, and these alterations are visible early in the disease's course. Lower pyruvate dehydrogenase (PDH) protein levels and decreased mitochondrial respiration were seen in the 3xTg-AD brain, indicating mitochondrial dysfunction as early as three months of development. Higher hydrogen peroxide generation and lipid peroxidation were also found in 3xTg-AD animals, indicating increased oxidative stress. At nine months, 3xTg-AD mice had significantly more mitochondrial amyloid-beta (A), which was connected to an increase in A binding to alcohol dehydrogenase (ABAD). Embryonic neurons generated from the hippocampus of 3xTg-AD mice showed a significant reduction in mitochondrial respiration and an elevation in glycolysis. These findings indicate that in the embryonic hippocampus neurons, there is mitochondrial dysfunction which persists in females all through the reproductive period and is aggravated with biological aging. We provide an overview of the fundamental processes that control mitochondrial dynamics and how defects in these pathways accord with the quality control of mitochondria. All these add to the malfunction of mitochondria in AD. Mitochondrial Dynamics and Its Functions Mitochondria are semiautonomous organelles in cells that carry out a variety of metabolic processes. The mitochondrial structure is very dynamic, switching among both grain-like and thread-like morphology frequently via a process known as fusion and fission, according to recent developments in imaging modalities [4] (Figure 1). Fusion causes the merging of mitochondria into one, combining cellular contents with mitochondrial DNA (mtDNA) to change into a more resource-rich organelle. Fission leads to mitochondrial reproduction and regulates apoptosis, mitophagy, and alteration in bioenergetic demands. Fusions can happen when the edges of the organelles clash and merge or when one mitochondrion encounters the edge of another mitochondrion and merge [5]. Mitochondria can fuse after many "efforts," in which the tip of one mitochondrion slides along the side of another or approaches it many times. 
Successful fusion is indicated by the joint intracellular relocation of the mitochondria that have fused [6]. Elongation of a mitochondrion can lead to fission. The thin region that will become the site of division, on the other hand, may migrate multiple times along the mitochondrion, and sequences of fission and fusion events at the same location are prevalent [7]. As a result, the dumbbell shape seen in a few large mitochondria appears to be a form that changes slowly. In contrast, more slender mitochondria do not divide into nearly symmetric sections [8]. The mechanisms of mitochondrial fusion and fission and mitochondrial movement are referred to as "mitochondrial dynamics" [9,10]. Mitochondrial fusion in mammals requires the outer mitochondrial membrane-anchored proteins mitofusin 1 (MFN1) and mitofusin 2 (MFN2) [11,12], which incorporate two transmembrane domains linked by a small intermembrane-space loop, a cytosolic N-terminal GTPase domain, and two cytosolic hydrophobic heptad-repeat coiled-coil domains. MFN1 and MFN2 can interact in trans, via their coiled-coil domains, with mitofusins on another mitochondrion to tether neighboring mitochondria, forming homo- and hetero-oligomers [13][14][15]. GTP hydrolysis plays a crucial role in the fusion process. Although the mechanism is unknown, GTP hydrolysis may cause MFN to shift conformation. Optic atrophy 1 (OPA1), a dynamin-family 100 kDa GTPase that is either bound to the inner mitochondrial membrane or present in the intermembrane space and is necessary to tether and fuse mitochondrial inner membranes, is also needed for mitochondrial fusion [16,17]. Mgm1p (the yeast orthologue) from both mitochondria interacts in trans to anchor and bind the inner membranes after the outer membranes unite during mitochondrial fusion [16]. By splicing of the human OPA1 gene, at least eight mRNA variants of OPA1 are produced [18]. After posttranslational proteolytic processing by the mitochondrial processing peptidase (MPP), the longer isoforms are linked to the inner mitochondrial membrane, while cleavage at the S1 and S2 protease sites can give additional short isoforms positioned in the intermembrane space [19,20]. OPA1 cleavage has been attributed to the i-AAA protease Yme1 at S2 [21,22], the presenilin-associated rhomboid-like protein (PARL) [23], and the m-AAA proteases AFG3L2 [15] and paraplegin [24,25]. Loss of mitochondrial membrane potential also regulates OPA1 cleavage, which destroys the long OPA1 isoforms and facilitates cleavage at S1 [20,22,25]. Fission in mammalian mitochondria is caused by the combination of dynamin-like protein 1 (DLP1/Drp1, Dnm1p in yeast) and human fission protein 1 (hFis1, Fis1p in yeast). Upregulation of hFis1 induces mitochondrial fragmentation, whereas dominant-negative DLP1 or RNAi knockdown of hFis1 or DLP1 enhances the lengthening of mitochondria, implying that hFis1 and DLP1 are fission proteins required for mitochondrial division [26][27][28]. A single C-terminal transmembrane motif anchors hFis1 in the membrane. At the same time, six helices with two tetratricopeptide repeat- (TPR-) like folds are found in the N-terminal cytosolic region, which is implicated in protein interactions. As per cross-linking and fluorescence resonance energy transfer (FRET) studies, hFis1 and DLP1 interact cooperatively [29]. The α1-helix is involved in the hFis1 TPR's transient contact with a DLP1 fission complex, presumably moderating DLP1-hFis1 coupling.
The ability to drive mitochondrial fission is reduced when hFis1 oligomerization-deficient mutants are overexpressed, indicating that hFis1 oligomerization may play a significant role in mitochondrial fission [30,31]. The number of mitochondria and the amount of mitochondrial DNA should double during the cell cycle. The ratio of the chondriome to the nucleocytoplasm in somatic cells is more or less constant [32]. As a result, at the very least, a loose synchronization of the reproduction cycles must be expected. Many protists have shown systematic changes in mitochondrial shape during the cell cycle [33]. These changes generally follow this scheme: the chondriome comprises one highly perforated, basket-shaped complex that lines the cell's periphery at the start of interphase [34]. The size of the mitochondrial basket expands during interphase, as does the number of tiny mitochondria. The mitochondrial basket is broken into many fragments during mitosis, which tend to form clusters, and the overall number of mitochondria is significantly decreased again following cytokinesis. The fragmentation of mitochondrial networks into single mitochondria can affect metabolite transport and generate a unique environment for the nucleus [35].

Figure 1: Mitochondrial fission and fusion. Mitochondria are active entities with a constant fission and fusion process that mixes their content. Mitochondrial fusion results in mitochondria that are elongated and extensively linked. For the fission pathway, the main proteins are dynamin-related protein 1 (Drp1), which governs mitochondrial fission in two ways (initially, it is transported from the cytosol to the mitochondrial outer membrane (OM); secondly, its assembly on the mitochondrial surface causes constriction of the mitochondria, resulting in the division of one mitochondrion into two entities), together with mitochondrial fission factor (MFF), fission-1 (Fis1), and the homologs MiD49 and MiD51. Mitofusins 1 and 2 (MFN1/2) at the outer membrane (OM) and optic atrophy 1 (OPA1) at the inner membrane (IM) coordinate mitochondrial fusion, which begins with MFN1/2-mediated OM fusion of two mitochondria and is followed by OPA1-directed IM fusion.

The human membrane-associated RING-CH (MARCH)-5 E3 ubiquitin ligase is located in the outer mitochondrial membrane [36][37][38]. MARCH5 binds and ubiquitinates hFis1, DLP1, and Mfn2. Mitochondrial elongation is caused by RNAi of MARCH5, showing that mitochondrial fission is prevented. MARCH5 may influence DLP1 trafficking, as abnormal DLP1 clustering has been found in cells expressing MARCH5 RING mutants [36]. MTP18 (Mdm33 in yeast) is located in the inner mitochondrial membrane and faces the intermembrane region [39]. MTP18 expression is activated by phosphatidylinositol (PI) 3-kinase signaling [40]. Mitochondrial fragmentation is caused by MTP18 overexpression, whereas mitochondrial elongation is caused by MTP18 RNAi. The known DLP1-mediated mechanism causes mitochondrial fragmentation when hFis1 is overexpressed. MTP18 knockdown, on the other hand, inhibited this hFis1-induced fragmentation, showing that hFis1-induced fission is dependent on MTP18 [39]. The mechanism through which MTP18 contributes to inner mitochondrial membrane fission is unknown.

Altered Metabolism Due to Dynamic Dysfunction

The etiology of AD has aroused heated debate. Surprisingly, the disease's pathological "dots" have begun to be connected through research, revealing the enormously complicated links that cause AD [41,42].
The extracellular coarse aggregate of amyloid-β (Aβ) containing senile plaques has been linked to disease initiation and progression [43,44]. The microtubuleassociated protein tau, on the other hand, is another protein identified as a key participant in AD since its hyperphosphorylated, aggregated fibrils hold neuronal space as neurofibrillary tangles (NFTs) in vulnerable areas of AD brains (i.e., the hippocampus and cortices) [45,46]. Although much about Alzheimer's disease etiology is still unknown, indications point to mitochondrial dynamics as a likely reason ( Figure 2). Neurons rely on mitochondrial ATP production to create axonal and synaptic membrane potentials and sustain ionic gradients. The oxidative phosphorylation (OXPHOS) activity is strictly regulated by the constant Ca 2+ levels in the mitochondrial matrix. Unfortunately, if mitochondrial metabolism becomes intoxicated due to excessive Ca 2+ conversion from the ER or increased cytosolic Ca 2+ , this can increase oxidative stress, obstruct mitochondrial membrane permeabilization, and reduce ATP production, all of which can lead to cell death [47]. Extensive research has accumulated evidence that Ca 2+ dyshomeostasis and mitochondrial dysfunction occur in neurons in the AD brain. As the cell's energetic and energy centers, mitochondria are crucial to cellular proliferation, and defects in mitochondrial dynamics frequently precede many of AD's characteristic diseases [48,49]. Reduced ATP levels and mitochondrial function with increased ROS production are AD signs [48,49]. Mitochondrial damage can also be seen in peripheral tissues of Alzheimer's patients. Likewise, mitochondrial dynamics and bioenergetics are disrupted in fibroblasts from Alzheimer's patients and Ca 2+ dyshomeostasis [50,51]. Other mitochondrial Ca 2+ -related disturbances, such as mitochondrial Ca 2+ buffering, mitochondrial dynamics (trafficking, fission, and fusion), and mitophagy, are thought to be altered in AD and are caused by mitochondrial Ca 2+ dyshomeostasis before cell death [21,[52][53][54][55][56][57][58]. Unlike fission/fusion processes, mitochondrial activity is also controlled by the location of organelles inside the cell. The cytoskeleton and associated proteins regulate mitochondrial distribution, ensuring that locations with high metabolic demands receive the most [59,60]. Interestingly, mitochondrial dynamics impact mitochondrial distribution: both fission alleles with extended mitochondria (such as mutants of DLP-1) and fusion mutants with small, spherical mitochondria (such as OPA-1 mutants) produce alterations in mitochondrial distribution across the cell [61]. More importantly, because neurons have far higher energy demands than any other cell type (such as the extremely energy-intensive operation of ion channels and pumps, signal transduction, axonal/ dendritic transit of signal molecules and vesicles, and so on) and rely on mitochondrial integrity [62,63], this balance is critical for brain function. Any disruptions in mitochondrial function would predispose the neuron to various negative consequences, including the neurodegeneration found in Alzheimer's disease [64] (Figure 3). Late-onset sporadic AD's direct cause(s) is unknown, although age is a significant risk factor [65]. Ca 2+ dyshomeostasis and mitochondrial dysfunction are found in neurons in Alzheimer's disease, according to years of research. 
Discovering targets to sustain Ca2+ homeostasis and mitochondrial health is therefore essential and a promising approach for avoiding or lowering the pathology that underpins Alzheimer's disease.

3.1. Mitochondria-Mediated Oxidative Stress. Current research indicates that oxidative stress is a principal cause of the pathology linked with Alzheimer's disease, and there is substantial evidence to support this claim. An oxidative damage biomarker, 8-hydroxyguanosine (8-OHG), appears decades before Aβ senile plaques appear. At the same time, AβPP-mutant Tg2576 transgenic mice demonstrate oxidative damage before Aβ aggregation [66][67][68][69][70][71][72][73][74]. When Aβ is generated, it is oxidized, and the resulting tyrosine cross-links render the peptide insoluble and hence more likely to accumulate [75]. Aβ in its aggregated state is thus damaging to the cell because it increases oxidative stress [76] and damages mitochondrial function. The latter mechanism was revealed in M17 cells overexpressing mutant AβPP (these cells had a four-fold lower rate of mitochondrial fusion [77]); investigations demonstrate that Aβ overexpression creates mitochondrial fragmentation, malfunction, excessive oxidative stress, and decreased ATP synthesis [78]. In vitro, cytochrome oxidase, pyruvate dehydrogenase, and ketoglutarate dehydrogenase have been shown to produce reactive oxygen species (ROS), hydrogen peroxide (H2O2), and superoxide radicals (O2−), which are primarily produced in the electron transport chain [79]. Moreover, the reactions that occur inside a neuron guarantee the regular production of a high number of ROS: the average nonneuronal cell uses 10^13 O2 molecules per day in metabolic processes, and around 10^11 free radicals are produced [79]. Synaptic function is harmed when Aβ accumulates at synaptic terminals. Aβ can also induce damage to synaptic mitochondria. Mitochondrial destruction in synaptic mitochondria is anticipated to be larger than in cell-body mitochondria [80]. Damaged synaptic mitochondria may not provide enough energy to synapses, resulting in decreased neurotransmission and, eventually, cognitive failure. Numerous recent studies have discovered that Aβ binds to mitochondrial proteins such as the mitochondrial fission protein Drp1 [81], the mitochondrial outer membrane protein VDAC [53], and the mitochondrial matrix proteins Aβ-binding alcohol dehydrogenase and CypD [82], and that these anomalous interactions cause severe free radical production, mitochondrial fragmentation, and impaired mitochondrial biogenesis, inevitably leading to mitochondrial dysfunction. Furthermore, Aβ enhances calcium's ability to enter the cell. The mitochondria, which are among the cell's calcium regulators, then take in the calcium [83]. When high quantities of Aβ interact with VDAC1 and prevent mitochondrial protein transport, mitochondria become dysfunctional. More free radicals are produced due to this malfunction [84]. The creation and clearance of free radicals fall out of balance, resulting in oxidative stress, characterized by the synthesis of 8-hydroxyguanosine [61,85]. Furthermore, due to short-term exposure to ROS, mtDNA mutation causes mitochondrial anomalies in fission and fusion [86]. Mfn1/2-null cells and OPA-1-deficient cells have many fragmented mitochondria and markedly reduced endogenous and uncoupled respiration rates. The latter events are caused by a decrease in electron transport rates in complexes I, III, and IV [13,87].
Inhibition of DLP-1 results in reduced ATP synthesis due to inefficient oxidative phosphorylation and decreased complex IV activity [88,89]. Fission/fusion imbalance caused by mtDNA mutations can affect calcium homeostasis. Both abnormal fragmentation and lengthening of mitochondria result in increased ROS production in the cell and excessive iron deposition [90][91][92][93][94][95].

Low Mitochondrial Bioenergetic Performance. In Alzheimer's disease, there is a widespread shift away from glycolytic energy synthesis toward the use of ketone bodies, an alternative fuel. A 45 percent decrease in cerebral glucose utilization in Alzheimer's brains is accompanied by a decrease in glycolytic enzyme expression and in the activity of the pyruvate dehydrogenase (PDH) complex [96][97][98]. The "cybrid model" of Alzheimer's disease has revealed that mitochondrial malfunction plays a role in the disease's progression [99]. In AD cybrid cells, COX activity is reduced, the mitochondrial membrane potential is reduced, mitochondrial mobility and motility are reduced, oxidative stress is enhanced, caspase-3 is overactivated, and Aβ production is raised [100][101][102]. Furthermore, offspring of women with Alzheimer's disease have a higher risk of developing the disease, implying human maternal mitochondrial inheritance. The female 3xTg-AD mouse brain was found to display numerous signs of mitochondrial failure reported in human Alzheimer's disease patients, including lower mitochondrial bioenergetics, elevated oxidative stress, and increased mitochondrial amyloid deposition, according to a study [105,106]. PDH and COX expression and activity were dramatically reduced in 3xTg-AD mitochondria. In primary neurons from 3xTg-AD mice, the switch from oxidative phosphorylation to lactic acid-producing glycolysis revealed decreased mitochondrial efficiency [107]. This unstable metabolic state, along with oxidative stress caused by decreased ETC efficiency, results in poor bioenergetics, which compromises brain function and exacerbates Alzheimer's disease [108][109][110]. PDH is a rate-limiting enzyme in the mitochondria that converts pyruvate from glycolysis to acetyl-CoA, which then condenses with oxaloacetate to commence the TCA cycle and generate energy. PDH deficiency induces pyruvate buildup, which increases extracellular acidification and speeds up anaerobic metabolism to lactic acid, as seen by an increase in the extracellular acidification rate (ECAR) in 3xTg-AD neurons [111]. In 3xTg-AD neurons, however, reduced PDH activity results in a shortfall of acetyl-CoA and, as a result, lowers OXPHOS activity, as seen by a drop in the oxygen consumption rate (OCR) [112]. These findings are backed up by previous PET metabolic analyses in people at high risk of Alzheimer's disease (AD), with mild cognitive impairment (MCI), or with late-onset AD, in which impaired glucose uptake and consumption were identified as one of the earliest symptoms of AD, occurring long before the disease manifested [113][114][115][116][117][118][119].

Mechanism of Improper Transportation and Autophagy. One of the harmful alterations in major neurodegenerative disorders is altered mitochondrial transport (MT) [120][121][122][123]. The kinesin-1 family (KIF5) is the primary motor that facilitates mitochondrial transport [121,124]. The KIF5 heavy chain (KHC) has an ATPase-active N-terminal motor domain and a cargo-binding C-terminal tail motif. KIF5 motors can attach to mitochondria owing to adaptor proteins like Milton from Drosophila.
Milton binds to the C-terminal tail domain of KIF5 and to the mitochondrial OM receptor Miro, acting as a KIF5 motor adapter [125,126]. In Drosophila, the Milton mutant consistently reduces mitochondrial transport into synapses.

Figure 3: Mitochondrial dysfunction in Alzheimer's disease. Mitochondrial abnormalities linked to increased oxidative stress have long been thought to be a factor in the cell death and deterioration seen in Alzheimer's disease. However, as Alzheimer's disease progresses, mitochondria undergo significant changes that result in decreased ATP production and increased production of reactive oxygen species (ROS). Mitochondria also lose their calcium (Ca2+) buffering capacity, which can set off a harmful chain reaction within the cell. When apoptosis is induced, mitochondrial dysfunction releases many proapoptotic molecules. These factors either activate apoptosis directly or trigger it indirectly by combining with cytosolic factors to generate the apoptosome. Ultimately, proapoptotic mitochondrial proteins translocate into the nucleus to fragment deoxyribonucleic acid (DNA). Overall, these mitochondrial changes are associated with cell death and deterioration.

Milton orthologues Trak1 and Trak2 have been identified in mammals [127][128][129]. In cultured hippocampal neurons, Trak2 overexpression improves axonal mitochondrial motility [120]. Trak1 depletion or mutant expression, on the other hand, causes mitochondrial transport across axons to decrease [130]. According to a recent study, mammalian Trak1 and Trak2 have one N-terminal KIF5B binding site and two dynein/dynactin binding sites, one at the N-terminus and one at the C-terminus [131]. Mutation of the Miro gene inhibits anterograde mitochondrial transit and reduces the number of mitochondria in peripheral synaptic terminals in Drosophila [132]. Miro1 and Miro2 are the two isoforms of Miro found in mammals. The Miro1-Trak2 adaptor complex modulates mitochondrial transport [128]. Dynein heavy chains (DHC) serve as the motor domain for force production, while the intermediate (DIC), light intermediate (DLIC), and light chains (DLC) function in cargo binding and motility modulation. Drosophila mitochondria are associated with dynein motors, and changes in DHC affect the speed and length of retrograde transit of axonal mitochondria [124]. Mitophagy is a critical cellular mechanism in mitochondrial quality control, since it is a specialized form of autophagy that eliminates defective mitochondria. It entails sequestering damaged mitochondria in autophagosomes that are subsequently degraded within lysosomes. According to new research, PINK1/Parkin-mediated mitophagy protects mitochondrial viability and efficiency [133][134][135]. In this kind of mitophagy, the gradual accumulation of PTEN-induced putative kinase protein 1 (PINK1) on the surface of injured mitochondria precedes Parkin translocation from the cytosol to the mitochondria. Other mitophagy mechanisms that are not dependent on PINK1/Parkin have also been discovered. For example, the BCL-2 homology 3- (BH3-) containing protein NIP3-like X (NIX, also known as BNIP3L), a mitochondrial OM protein, has been demonstrated to play a vital role in the removal of mitochondria in erythrocytes [136]. On isolation membranes, NIX has an amino-terminal LC3-interacting region (LIR) that connects to LC3 [137].
In erythroid cells, this allows NIX to function as a unique mitophagy receptor, physically binding the autophagy machinery to the mitochondrial surface. Mitophagy must be studied using a variety of complementing assays because it is such a dynamic system. These tests must be supplemented with the use of drugs that disperse mitochondrial membrane potential and a flux inhibitor to trap newly generated autophagosomes [138][139][140]. Distended patches with aberrant numbers of organelles (including mitochondria) proliferate in axonal degeneration in Alzheimer's patients [141]. PS1 mutations have been demonstrated to impact kinesin-1-based axonal transport by increasing GSK3 activity and phosphorylation of kinesin-1 light chains (KLC). This defect reduces the frequency of APP, synaptic vesicles, and mitochondria in the neuronal processes of hippocampal neurons and sciatic nerves of mutant PS1 knock-in mice [142]. When established neurons are exposed to A or ADDLs, their mitochondrial activity and frequency in axons substantially decrease [143][144][145]. In Drosophila, overexpression of Ab slowed bidirectional axonal mitochondria transmission and impoverished presynaptic mitochondria, resulting in presynaptic malfunction [146]. Reduced anterograde transit of axonal mitochondria, mitochondrial dysfunction, and synaptic deficit were all observed in developing neurons from APP Tg mice, all of which could be attributed to oligomeric Ab accumulation in mitochondria [147]. Induction of mitophagy is linked to changes in mitochondrial mobility. Mitophagy is complemented by diminished anterograde mitochondrial transit due to Parkinmediated Miro degradation [148][149][150][151]. According to a new analysis, Parkin-mediated mitophagy is substantially increased in AD neurons of murine models and human brains. As a result, the anterograde transport of axonal mitochondria is decreased in these neurons [147,152]. For neuronal homeostasis, bidirectional transfer of intracellular components between distal neurites and the cell soma is essential. Autophagosomes and endosomes that combine in the distal axon must be retrogradely delivered to the soma in neurons to fuse with lysosomes and digest their contents [153,154]. As a result, even minor abnormalities in autophagosome formation, maturation, or trafficking are expected to have disastrous effects on autophagic transit and neuronal homeostasis. Nevertheless, the proof is growing, suggesting autophagic processes are critical for brain health maintenance, especially in degenerative disorders [155]. Surplus or defective mitochondria can be preferentially removed in Saccharomyces cerevisiae, and Uth1 and Aup1 are implicated in this mechanism [156][157][158]. Kanki and Klionsky [159] recently discovered that ATG11, a gene previously thought to be required exclusively for competitive autophagy, is also required for mitophagy. Mitophagy is also inhibited even under acute famine situations if the carbon source makes mitochondria necessary for metabolism [159]. These findings show that mitochondrial disintegration is a carefully controlled process and that these organelles are mainly immune to generic autophagic breakdown. Several mitochondria grow and structurally disorganize as the brain ages, while lysosomes eventually amass the nondegradable polymer lipofuscin. 
The mitochondrial-lysosomal axis theory of aging was proposed by Terman and Brunk [154], under which mitochondrial turnover declines with age, resulting in increased oxidative stress, accrual of damaged organelles and lipofuscin, reduced ATP production, discharge of apoptotic factors, and, ultimately, cell death. Nixon et al. discovered autophagosomes and other prelysosomal autophagic vacuoles in AD brains [160], especially within neuritic pathways. Autophagosomes, multivesicular bodies, multilamellar bodies, and cathepsin-containing autophagolysosomes were the most common organelles in dystrophic neurites. Autophagy was seen in the perikarya of afflicted neurons, especially in those with neurofibrillary pathology, which was linked to a reduction in mitochondria and other organelles [160]. As a result, it was discovered that autophagocytosis of mitochondria is common in Alzheimer's disease [161,162]. Overall, the findings are in line with a previous study that found a large increase in mtDNA in the neuronal cytoplasm and in vacuoles associated with lipofuscin in neurons with greater oxidative damage in Alzheimer's disease [68,163,164]. COX-1, which is a mitochondrial protein, was shown to be elevated in the cytosol and associated with mitochondria undergoing phagocytosis, as also reported previously [161]. Overall, these findings support the idea that susceptible AD neurons contain a high amount of mitochondrial degradation products, implying either a decreased proteolytic turnover rate or increased mitochondrial turnover by autophagy, resulting in accrual of mitochondrial degradation products. Recently, it was discovered that autophagic vacuoles contain AβPP and secretases, which contribute to the accumulation of Aβ. They are especially rich in γ-secretase enzymatic activity and γ-secretase complex components [165,166]. These findings show that accumulating autophagic vacuoles in dystrophic neurites may also promote the local synthesis of Aβ within plaques. The neuropil's widespread surge in autophagy may be a substantial source of Aβ overproduction in the AD brain. Breakdown and spillage from postlysosomal vesicles cause cytosolic acidification, other membrane and organelle damage, and eruptive degradation of the cytoplasm, all of which contribute to neuron demise.

Defective Mitophagy. Defective mitophagy leads to the accumulation of dysfunctional mitoplasts, which leads to the progression of AD. To ensure effective mitophagy, the autophagosome containing the mitoplast must fuse with a lysosome, and the lysosome must digest these organelles. Recently, it has been found that neurons of patients affected with AD exhibit abnormal accumulation of autophagosomal vacuoles. On further investigation, it was observed that this accumulation of autophagosomal vacuoles occurs due to lysosomal dysfunction and defective fusion between autophagosome and lysosome [167]. The sirtuins (SIRTs) are enzymes involved in the prevention of various age-related diseases, including AD. Of the seven isoforms of SIRTs, SIRT-I and SIRT-III play an important role in the mitophagy of defective mitoplasts through deacetylation/activation of prominent mitophagic proteins. Recent studies have found significantly reduced levels of SIRT-I and SIRT-III (which leads to the accumulation of defective mitoplasts) in cortical regions of the brain of patients with AD [168].
Furthermore, in recent times, NAD + has been found to regulate the delicate balance between biogenesis and mitophagy of mitochondrion. Decreases in NAD + lead to the accumulation of defective mitoplasts. In AD, decreased levels of NAD + have been attributed to the activation of several NAD + consuming classes of enzymes. Furthermore, NAD + constitutes a cofactor of several enzymes involved in the protection of DNA damage; for instance, poly(ADP-ribose) polymerase 1 (PARP1), cyclic ADP ribose hydrolase (CD38), and CD157 classes of enzymes are actively involved in repairing mito DNA damage. Interestingly, any stressful condition in the mitochondrial microenvironment leads to consumption of NAD + as this gets utilized for the synthesis of the above-mentioned enzymes, which ultimately leads to lowering of SIRT-I and SIRT-III activity henceforth promoting amyloidogenesis (180, 181). Therapeutic Strategies for the Improvement of Mitochondrial Dynamics Although numerous elements in disease development have been identified, the intricacies that underpin cognitive impairment and neurodegenerative aspects of Alzheimer's disease are still unknown, of which mitochondrial dysfunction appears to be particularly essential in the development and pathophysiology of AD. Henceforth, researchers worldwide have postulated that effective targeting of these mitochondrial dysfunctions can provide a window in developing novel potential therapeutic strategies for controlling and treating AD and closely associated neurodegenerative diseases. Indeed, researchers have focused on various therapeutic approaches that revolve around mitochondrial repair and effective targeting of mitochondrial antioxidant pathways to check the neurodegenerative cascade. In addition to this, more recently, based on the results obtained from postmortem examination of neurons obtained from geriatric AD patients and animal models, various therapeutic protocols were proposed by clinicians and researchers, which include immunotherapy [169][170][171], cholinergic therapy [172][173][174], anti-inflammatory therapy [175][176][177], antioxidant therapy [178,179], cell cycle therapy [180,181], and hormonal therapy [180,181]. Until a few years ago, the primary hurdle for developing an effective therapy for AD was an inability to increase redox potential inside mitochondria. Recently, various studies have reported potential breakthroughs in ameliorating mitochondria's antioxidant potential by using antioxidants that specifically target mitochondrial free radicals. These molecules offer several advantages, including preferential compartmentalization inside mitochondria, rapid neutralization of free radicals, and recycling of these compounds with no significant mitochondrial toxicity reported so far. However, research is still developing, and further studies are needed to see whether these compounds can be used in geriatric neurodegenerative diseases like AD. Targeting Oxidative Stress by Antioxidants to Improve the Mitochondrial Dynamics in AD. Fortunately, selective antioxidant therapies demonstrate promise in ameliorating cognitive ability and restoring mitochondrial functioning. Henceforth, further investigation can provide a potential therapeutic protocol for the treatment of neurodegenerative diseases. 
Among the various therapeutic strategies, mitochondrial antioxidant therapy has been found to have a significant effect in ameliorating mitochondrial dysfunction, resulting in restoration of mitochondrial dynamics without any appreciable adverse effects [182,183]. Furthermore, encouraging results have been reported using selective mitochondrial antioxidant treatment for restoring and rescuing mitochondria [183]. These studies used antioxidants, namely acetyl-L-carnitine (ALCAR) and R-α-lipoic acid (LA), in geriatric rats [184][185][186]. These findings were further supported by significant restoration of the function of hippocampal neurons. These neurons showed a smaller number of giant mitochondria when compared with the age-matched control group. On electron microscopic examination, mitochondria showed fewer ultrastructural abnormalities and lacked cristae rupture in the ALCAR/LA group. ALCAR/LA dietary supplementation in young rats indicated that more pronounced benefits could be achieved if these antioxidants are supplemented at an early age and at early disease manifestation rather than later in life. ALCAR/LA appears to be a potential therapeutic intervention for ameliorating cognitive dysfunction in AD. Still, there is an urgent need to further investigate this therapeutic regimen in controlled randomized clinical trials before these drugs become a clinical reality. Furthermore, the therapeutic role of antioxidants in ameliorating cognitive dysfunction in AD has been observed in laboratory animal models of AD. In one study, vitamin E supplementation resulted in a significant reduction of β-amyloid levels and restored cognitive function in the vitamin E-supplemented group compared to the control group [179]. Similarly, in another study, a tau pathology transgenic mouse model was used, and it was concluded that vitamin E administration could dissolve tau aggregates in the brain [178]. These studies suggest a direct effect of vitamin E on pathogenetic hotspots of AD. Furthermore, different antioxidant agents found to have protective action against AD are tabulated in Table 1.

Mitochondria-Targeted Antioxidants. The mitochondrion is called the cell's powerhouse, as the inner membrane of mitochondria generates the driving force for ATP synthesis [207]. Proton pumping by the electron transport chain generates a negative (inside) potential across the inner mitochondrial membrane. Researchers have used this negative potential to transfer lipophilic cations across the mitochondrial membrane and accumulate them within the mitochondrial matrix. Murphy and colleagues used this biological process to move reducing agents into the inner compartment of mitochondria, thereby developing various mitochondria-targeted antioxidants, which include MitoQ (a mitochondria-targeted derivative of ubiquinone), MitoVitE (a mitochondria-targeted derivative of vitamin E) [223,224], and MitoPBN (a derivative of α-phenyl-N-tert-butyl nitrone) [186]. These active molecules are attached to triphenylphosphonium cations as vehicles, which help translocate them across the mitochondrial lipid bilayer [184][185][186]. MitoQ (mitochondrially targeted ubiquinone): MitoQ consists of two moieties, oxidized mitoquinone and reduced mitoquinol; this adduct is attached to a triphenylphosphonium cation [184].
After internalization into the mitochondrial membrane, the compound gets lodged in the inner membrane of mitochondria where it acts as an active donator of the hydrogen atom and henceforth prevents lipid peroxidation of mitochondrial lipid bilayer [225][226][227]. During this process, MitoQ moieties are transformed into semiubiquinone radicals, which disassociate into ubiquinone and ubiquinol [228]. Subsequently, ubiquinone is recycled back into ubiquinol, hence restoring its antioxidant function. The compound is found to accumulate selectively in mitochondria where it offers its role as a potent recyclable antioxidant, which eventually protects against AD (neuronal damage) [229]. Researchers across the globe have postulated that natural antioxidants are well tolerated in an AD mouse model and AD patients [178,186,228]. So, it can be postulated that MitoQ can offer a potential therapeutic option for treating AD in a mouse model and human patients. Amino acid and peptide-based mitochondrial-targeted antioxidants: these compounds are grouped under mitochondrially targeted antioxidants due to structural and conformational properties. The following amino acid and peptide-based mitochondrial-targeted antioxidants were developed to control oxidative damage induced by free radicals inside mitochondria (1) [207]. These compounds are highly permeable to the cell membrane and mitochondrial membrane. Studies have found 1000 folds higher levels of these compounds in the inner mitochondrial membrane than in the cytoplasm. The therapeutic action of these compounds has been attributed to reduced ROS generation, extensive mitochondrial uptake, inhibition of mitochondrial swallowing, mitochondrial depolarization, and prevented cytochrome C release. Hence, these compounds offer broad therapeutic potential for developing age-related diseases like AD [208]. Maintaining the Mitochondrial Bioenergetic Performance. In AD, mitochondrial function and architecture are altered [138,231,232], hence searching for those classes of compound that can restore functional and structural capabilities of mitochondria began in the latter half of the 19 th century. Researchers have identified various compounds that can present an effective therapy for AD treatment. This section will discuss multiple therapeutic regimens that help maintain mitochondrial bioenergetic performance. Cellular Therapy. Cellular therapy is being evaluated in laboratory animal models of AD [233,234]. Neurons and other associated structures of nervous tissue have been successfully generated by using embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs), followed by successful transplantation to laboratory animal models for treatment of neurodegenerative diseases, including AD [235]. The transplantation resulted in a significant increase in mtDNA (mitochondrial DNA), mRNA (messenger RNA), and associated proteins required for mitochondrial biogenesis and mitochondrial fission. Furthermore, this therapy protocol resulted in significant restoration of mitochondrial functioning and increased levels of morphologically well-structured mitochondria in nervous tissue, which was reflected in the amelioration of cognitive functioning and clinical improvement in laboratory animal models [236]. 4.2.2. Targeting the Inflammasome. 
Among the various inflammasomes, the NLRP3 inflammasome plays a pivotal role in the deposition of Aβ aggregates and hence has an important role in the pathogenesis of AD [237][238][239].

Table 1: Antioxidant agents found to have protective action against AD.
• Creatine. Preclinical studies: creatine supplementation was reported to restore motor neuron activity, ameliorate mitochondrial dysfunction, and modulate amyloid-beta-induced cell death [190]. Clinical studies: reduced blood levels of 8-hydroxy-2-deoxyguanosine, a biomarker of oxidative damage; amelioration of oxidative stress thus restores neurological functions in various neurodegenerative diseases [191].
• Idebenone. Preclinical studies: idebenone supplementation inhibits amyloid-beta-induced neurotoxicity [192]. Clinical studies: idebenone was reported to improve cognitive and molecular scores in Alzheimer's disease [193].
• Latrepirdine. Preclinical studies: latrepirdine acts on various pathways that induce mitochondrial dysfunction [194]. Clinical studies: latrepirdine improves the clinical score of patients with AD [195].
• Triterpenoids. Preclinical studies: triterpenoids activate the Nrf2/ARE signaling pathway, which helps protect neurons against various types of insults [196].
• MitoTEMPOL (4-hydroxy-2,2,6,6-tetramethylpiperidine-1-oxyl). Preclinical studies: acts on the mitochondrial antioxidant pathway and ameliorates oxidative damage, hence restoring mitochondrial function [197].
• SS (Szeto-Schiller) peptides. Preclinical studies: act on multiple pathways, for instance reducing mitochondrial ROS generation and mitochondrial swelling, and hence inhibit release of mitochondrial contents [198].
• Methylene blue. Preclinical studies: inhibits pathological pathways in AD [199].
• Curcumin. p25 transgenic mouse model: reduced oxidative damage and Aβ deposits [200].
• Melatonin. Tg2576 mice: decreases the levels of Aβ and protein nitration, hence increasing the lifespan of laboratory animals [201].
• Dismutase/catalase mimetics. AD transgenic mice: prevented cataracts in AD mice [202].
• Diets supplemented with vitamin E. AD patients: amelioration of AD symptoms [203].
• Combined supplementation of vitamin E and vitamin C. Elderly patients: therapeutic and prophylactic action against AD [204].
• Diet-based interventions. These types of diets result in DNA methylation and posttranslational modifications of histone proteins [215].
• Huperzine A. Clinical trials and experimental studies: acts as an acetylcholinesterase inhibitor (AChEI), reduces amyloid plaque production, and inhibits cell death by modulating neuronal iron content in animal models of AD [216,217].
• Ginkgo biloba. Clinical trials: slows progression and ameliorates mild cognitive impairment in AD [218].
• Cineole. Beta-amyloid-treated PC12 cells: reduces inflammation and oxidative stress [219].
• Coconut oil. Epidemiological studies: the postulated mechanism is the content of caprylic acid, which restores brain function [220].
• Fish oil. Epidemiological studies: high content of apolipoprotein E, which is neuroprotective in nature [221].
• Thymoquinone. Beta-amyloid-treated PC12 cells: inhibition of mitochondrial dysfunction and oxidative stress [222].
• Organic selenium. Experimental studies: acts as an antioxidant and helps in regeneration of neurons [223].
Based on these assumptions, researchers conducted several studies using micro inhibitors of NLRP3 inflammasome and found that these molecules ameliorate AD pathology [240]. To support the inflammasome' function in the AD pathogenesis, researchers postulated that brain-penetrant anti-inflammatory neurotrophic drug, CAD-31, which ameliorates cognitive abilities, regenerates synaptic loss, helps in the removal of Aβ aggregates, and restores bioenergetics of mitochondria [138,[231][232][233][234][235][236][237][238][239][240][241][242][243][244]. Targeting the Proteasome. A plethora of studies have found an ameliorative role of proteasome activation in neurodegenerative diseases like AD [245][246][247][248]. Different strategies are used for proteasome activation, an efficient pathway involved in the genesis of protein kinase A and cyclic AMP (cAMP), which increases proteasome function, subsequently decreasing tau aggregate levels and improving cognitive function functioning [247]. In addition to this, other strategies under investigation for enhancing the activity of proteasome functioning include inhibition of USP14 that inhibits processing of proteasome employing pyrazolone and inhibition of the deubiquitinating enzyme that causes proteolysis of the proteasome [249,250]. Targeting mtDNA. With the progression of age, there occurs accumulation of mutations in the mitochondrion [251] which pose a higher risk for the pathogenesis of AD [252].To slow down the pace of these mutations and correct the mutations, various molecular techniques are proposed, which include (i) the clustered regularly interspaced short palindromic repeats (CRISPR)/associated protein 9 (CRISPR/ Cas9) technology is presently being used to correct the deleterious mutations in the mitochondrial genome [253]. The technique uses mitoCas9, explicitly targeting the mitochondrial genome without affecting genomic DNA. (ii) Transcription activator-like effector nuclease (TALEN) is a novel technique that specifically targets mutated mtDNA and effectively causes cleavage of these mutated fragments, which results in a reduction in levels of potential pathogenic mtDNAs, hence retarding the progression of diseases like AD [254]. The technique has been successfully used to correct mtDNA mutations in respiratory diseases and correct enzyme dynamics involved in oxidative phosphorylation [255]. Targeting Mitochondrial Cholesterol. Deposition of cholesterol in mitochondrial membrane results in reduced flexibility and fluidity of membrane structure [256]. To support this proposition, studies have found a direct relationship between neurodegenerative diseases and changes in mitochondrial lipid composition [257]. Recently, studies in the APP23 AD mouse model have found that cholesterol causes inhibition of cytochrome P450 46A1 (CYP46A1), henceforth resulting in Aβ peptide accumulation in the brain [258,259] (Figure 4). Biologics. Manipulating mitochondrion function via selective genomic expression of mtDNA offers crucial therapeutic hope for patients suffering from neurodegenerative diseases like AD. 
To increase ATP levels from the mitochondrion, mitochondrial transcription factor A (TFAM) has been engineered in such a way that the engineered molecule passes readily across the cellular and mitochondrial membranes and drives selective genomic expression, which reduces levels of Aβ in 3xTg-AD mice, increases levels of transthyretin (a potent inhibitor of Aβ aggregation), and reduces levels of mitochondrial mutations [260]. Laboratory animal studies have revealed that recombinant human mitochondrial transcription factor A (rhTFAM) restores cognitive function in a laboratory model of AD compared to the control group [261][262][263].

Improvement in Proper Transportation and Mitophagy in AD. Of particular interest in AD, extensive axonal degeneration occurs due to the accumulation of defective mitochondria. The scientific community is trying to explore how and why altered mitochondrial trafficking contributes to the pathological course of AD [264]. In neurons, most of the mitochondria are sessile; only a small portion of mitochondria move in the anterograde and retrograde directions as per the energy requirements [265]. In neurons, two classes of motor proteins drive the directional movement of mitochondria: kinesin and dynein. Kinesin mediates anterograde movement, while dynein mediates retrograde movement of mitochondria [123]. Most therapeutic regimens have targeted these two pathways to improve the proper transportation of mitochondria. To ensure smooth mobility of mitochondria inside neurons, the energy source (ATP) and regulators of mitochondrial transport should be present in the vicinity of the mitochondrion [266]. Recent studies have shown that pharmacological intervention to increase the cellular energy sensor AMP-activated protein kinase (AMPK) increases the anterograde movement of mitochondria along the axon and increases the branching of axons [267]. Furthermore, hypoxia in experimental animal studies was found to enhance mitochondrial transportation through induction of the hypoxia-upregulated mitochondrial movement regulator (HUMMR) [268,269]. HUMMR interacts with adaptor and docking complexes and enhances anterograde transportation of mitochondria. These studies further added that genetic ablation of HUMMR leads to an increased percentage of mitochondria moving in the retrograde direction and, subsequently, a smaller number of mitochondria moving in the anterograde direction [270]. Additionally, when neurons were subjected to hypoxic and hypoglycemic microenvironmental conditions, these neurons took up mitochondria from neighboring astrocytes and released their defective mitochondria, which restored their bioenergetic functions [271]. These studies point to a therapeutic role of hypoxia and of genetic manipulation of the HUMMR gene for patients suffering from impaired mitochondrial transport in AD. Similarly, parenteral administration of mitochondria harvested from the liver of young mice resulted in the restoration of bioenergetic functions, amelioration of oxidative damage, and restoration of cognitive and motor function in aged mice [272]. The mitochondria-targeted antioxidant SS31 restored mitochondrial mobility in a rat model of AD. The SS31 peptide is directed towards the inner mitochondrial membrane due to the attraction between the positive charge of the SS31 molecule and the negative charge of cardiolipin molecules located on the inner mitochondrial membrane.
In addition to this, SS31 molecule acts as an efficient scavenger of ROS and inhibits the opening of mitochondrial pores, hence swelling of mitochondria. These studies suggest that the SS31 molecule should be studied as a potential drug to treat AD [270]. Furthermore, the antioxidant SS31 peptide reduced levels of mitochondrial fission proteins, increased mitophagy [124], and restored mitochondrial trafficking deficit [273]. Therefore, drugs that promote mitophagy help regenerate dysfunctional mitochondria [274]. Various rate-limiting steps in mitophagy have been identified; for instance, PINKI (protein kinase 1) is a critical molecule in the mitophagy pathway [275]. Henceforth, those drugs that accelerate these mitophagy pathways appear promising for many neurodegenerative diseases, including AD. To exemplify, autophagy inducers like rapamycin help in the prevention of mitochondrial fission in the rat model of AD [276]. Similarly, studies have found that nuclear receptors peroxisome proliferator-activated receptors gamma (PPARγ) and PGC1-alpha play a pivotal role in mitochondrial biogenesis [277]. Interestingly, both PPARγ and PGC1-alpha are significantly reduced in AD; hence, drugs that cause the promotion of mitochondrial biogenesis through activation of PPARγ and PGC1-alpha can emerge as potential therapeutic targets for the treatment of mitochondrial dysfunctions in AD [276]. For instance, the use of drugs like thiazolidinediones that cause activation of these molecules has improved cognitive activity in AD mice and clinical cases with mild degrees of AD [278] (Table 2). More recently, researchers have devised a novel technology of mitochondrial transplantation as an effective thera-peutic regimen to restore cellular function in various animal models of human diseases. The technique was first reported by Katrangi and colleagues when coincubated xenogenic mitochondria derived from mice with human cells devoid of mitochondria [292]. They found that human cells incorporated xenogenic mitochondria, restoring aerobic respiration. However, recently, this strategy has been employed as a therapeutic option for treating different neurodegenerative diseases in a wide range of experimental animals. For instance, mitochondria labeled with the marker (green fluorescent protein) obtained from leg muscle were injected with a damaged spinal cord in an animal model. After two weeks, these mitochondria were found in nervous tissue and motor neurons, producing some neuroprotective effects [293]. Although the current strategy has been translated to clinical cases for the treatment of cardiac and neurodegenerative diseases; however, the effectiveness of the current strategy in age-related neurodegenerative diseases is in its early stages, and to standardize this strategy, there is an urgent need for more studies to evaluate the neuroprotective role of mitochondria transplantation and associated risk involved. Mitochondria isolated from patients with AD were found to have altered calcium homeostasis, which could cause the opening of mitochondrial permeability transition pores (mPTP) [294]. Altered calcium homeostasis causes an elevation in levels of CypD, which serves as a structural element for the synthesis of mPTP and triggers the opening of mPTP. Studies in the knockout model of the CypD gene showed reduced permeability of mPTP and higher efficiency of mitochondria to tolerate calcium imbalance [295]. 
Furthermore, studies have found that inhibition of CypD improves mitochondrial function; hence, inhibitors of CypD can serve as potential drugs for the prevention and treatment of AD [296] (Figure 4: various means can be employed to maintain mitochondrial bioenergetic performance, including cellular therapy and the targeting of inflammasomes, the proteasome, mitochondrial cholesterol, and mtDNA). Furthermore, laboratory animal models of AD treated with CsA showed improved cognitive function and motor activity, attributed to an enhanced mitochondrial transmembrane potential and activation of superoxide dismutase [297]. In this realm, researchers have found compounds that act on several pathways of mitochondrial dysfunction. For instance, in response to free radicals, nuclear factor E2-related factor 2 (Nrf2) translocates from the cytoplasm into the nucleus, increasing the expression of genes associated with antioxidant defense; Nrf2 also induces several beneficial modifications in mitochondrial architecture and dynamics that are of particular interest for the proper functioning of mitochondria under hostile conditions [298]. Of particular interest, the dietary molecule sulforaphane (SFN) acts as a potent activator of Nrf2 and could be developed as a dietary nutraceutical against AD [299]. Calorie Restriction (CR). CR leads to de novo synthesis of efficient mitochondria with a favorable ATP/ROS ratio, diminishing free radical production per unit of ATP generated. The decrease in free radical levels under CR occurs through activation of sirtuin-3-dependent superoxide dismutase 2 [300][301][302]. In addition, CR promotes mitophagy of defective mitochondria [303]. Beyond mitophagy, CR promotes the synthesis of new mitochondria and restores the function of various genes whose expression is lost with aging; of special interest, these genes are primarily associated with mitochondrial biogenesis. If defective mitochondria are not removed by mitophagy, they accumulate and promote the accumulation of Aβ aggregates [304]. In several experimental studies, CR was found to regulate or modulate inflammatory pathways; for instance, CR normalized concentrations of proinflammatory mediators (TNFα and IL-6) in geriatric mice to youthful profile levels [305]. Fasting exerts its beneficial action through various pathways; for instance, an elegant study published recently found that CR results in the accumulation of ketone bodies, which subsequently downregulate inflammatory pathways by blocking inflammasomes and the production of cytokines from monocytes. In the context of neurological diseases, CR has demonstrated important immune-inflammatory modulatory actions; in laboratory animal studies, CR modulated microglial activation and restored cognitive function in an AD rat model [306,307]. (1) Calorie Restriction Mimetics. Given the health-promoting role of CR in AD, a vast number of strategies are currently under investigation to generate and characterize new compounds, collectively called CR mimetics. These compounds simulate the action of CR on various pathways and biochemical reactions without harmful effects.
(Table 2 excerpt: kaempferol and rhapontigenin enhanced the survival of glutamatergic and cholinergic neurons, improved animal memory, and reversed tau and amyloid pathogenesis [291].) (a) NAD+ Precursors. NAD+ precursor supplementation downregulates inflammatory pathways and promotes phagocytosis of Aβ aggregates [308,309], thereby inhibiting AD progression. Interestingly, NAD+ levels decrease with age, which causes hyperactivation of the abovementioned pathways and suggests one potential pathogenic route for the progression of AD [310,311]. Recently, results from preclinical studies have proposed a novel strategy called "turning up the NAD+-mitophagy axis" [167]. Several therapeutic strategies have been employed to this end; one involves restoring the activity of SIRT3 and SIRT1 by directly increasing NAD+ levels. In preclinical studies using 3xTgAD mice, supplementation with nicotinamide, a precursor of NAD+, resulted in a significant reduction in amyloid plaques. Moreover, treatment of APP mutant transgenic mice with nicotinamide riboside turned up the NAD+-mitophagy axis. These preliminary studies support the hypothesis that increasing NAD+ levels can be beneficial for patients with AD. Although the mechanism governing the amelioration of AD pathology following supplementation with NAD+ or its precursors remains poorly understood, researchers have formulated some tentative ideas. The most widely accepted hypothesis postulates that increased NAD+ levels promote microglia-dependent elimination of extracellular Aβ plaques and simultaneously inhibit the NLRP3 inflammasome [168]. (b) Resveratrol. Encouraging results with resveratrol have been observed against neuroinflammation and neuroinflammatory diseases, mediated through AMPK and Sirt1 activation. In vitro coculture experiments have found that resveratrol quiets microglia upon exposure to Aβ aggregates, inhibiting the progression of neuroinflammation and thereby protecting neurons from death [312,313]. (c) Metformin. Metformin is a standard drug used to treat diabetes mellitus; cumulative evidence suggests that it acts on different biological pathways involved in the aging process. Hence, it can be postulated that the drug may be beneficial against geriatric diseases like AD [314,315]. In particular, metformin has been found to restrict the recruitment of immune cells and to downregulate inflammatory pathways in the central nervous system [316]. The therapeutic action of metformin in AD is still controversial; however, it reduces microglia and astrocyte levels, downregulates NF-κB signaling, inhibits phosphorylation of tau protein, and effectively clears the toxic proteins that accumulate in AD [317,318]. (d) Spermidine. Of particular interest, spermidine helps renew mitochondrial function by promoting mitophagy of old and damaged mitochondria and the biogenesis of new, efficient mitochondria [319,320]. Spermidine exerts an anti-inflammatory action through epigenetic inhibition of lymphocyte migration; the mechanism involves hypermethylation of the Itgal gene, which is involved in the biosynthesis of the adhesion molecule LFA-1 [321]. Spermidine induces differential expression of cytokines, enhancing production of anti-inflammatory cytokines and reducing proinflammatory cytokines [322].
Hence, it can be proposed that spermidine exerts its effect through differential expression of genes and mediators involved in the neuroinflammatory cascade. Different Kinds of Diets. Several published and unpublished studies have established a relationship between diet and AD [323]. Various dietary components, such as polyunsaturated fatty acids, vitamins, the Mediterranean diet, constituents of fruits and vegetables, and active principles derived from medicinal plants such as curcumin, have been reported to have both prophylactic and therapeutic activity against AD [324][325][326]. (1) Mediterranean Diet. People living around the Mediterranean Sea consume large quantities of fresh fruits, fresh vegetables, and grains, and use olive oil as their major source of fat [327,328]. Both epidemiological and clinical studies have found a relatively lower incidence of cognitive decline and AD in people living around the Mediterranean Sea [210]. On further investigation, researchers have attributed this to the high content of polyphenols such as flavonols and resveratrol, and of omega-3 fatty acids, in the Mediterranean diet, which improves the cognitive functioning of the geriatric population and thereby offers protection against neurodegenerative diseases like AD [211,329,330]. (2) Ketogenic Diet. Preclinical and clinical studies have found a region-specific decrease in glucose utilization in patients with AD, while ketone body utilization remains unaffected [213]. Supplementation of ketogenic components in the diet augments the supply of alternative fuel to the brain [331]. Although the mechanism underlying the improvement of cognitive function in AD patients is yet to be established, researchers have postulated that the inclusion of a ketogenic diet normalizes energy balance in AD patients [332]. Physical Exercises. Several preclinical and clinical studies have reported significant effects of various nonpharmacological interventions in ameliorating cognitive symptoms in AD patients [330][331][332][333][334][335]. Continuous physical workouts generate free radicals, which activate protective mechanisms and mitochondrial functioning [336]. In animal model studies, exercise was found to reduce Aβ plaques in the hippocampal region of the brain [337]. Physical activity in AD animal models ameliorated oxidative stress and insulin resistance and reduced cholesterol levels [337]. Similarly, studies have found a pivotal role for exercise in the induction of vascularization, neurogenesis, synaptogenesis, and angiogenesis [338,339]. Added advantages are observed when exercise and CR are combined, restoring neurological and cognitive function and improving memory deficits [340]. However, some studies have reported no significant beneficial effects of exercise on cognitive function in diseases like depression and AD [339]. Furthermore, many randomized controlled trials (RCTs) have found some benefit of physical exercise in individuals with mild to moderate AD, while no clinical benefits were observed in severe and chronic AD [341,342]. Similarly, many studies have found that combined nonpharmacological interventions are more effective than a single intervention in ameliorating cognitive deficits in AD patients [343]. A pioneering RCT was conducted to investigate the effect of multicomponent cognitive intervention in AD patients with mild derangement of cognitive function.
The study reported no significant improvement in cognitive activity in the intervention group compared to the control group [344]. This was followed by another long-term (12-month) multicomponent cognitive interventional RCT in patients with moderate AD, which reported no change in cognitive function in the intervention group, while cognitive activity deteriorated in the control group [345]. From these studies, it can be postulated that multicomponent cognitive intervention has no significant effect in ameliorating cognitive deficits in clinical cases of mild to moderate AD. However, because some studies do support a role for exercise in the amelioration of cognitive deficits in AD, there is evidently a lack of consensus among researchers regarding the role of exercise in restoring cognitive function in AD patients [346,347]. Recently, a meta-analysis concluded that regular walking and cycling improve cognitive abilities in AD patients with mild to moderate impairment, and that strength training effectively restores motor functions, which indirectly reduces the risk of developing AD [348]. Furthermore, this meta-analysis reported preventive effects of physical exercise against AD. In addition, laboratory animal studies have found improvements in motor function and cognitive function, and a reduction in Aβ aggregates, in mice under regular physical exercise [349,350]. The same studies found that exercise increases blood flow to critical areas of the brain, which induces neurogenesis in the hippocampus. Following these results, many authors have proposed that physical exercise causes a reduction in cholesterol levels, restoration of insulin sensitivity, scavenging of free radicals, neurogenesis, and synaptogenesis, which indirectly promote mitophagy of defective mitochondria and induce de novo synthesis of new mitochondria. Based on these findings, there is a need for extensive studies involving larger patient cohorts, and for a standard protocol, to secure the benefits of physical exercise for patients with AD and other geriatric neurodegenerative diseases (Figure 5). Conclusion Although mitochondrial dysfunction is a typical indication of Alzheimer's disease, it is unclear whether the cellular systems that maintain mitochondrial integrity malfunction, aggravating mitochondrial pathology. Different levels of vigilance and preventive methods are used to reduce mitochondrial damage and efficiently destroy faulty mitochondria to maintain mitochondrial equilibrium. The form and function of mitochondria are regulated by mitochondrial fusion and fission, while mitochondrial transit maintains mitochondrial dispersion and transports old and damaged mitochondria from distal axons and synapses to the soma for lysosomal destruction.
(Figure 5: Therapeutic strategies for the improvement of mitochondrial dynamics, covering mitochondria-targeted antioxidants, hypoxic/hypoglycemic microenvironments, calorie restriction and its mimetics, mitophagy-promoting drugs and protocols, and physical exercise.) As the fundamental mechanisms of mitochondrial quality control, several critical properties of mitochondria work in tandem with mitophagy. According to the findings, mitochondrial viability and function are managed by mitochondrial fusion, fission, transport, and mitophagy, forming a complex, dynamic, and reciprocal interaction network. According to growing evidence, AD brains have disrupted mitochondrial dynamics and aberrant mitophagy, which may interfere with mitochondrial quality control directly or indirectly [351]. Further research into these processes might help us better understand mitochondrial malfunction in Alzheimer's disease. Given that some phenotypes can be improved by manipulating genes that regulate mitophagy, there is reason to believe that intervening in mitochondrial dynamics, motility, and mitophagy will enhance mitochondrial surveillance mechanisms and decrease the neuropathology of Alzheimer's disease, feasibly leading to new treatment strategies. Many investigations of phosphorylated tau, Aβ, inflammatory reactions, and synaptic and mitochondrial activity in disease progression and pathogenesis have aided researchers in their understanding of AD. Therapeutic techniques have been created and assessed in many trials using postmortem AD brains, AD mouse models, cell cultures, and blood-based indicators. Clinical experiments, past and present, have had mixed results. Future studies should focus on the synaptic alterations that are crucial to disease development. Hence, AD is a complex disease that warrants further exploration, given its combination of synapse loss, mitochondrial deficits, and faulty mitophagy induced by Aβ, tau, and mitochondrial and synaptic problems. Data Availability Data supporting this review are from previously reported studies, which have been cited.
Improving the dimensional accuracy of 3D x-ray microscopy data X-ray microscopy instruments have the unique ability to achieve nondestructive imaging with higher spatial resolutions than traditional x-ray computed tomography (CT) systems. This unique ability is of interest to industrial quality control entities, as they deal with small features in precision manufactured parts (with tolerances in the order of ±25 µm or better). Since many of today's technology and manufacturing companies demand increasingly higher levels of precision, accuracy, and reliability for dimensional measurements on feature sizes that are much smaller than 5 mm, it would be ideal to further expand the imaging capabilities of x-ray microscopy to the field of precision metrology. To address such demand, this paper describes the development of a measurement workflow, through a package consisting of hardware and software, to improve the accuracy of dimensional data obtained with 3D x-ray microscopes (XRMs), also known as sub-micrometer CT systems. The measurement workflow, called Metrology Extension (MTX), was designed to adjust and configure the XRM instrument work-zone to perform dimensional measurement tasks. The main adjustments of an XRM instrument through the MTX workflow, which must be implemented before scanning parts of interest for dimensional evaluation, include applying a distortion map correction on the image projections produced by the x-ray detector and a voxel scale correction. The main purpose of this article is to present, evaluate, and analyze the experimental results of various measurement tests to verify the metrological performance of several XRM systems operating with the MTX workflow. The main results show that these systems can produce repeatable and reproducible measurements, with repeatability standard deviations of the order of 0.1 µm and reproducibility standard deviations of less than 0.5 µm. In addition, coordinate-based 3D XRM measurements produce dimensional accuracies comparable to those offered by high-precision tactile coordinate measurement machines (with deviations within the range of ±0.95 µm). Therefore, once the MTX workflow is executed, XRM instruments can be used to measure small volumes, in the order of (5 mm)³ or less, with improved dimensional accuracy. Introduction The use of industrial x-ray imaging for dimensional metrology has become an important aspect of modern quality control. Three-dimensional (3D) imaging techniques, such as x-ray computed tomography (CT), currently contribute to geometric dimensioning and tolerancing of device components, such as mechanical parts, for various technology and manufacturing companies [1][2][3]. X-ray CT is used as a nondestructive dimensional quality control tool in various industries, such as automotive, electronics, aerospace, medical devices, and additive manufacturing, to name a few examples. However, even under optimal measurement conditions, CT dimensional metrology technologies have traditionally been limited to spatial resolutions no better than 4-10 µm. This leads to several challenges in measuring small samples (with volumes on the order of a few mm³), e.g. low signal-to-noise ratio in CT data and limited detection of contrast changes and spatial details.
Additionally, under today's precision manufacturing standards, there is a growing demand for tighter tolerances (±25 µm or better), requiring quality control instruments that possess a higher degree of measurement accuracy than most currently available x-ray CT measuring systems. Although sub-micrometer resolution laboratory CT systems, often referred to as 3D x-ray microscopes (XRMs), were introduced more than a decade ago [4][5][6][7], these instruments have been used primarily for failure analysis, materials science, and process development. Except for some preliminary results previously presented by the authors in conference proceedings [8,9], there are no other reports in the current literature, to the authors' knowledge, of precision dimensional metrology applications with XRMs. The main approach for CT dimensional metrology, until now, has been the use of x-ray projection-based geometries employing flat panel detectors with effective pixel sizes (pixel pitch) ranging from 127 µm to 200 µm [1,[10][11][12]. To overcome some of the limitations of such CT measuring systems in terms of imaging resolution, this paper introduces a measurement workflow (a solution consisting of hardware and software) for performing accurate dimensional metrology using the resolution capabilities of 3D XRMs. Evaluation of this newly developed measurement workflow, using multisphere phantoms, shows that it can produce repeatable and reproducible measurements, with repeatability standard deviations of the order of 0.1 µm and reproducibility standard deviations of about 0.35 µm. Section 2 of this article further expands on the details of 3D x-ray techniques and the use of multisphere phantoms to verify the accuracy of dimensional measurement instruments, with references to recent literature for the interested reader. A description of the metrology workflow for 3D XRMs used in this paper, including specific details on measurement strategies for accuracy verification, is presented in section 3. From the XRM data obtained for a recently developed multisphere phantom, the accuracy of the volume reconstructions is evaluated by comparing 3D XRM-based dimensional measurements with calibrated/reference data (or operative 'true values') obtained from a tactile coordinate measurement machine (CMM) 4 . Experimental data are also presented to verify the repeatability and reproducibility of the XRM measurements, with a detailed analysis of the results. Section 4 provides some examples of possible industrial applications for the 3D XRM metrology workflow presented in this article. Lastly, to outline the merits of the work presented throughout the paper, concluding observations are provided in section 5. The main results show that 3D XRM instruments can perform dimensional measurements in small-scale volumes, on the order of (5 mm)³ or less, providing measurement accuracies comparable to those offered by high-precision tactile CMMs (with deviations between the XRM data and the CMM calibrated/reference values within the ±0.95 µm range). Due to the differences in operating principles between XRMs and CMMs, there remain philosophical questions about the comparability of XRM and CMM data [1,11,13]; in order for the measurement results of both techniques to be comparable, the XRM-based dimensional measurements were performed with sampling/probing strategies that resemble those of CMMs (i.e. by using feature construction based on coordinate probing points with a least-squares fit and a Gaussian filter; see section 3). 4 At present, the dimensional measurements obtained by tactile CMMs are generally a more accurate approximation to the 'true values' associated with the measurands in question than the measurements obtained by CT. The measurement uncertainties reported with the CMM technique are typically smaller than the measurement uncertainties associated with CT dimensional data [1,11,24], and the traceability chain is also more clearly defined with CMM systems.
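Since the XRM results in this paper are made comparable to CMM data by constructing features from coordinate probing points with a least-squares fit, a minimal sketch of one common way such a fit can be realized, an algebraic least-squares sphere fit to surface points, is given below. The code, the synthetic point data, and the function name are illustrative assumptions; they do not represent the fitting routine of the metrology software actually used in the study.

```python
# Illustrative algebraic least-squares sphere fit to probing points.
# This is a generic sketch, not the fitting routine of any commercial metrology package.
import numpy as np

def fit_sphere(points):
    """Fit a sphere to an (N, 3) array of probing points.

    Solves the linearized model
        x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + (r^2 - |c|^2)
    in the least-squares sense and returns (center, radius).
    """
    p = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * p, np.ones(len(p))])
    b = np.sum(p**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + np.dot(center, center))
    return center, radius

# Example with synthetic points on a 150 µm radius sphere (units: µm).
rng = np.random.default_rng(0)
true_center, true_radius = np.array([10.0, -5.0, 2.0]), 150.0
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
noise = rng.normal(scale=0.05, size=(500, 1))      # ~50 nm of probing noise
pts = true_center + directions * (true_radius + noise)
center, radius = fit_sphere(pts)
print(center, radius)   # should recover the center and radius closely
```

In practice, metrology software typically also applies filtering (e.g. a Gaussian filter) and outlier handling to the probing points before such a fit.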
3D x-ray techniques and dimensional metrology In this section, the differences between x-ray CT and XRM instruments are established. A formal definition of geometrical magnification is introduced and the role of optical magnification in improving the spatial resolution of data reconstruction is highlighted. A brief discussion of dimensional metrology with 3D x-ray data is also presented. X-ray CT and XRM systems In traditional industrial x-ray systems, designed for nondestructive evaluation and industrial metrology tasks, the main approach has been to use projection-based architectures in which two-dimensional images, or radiographs, are created by x-rays in a cone beam that passes through an object and projects radiographs onto a flat panel detector (figure 1(a)). From several radiographic images collected at different angular positions, using localized x-ray absorption measurements, a CT reconstruction algorithm can create a volumetric representation of the object from which internal and external features can be extracted for dimensional measurements [1]. With flat panel-based CT systems, the object must be positioned as close to the x-ray source as possible, while remaining within the cone beam, to obtain highly magnified radiographic images at the detector and produce 3D data reconstructions with the highest resolution possible. The geometrical magnification (M_g) of a projection-based system is a function of the source-to-object distance (d_SO) and the object-to-detector distance (d_OD): M_g = (d_SO + d_OD)/d_SO. The maximum operating M_g value in industrial and laboratory-based x-ray projection instruments is generally determined by the minimum working distance d_SO, which is limited by the sample size; the sample needs to spin on the rotary stage without hitting the x-ray source (see figure 1). In general, the spatial resolution of a CT system depends on parameters such as the focal spot size of the x-ray source, the degree of collimation of the x-ray beam, the projection geometry, the size of the sample, the dimensions of the detector, the precision of the mechanical motion controllers, and the details of the computer algorithms used for 3D reconstruction [13][14][15][16]. With an x-ray source focal spot on the order of 2-5 µm and a flat panel detector of 2048 × 2048 pixels with a pixel pitch in the range of 127-200 µm, the best spatial resolution achievable is typically limited to the range of 4-10 µm for cylindrical objects with diameters in the 2-25 mm range. Larger samples limit the geometrical magnification M_g, thus reducing the best achievable resolution of CT scans to several tens of micrometers. There are two common approaches to further increasing the image resolution capabilities of flat panel CT instruments: a reduction in the x-ray focal spot and/or the use of a higher resolution flat panel detector. However, this would not eliminate the limitations on geometric magnification imposed by larger samples (>25 mm diameter).
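For concreteness, the following short sketch evaluates the geometrical magnification relation given above and the effective detector pixel size it implies at the object plane; the numerical distances and pixel pitch are illustrative assumptions, not the specifications of any particular instrument.

```python
# Geometrical magnification of a projection-based (cone-beam) system and the
# effective detector pixel size referred back to the object plane.
# Values below are illustrative, not instrument specifications.

def geometrical_magnification(d_so_mm: float, d_od_mm: float) -> float:
    """M_g = (d_SO + d_OD) / d_SO, i.e. source-to-detector over source-to-object distance."""
    return (d_so_mm + d_od_mm) / d_so_mm

def effective_pixel_size_um(pixel_pitch_um: float, m_g: float) -> float:
    """Detector pixel pitch divided by the geometrical magnification."""
    return pixel_pitch_um / m_g

m_g = geometrical_magnification(d_so_mm=10.0, d_od_mm=990.0)     # M_g = 100x
print(m_g)                                                       # 100.0
print(effective_pixel_size_um(pixel_pitch_um=127.0, m_g=m_g))    # 1.27 µm at the object plane
```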
An alternative would be to incorporate optical lenses after x-ray detection to create a scintillator-lens-CCD 5 detector coupling, as shown in figure 1(b), to optically magnify the image before it reaches the CCD detector. This strategy enables XRM images with spatial resolutions down to 500 nm; see table A1 and figure A1 (in appendix A). There are other means to improve XRM resolution, e.g. using x-ray focusing elements such as Fresnel zone plates or Kirkpatrick-Baez mirrors [17][18][19][20], but those techniques (commonly known as nano-XRM) are beyond the scope of this article. Whereas typical pixel resolutions of commercial flat panel detectors are within the range of 75-200 µm, effective detector pixel sizes in XRM systems can be as low as 16 nm [4,7]. When comparing x-ray CT with XRM instruments, system resolutions behave differently for M_g values below 100×. In x-ray CT, the resolution of the system worsens as M_g decreases (i.e. when the sample size increases). On the other hand, by leveraging optical magnification, XRMs preserve (and improve) the system's spatial resolution as M_g decreases, without major limitations on sample size 6, as long as the sample physically fits inside the instrument and does not limit x-ray transmission (the measuring range will be determined by the penetration depth of x-rays into the sample, which depends on the sample's material composition and the energy spectrum of the x-ray source [13]). The curves in the graph (figure 2) were derived from the approximate system resolution formula given in [4], where D_r is the spatial resolution of the detector and S is the x-ray source focal spot size; detector resolutions of 0.5 µm and 50 nm were assumed for the XRM detectors, and a typical flat panel pixel resolution for the CT system. Dimensional metrology After the reconstruction phase of x-ray CT or XRM data is completed, a representation of the object's internal and external surfaces can be produced with the use of thresholding algorithms for edge detection and material boundary/surface determination [21][22][23][24][25]. Thereafter, the geometric characteristics of those surfaces can be used as a reference for dimensional metrology. But to minimize errors and improve the accuracy of dimensional measurements, a suitable method is required to determine the voxel size 7 associated with the 3D data reconstruction. In flat panel CT systems, voxel size dimensions are determined by M_g and the x-ray detector's pixel pitch [1,13,26]. In the last two decades, methods have already been proposed to determine and correct voxel size errors in industrial CT scanners using multisphere phantoms 8 [27][28][29]. In fact, since around 2005, several manufacturers have marketed flat panel x-ray CT systems for dimensional metrology. As a result of the sub-voxel interpolation algorithms commonly used for surface determination, dimensional measurements with sub-voxel accuracies are now ordinarily reported from CT data. Previous studies have shown that when geometric features ranging from 0.5 mm to 60 mm are dimensioned, the expanded uncertainties (k = 2) associated with flat panel-based CT measurements are typically in the 5-50 µm range [10,11,30]. Still, depending on the sample size, flat panel-based CT systems may have limited spatial resolution capabilities (section 2.1). The spatial resolution of the data generated by such systems is, at best, on the order of micrometers or tens of micrometers.
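The resolution comparison described above can be made concrete with a small sketch. Since the exact expression from [4] is not reproduced in the text, the sketch below assumes the common approximation in which the detector blur referred to the object plane (D_r/M_g) and the focal-spot penumbra (S·(M_g − 1)/M_g) are combined in quadrature; all numerical values and this particular formula are illustrative assumptions.

```python
# Approximate system resolution versus geometrical magnification, combining the
# detector resolution (D_r) referred to the object plane with the penumbral blur
# of a finite focal spot (S). The quadrature combination is a common approximation
# assumed here for illustration; it is not necessarily the exact expression of [4].
import math

def system_resolution_um(d_r_um: float, spot_um: float, m_g: float) -> float:
    detector_term = d_r_um / m_g                  # detector blur at the object plane
    source_term = spot_um * (m_g - 1.0) / m_g     # focal-spot penumbra at the object plane
    return math.sqrt(detector_term**2 + source_term**2)

for m_g in (2, 5, 20, 100):
    flat_panel = system_resolution_um(d_r_um=127.0, spot_um=3.0, m_g=m_g)  # flat panel CT (assumed D_r)
    xrm = system_resolution_um(d_r_um=0.5, spot_um=3.0, m_g=m_g)           # lens-coupled XRM detector
    print(f"M_g = {m_g:>3}x   flat panel = {flat_panel:6.2f} µm   XRM = {xrm:5.2f} µm")
```

The output illustrates the qualitative behavior described in the text: at low M_g the flat panel resolution degrades quickly, whereas the lens-coupled XRM detector keeps the system resolution in the micrometer range.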
Spatial resolution limitations can obscure essential surface details that, in addition to improving the accuracy of CT-based dimensional data, could enable other nondestructive analyses, such as morphological characterization of internal walls and evaluation of material porosity, which require high spatial resolution. As an alternative, this paper proposes the use of sub-micrometer resolution XRM instruments for dimensional metrology, an idea that had not been widely explored until some recent work introduced by the authors in conference settings [8,9]. This article builds on that work and introduces new data that verify the dimensional accuracy of metrology workflows with XRM data. The repeatability and reproducibility of the XRM metrology workflow proposed in this paper, which employs a small version of a multisphere phantom (contained in a cylindrical volume of 4 mm diameter and about 1.8 mm height), are also verified by various data comparisons. 7 A voxel is a data value representing x-ray attenuation in a 3D element located in the specimen that, in turn, is broadly related to the average density of material in the volume [1,26]. The 'voxel size' represents the dimensions of the basic volume element of the tomographic data into which the 3D reconstructed volume is sub-divided after reconstruction; it represents a '3D pixel' in the volumetric dataset. A unit of 'voxel size' does not represent a measure of 'spatial resolution'; see [1,13]. 8 There are also methods that propose the use of a simple two-sphere dumbbell phantom to determine and correct voxel size errors. However, these are not reliable because they measure only one distance, so they rely on a single data point for interpolation and would only scale lengths of dimensions similar to that distance. In addition, current standards for the performance evaluation of 3D x-ray systems used for dimensional metrology, i.e. the VDI/VDE 2630-1.3 [36] and the ASME B89.4.23 [37], require a minimum of 35 or 28 center-to-center lengths to be measured per scan, not just one, in a total of six or seven different spatial directions. XRM workflow for dimensional metrology A metrology workflow developed by Carl Zeiss X-ray Microscopy, Inc., hereinafter referred to as the Metrology Extension (MTX), is used to improve the accuracy of 3D XRM dimensional data. The MTX workflow includes applying a distortion map correction to the image projections produced by the detecting system and a voxel scale correction before XRM measurement. These corrections are necessary to improve the dimensional accuracy of the data reconstruction and of any feature measurements based on the final reconstructed 3D XRM data, although the accuracy of the dimensional data will also depend on other crucial steps during the measurement process, such as surface determination and sampling strategy, as previously discussed elsewhere [1,10,11]. Adjustments and pre-setting of the instrument's work-zone to perform dimensional measurement tasks on an XRM instrument 9 can be implemented through the MTX workflow, prior to scanning the workpieces of interest, e.g. on a weekly or daily basis. Figure 3 shows how the MTX workflow operates and integrates into the 3D XRM measurement process. Geometric distortion correction in ZEISS Xradia Versa instruments is performed by using a square grid array composed of grid lines spaced 50 µm apart; see figure B1 (in appendix B).
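The MTX voxel scale correction itself is not detailed here (the relevant literature is cited in the next paragraph), but the general idea can be illustrated with a minimal sketch: a single scale factor is estimated, in the least-squares sense, from calibrated center-to-center distances of a reference object and then applied to the nominal voxel size. All numerical values below are hypothetical, and the single-factor model is an assumption for illustration, not the MTX implementation.

```python
# Generic single-factor voxel scale correction, sketched for illustration only;
# the actual MTX correction procedure is not described in this paper.
import numpy as np

# Hypothetical center-to-center sphere distances (mm): values measured on the
# reconstructed volume versus calibrated reference values for the same features.
measured_mm  = np.array([0.49991, 0.99982, 1.99963, 2.99947, 3.59934])
reference_mm = np.array([0.50000, 1.00000, 2.00000, 3.00000, 3.60050])

# Least-squares estimate of the scale factor k minimizing ||k*measured - reference||^2.
k = float(np.dot(measured_mm, reference_mm) / np.dot(measured_mm, measured_mm))

nominal_voxel_um = 2.000            # voxel size reported by the reconstruction (assumed)
corrected_voxel_um = k * nominal_voxel_um

print(f"scale factor k = {k:.6f}")
print(f"corrected voxel size = {corrected_voxel_um:.4f} µm")
print("residuals (µm):", (k * measured_mm - reference_mm) * 1000.0)
```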
Since strategies to perform geometric distortion corrections in x-ray detectors [29,31,32] and image scale corrections [28,29,[33][34][35] are well documented in recent literature, they are not presented in this paper. This section focuses on evaluating the performance of XRM measurements, via MTX, when dimensioning distances in a multisphere standard. The only two reference documents with procedures for testing dimensional 3D x-ray systems are the VDI/VDE 2630-1.3 [36] and the ASME B89.4.23 [37], which were published as part of ongoing efforts toward standardization of procedures for performance verification and acceptance testing of CT systems used for dimensional metrology. These guidelines provide test protocols for evaluating a system's conformance to the manufacturer's accuracy specifications, generally indicated by maximum permissible error (MPE) statements. There is also a working document (draft) being discussed internationally by the CT task force of the International Organization for Standardization (ISO), technical committee 213 working group 10, which aims to create an international standard (series ISO 10360-11). The release date for its publication is still unknown. In accordance with the VDI/VDE 2630-1.3 guideline, Carl Zeiss Industrielle Messtechnik GmbH [8,38] developed a multisphere length standard called the 'XRM Check' to verify the accuracy of dimensional CT measurements on small volumes that fit in a 5 mm field-of-view (FOV). This length standard consists of 22 identical 300 µm diameter ruby spheres attached to a supporting pillar structure made of fused silica (quartz glass), which has a low coefficient of thermal expansion; see figure 4 or [38]. The roundness of the spheres (shape error) is consistent with the Anti-Friction Bearing Manufacturers Association criteria for a Grade 5 ball or better (tolerances specified by deviations of ±0.13 µm of the combined diameter and roundness of the spheres). Given that ruby balls have simple and well-defined geometric characteristics, are built with low manufacturing inaccuracies, and are easy to measure with a tactile CMM, the use of multisphere standards is a common and straightforward method for evaluating the performance of CT or XRM systems [39,40]. The spatial arrangement of the spheres in the XRM Check standard provides a number of distinct lengths (at least five different distances), in a total of seven different spatial directions, to implement the acceptance test suggested in the VDI/VDE 2630-1.3 guideline. Verifying measurement accuracy The test evaluation for CT (or XRM) systems generally includes comparisons between length measurements obtained from CT data and other more precise and accurate reference measurements, such as those acquired from a tactile CMM. In this work, the reference measurements for the different sphere distances in the XRM Check standard were calculated from the center positions of the spheres measured by the Federal Institute of Metrology METAS, Switzerland, with an ultra-precise tactile CMM dedicated to calibrating small objects (the METAS µCMM [41]). With three laser interferometers that measure the displacement of its table, and equipped with a probe head that uses a spherical sapphire stylus (diameter 208 µm) with a weak probing force (<0.5 mN), this machine ensures accurate reference measurements. Each sphere was probed on its upper hemisphere along the equator and two half meridians.
All measurement points were probed using scanning, with a point density of 300 points/mm and a Gaussian filter with a cut-off frequency of 15 undulations per revolution. The uncertainty reported for the reference measurements is an expanded uncertainty (the combined standard uncertainty multiplied by a coverage factor k = 2, corresponding to a confidence level of 95.45%), estimated following the guidelines of the Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) [42]. To evaluate the metrological performance of an XRM instrument (for length measurements) after implementation of the MTX workflow, the deviations between the XRM data and the CMM references can be plotted on a graph (e.g. see figure 6). Table 1 lists the parameter settings used for x-ray imaging of the XRM Check multisphere phantom. Following the measurement workflow of figure 3, to perform coordinate-based dimensional measurements on the 3D XRM data, after surface determination with a local adaptive threshold method [21], spherical features were defined on each ruby ball of the XRM Check phantom using a Gaussian multipoint least-squares fit with Calypso software (Carl Zeiss Industrielle Messtechnik GmbH). Figure 5(b) illustrates the sampling strategy used to fit each spherical feature, which resembles a common CMM probing strategy [10]. As seen from figure 6, the deviations of the CT dimensional measurements (from the reference CMM data) are confined to within ±0.7 µm. This range is well within the MPE specification, for center-to-center sphere distance (SD), of a ZEISS Xradia 620 Versa XRM [8] (equation (4), with the measuring length L given in mm). The results shown in figure 6 provide a verification of the accuracy of the ZEISS Xradia 620 Versa instrument's metrological capabilities (at FOV = 5 mm) with the added MTX workflow, enabling reliable dimensional measurement performance in small-scale volumes, on the order of (5 mm)³ or less. It is worth noting that, without MTX, the typical deviations of XRM data from calibrated/reference values can be anywhere in the range of 1-30 µm. Appendix C shows several sets of MTX-corrected and uncorrected measurements, for center-to-center sphere distances, extracted from 3D XRM data for a couple of multisphere phantoms obtained with different XRM systems. When looking at the least-squares lines that fit the uncorrected data of figures C1-C4 (in appendix C), the XRM dimensional measurements performed without MTX would suffer deviations that are largely dependent on the length being measured (i.e. with errors that are directly proportional to L). Verifying measurement repeatability and reproducibility To verify the repeatability and reproducibility of the MTX measurement workflow, 30 different scans of an XRM Check phantom were performed with each of three different instruments (i.e. 90 scans in total). Figure 7 shows the results of the dimensional data for three different measurands: SD 11-4, SD 3-22, and SD 8-18 (naming convention of table 2). The measured value for SD 11-4 (∼3.605 mm) is one of the largest center-to-center lengths in the XRM Check multisphere standard. The repeatability standard deviations for measurements of SD 11-4, evaluated for each measurement system, are S_1 = 0.08 µm, S_2 = 0.06 µm, and S_3 = 0.11 µm; the reproducibility standard deviation across the three systems is S_R = 0.36 µm.
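To make the repeatability and reproducibility figures quoted above concrete, the sketch below shows one straightforward way such statistics can be computed from repeated scans on several instruments: the per-system standard deviation of repeated measurements of one measurand (repeatability, S_1 to S_3) and the standard deviation of all measurements pooled across systems (reproducibility, S_R). The data are synthetic and the simple pooled definition of S_R is an assumption; the study's exact statistical treatment is not spelled out in the text.

```python
# Simplified repeatability / reproducibility calculation for one measurand
# (e.g. a center-to-center sphere distance) measured in repeated scans on
# several instruments. Data are synthetic; definitions are assumed.
import numpy as np

rng = np.random.default_rng(1)
nominal_mm = 3.605  # one measurand, e.g. SD 11-4

# 30 scans on each of 3 systems: small per-system scatter plus a per-system offset.
offsets_mm = np.array([0.00000, 0.00020, -0.00015])
scans = np.array([
    nominal_mm + off + rng.normal(scale=0.00008, size=30)   # ~0.08 µm scatter per system
    for off in offsets_mm
])                                                           # shape (3, 30)

repeatability_um = scans.std(axis=1, ddof=1) * 1000.0        # S_1, S_2, S_3
reproducibility_um = scans.ravel().std(ddof=1) * 1000.0      # S_R over all 90 scans

print("repeatability per system (µm):", np.round(repeatability_um, 2))
print("reproducibility across systems (µm):", round(float(reproducibility_um), 2))
```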
From these measurements for SD 11-4, the deviations between measurements made on the same length feature, repeated through 90 different scans, are in the range of ±0.48 µm. The repeatability and reproducibility standard deviations for the measurement results associated with SD 3-22 and SD 8-18 are listed in figure 7 (where different colored data points represent different test runs). Although not all measurement results can be explicitly listed in this document due to space limitations, all measurands listed in table 2 were measured and analyzed. Of a total of 35 characteristics measured per scan, in 30 scans obtained per measurement system, using three different systems, i.e. 3150 measurements in total, the repeatability standard deviations of the measurements were in the range of 0.04-0.15 µm and the reproducibility standard deviations ranged from 0.05 to 0.45 µm. These results serve as a verification of the repeatability and reproducibility of the dimensional data produced by the ZEISS Xradia 620 Versa XRM when coupled with MTX. Lastly, in figure 7, the deviations between coordinate-based 3D XRM measurements and CMM reference data are confined to the range ±0.95 µm. This range is also within the MPE SD specification of equation (4). Industrial applications for dimensional metrology XRM instruments have been used in a wide range of applications, including semiconductor package inspection, new materials process development, and biological or biomedical applications [6,7]. However, apart from some proprietary/unpublished industry case studies, there are no reports (to the best of the authors' knowledge) covering precision dimensional metrology applications with XRMs. (Table 2 lists the labels associated with center-to-center sphere distance (SD) evaluations in the 'XRM Check' multisphere; sphere identification labels are shown in figure 5. The measurands are ordered by ascending dimensional magnitude and are grouped into five sets of distances classified by nominal length.) This section presents application examples that are, in themselves, interesting and relevant to the quality control industry due to the challenges they pose for nondestructive inspection; but before proceeding, it is worth noting that several measurands of interest are bi-directional in nature, i.e. lengths computed as point-to-point or edge-to-edge distances between geometric elements that lie on diametrically opposite edges (e.g. spaces between wedges, lengths between walls in internal cavities, diameters, etc). Unlike bi-directional measurements, uni-directional measurements are computed as distances between the centers of two fitted geometric elements (e.g. circles, cylinders, spheres, etc) or lengths between points that lie on identical edges. Although, with respect to scaling, both uni- and bi-directional measurements are sensitive to scale errors of the 3D XRM data (which can be minimized with the aid of the MTX workflow), bi-directional measurements are also sensitive to thresholding influences during surface determination. The accuracy of bi-directional measurements depends on the use of refined thresholding algorithms for edge detection, such as a local adaptive method [21]. In practice, uni-directional center-to-center measurements are generally determined more accurately than bi-directional measurements, as shown by recent CT-to-CMM comparative studies, e.g. see [1,27,30].
As a point of reference for surface-to-surface measurements, when using the MTX workflow, deviations between 3D XRM and tactile CMM data for diameter measurements are presented in figure D1 (in appendix D). (Figure 8: rendering of cross-sectional images and the 3D XRM data volume for a smartphone camera lens assembly. To reveal the internal geometry, one half of the 3D rendering is shown in solid view and the other half in a semi-transparent view. Dimensional measurements can be performed on the camera lens module, in its assembled state, using 3D XRM data.) Smartphone camera lens assembly In the assembled state of a smartphone camera lens module, the evaluation of the geometric properties of the lenses, such as the thickness of the annular wedges, the centering interlock diameters, the spaces between the wedges, the lens-to-lens tilt, the vertex heights and centration, etc., requires a noncontact and nondestructive measurement method. These measurements are important for the functional inspection of camera lens modules and for the improvement of designs and manufacturing processes, enabling the production of versatile cameras that improve the image quality of mobile phones. Figure 8 shows the 3D rendering of a smartphone camera lens assembly reconstructed from XRM data, including a view of cross-sectional images (2D slices). Since the largest dimension of interest in the assembly is around 4.4 mm, a cylindrical region of about 5 mm diameter was scanned from the full camera lens module for dimensional measurements. It is worth noting here that the 5 mm FOV diameter limitation of the MTX workflow refers strictly to the measurement volume. Although an object should ideally have a diameter of the order of 5 mm or less, so that a full field-of-view scan can be performed with MTX, interior tomographies can be performed on somewhat larger samples, as shown in figure 8, where a semi-transparent view shows the entire camera lens assembly with an outer diameter greater than 5 mm. Before the acquisition of the XRM data in the region of interest (a 5 mm diameter cylinder, half shown in blue in figure 8), the MTX workflow was applied to verify the measurement accuracy of the XRM system, so that the dimensional measurements extracted from the virtual reconstruction of the camera lens module are accurate. Figure 8 shows examples of typical dimensions of interest in the camera lens in its assembled state. Fuel injector nozzle To facilitate efficient fuel spraying in internal combustion engines (and to help meet emissions requirements), fuel injectors use high injection pressures with small nozzle orifices. Since the variability of the geometric characteristics of the injector nozzle, such as the sharpness of the inlet corners and the diameters of the holes, can influence the internal flow and fuel spray in combustion engines, deviations from the nominal nozzle design are typically measured for dimensional quality control. In this case, to measure the smallest nozzle holes in the fuel injector (features that are difficult to access with measurement systems using tactile or optical probes), XRM is especially well suited for dimensional measurement. Figure 9 shows cross-sectional image views and cropped/semi-transparent sections of the reconstructed XRM data for a fuel injector nozzle tip, revealing the roughness of the wall surface in the internal cavities and the presence of material porosity.
The size of the region of interest, the tip portion of the fuel injector, is indicated by the dimensions shown in the image (∼2.4 mm for the largest diameter). Since the region of interest fits in a volume of less than (5 mm)³, the MTX workflow was run, prior to scanning the fuel injector tip, to verify the measurement accuracy of the XRM system. Then, dimensional inspection of the small holes in the nozzle was carried out. It is worth noting, in figure 9, that the blue spots in the image reveal the presence of voids within the walls of the injector nozzle. This is an added benefit of measurement inspections based on XRM volumetric data: in addition to reconstructing all internal and external surfaces of a mechanical workpiece, material tests can be performed on the same data set. In this case, the resolution capabilities of the ZEISS Xradia Versa system used for XRM scanning allowed the porosity present within the injector nozzle to be visualized. Plastic injection-molded connector For the inspection of internal and external structures in small plastic injection-molded parts, x-ray CT or XRM may be more suitable for geometric measurement than contact or vision inspection techniques. Non-contact measurement is important to avoid distortion of flexible or easy-to-deform components. In injection-molded workpieces, dimensional measurements help determine deviations of manufactured parts from the nominal geometry specified in the original computer-aided design (CAD) models of the part. This is important for product development, quality assurance of the functional characteristics of parts, and evaluation of the structural integrity of industrially manufactured components and assembled devices. Figure 10 shows the example of a small plastic connector, reconstructed from XRM data, with dimensional measurements and CAD-to-part comparison data. Again, since the largest length measurement of the connector is approximately 4 mm, this sample fits well within a volume of less than (5 mm)³. Therefore, the MTX workflow was used prior to x-ray scanning of the sample to verify the accuracy of the XRM system. Then, the XRM data shown in figure 10 were acquired for dimensional measurements. Concluding remarks Due to the unique ability of 3D x-ray imaging techniques to nondestructively evaluate part geometries that are not accessible to traditional optical or tactile CMM systems, e.g. internal cavities and difficult-to-reach or 'hidden' features, the number of industrial applications for these techniques has increased in the last two decades. However, in the field of dimensional metrology, the use of 3D x-ray techniques has been limited to lensless imaging methods, such as flat panel-based CT systems, which are typically limited to spatial resolutions greater than 4 µm. To address the ever-increasing demands of 3D x-ray metrology with higher image resolutions, this paper has introduced a measurement workflow, the MTX, to extend dimensional metrology applications to sub-micrometer resolution XRMs. After the implementation of the MTX workflow on various XRMs, the metrological performance of these systems was evaluated. From the analysis of data from more than 90 XRM scans, the main results show that 3D XRM systems can perform precision dimensional measurements in small-scale volumes, on the order of (5 mm)³ or less, producing repeatable and reproducible measurement data (with repeatability standard deviations in the range of 0.04-0.15 µm and reproducibility standard deviations ranging from 0.05 to 0.45 µm).
In addition, coordinate-based 3D XRM measurements can provide dimensional accuracies comparable to those offered by high-precision tactile CMMs, with deviations within the ±0.95 µm range. Without MTX, the typical deviations of the XRM data from the calibrated/reference CMM values can be anywhere in the range of 1-30 µm. As presented in this article, to assess the MTX workflow, the data were limited to center-to-center sphere distances (on multisphere phantoms), which are uni-directional in nature. In industrial applications, however, there are also measurands of interest that are bi-directional, i.e. lengths that are calculated as point-to-point or edge-to-edge distances between geometric elements that sit on diametrically opposite edges (see section 4 and appendix D). These are more sensitive to errors during the surface determination phase than uni-directional measurements. The accuracy of bi-directional measurements is highly dependent on the choice of thresholding algorithms for edge detection and on the use of strategies that reduce reconstruction artifacts (beam hardening, scattered radiation, noise, etc). In general, uni-directional center-to-center measurements can be determined more accurately than bi-directional measurements [1,27]. The uniqueness of 3D XRMs, with the addition of the MTX workflow, offers completely new capabilities and new opportunities in the domain of dimensional metrology. Application examples were presented in section 4. The MTX workflow will undoubtedly support the wider use of 3D XRMs for industrial applications, bridging the gap between x-ray microscopy imaging and dimensional metrology, while preserving the high spatial resolution characteristics of XRM data, a unique feature that is useful for other types of nondestructive evaluation using the same data (e.g. morphological characterization of internal structures, detection of particle inclusions, and pore size distribution analysis in a material). Data availability statement The data that support the findings of this study are available upon reasonable request from the authors.
Assessment of Turkish consumer attitudes using an Animal Welfare Attitude Scale (AWAS) Abstract The aim of this study was to examine Turkish consumer attitudes towards animal welfare in terms of cognitive, affective and behavioral dimensions, using a bespoke Animal Welfare Attitude Scale (AWAS). An overall consumer attitude was also determined. The Delphi technique was used to establish an item pool to develop a questionnaire for the construction of the AWAS. This questionnaire was later used for data collection. A total of 2295 consumers were surveyed in 14 cities, in the 7 regions of Turkey. Descriptive statistics, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), reliability analysis, Ward's hierarchical clustering method and one-way ANOVA were used to validate the questionnaire and to analyze the data. Results of the EFA allowed for the allocation of 42 items collected under 3 dimensions (cognitive, affective and behavioral), which explained 72% of the total variance of the model. This factor structure was subsequently confirmed by a CFA performed on a different sample of 425 consumers. The Cronbach's Alpha coefficient for the AWAS was calculated at 0.829. These results confirmed that the developed AWAS was a valid and reliable scale. The questionnaire showed that consumers' attitudes towards animal welfare were more negative in the behavioral dimension than in either the cognitive or affective dimensions. Consumers in Turkey were ultimately divided into three groups according to their overall attitudes towards animal welfare: impassive, moderate or sensitive. One-third of Turkish consumers were placed in the sensitive group, thus emphasizing a potential niche for animal-friendly food marketing in Turkey. Introduction Intensive livestock production systems have implemented new and efficient methods to increase productivity while reducing costs, by practices such as lowering feed production expenses, increasing housing density, reducing grazing, the use of performance-enhancing feed additives, and mass farm animal transport and slaughter. 1 The reduction of prices in products of animal origin, as well as global trade and enhanced advertisement and marketing, have increased animal protein consumption. 2 However, the image of the animal industry has also been adversely affected by food-related disease crises (such as bovine spongiform encephalopathy, salmonellosis, etc.), which negatively impact consumer confidence in modern agricultural technologies. 1,3 Thus, consumers have adopted either a utilitarian approach (considering health and quality) or an ethical approach (contemplating animal- and environmentally friendly production), and increasingly believe that food produced with natural methods is healthier. 2,4 Indeed, several studies report that the rate of consumers who pay attention to high animal welfare standards when purchasing red meat, 5,6 eggs, poultry meat 7,8 and milk 9 is rising in Europe. Moreover, Queiroz et al. 13 reported that a significant number of consumers do not have enough knowledge of animal welfare issues but believe that natural breeding methods will lead to improvements in product quality. Consumer preference when buying animal food products is influenced by many factors. 10,11 Products presented as healthy, tasty or environmentally friendly may have an increased appeal. 4,5,10 Regarding production practices with high animal welfare standards, Kendall et al.
12 have determined that urban and rural life experience, social structural features (such as socio-economic class and family status) and personal characteristics (such as gender, age and education) are structural determinants of ensuing consumer attitudes. Furthermore, Te-Velde et al. 5 and Vanhonacker et al. 14 reported that the attitude of consumers towards goods produced with high animal welfare standards is influenced by other people's opinions, norms, knowledge and interests (economic, social and moral). It has also been reported that purchasing products with higher animal welfare standards means changing habits for the majority of consumers. 7 The consumer segment that cares about high animal welfare standards creates marketing opportunities when coupled with communication strategies that enhance consumer confidence and allow the introduction of acceptably produced animal goods. 7,14,15 However, the attitudes of this consumer target group need to be well examined in order to overcome the potential challenges related to its preferences. 16 In fact, sociological and marketing studies on the effect of animal welfare standards on consumer attitudes have been limited, and the topic has not been sufficiently explained. There is also a need to study the combination of public-oriented policies and consumer-oriented approaches. 14 Information on the behavior of consumers who have or lack an interest in animal welfare needs to be characterized to appraise the introduction of goods produced under high animal welfare standards, as well as to develop effective communication strategies. 7 Questionnaires are frequently used to determine consumers' perceptions and attitudes about any issue. These should include valid and reliable scales, which can be developed through the Delphi technique established by Dalkey and Helmer. 17 The aim of this technique is to create reliable scales by obtaining an expert group consensus through a series of in-depth opinion polls, interspersed with controlled opinion feedback. In short, the Delphi technique is used to gather the common views of a group of independent experts, who are unaware of each other, in a rational and written approach, so that program planning, policy development, events and trends can be predicted, and standards can be developed. In this study, a bespoke Animal Welfare Attitude Scale (AWAS) was designed to examine the cognitive, affective and behavioral dimensions of consumers' attitudes towards animal welfare in Turkey. The AWAS further allowed the allocation of consumers into three groups according to their overall attitudes towards animal welfare: impassive, moderate or sensitive. The previous lack of such a comprehensive scale in the literature stresses the importance of this study. Data collection and sample size A questionnaire constructed by the Delphi technique 17 was applied to determine a Turkish consumer attitude scale toward animal welfare. Permission to conduct the study was granted by the Scientific Research and Publication Ethics Committee of Afyon Kocatepe University. Participants willingly contributed their answers after being informed that the survey aimed to collect data for scientific purposes. There are numerous definitions of the concept of attitude, one of which comprises feelings, thoughts and behaviors towards something.
18,19 Since in social psychology the attitude consists of cognitive, affective and behavioral dimensions, it was decided a priori, after a comprehensive literature review, that pertinent items of the scale would be placed under these three dimensions. 18,19 In addition, for statistical validity, an exploratory factor analysis (EFA) and a confirmatory factor analysis (CFA) were performed to establish which items belonged to each dimension. The Delphi technique was used when developing the questionnaire because a common animal welfare attitude scale was not encountered in the literature. The steps followed in the process were:

- Problem definition: the questionnaire aimed to determine consumer attitudes regarding animal welfare in terms of cognitive, affective and behavioral dimensions.
- Election of panel members: panelists were selected among scientists who could, through their knowledge, research and experience, contribute an educated perspective for question suggestion and placement. Fifteen experts were selected and contacted, of which 11 agreed to participate in the study.
- First Delphi survey (round I): the problem of the study and the determined dimensions (cognitive, affective, behavioral) were sent to the panelists, who were asked to write "items that can measure the consumer's attitude towards animal welfare" that could be placed under the established dimensions, to create an item pool. At the end of round I, 58 items were established by combining similar items in the pool.
- Second Delphi survey (round II): the 58 items, placed under the three dimensions, were sent to the panel members to determine their level of agreement on a 5-point Likert-type rating (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree). Likert-type scales use fixed-choice response formats that are designed to measure attitudes or opinions. 21 This ordinal scale measures levels of agreement/disagreement. The acquired data were analyzed to generate median, quartile and range (q3 - q1) statistics.
- Third Delphi survey (round III): the calculated medians, quartiles and ranges for the item pool and dimension placement were sent to the panelists, who were again asked to give a score from 1 to 5, in order to reach a consensus on which items should be included in the questionnaire to adequately establish an attitude scale. At this third round, 42 items were selected, and the AWAS was set.

Some similar items were purposely used within the scale to check whether the participants' answers to similar questions were analogous. The study population included consumers over 18 years of age, living in all seven regions of Turkey. Due to time, cost and distance constraints, a stratified sampling method was used to determine the sample size and to establish the sampling plan. According to results from the socio-economic development level studies by the Turkish Statistical Institute, consumers living in 2 cities per region were included (the cities of Kars and Muş in Eastern Anatolia, Gaziantep and Batman in South Eastern Anatolia, Bolu and Samsun in the Black Sea region, Sivas and Konya in Central Anatolia, Burdur and Adana in the Mediterranean Region, Afyonkarahisar and Aydın in the Aegean Region, and Tekirdağ and Balıkesir in the Marmara Region). The sample size was calculated with the formula n = s²Zα²/d², proposed for large populations (when the population size is larger than 10,000) in survey research. 22 As a result of a pilot study including 50 people, a standard deviation of s = 0.9, a precision of d = 0.1 and Z0.05 = 1.96 (for significance level α = 0.05) were used as parameters in the formula. A minimum sample size of 311 consumers for each region was calculated (an equal number of consumers was surveyed for each region and city, since the population size was over 10,000; see the formula above). 22 The total sample size was thus established at 2177 (311 × 7 = 2177). Accordingly, 2500 questionnaires were administered as face-to-face interviews, where consumers had to evaluate each item using the Likert scale. Two hundred and five questionnaires were discarded due to inconsistency between the answers given to similar questions, or due to incomplete and incorrect data. The final evaluation was performed on 2295 questionnaires.

Statistical analyses

First, to determine the factor structure of the AWAS, an EFA was performed using varimax rotation. Within the EFA, Bartlett's test of sphericity was used to assess the applicability of the factor analysis, and the Kaiser-Meyer-Olkin (KMO) measure was used to evaluate sampling adequacy. In addition, eigenvalues, variance explanation rates and factor loadings were calculated. The reliability, mean, and standard deviation for the items and dimensions were also determined. The mean values for each item were calculated by dividing the sum of scores given on the 5-point Likert scale by the number of respondents (initially panelists and subsequently consumers). Mean values approaching 1 indicated negative attitudes, whilst values approaching 5 denoted positive attitudes. Considering the calculated minimum sample size of 311 consumers per region, a CFA was performed on an independent sample of 425 consumers (allowing for potential incomplete or incorrect questionnaires that would have to be eliminated), to confirm the factor structure obtained from the EFA. Goodness-of-fit indices for the CFA were determined by the Root Mean-Square Error of Approximation (RMSEA), the Normed Fit Index (NFI), the Non-normed Fit Index (NNFI), the Comparative Fit Index (CFI), the Standardized Root Mean Square Residual (SRMR), the Adjusted Goodness-of-Fit Index (AGFI) and the chi-square/degrees of freedom ratio (χ²/df). The overall attitude towards animal welfare of the 2295 surveyed consumers was established by Ward's hierarchical clustering method. The consumers were classified as impassive, moderate or sensitive based on their responses to the 42 items. For clustering, the squared Euclidean distance was used, and a level of 10 was obtained as a minimum reference distance. Further, to obtain a more detailed assessment of the differences found between these 3 clusters, a further categorization of consumers as impassive, moderate or sensitive was established within each dimension. One-way ANOVA and Tukey's post-hoc test were used for comparison between groups obtained from the cluster analysis, and between groups of differing socio-economic characteristics. Statistical significance was set at p < 0.05. The CFA was performed using LISREL 8.71. All other data were analyzed with SPSS 21.0 for Windows (SPSS, Inc., Chicago).
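As an illustration of the sample-size calculation above, the arithmetic can be reproduced with a short Python sketch (the parameters s = 0.9, d = 0.1 and Zα = 1.96 are the pilot-study values quoted in the text; the function name is ours):

```python
def minimum_sample_size(s: float, z_alpha: float, d: float) -> float:
    """Minimum sample size n = s^2 * Z_alpha^2 / d^2 for populations larger than 10,000."""
    return (s ** 2) * (z_alpha ** 2) / (d ** 2)

n_region = minimum_sample_size(s=0.9, z_alpha=1.96, d=0.1)  # ~311.2, reported as 311 in the text
n_total = 7 * round(n_region)                               # 7 regions x 311 = 2177 consumers
print(round(n_region), n_total)
```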
Results and discussion

According to the data obtained from the 2295 surveyed consumers, the results of the EFA and the reliability analysis (Cronbach's alpha coefficients), as well as the corrected item-total correlations and the calculated means (± SD) for items and dimensions (cognitive, affective and behavioral) of the AWAS attitude scale, are presented in Table 1. The KMO and Bartlett's test of sphericity confirmed sampling adequacy and the applicability of the factor analysis for the AWAS (Bartlett's test of sphericity: χ² = 9432.162; p < 0.001 and KMO = 0.915). The 42 items collected under 3 dimensions explained 72.003% of the total variance of the model. The cognitive dimension, which included 20 items and accounted for 32.439% of the total variance, had the highest relative weight on the scale, followed by the behavioral (21.228%) and affective/emotional (18.336%) dimensions. Factor loadings of all items were higher than 0.40. Cronbach's alpha coefficients for the reliability analysis were calculated as 0.853 for the cognitive dimension, 0.832 for the affective dimension, and 0.845 for the behavioral dimension, with 0.829 for the overall scale comprising all 42 items. Every coefficient was above the critical value of 0.70. The corrected item-total correlation values exceeded 0.35 (Table 1). A negative correlation was found for the 2nd item (A2) in the affective dimension. The Cronbach's alpha coefficients were found to be high (Table 1). According to the responses given by consumers, the calculated means (x̄ ± SD) for the cognitive, affective and behavioral dimensions were 4.02 ± 0.59, 4.01 ± 0.57 and 3.55 ± 0.78, respectively, indicating that there is a more impassive (negative) attitude in the behavioral dimension than in the cognitive or the affective dimensions. All these values were above 3 on the 5-point Likert score. When considering the calculated means within the cognitive dimension, items C12 (x̄ = 3.37), C13 (x̄ = 3.30) and C15 (x̄ = 3.20) showed lower values, i.e. consumers' attitudes towards these statements tended to be more negative when compared to other items in this same dimension. As for the behavioral and affective dimensions, the lower means were found for items B2 (x̄ = 3.12), B12 (x̄ = 3.12) and B3 (x̄ = 3.20), and items A2 (x̄ = 2.89) and A1 (x̄ = 3.67), respectively, revealing more negative consumer attitudes for these particular entries. The calculated overall mean for the scale was 3.89 ± 0.54. The CFA, which tested the factor structure to determine the adequacy of the dimensions as well as how strongly the items belong to each dimension, is presented in Figure 1. The items within this figure are identified with letters that correspond to labels and statements found in Table 1. The fit indices 23,24 for construct validity in the CFA are shown in Table 2. The RMSEA, NFI, SRMR, and AGFI indicate an acceptable fit, whereas the NNFI, the CFI and the χ²/df denote a good fit. Results for the fit indices, the standardized partial correlation coefficients (seen to the right of the item labels in Figure 1), and the error covariance values (seen to the left in Figure 1) showed that a three-factor model fits the data adequately. The classification of consumers according to their overall attitudes towards animal welfare (which included all 42 items) was done by a clustering analysis.
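As a minimal sketch of the reliability analysis reported above, Cronbach's alpha can be computed directly from an item-score matrix; the data below are randomly generated and only illustrate the shape of the computation (respondents × items), not the study's actual scores:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: 2295 respondents x 42 items scored from 1 to 5
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(2295, 42)).astype(float)
print(cronbach_alpha(scores))
```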
Accordingly, consumers were initially divided into two groups, and then one of these groups was further divided into two other groups (due to the established minimum reference distance of 10), finally obtaining three overall groups. The overall attitude of consumers towards animal welfare was defined as low (impassive), middle (moderate) or high (sensitive). Results show that 16.9% (n = 387) of the surveyed consumers had an overall impassive attitude, 49.9% (n = 1146) a moderate attitude and 33.2% (n = 762) a sensitive attitude toward animal welfare. To obtain a more detailed assessment of the cluster attitudes of consumers, the impassive, moderate and sensitive grouping was additionally executed within each dimension and compared by ANOVA (Table 3). Consumers with an impassive attitude towards animal welfare had the lowest means within each of the three dimensions in the analysis (Table 3). The association between consumer attitudes and demographic variables was examined and no differences were observed in terms of gender, age or marital status (p > 0.05). However, differences were found for education and socio-economic levels (p < 0.01), with consumers with lower education and socio-economic levels having a more impassive overall attitude towards animal welfare (Table 4). In this study, a bespoke scale (AWAS) consisting of 42 items within 3 dimensions was developed to determine an overall consumer attitude towards animal welfare. The 42 items included in this scale were allocated according to the ABC basis of social psychology, 20 i.e. the affective, behavioral and cognitive dimensions. The affective dimension comprises items measuring emotions (happiness, fear, anxiety, etc.) of individuals. The behavioral dimension comprises items regarding active responses related to animal welfare. The cognitive dimension contains items that express ideas of individuals and include some basic knowledge. According to the results of the EFA, the 42 items included in the three dimensions explained more than two-thirds of the total variance in the study. The results of the EFA, the CFA, 25 Cronbach's alpha 26 and the corrected item-total correlations 27 confirmed the validity and reliability of the AWAS without removing any items. The consumer attitudes for each dimension and the overall attitude of the consumers towards animal welfare were defined by the mean and standard deviation values of the Likert scale. Interestingly, the overall scores for the behavioral dimension were lower than those for the cognitive and affective dimensions. This indicates that consumers can have positive attitudes towards animal welfare in the cognitive and affective dimensions yet tend to fail to behave accordingly. This implies that consumers may display unpredictable behavior with respect to acquiring food of animal origin produced under high animal welfare standards. These results may arise because the respondents were more sensitive 28 to norms consisting of beliefs and values, or because they were inclined to answer the survey questions in a socially expected way. Alternatively, participants may have answered the cognitive and affective dimension items according to accepted social norms, whereas the behavioral dimension items may have been approached based on their daily purchasing routine. Several studies 5,14 have in fact reported that individuals tend to respond to surveys as members of a social group and give more importance to animal welfare than is actually the case.
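The clustering step described above (Ward's hierarchical method on the 42 item scores, cut into three groups) can be sketched as follows; note that this is an assumption-laden illustration: the data are randomly generated, and SciPy's Ward linkage is defined on Euclidean distances, whereas the study used squared Euclidean distances in SPSS:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical item-score matrix (respondents x 42 items), standing in for the survey data
rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(2295, 42)).astype(float)

Z = linkage(scores, method="ward")               # Ward's hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the dendrogram into three clusters

# Order clusters by their mean attitude score to label them impassive / moderate / sensitive
cluster_means = {k: scores[labels == k].mean() for k in np.unique(labels)}
print(sorted(cluster_means.items(), key=lambda kv: kv[1]))
```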
Moreover, Grunert 28 reported that people approached animal welfare in accordance with social norms and showed high sensitivity towards this issue, yet still purchased products from the production systems that they strongly criticized as consumers. In the cognitive dimension, consumers stated that the naming, slaughtering and religious sacrificing of animals (religious slaughter without stunning) did not considerably affect animal welfare. Results from answers to items in the affective dimension revealed that animals are mainly believed to have been created for human use and are not generally considered as individuals. This supports the argument that the values and norms of the participants may have a utilitarian basis, where farm animals are raised for eggs, milk and meat, or that their slaughtering is a legitimate right. 5 This may also be valid for the religious sacrifice of animals as a fundamental form of worship, where values and norms might be embedded with strong religious foundations. 1,5 However, since participants were not asked whether they partake in religious practices such as animal sacrifice, the answers to this particular item may be biased, as they may be affected by whether or not participants practice a religion. In terms of the behavioral dimension, Turkish consumers do not appear to give much consideration to animal welfare, and hence to the potential purchase of animal-origin goods produced with high animal welfare standards. Their interest in communicating animal welfare-related issues is also low. The fact that animal welfare issues have only very recently (2018) been put on the agenda, in the context of Turkey's European Union accession process, with only poultry eggs carrying labels according to production type, suggests that the conceptual knowledge of Turkish consumers in terms of animal welfare is still inadequate. 29 The lack of animal-friendly production labels for milk, meat and other animal products on the market, the absence of promotion or advertising of these products, and the limited activities supporting animal welfare sponsored by producer or consumer organizations also contribute to the inadequate information available to Turkish consumers on animal welfare practices. Three basic groups were determined as a result of the cluster analysis, to establish an overall attitude of Turkish consumers toward animal welfare, and differences between them were found. The consumers were defined as impassive (low), moderate (middle) or sensitive (high). Results show that half of the consumers in Turkey were considered to have a moderate overall attitude towards animal welfare, while one out of every six was impassive, and 33.2% of the consumers fell into the sensitive group. The proportion of consumers with a moderate attitude towards animal welfare presents an area of opportunity for new products with high animal welfare standards on the Turkish market. Moderate individuals may be influenced to display more sensitive attitudes following new personal experiences, changes in social and living environments and improved knowledge of animal welfare. 14 As stated above, the low proportion of participants placed in the sensitive consumer segment could relate to the fact that animal welfare is a relatively new concept in Turkey, and legislation changes are still ongoing. When the demographics of the participants were analyzed, the education and socio-economic levels of consumers in Turkey also seemed to have an impact on the overall consumer attitude toward animal welfare.
In fact, consumers within the lower education and socio-economic levels more frequently showed an impassive attitude than consumers at other levels of these categories. There is a wide range of literature on how demographic factors such as socio-economic structure and education level can affect animal welfare perception and attitudes. 11,12,14,30 In fact, a positive association between animal welfare concern and higher education and income levels has been found. The fact that people with lower education and income exhibit a more negative animal welfare attitude may relate to their need to assign utilitarian values and norms to products, and possibly to insufficient knowledge about the impact of industrial production on animal welfare, as well as on human and environmental health. Therefore, these consumers may be less inclined to have a sensitive attitude in relation to ethical purchasing. Similar results have been reported by Kendall et al. 12 and Kılıç and Bozkurt. 30 In addition, consumers with a lower socio-economic profile may have a stronger connection to rural areas and agriculture, reinforcing their utilitarian approach to animal welfare. 12,14 Moreover, literature reports argue that other social factors, such as life experiences, having children and pets, or being a vegetarian, may also enhance a sensitive attitude towards animal welfare standards in production systems. 1,6,12,14

Conclusion

The attitude scores of Turkish consumers in the cognitive, affective and behavioral dimensions, as well as an overall attitude score of participants toward animal welfare, were determined in this study. The proportion of sensitive attitudes was higher for the cognitive and affective dimensions than for the behavioral dimension. That is, while consumers did not approve of inhumane practices or low animal welfare production standards, this was not reflected in their behavior and hence their potential purchasing habits. Increased knowledge of the positive effects of humane animal production on derived products and on the environment, as well as the introduction of improved animal welfare regulations, the development of product label follow-up habits, the introduction of more animal-friendly production-type products, and increased advertising and awareness activities by retailers and consumer organizations, may progressively have a positive impact on overall consumer behavior. This could enhance the potential marketing opportunities for animal-friendly products in Turkey in the near to medium term. However, further studies are warranted to clearly ascertain the understanding, opinions, perceptions, attitudes and behaviors of Turkish consumers regarding sustainable animal production strategies.
Sparse models for Computer Vision

The representation of images in the brain is known to be sparse. That is, as neural activity is recorded in a visual area (for instance the primary visual cortex of primates), only a few neurons are active at a given time with respect to the whole population. It is believed that such a property reflects the efficient match of the representation with the statistics of natural scenes. Applying such a paradigm to computer vision therefore seems a promising approach towards more biomimetic algorithms. Herein, we will describe a biologically-inspired approach to this problem. First, we will describe an unsupervised learning paradigm which is particularly adapted to the efficient coding of image patches. Then, we will outline a complete multi-scale framework (SparseLets) implementing a biologically inspired sparse representation of natural images. Finally, we will propose novel methods for integrating prior information into these algorithms and provide some preliminary experimental results. We will conclude by giving some perspective on applying such algorithms to computer vision. More specifically, we will propose that bio-inspired approaches may be applied to computer vision using predictive coding schemes, sparse models being one simple and efficient instance of such schemes.

Efficiency and sparseness in biological representations of natural images

The central nervous system is a dynamical, adaptive organ which constantly evolves to provide optimal decisions 1 for interacting with the environment. The early visual pathways provide a powerful system for probing and modeling these mechanisms. For instance, the primary visual cortex of primates (V1) is absolutely central for most visual tasks. There, it is observed that some neurons from the input layer of V1 present a selectivity for localized, edge-like features, as represented by their "receptive fields" (Hubel and Wiesel, 1968). Crucially, there is experimental evidence for sparse firing in the neocortex (Barth and Poulet, 2012; Willmore et al., 2011) and in particular in V1. A representation is sparse when each input signal is associated with a relatively small sub-set of simultaneously activated neurons within a whole population. For instance, the orientation selectivity of simple cells is sharper than the selectivity that would be predicted by linear filtering. Such a procedure produces a rough "sketch" of the image on the surface of V1 that is believed to serve as a "blackboard" for higher-level cortical areas (Marr, 1983). However, it is still largely unknown how neural computations act in V1 to represent the image. More specifically, what is the role of sparseness, as a generic neural signature, in the global function of neural computations? A popular view is that such a population of neurons operates such that relevant sensory information from the retino-thalamic pathway is transformed (or "coded") efficiently. Such an efficient representation will allow decisions to be taken optimally in higher-level areas. In this framework, optimality is defined in terms of information theory (Attneave, 1954; Atick, 1992; Wolfe et al., 2010). For instance, the representation produced by the neural activity in V1 is sparse: it is believed that this reduces redundancies and allows edges in the image to be better segregated (Field, 1994; Froudarakis et al., 2014).
This optimization operates under biological constraints, such as the limited bandwidth of information transfer to higher processing stages or the limited amount of metabolic resources (energy or wiring length). More generally, it allows an increase in the storage capacity of associative memories before memory patterns start to interfere with each other (Palm, 2013). Moreover, it is now widely accepted that this redundancy reduction is achieved in a neural population through lateral interactions. Indeed, a link between anatomical data and a functional connectivity between neighboring representations of edges has been found (Bosking et al., 1997), though these conclusions were more recently refined to show that this process may be more complex (Hunt et al., 2011). By linking neighboring neurons representing similar features, one thus allows a more efficient representation in V1. As computer vision systems are subject to similar constraints, applying such a paradigm therefore seems a promising approach towards more biomimetic algorithms. It is believed that such a property reflects the efficient match of the representation with the statistics of natural scenes, that is, with behaviorally relevant sensory inputs. Indeed, sparse representations are prominently observed for cortical responses to natural stimuli (Field, 1987; Vinje and Gallant, 2000; DeWeese et al., 2003; Baudot et al., 2013). As the function of neural systems mostly emerges from unsupervised learning, it follows that these systems are adapted to the inputs which are behaviorally the most common and important. More generally, by being adapted to natural scenes, this shows that sparseness is a neural signature of an underlying optimization process. In fact, one goal of neural computation in low-level sensory areas such as V1 is to provide relevant predictions (Rao and Ballard, 1999; Spratling, 2011). This is crucial for living beings, as they are often confronted with noise (internal to the brain or external, such as in low-light conditions) and ambiguities (such as inferring a three-dimensional slant from a bi-dimensional retinal image). Also, the system has to compensate for inevitable delays, such as the delay from light stimulation to activation in V1, which is estimated to be around 50 ms in humans. For instance, a tennis ball moving at 20 m/s at one meter in the frontal plane elicits an input activation in V1 corresponding to around 45° of visual angle behind its physical position (Perrinet et al., 2014). Thus, to be able to translate such knowledge to the computer vision community, it is crucial to better understand why the neural processes that produce sparse coding are efficient.

Sparseness induces neural organization

A breakthrough in the modeling of the representation in V1 was the discovery that sparseness is sufficient to induce the emergence of receptive fields similar to V1 simple cells (Olshausen and Field, 1996). This reflects the fact that, at the learning time scale, coding is optimized relative to the statistics of natural scenes such that independent components of the input are represented (Olshausen and Field, 1997; Bell and Sejnowski, 1997).
The emergence of edge-like simple cell receptive fields in the input layer of area V1 of primates may thus be considered as a coupled coding and learning optimization problem: at the coding time scale, the sparseness of the representation is optimized for any given input, while at the learning time scale, synaptic weights are tuned to achieve on average an optimal representation efficiency over natural scenes. This theory has made it possible to connect the different fields by providing a link between information theory models, neuromimetic models and physiological observations. In practice, most sparse unsupervised learning models aim at optimizing a cost defined on prior assumptions on the sparseness of the representation. These sparse learning algorithms have been applied both to images (Fyfe and Baddeley, 1995; Olshausen and Field, 1996; Zibulevsky and Pearlmutter, 2001; Perrinet et al., 2004; Rehn and Sommer, 2007; Doi et al., 2007; Perrinet, 2010) and to sounds (Lewicki and Sejnowski, 2000; Smith and Lewicki, 2006). Sparse coding may also be relevant to the amount of energy the brain needs to use to sustain its function. The total neural activity generated in a brain area is inversely related to the sparseness of the code; therefore, the total energy consumption decreases with increasing sparseness. As a matter of fact, the probability distribution functions of neural activity observed experimentally can be approximated by so-called exponential distributions, which have the property of maximizing information transmission for a given mean level of activity (Baddeley et al., 1997). To account for such constraints, some models thus directly compute a sparseness cost based on the representation's distribution. For instance, the kurtosis corresponds to the 4th statistical moment (the first three moments being, in order, the mean, variance and skewness) and measures how the statistics deviate from a Gaussian: a positive kurtosis indicates that the distribution has a "heavier tail" than a Gaussian of similar variance, and thus corresponds to a sparser distribution. Based on such observations, other similar statistical measures of sparseness have been derived in the neuroscience literature (Vinje and Gallant, 2000). A more general approach is to derive a representation cost. For instance, learning is accomplished in the SparseNet algorithmic framework (Olshausen and Field, 1997) on image patches taken from natural images as a sequence of coding and learning steps. First, sparse coding is achieved using a gradient descent over a convex cost. We will see later in this chapter how this cost is derived from a prior on the probability distribution function of the coefficients and how it favors the sparseness of the representation. At this step, the coding is performed using the current state of the "dictionary" of receptive fields. Then, knowing this sparse solution, learning is defined as slowly changing the dictionary using Hebbian learning (Hebb, 1949). As we will see later, the parameterization of the prior has a major impact on the results of the sparse coding, and thus on the emergence of edge-like receptive fields, and requires proper tuning. Yet, this class of models provides a simple solution to the problem of sparse representation in V1. However, these models are quite abstract and assume that neural computations may estimate some rather complex measures such as gradients, a problem that may also be faced by neuromorphic systems.
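As a minimal illustration of kurtosis as a sparseness measure, the following sketch compares the excess kurtosis of Gaussian coefficients with that of a heavier-tailed (Laplacian) set of coefficients of the same variance; the distributions are merely illustrative stand-ins for measured coefficient statistics:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
gaussian_coeffs = rng.normal(scale=1.0, size=100_000)
laplacian_coeffs = rng.laplace(scale=1.0 / np.sqrt(2), size=100_000)  # same unit variance

print(kurtosis(gaussian_coeffs))   # excess kurtosis close to 0 for a Gaussian
print(kurtosis(laplacian_coeffs))  # close to 3: heavier tails, i.e. a sparser distribution
```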
Efficient, realistic implementations have been proposed which show that imposing sparseness may indeed guide neural organization in neural network models; see for instance (Zylberberg et al., 2011; Hunt et al., 2013). Additionally, it has also been shown that, in a neuromorphic model, an efficient coding hypothesis links the sparsity and selectivity of neural responses (Blättler and Hahnloser, 2011). More generally, such neural signatures are reminiscent of the shaping of neural activity to account for contextual influences. For instance, it is observed that, depending on the context outside the receptive field of a neuron in area V1, the tuning curve may demonstrate a modulation of its orientation selectivity. This was accounted for, for instance, as a way to optimize the coding efficiency of a population of neighboring neurons (Seriès et al., 2004). As such, sparseness is a relevant neural signature for a large class of neural computations implementing efficient coding.

Outline: Sparse models for Computer Vision

As a consequence, sparse models provide a fruitful approach for computer vision. It should be noted that other popular approaches for taking advantage of sparse representations exist. The most popular is compressed sensing (Ganguli and Sompolinsky, 2012), for which it has been proven that, assuming sparseness of the input, it is possible to reconstruct the input from a sparse choice of linear coefficients computed from randomly drawn basis functions. Note also that some studies focus on temporal sparseness. Indeed, by computing for a given neuron the relative number of active events within a given time window, one computes the so-called lifetime sparseness (see for instance (Willmore et al., 2011)). We will see below that this measure may be related to population sparseness. For a review of sparse modeling approaches, we refer to (Elad, 2010). Herein, we will focus on the particular sub-set of such models based on their biological relevance. Indeed, we will rather focus on biomimetic sparse models as tools to shape future computer vision algorithms (Benoit et al., 2010; Serre and Poggio, 2010). In particular, we will not review models which mimic neural activity, but rather focus on algorithms which mimic their efficiency, bearing in mind the constraints that are linked to neural systems (no central clock, internal noise, parallel processing, metabolic cost, wiring length). For that purpose, we will complement some previous studies (Perrinet et al., 2004; Fischer et al., 2007a; Perrinet, 2008, 2010; for a review see Perrinet and Masson, 2007) by putting these results in the light of the most recent theoretical and physiological findings. This chapter is organized as follows. First, in Section 14.2 we will outline how we may implement the unsupervised learning algorithm at a local scale for image patches. Then, in Section 14.3, we will extend such an approach to full-scale natural images by defining the SparseLets framework. Such a formalism will then be extended in Section 14.4 to include context modulation, for instance from higher-order areas. These different algorithms (from the local scale of image patches to more global scales) will each be accompanied by a supporting implementation (with the source code) for which we will show example usage and results. We will in particular highlight novel results and then draw some conclusions on the perspective of sparse models for computer vision.
More specifically, we will propose that bio-inspired approaches may be applied to computer vision using predictive coding schemes, sparse models being one simple and efficient instance of such schemes.

14.2 What is sparseness? Application to image patches

Definitions of sparseness

In low-level sensory areas, the goal of neural computations is to generate efficient intermediate representations, as we have seen that this allows more efficient decision making. Classically, a representation is defined as the inversion of an internal generative model of the sensory world, that is, by inferring the sources that generated the input signal. Formally, as in (Olshausen and Field, 1997), we define a Generative Linear Model (GLM) for describing natural, static, grayscale image patches I (represented by column vectors of dimension L pixels), by setting a "dictionary" of M images (also called "atoms" or "filters") as the L × M matrix Φ = {Φ_i}_{1≤i≤M}. Knowing the associated "sources" as a vector of coefficients a = {a_i}_{1≤i≤M}, the image is defined using matrix notation as a sum of weighted atoms:

I = Φa + n = Σ_{1≤i≤M} a_i Φ_i + n        (14.1)

where n is a Gaussian additive noise image. This noise, as in (Olshausen and Field, 1997), is scaled to a variance of σ_n² to achieve decorrelation by applying Principal Component Analysis to the raw input images, without loss of generality since this preprocessing is invertible. Generally, the dictionary Φ may be much larger than the dimension of the input space (that is, M ≫ L) and it is then said to be over-complete. However, given an over-complete dictionary, the inversion of the GLM leads to a combinatorial search and, typically, there may exist many coding solutions a to Eq. 14.1 for one given input I. The goal of efficient coding is to find, given the dictionary Φ and for any observed signal I, the "best" representation vector, that is, as close as possible to the sources that generated the signal. Assuming, for simplicity, that each individual coefficient is represented in the neural activity of a single neuron, this would justify the fact that this activity is sparse. It is therefore necessary to define an efficiency criterion in order to choose between these different solutions. Using the GLM, we will infer the "best" coding vector as the most probable. In particular, from the physics of the synthesis of natural images, we know a priori that image representations are sparse: they are most likely generated by a small number of features relative to the dimension M of the representation space. Similarly to Lewicki and Sejnowski (2000), this can be formalized in the probabilistic framework defined by the GLM (see Eq. 14.1), by assuming that, knowing the prior distribution of the coefficients a_i for natural images, the representation cost of a for one given natural image is:

C(a|I, Φ) = −log P(a|I, Φ) = log P(I) + 1/(2σ_n²) ‖I − Φa‖² − Σ_i log P(a_i|Φ)        (14.2)

where P(I) is the partition function, which is independent of the coding (and that we thus ignore in the following), and ‖·‖ is the L2-norm in image space. This efficiency cost is measured in bits if the logarithm is of base 2, as we will assume without loss of generality thereafter. For any representation a, the cost value corresponds to the description length (Rissanen, 1978): on the right hand side of Eq. 14.2, the second term corresponds to the information from the image which is not coded by the representation (reconstruction cost) and thus to the information that can at best be encoded using entropic coding pixel by pixel (that is, the negative log-likelihood −log P(I|a, Φ) in Bayesian terminology; see chapter 009_series for Bayesian models applied to computer vision). The third term S(a|Φ) = −Σ_i log P(a_i|Φ) is the representation or sparseness cost: it quantifies representation efficiency as the coding length of each coefficient of a which would be achieved by entropic coding knowing the prior and assuming that they are independent. The rightmost penalty term (see Equation 14.2) thus gives a definition of sparseness S(a|Φ) as the sum of the log prior of the coefficients. In practice, the sparseness of coefficients for natural images is often defined by an ad hoc parameterization of the shape of the prior. For instance, the parameterization in Olshausen and Field (1997) yields the coding cost:

C(a|I, Φ) = 1/(2σ_n²) ‖I − Φa‖² + β Σ_i log(1 + a_i²/σ²)        (14.3)

where β corresponds to the steepness of the prior and σ to its scaling (see Figure 13.2 from (Olshausen, 2002)). This choice is often favored because it results in a convex cost for which known numerical optimization methods such as conjugate gradient may be used. In particular, these terms may be put in parallel with regularization terms that are used in computer vision. For instance, an L2-norm penalty term corresponds to Tikhonov regularization (Tikhonov, 1977), while an L1-norm term corresponds to the Lasso method. See chapter 003_holly_gerhard for a review of possible parameterizations of this norm, for instance by using nested Lp norms. Classical implementations of sparse coding therefore rely on a parametric measure of sparseness. Let's now derive another measure of sparseness. Indeed, a non-parametric form of sparseness cost may be defined by considering that neurons representing the vector a are either active or inactive. In fact, the spiking nature of neural information demonstrates that the transition from an inactive to an active state is far more significant at the coding time scale than smooth changes of the firing rate. This is for instance perfectly illustrated by the binary nature of the neural code in the auditory cortex of rats (DeWeese et al., 2003). Binary codes also emerge as optimal neural codes for rapid signal transmission (Bethge et al., 2003; Nikitin et al., 2009). This is also relevant for neuromorphic systems which transmit discrete events (such as a network packet). With a binary event-based code, the cost is only incremented when a new neuron gets active, regardless of the analog value. Stating that an active neuron carries a bounded amount of information of λ bits, an upper bound for the representation cost of neural activity on the receiver end is proportional to the count of active neurons, that is, to the ℓ0 pseudo-norm ‖a‖_0:

C(a|I, Φ) = 1/(2σ_n²) ‖I − Φa‖² + λ ‖a‖_0        (14.4)

This cost is similar to information criteria such as the Akaike Information Criterion (Akaike, 1974) or the distortion rate (Mallat, 1998, p. 571). This simple non-parametric cost has the advantage of being dynamic: the number of active cells for one given signal grows in time with the number of spikes reaching the target population. But Eq. 14.4 defines a harder cost to optimize (in comparison to Equation 14.3 for instance), since the hard ℓ0 pseudo-norm sparseness leads to a non-convex optimization problem which is NP-complete with respect to the dimension M of the dictionary (Mallat, 1998, p. 418).
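To make these two definitions concrete, here is a minimal sketch (using our own names and toy dimensions, not the chapter's code) that evaluates the convex parametric cost of Eq. 14.3 and the non-parametric ℓ0 cost of Eq. 14.4 for a given coefficient vector:

```python
import numpy as np

def parametric_cost(I, Phi, a, sigma_n, beta, sigma):
    """Eq. 14.3: quadratic reconstruction error plus a smooth log-prior sparseness penalty."""
    residual = I - Phi @ a
    return residual @ residual / (2 * sigma_n**2) + beta * np.sum(np.log1p((a / sigma) ** 2))

def l0_cost(I, Phi, a, sigma_n, lam):
    """Eq. 14.4: quadratic reconstruction error plus lambda times the l0 pseudo-norm of a."""
    residual = I - Phi @ a
    return residual @ residual / (2 * sigma_n**2) + lam * np.count_nonzero(a)

# Toy over-complete dictionary: L = 64 "pixels", M = 256 atoms with unit norm
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
a = np.zeros(256)
a[rng.choice(256, size=5, replace=False)] = rng.normal(size=5)  # a 5-sparse source vector
I = Phi @ a + 0.01 * rng.normal(size=64)
print(parametric_cost(I, Phi, a, sigma_n=0.01, beta=1.0, sigma=0.3),
      l0_cost(I, Phi, a, sigma_n=0.01, lam=1.0))
```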
14.2.2 Learning to be sparse: the SparseNet algorithm

We have seen above that we may define different models for measuring sparseness depending on our prior assumption on the distribution of coefficients. Note first that, assuming that the statistics are stationary (more generally ergodic), these measures of sparseness across a population should necessarily imply a lifetime sparseness for any neuron. Such a property is essential to extend results from electro-physiology. Indeed, it is easier to record a restricted number of cells than a full population (see for instance (Willmore et al., 2011)). However, the main property in terms of efficiency is that the representation should be sparse at any given time, that is, in our setting, at the presentation of each novel image. Now that we have defined sparseness, how could we use it to induce neural organization? Indeed, given a sparse coding strategy that optimizes any representation efficiency cost as defined above, we may derive an unsupervised learning model by optimizing the dictionary Φ over natural scenes. On the one hand, the flexibility in the definition of the sparseness cost leads to a wide variety of proposed sparse coding solutions (for a review, see (Pece, 2002)) such as numerical optimization (Olshausen and Field, 1997), non-negative matrix factorization (Lee and Seung, 1999; Ranzato et al., 2007) or Matching Pursuit (Perrinet et al., 2004; Smith and Lewicki, 2006; Rehn and Sommer, 2007; Perrinet, 2010). They are all derived from correlation-based inhibition, since this is necessary to remove redundancies from the linear representation. This is consistent with the observation that lateral interactions are necessary for the formation of elongated receptive fields (Bolz and Gilbert, 1989; Wolfe et al., 2010). On the other hand, these methods share the same GLM model (see Eq. 14.1) and, once the sparse coding algorithm is chosen, the learning scheme is similar. As a consequence, after every coding sweep, we increase the efficiency of the dictionary Φ with respect to Eq. 14.2. This is achieved using an online gradient descent approach given the current sparse solution, ∀i:

Φ_i ← Φ_i + η a_i (I − Φa)        (14.5)

where η is the learning rate. Similarly to Eq. 17 in (Olshausen and Field, 1997) or to Eq. 2 in (Smith and Lewicki, 2006), the relation is a linear "Hebbian" rule (Hebb, 1949), since it enhances the weight of neurons proportionally to the correlation between pre- and post-synaptic neurons. Note that there is no learning for non-activated coefficients. The novelty of this formulation compared to other linear Hebbian learning rules such as (Oja, 1982) is to take advantage of the sparse representation, hence the name Sparse Hebbian Learning (SHL). The class of SHL algorithms is unstable without homeostasis, that is, without a process that maintains the system in a certain equilibrium. In fact, starting with a random dictionary, the first filters to be learned are more likely to correspond to salient features (Perrinet et al., 2004) and are therefore more likely to be selected again in subsequent learning steps. In SparseNet, the homeostatic gain control is implemented by adaptively tuning the norm of the filters. This method equalizes the variance of coefficients across neurons using a geometric stochastic learning rule. The underlying heuristic is that this introduces a bias in the choice of the active coefficients. In fact, if a neuron is not selected often, the geometric homeostasis will decrease the norm of the corresponding filter, and therefore (from Eq. 14.1 and the conjugate gradient optimization) this will increase the value of the associated scalar. Finally, since the prior functions defined in Eq. 14.3 are identical for all neurons, this will increase the relative probability that the neuron is selected with a higher relative value.

[Figure 14.1 caption fragment: We show the probability distribution function of sparse coefficients obtained by our method compared to (Olshausen and Field, 1996), first with random dictionaries (respectively 'ssc-init' and 'cg-init') and second with the dictionaries obtained after convergence of the respective learning schemes (respectively 'ssc' and 'cg'). At convergence, sparse coefficients are more sparsely distributed than initially, with more kurtotic probability distribution functions for 'ssc' in both cases, as can be seen in the "longer tails" of the distribution. (C) We evaluate the coding efficiency of both methods by plotting the average residual error (L2 norm) as a function of the ℓ0 pseudo-norm. This provides a measure of the coding efficiency for each dictionary over the set of image patches (error bars represent one standard deviation). Best results are those providing a lower error for a given sparsity (better compression) or a lower sparseness for the same error.]

The parameters of this homeostatic rule have a great importance for the convergence of the global algorithm. In (Perrinet, 2010), we have derived a more general homeostasis mechanism from the optimization of the representation efficiency through histogram equalization, which we will describe later (see Section 14.4.1).

Results: efficiency of different learning strategies

The different SHL algorithms simply differ by the coding step. This implies that they only differ by, first, how sparseness is defined at a functional level and, second, how the inverse problem corresponding to the coding step is solved at the algorithmic level. Most of the schemes cited above use a less strict, parametric definition of sparseness (like the convex L1-norm), but one for which a mathematical formulation of the optimization problem exists. Few studies such as (Liu and Jia, 2014; Peharz and Pernkopf, 2012) use the stricter ℓ0 pseudo-norm, as the coding problem then gets more difficult. A thorough comparison of these different strategies was recently presented in (Charles et al., 2012). See also (Aharon et al., 2006) for properties of the coding solutions to the ℓ0 pseudo-norm. Similarly, in (Perrinet, 2010), we preferred to retrieve an approximate solution to the coding problem to have a better match with the measure of efficiency Eq. 14.4. Such an algorithmic framework is implemented in the SHL-scripts package. 2 These scripts allow the retrieval of the database of natural images and the replication of the results of (Perrinet, 2010) reported in this section. With a correct tuning of parameters, we observed that different coding schemes show qualitatively a similar emergence of edge-like filters. The specific coding algorithm used to obtain this sparseness appears to be of secondary importance as long as it is adapted to the data and yields sufficiently efficient sparse representation vectors. However, the resulting dictionaries vary qualitatively among these schemes, and it was unclear which algorithm is the most efficient and what was the individual role of the different mechanisms that constitute SHL schemes. At the learning level, we have shown that the homeostasis mechanism had a great influence on the qualitative distribution of learned filters (Perrinet, 2010).
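The learning step of Eq. 14.5 and the gain-control heuristic just described can be sketched as follows; this is our own minimal illustration (function names, the running-variance argument and the exponent gamma are assumptions), not the SparseNet or SHL-scripts implementation:

```python
import numpy as np

def shl_update(Phi, I, a, eta=0.01):
    """One Sparse Hebbian Learning step (Eq. 14.5): Phi_i <- Phi_i + eta * a_i * (I - Phi a).
    Only atoms with non-zero coefficients are modified, as noted in the text."""
    residual = I - Phi @ a
    active = np.flatnonzero(a)
    Phi[:, active] += eta * np.outer(residual, a[active])
    return Phi

def gain_homeostasis(Phi, var_running, var_target, gamma=0.01):
    """Sketch of a SparseNet-style geometric gain control: a filter whose coefficient has a
    low running variance gets its norm decreased, which increases its future coefficients."""
    gain = (np.maximum(var_running, 1e-12) / var_target) ** gamma
    return Phi * gain[np.newaxis, :]
```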
Results are shown in Figure 14.1. This figure represents the qualitative results of the formation of edge-like filters (receptive fields). More importantly, it shows the quantitative results as the average decrease of the squared error as a function of the sparseness. This gives direct access to the cost as computed in Equation 14.4. These results are comparable with those of the SparseNet algorithm. Moreover, this solution, by giving direct access to the atoms (filters) that are chosen, provides a more direct tool to manipulate sparse components. One further advantage consists in the fact that this unsupervised learning model is non-parametric (compare with Eq. 14.3) and thus does not need to be parametrically tuned. Results show the role of homeostasis in the unsupervised algorithm. In particular, using the comparison of coding and decoding efficiency with and without this specific homeostasis, we have proven that cooperative homeostasis optimizes overall representation efficiency (see also Section 14.4.1). It is at this point important to note that in this algorithm we achieve an exponential convergence of the squared error (Mallat, 1998, p. 422), but also that this curve can be directly derived from the coefficients' values. Indeed, for N coefficients (that is, ‖a‖_0 = N), the squared error is equal to:

‖R_N‖² = ‖I‖² − Σ_{1≤k≤N} a_k²        (14.6)

where R_N is the residual after the N coefficients a_k have been extracted. As a consequence, the sparser the distribution of coefficients, the quicker the decrease of the residual energy. In the following section, we will describe different variations of this algorithm. To compare their respective efficiency, we will plot the decrease of the coefficients along with the decrease of the residual's energy. Using such tools, we will now explore whether such a property extends to full-scale images and not only to image patches, an important condition for using sparse models in computer vision algorithms.

14.3 SparseLets: a multi-scale, sparse, biologically inspired representation of natural images

14.3.1 Motivation: architecture of the primary visual cortex

Our goal here is to build practical algorithms of sparse coding for computer vision. We have seen above that it is possible to build an adaptive model of sparse coding that we applied to 12 × 12 image patches. Invariably, this has shown that the independent components of image patches are edge-like filters, such as are found in simple cells of V1. This model has shown that randomly chosen image patches may be described by a sparse vector of coefficients. Extending this result to full-field natural images, we can expect that this sparseness would increase by an order of magnitude. In fact, except in a densely cluttered image such as a close-up of a texture, natural images tend to have wide areas which are void (such as the sky, walls or uniformly filled areas). However, applying the SparseNet algorithm directly to full-field images is impossible in practice, as its computer simulation would require too much memory to store the over-complete set of filters. However, it is still possible to define these filters a priori and, herein, we will focus on a full-field sparse coding method whose filters are inspired by the architecture of the primary visual cortex. The first step of our method involves defining the dictionary of templates (or filters) for detecting edges. We use a log-Gabor representation, which is well suited to represent a wide range of natural images (Fischer et al., 2007a).
This representation gives a generic model of edges parameterized by their shape, orientation, and scale. We set the range of these parameters to match what has been reported for simple-cell responses in macaque primary visual cortex (V1). Indeed, log-Gabor filters are similar to standard Gabors and both are well fitted to V1 simple cells (Daugman, 1980). Log-Gabors are known to produce a sparse set of linear coefficients (Field, 1999). Like Gabors, these filters are defined by Gaussians in Fourier space, but their specificity is that log-Gabors have Gaussian envelopes in log-polar frequency space. This is consistent with physiological measurements which indicate that V1 cell responses are symmetric on the log-frequency scale. They have multiple advantages over standard Gabors: in particular, they have no DC component and, more generally, their envelopes more broadly cover the frequency space (Fischer et al., 2007b). In this chapter, we set the bandwidth of the Fourier representation of the filters to 0.4 and π/8, respectively, in the log-frequency and polar coordinates, to get a family of relatively elongated (and thus selective) filters (see Fischer et al. (2007b) and Figure 14.2-A for examples of such edges). Prior to the analysis of each image, we used the spectral whitening filter described by Olshausen and Field (1997) to provide a good balance of the energy of output coefficients (Perrinet et al., 2004; Fischer et al., 2007a). Such a representation is implemented in the LogGabor package. 3 This transform is linear and can be performed by a simple convolution repeated for every edge type. Following Fischer et al. (2007b), convolutions were performed in the Fourier (frequency) domain for computational efficiency. The Fourier transform allows for a convenient definition of the edge filter characteristics, and convolution in the spatial domain is equivalent to a simple multiplication in the frequency domain. By multiplying the envelope of the filter and the Fourier transform of the image, one may obtain a filtered spectral image that may be converted to a filtered spatial image using the inverse Fourier transform. We exploited the fact that, by omitting the symmetrical lobe of the envelope of the filter in the frequency domain, we obtain quadrature filters. Indeed, the output of this procedure gives a complex number whose real part corresponds to the response to the symmetrical part of the edge, while the imaginary part corresponds to the asymmetrical part of the edge (see Fischer et al. (2007b) for more details). More generally, the modulus of this complex number gives the energy response to the edge (which can be compared to the response of complex cells in area V1), while its argument gives the exact phase of the filter (from symmetric to non-symmetric). This property further expands the richness of the representation. Given a filter at a given orientation and scale, a linear convolution model provides a translation-invariant representation. Such invariance can be extended to rotations and scalings by choosing to multiplex these sets of filters at different orientations and spatial scales. Ideally, the parameters of edges would vary in a continuous fashion, giving full relative translation, rotation, and scale invariance. However, this is difficult to achieve in practice and some compromise has to be found.
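As a minimal sketch of the frequency-domain filtering described above, the following code builds a single-lobe log-Gabor envelope and applies it by multiplication in the Fourier domain; the envelope parameterization and variable names are ours (not the LogGabor package API), with bandwidths set to the values quoted in the text:

```python
import numpy as np

def loggabor_envelope(shape, f0, theta0, b_f=0.4, b_theta=np.pi / 8):
    """Single-lobe log-Gabor envelope in the Fourier domain: Gaussian in log-frequency
    (bandwidth b_f) times a Gaussian in orientation (bandwidth b_theta)."""
    h, w = shape
    fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    f = np.hypot(fx, fy)
    f[0, 0] = 1e-12                                   # avoid log(0); DC is zeroed below
    radial = np.exp(-0.5 * (np.log(f / f0) / b_f) ** 2)
    dtheta = np.angle(np.exp(1j * (np.arctan2(fy, fx) - theta0)))  # wrapped angular difference
    angular = np.exp(-0.5 * (dtheta / b_theta) ** 2)
    envelope = radial * angular
    envelope[0, 0] = 0.0                              # no DC component
    return envelope

# Quadrature filtering by multiplication in the frequency domain
image = np.random.rand(256, 256)                      # placeholder for a (whitened) natural image
env = loggabor_envelope(image.shape, f0=0.1, theta0=0.0)
response = np.fft.ifft2(np.fft.fft2(image) * env)     # complex: modulus = energy, angle = phase
```

Keeping only one lobe of the envelope (no symmetric counterpart at theta0 + π) is what makes the spatial response complex, matching the quadrature-filter construction described in the text.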
Indeed, though orthogonal representations are popular in computer vision due to their computational tractability, it is desirable in our context to have a relatively high over-completeness in the representation to achieve this invariance. For a given set of 256 × 256 images, we first chose to have 8 dyadic levels (that is, doubling the scale at each level) with 24 different orientations. Orientations are measured as a non-oriented angle in radians, by convention in the range from −π/2 to π/2 (but not including −π/2) with respect to the x-axis. Finally, each image is transformed into a pyramid of coefficients (Perrinet, 2008).

[Figure 14.2 caption fragment: The hue gives the orientation while the value gives the absolute value (white denotes a low coefficient). Note the redundancy of the linear representation, for instance at different scales.]

The SparseLets framework

The resulting dictionary of edge filters is over-complete. The linear representation would thus give a dense, relatively inefficient representation of the distribution of edges; see Figure 14.2-B. Therefore, starting from this linear representation, we searched instead for the most sparse representation. As we saw above in Section 14.2, minimizing the ℓ0 pseudo-norm (the number of non-zero coefficients) leads to an expensive combinatorial search with regard to the dimension of the dictionary (it is NP-hard). As proposed by Perrinet et al. (2004), we may approximate a solution to this problem using a greedy approach. Such an approach is based on the physiology of V1. Indeed, it has been shown that inhibitory interneurons decorrelate excitatory cells to drive sparse code formation (Bolz and Gilbert, 1989; King et al., 2013). We use this local architecture to iteratively modify the linear representation (Fischer et al., 2007a). In general, a greedy approach is applied when the optimal combination is difficult to solve globally, but can be solved progressively, one element at a time. Applied to our problem, the greedy approach corresponds to first choosing the single filter Φ_i that best fits the image along with a suitable coefficient a_i, such that the single source a_i Φ_i is a good match to the image. Examining every filter Φ_j, we find the filter Φ_i with the maximal correlation coefficient ("Matching" step):

i = ArgMax_j |⟨I, Φ_j / ‖Φ_j‖⟩|        (14.7)

where ⟨·, ·⟩ represents the inner product and ‖·‖ represents the L2 (Euclidean) norm. The index ("address") i gives the position (x and y), scale and orientation of the edge. We saw above that, since filters at a given scale and orientation are generated by a translation, this operation can be efficiently computed using a convolution, but we keep this notation for its generality. The associated coefficient is the scalar projection:

a_i = ⟨I, Φ_i / ‖Φ_i‖⟩        (14.8)

Second, knowing this choice, the image can be decomposed as

I = a_i Φ_i + R        (14.9)

where R is the residual image ("Pursuit" step). We then repeat this 2-step process on the residual (that is, with I ← R) until some stopping criterion is met. Note also that the norm of the filters has no influence in this algorithm on the matching step or on the reconstruction error. For simplicity and without loss of generality, we will thereafter set the norm of the filters to 1: ∀j, ‖Φ_j‖ = 1 (that is, the spectral energy sums to 1). Globally, this procedure gives us a sequential algorithm for reconstructing the signal using the list of sources (filters with coefficients), which greedily optimizes the ℓ0 pseudo-norm (i.e., achieves a relatively sparse representation given the stopping criterion).
The procedure is known as the Matching Pursuit (MP) algorithm (Mallat and Zhang, 1993), which has been shown to generate good approximations for natural images (Perrinet et al., 2004;Perrinet, 2010). We have included two minor improvements over this method: First, we took advantage of the response of the filters as complex numbers. As stated above, the modulus gives a response independent of the phase of the filter, and this value was used to estimate the best match of the residual image with the possible dictionary of filters (Matching step). Then, the phase was extracted as the argument of the corresponding coefficient and used to feed back onto the image in the Pursuit step. This modification allows for a phase-independent detection of edges, and therefore for a richer set of configurations, while preserving the precision of the representation. Second, we used a "smooth" Pursuit step. In the original form of the Matching Pursuit algorithm, the projection of the Matching coefficient is fully removed from the image, which allows for the optimal decrease of the energy of the residual and allows for the quickest convergence of the algorithm with respect to the 0 pseudo-norm (i.e., it rapidly achieves a sparse reconstruction with low error). However, this efficiency comes at a cost, because the algorithm may result in non-optimal representations due to choosing edges sequentially and not globally. This is often a problem when edges are aligned (e.g. on a smooth contour), as the different parts will be removed independently, potentially leading to a residual with gaps in the line. Our goal here is not necessarily to get the fastest decrease of energy, but rather to provide with the best representation of edges along contours. We therefore used a more conservative approach, removing only a fraction (denoted by α) of the energy at each pursuit step (for MP, α = 1). Note that in that case, Equation 14.6 has to be modified to account for the α parameter: We found that α = 0.8 was a good compromise between rapidity and smoothness. One consequence of using α < 1 is that, when removing energy along contours, edges can overlap; even so, the correlation is invariably reduced. Higher and smaller values of α were also tested, and gave representation results similar to those presented here. In summary, the whole coding algorithm is given by the following nested loops in pseudo-code: 1. draw a signal I from the database; its energy is E = I 2 , 2. initialize sparse vector s to zero and linear coefficients ∀j, a j =< I, Φ j >, 3. while the residual energy E = I 2 is above a given threshold do: (a) select the best match: i = ArgMax j |a j |, where | · | denotes the modulus, (b) increment the sparse coefficient: This class of algorithms gives a generic and efficient representation of edges, as illustrated by the example in Figure 14.3-A. We also verified that the dictionary used here is better adapted to the extraction of edges than Gabors (Fischer et al., 2007a). The performance of the algorithm can be measured quantitatively by reconstructing the image from the list of extracted edges. All simulations were performed using Python (version 2.7.8) with packages NumPy (version 1.8.1) and SciPy (version 0.14.0) (Oliphant, 2007) on a cluster of Linux computing nodes. Visualization was performed using Matplotlib (version 1.3.1) (Hunter, 2007). 
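The nested loops of the pseudo-code above can be written compactly as follows. This is a simplified sketch under two stated assumptions: the dictionary is kept as a real-valued matrix with unit-norm columns (for the complex quadrature filters the selection is on the modulus and the phase is fed back into the residual, as described above), and all correlations are recomputed at each step rather than updated incrementally. Parameter names and the stopping criterion are illustrative.

```python
import numpy as np

def sparse_code(image_vec, Phi, alpha=0.8, n_edges=2048, frac_energy=0.03):
    """Greedy sparse coding loop following the chapter's pseudo-code (sketch).

    Phi: (n_pixels, n_atoms) real dictionary with unit-norm columns; alpha < 1
    implements the 'smooth' Pursuit step (alpha = 1 recovers plain Matching
    Pursuit). Stops after n_edges atoms or when the residual energy falls
    below frac_energy times the initial energy.
    """
    residual = image_vec.astype(float)
    E0 = residual @ residual
    sparse = np.zeros(Phi.shape[1])
    for _ in range(n_edges):
        a = Phi.T @ residual                  # linear coefficients <R, Phi_j>
        i = np.argmax(np.abs(a))              # Matching: select on the modulus
        sparse[i] += alpha * a[i]             # increment the sparse coefficient
        residual = residual - alpha * a[i] * Phi[:, i]   # smooth Pursuit step
        if residual @ residual < frac_energy * E0:
            break
    return sparse, residual
```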
4 To be comparable, we measured the efficiency with respect to the relative 0 pseudo-norm in bits per unit of image surface (pixel): This is defined as the ratio of active coefficients times the numbers of bits required to code for each coefficient (that is, log 2 (M ), where M is total number of coefficients in the representation) over the size of the image. For different image and framework sizes, the lower this ratio, the higher the sparseness. As shown in Figure 14.3-B, we indeed see that sparseness increases relative to an increase in image size. This reflects the fact that sparseness is not only local (few edges coexist at one place) but is also spatial (edges are clustered, and most regions are empty). Such a behavior is also observed in V1 of monkeys as the size of the stimulation is increased from a stimulation over only the classical receptive field to 4 times around it (Vinje and Gallant, 2000). Note that by definition, our representation of edges is invariant to translations, scalings, and rotations in the plane of the image. We also performed the same edge extraction where images from the database were perturbed by adding independent Gaussian noise to each pixel such that signalto-noise ratio was halved. Qualitative results are degraded but qualitatively similar. In particular, edge extraction in the presence of noise may result in false positives. Quantitatively, one observes that the representation is slightly less sparse. This confirms our intuition that sparseness is causally linked to the efficient extraction of edges in the image. We controlled the quality of the reconstruction from the edge information such that the residual energy is less than 3% over the whole set of images, a criterion met on average when identifying 2048 edges per image for images of size 256 × 256 (that is, a relative sparseness of ≈ 0.01% of activated coefficients). (B) Efficiency for different image sizes as measured by the decrease of the residual's energy as a function of the coding cost (relative 0 pseudonorm). (B, inset) This shows that as the size of images increases, sparseness increases, validating quantitatively our intuition on the sparse positioning of objects in natural images. Note, that the improvement is non significant for a size superior to 128. The SparseLets framework thus shows that sparse models can be extended to full-scale natural images, and that increasing the size improves sparse models by a degree of order (compare a size of 16 with that of 256). Efficiency of the SparseLets framework To examine the robustness of the framework and of sparse models in general, we examined how results changed when changing parameters for the algorithm. In particular, we investigated the effect of filter parameters B f and B θ . We also investigated how the over-completeness factor could influence the result. We manipulated the number of discretization steps along the spatial frequency axis N f (that is, the number of layers in the pyramid) and orientation axis N θ . Results are summarized in Figure 14.4 and show that an optimal efficiency is achieved for certain values of these parameters. These optimal values are in the order of what is found for the range of selectivities observed in V1. Note that these values may change across categories. Further experiments should provide with an adaptation mechanism to allow finding the best parameters in an unsupervised manner. These particular results illustrate the potential of sparse models in computer vision. 
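To make the cost measure used in these comparisons explicit, a minimal sketch is given below; the names are illustrative, and the figure quoted in the comment is plain arithmetic on the stated pyramid size, not a result reported in the chapter.

```python
import numpy as np

def coding_cost_bits_per_pixel(sparse, image_shape):
    """Relative l0 coding cost: (# active coefficients) * log2(M) / (# pixels).

    `sparse` is the coefficient vector returned by the greedy coder; M is the
    total number of coefficients in the pyramid (the dictionary size).
    """
    M = sparse.size
    n_active = np.count_nonzero(sparse)
    n_pixels = np.prod(image_shape)
    return n_active * np.log2(M) / n_pixels

# e.g. 2048 edges in a 256x256 image with an 8-scale, 24-orientation pyramid
# (M = 256*256*8*24 coefficients) gives roughly 0.74 bit per pixel.
```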
Indeed, one main advantage of these methods is to explicitly represent edges. A direct application of sparse models is the ability of the representation to reconstruct these images and therefore to use it for compression (Perrinet et al., 2004). Other possible applications are image filtering or edge manipulation for texture synthesis or denoising (Portilla and Simoncelli, 2000). Recent advances have shown that such representations could be used for the classification of natural images (see chapter 013_theriault or for instance (Perrinet and Bednar, 2015)) or of medical images of emphysema (Nava et al., 2013). Classification was also used in a sparse model for the quantification of artistic style through sparse coding analysis in the drawings of Pieter Bruegel the Elder (Hughes et al., 2010). These examples illustrate the different applications of sparse representations and in the following we will illustrate Figure 14.4: Effect of filters' parameters on the efficiency of the SparseLets framework As we tested different parameters for the filters, we measured the gain in efficiency for the algorithm as the ratio of the code length to achieve 85% of energy extraction relative to that for the default parameters (white bar). The average is computed on the same database of natural images and error bars denote the standard deviation of gain over the database. First, we studied the effect of the bandwidth of filters respectively in the (A) spatial frequency and (B) orientation spaces. The minimum is reached for the default parameters: this shows that default parameters provide an optimal compromise between the precision of filters in the frequency and position domains for this database. We may also compare pyramids with different number of filters. Indeed from Equation 14.4, efficiency (in bits) is equal to the number of selected filters times the coding cost for the address of each edge in the pyramid. We plot here the average gain in efficiency which shows an optimal compromise respectively for respectively (C) the number of orientations and (D) the number of spatial frequencies (scales). Note first that with more than 12 directions, the gain remains stable. Note also that a dyadic scale ratio (that is of 2) is efficient but that other solutions -such as using the golden section φprove to be significantly more efficient, though the average gain is relatively small (inferior to 5%). some potential perspectives to further improve their representation efficiency. 14.4 SparseEdges: introducing prior information 14.4.1 Using the prior in first-order statistics of edges In natural images, it has been observed that edges follow some statistical regularities that may be used by the visual system. We will first focus on the most obvious regularity which consists in the anisotropic distribution of orientations in natural images (see chapter 003_holly_gerhard for another qualitative characterization of this anisotropy). Indeed, it has been observed that orientations corresponding to cardinals (that is, to verticals and horizontals) are more likely than other orientations (Ganguli and Simoncelli, 2010;Girshick et al., 2011). This is due to the fact that our point of view is most likely pointing toward the horizon while we stand upright. In addition, gravity shaped our surrounding world around horizontals (mainly the ground) and verticals (such as trees or buildings). Psychophysically, this prior knowledge gives rise to the oblique effect (Keil and Cristóbal, 2000). 
This is even more striking in images of human scenes (such as a street, or inside a building) as humans mainly build their environment (houses, furnitures) around these cardinal axes. However, we assumed in the cost defined above (see Eq. 14.2) that each coefficient is independently distributed. It is believed that an homeostasis mechanism allows one to optimize this cost knowing this prior information (Laughlin, 1981;Perrinet, 2010). Basically, the solution is to put more filters where there are more orientations (Ganguli and Simoncelli, 2010) such that coefficients are uniformly distributed. In fact, since neural activity in the assembly actually represents the sparse coefficients, we may understand the role of homeostasis as maximizing the average representation cost C(a|Φ). This is equivalent to saying that homeostasis should act such that at any time, and invariantly to the selectivity of features in the dictionary, the probability of selecting one feature is uniform across the dictionary. This optimal uniformity may be achieved in all generality by using an equalization of the histogram (Atick, 1992). This method may be easily derived if we know the probability distribution function dP i of variable a i (see Figure 14.5-A) by choosing a non-linearity as the cumulative distribution function (see Figure 14.5-B) transforming any observed variableā i into: This is equivalent to the change of variables which transforms the sparse vector a to a variable with uniform probability distribution function in [0, 1] M (see Figure 14.5-C). This equalization process has been observed in the neural activity of a variety of species and is, for instance, perfectly illustrated in the compound eye of the fly's neural response to different levels of contrast (Laughlin, 1981). It may evolve dynamically to slowly adapt to varying changes, for instance to luminance or contrast values, such as when the light diminishes at twilight. Then, we use these point non-linearities z i to sample orientation space in an optimal fashion (see Figure 14.5-D). This simple non-parametric homeostatic method is applicable to the SparseLets algorithm by simply using the transformed sampling of the orientation space. It is important to note that the MP algorithm is non-linear and the choice of one element at any step may influence the rest of the choices. In particular, while orientations around cardinals are more prevalent in natural images (see Figure 14.6-A), the output histogram of detected edges is uniform (see Figure 14.6-B). To quantify the gain in efficiency, we measured the residual energy in the SparseLets framework with or without including this prior knowledge. Results show that for a similar number of extracted edges, residual energy is not significantly changed (see Figure 14.6-C). This is again due to the exponential convergence of the squared error (Mallat, 1998, p. 422) on the space spanned by the representation basis. As the tiling of the Fourier space by the set of filters is complete, one is assured of the convergence of the representation in both cases. However thanks to the use of first-order statistics, This shows that as was reported previously (see for instance (Girshick et al., 2011)), cardinals axis are over-represented. This represents a relative inefficiency as the representation in the SparseLets framework represents a priori orientations in an uniform manner. A neuromorphic solution is to use histogram equalization, as was first shown in the fly's compound eye by (Laughlin, 1981). 
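A minimal, non-parametric version of this equalization step is sketched below: the empirical cumulative distribution of observed edge orientations is inverted on a uniform grid of scores, which automatically places more filters around the over-represented cardinal orientations. The cardinal-biased synthetic distribution in the usage example is purely illustrative.

```python
import numpy as np

def equalized_orientation_bank(observed_theta, n_theta=24):
    """Histogram-equalized sampling of orientation space.

    observed_theta: orientations of edges measured on natural images, wrapped
    to (-pi/2, pi/2]. Returns n_theta orientations placed so that each is a
    priori equally likely, i.e. denser around the cardinal axes.
    """
    theta_sorted = np.sort(observed_theta)
    cdf = np.arange(1, theta_sorted.size + 1) / theta_sorted.size
    # draw uniform scores on the y-axis of the cumulative function and
    # read off the corresponding orientations on the x-axis
    scores = (np.arange(n_theta) + 0.5) / n_theta
    return np.interp(scores, cdf, theta_sorted)

# usage with a synthetic, cardinal-biased distribution (illustrative only)
rng = np.random.default_rng(1)
theta = np.concatenate([rng.normal(0.0, 0.15, 5000),          # horizontals
                        rng.normal(np.pi / 2, 0.15, 5000),    # verticals
                        rng.uniform(-np.pi / 2, np.pi / 2, 2000)])
theta = np.angle(np.exp(2j * theta)) / 2                      # wrap to (-pi/2, pi/2]
bank = equalized_orientation_bank(theta, n_theta=24)
```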
(C) We draw a uniform set of scores on the y-axis of the cumulative function (black horizontal lines), for which we select the corresponding orientations (red vertical lines). Note that by convention these are wrapped up to fit in the (−π/2, π/2] range. (D) This new set of orientations is defined such that they are a priori selected uniformly. Such transformation was found to well describe a range of psychological observations (Ganguli and Simoncelli, 2010) and we will now apply it to our framework. the orientation of edges are distributed such as to maximize the entropy, further improving the efficiency of the representation. This novel improvement to the SparseLets algorithm illustrates the flexibility of the Matching Pursuit framework. This proves that by introducing the prior on first-order statistics, one improves the efficiency of the model for this class of natural images. Of course, this gain is only valid for natural images and would disappear for images where cardinals would not dominate. This is the case for images of close-ups (microscopy) or where gravity is not prevalent such as aerial views. Moreover, this is obviously just a first step as there is more information from natural images that could be taken into account. We compare the efficiency of the modified algorithm where the sampling is optimized thanks to histogram equalization described in Figure 14.5 as the average residual energy with respect to the number of edges. This shows that introducing a prior information on the distribution of orientations in the algorithm may also introduce a slight but insignificant improvement in the sparseness. 14.4.2 Using the prior statistics of edge co-occurences A natural extension of the previous result is to study the co-occurrences of edges in natural images. Indeed, images are not simply built from independent edges at arbitrary positions and orientations but tend to be organized along smooth contours that follow for instance the shape of objects. In particular, it has been shown that contours are more likely to be organized along co-circular shapes (Sigman et al., 2001). This reflects the fact that in nature, round objects are more likely to appear than random shapes. Such a statistical property of images seems to be used by the visual system as it is observed that edge information is integrated on a local "association field" favoring co-linear or co-circular edges (see chapter 013_theriault section 5 for more details and a mathematical description). In V1 for instance, neurons coding for neighboring positions are organized in a similar fashion. We have previously seen that statistically, neurons coding for collinear edges seem to be anatomically connected (Bosking et al., 1997;Hunt et al., 2011) while rare events (such as perpendicular occurrences) are functionally inhibited (Hunt et al., 2011). Using the probabilistic formulation of the edge extraction process (see Section 14.2), one can also apply this prior probability to the choice mechanism (Matching) of the Matching Pursuit algorithm. Indeed at any step of the edge extraction process, one can include the knowledge gained by the extraction of previous edges, that is, the set I = {π i } of extracted edges, to refine the log-likelihood of a new possible edge π * = ( * , a * ) (where * corresponds to the address of the chosen filter, and therefore to its position, orientation and scale). 
Knowing the probability of co-occurences p(π * |π i ) from the statistics observed in natural images (see Figure 14.7), we deduce that the cost is now at any coding step (where I is the residual image -see Equation 14.9): The relationship between a pair of edges can be quantified in terms of the difference between their orientations θ, the ratio of scale σ relative to the reference edge, the distance d = AB between their centers, and the difference of azimuth (angular location) φ of the second edge relative to the reference edge. Additionally, we define ψ = φ − θ/2 as it is symmetric with respect to the choice of the reference edge, in particular, ψ = 0 for cocircular edges. (B) The probability distribution function p(ψ, θ) represents the distribution of the different geometrical arrangements of edges' angles, which we call a "chevron map". We show here the histogram for natural images, illustrating the preference for co-linear edge configurations. For each chevron configuration, deeper and deeper red circles indicate configurations that are more and more likely with respect to a uniform prior, with an average maximum of about 4 times more likely, and deeper and deeper blue circles indicate configurations less likely than a flat prior (with a minimum of about 0.7 times as likely). Conveniently, this "chevron map" shows in one graph that natural images have on average a preference for co-linear and parallel angles (the horizontal middle axis), along with a slight preference for co-circular configurations (middle vertical axis). where η quantifies the strength of this prediction. Basically, this shows that, similarly to the association field proposed by (Grossberg, 1984) which was subsequently observed in cortical neurons (von der Heydt et al., 1984) and applied by (Field et al., 1993), we facilitate the activity of edges knowing the list of edges that were already extracted. This comes as a complementary local interaction to the inhibitory local interaction implemented in the Pursuit step (see Equation 14.9) and provides a quantitative algorithm to the heuristics proposed in (Fischer et al., 2007a). Note that though this model is purely sequential and feed-forward, this results possibly in a "chain rule" as when edges along a contour are extracted, this activity is facilitated along it as long as the image of this contour exists in the residual image. Such a "chain rule" is similar to what was used to model psychophysical performance (Geisler et al., 2001) or to filter curves in images (August and Zucker, 2001). Our novel implementation provides with a rapid and efficient solution that we illustrate here on a segmentation problem (see Figure 14.8). (A) (B) (C) Figure 14.8: Application to rapid contour segmentation We applied the original sparse edge framework to (A) the synthetic image of a circle embedded in noise. This noise is synthesized by edges at random positions, orientations and scales with a similar first-order statistics as natural scenes. (B) We overlay in red the set of edges which were detected by the SparseLets framework. (C) Then, we have introduced second-order information in the evaluation of the probability in the sparse edges framework (with η = 0.15). This modified the sequence of extracted edges as shown in blue. There is a clear match of the edge extraction with the circle, as would be predicted by psychophysical results of hand segmentation of contours. 
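The pairwise geometry entering this chevron map can be computed as sketched below. Two assumptions are made for illustration: both ψ and the orientation difference are wrapped to (−π/2, π/2], and distance and scale ratio are left unbinned; the exact binning behind Figure 14.7 may differ.

```python
import numpy as np

def wrap_half_pi(a):
    """Wrap an angle to the (-pi/2, pi/2] range used for non-oriented edges."""
    return np.angle(np.exp(2j * a)) / 2

def chevron_map(edges, n_bins=24):
    """Histogram of the (psi, theta) geometry of all edge pairs ('chevron map').

    edges: (N, 4) array of (x, y, orientation, scale). For each ordered pair,
    theta is the orientation difference, phi the azimuth of the second edge
    seen from the first (relative to the reference orientation), and
    psi = phi - theta / 2, so that psi = 0 for co-circular pairs.
    Distance and scale ratio are ignored here for brevity.
    """
    x, y, th = edges[:, 0], edges[:, 1], edges[:, 2]
    dtheta = wrap_half_pi(th[None, :] - th[:, None])
    azimuth = np.arctan2(y[None, :] - y[:, None], x[None, :] - x[:, None])
    phi = wrap_half_pi(azimuth - th[:, None])
    psi = wrap_half_pi(phi - dtheta / 2)
    mask = ~np.eye(len(edges), dtype=bool)            # drop self-pairs
    hist, _, _ = np.histogram2d(psi[mask], dtheta[mask], bins=n_bins,
                                range=[[-np.pi / 2, np.pi / 2]] * 2)
    return hist / hist.sum()                          # empirical p(psi, theta)

# usage on random edges (illustrative): uniform orientations give a roughly flat map
rng = np.random.default_rng(2)
edges = np.column_stack([rng.uniform(0, 256, (200, 2)),
                         rng.uniform(-np.pi / 2, np.pi / 2, 200),
                         2.0 ** rng.integers(0, 5, 200)])
p = chevron_map(edges)
```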
This shows that second-order information as introduced in this feed-forward chain may be sufficient to account for contour grouping and may not necessitate a recursive chain rule such as implemented in Geisler et al. (2001). Indeed, sparse models have been shown to foster numerous applications in computer vision. Among these are algorithms for segmentation in images (Spratling, 2013a) or for classification (Spratling, 2013b;Dumoulin et al., 2014). We may use the previous application of our algorithm to evaluate the probability of edges belonging to the same contour. We show in Figure 14.8 the application of such a formula (in panel C versus classical sparse edge extraction in panel B) on a synthetic image of a circle embedded in noise (panel A). It shows that, while some edges from the background are extracted in the plain SparseLets framework (panel B), edges belonging to the same circular contour pop-out from the computation similarly to a chain rule (panel C). Note that contrary to classical hierarchical models, these results are done with a simple layer of edge detection filters which communicate through local diffusion. An important novelty to note in this extension of the SparseLets framework is that there is no recursive propagation, as the greedy algorithm is applied in a sequential manner. These types of interaction have been found in area V1. Indeed, the processing may be modulated by simple contextual rules such as favoring co-linear versus co-circular edges (McManus et al., 2011). Such type of modulation opens a wide range of potential applications to computer vision such as robust segmentation and algorithms for the autonomous classification of images (Perrinet and Bednar, 2015). More generally, it shows that simple feed-forward algorithms such as the one we propose may be sufficient to a account for the sparse representation of images in lower-level visual areas. Conclusion In this chapter, we have shown sparse models at increasing structural complexities mimicking the key features of the primary visual cortex in primates. By focusing on a concrete example, the SparseLets framework, we have shown that sparse models provide an efficient framework for biologically-inspired computer vision algorithms. In particular, by including contextual information, such as prior statistics on natural images, we could improve the efficiency of the sparseness of the representation. Such an approach allows to implement a range of practical concepts (such as the good continuity of contours) in a principled way. Indeed, we based our reasoning on inferential processes such as they are reflected in the organization of neural structures. For instance, there is a link between co-circularity and the structure of orientation maps (Hunt et al., 2009). This should be included in further perspectives of these sparse models. As we saw, the (visual) brain is not a computer. Instead of using a sequential stream of semantic symbols, it uses statistical regularities to derive predictive rules. These computations are not written explicitly, as it suffices that they emerge from the collective behavior in populations of neurons. As such, these rules are massively parallel, asynchronous and error prone. Luckily, such neuromorphic computing architectures begin to emerge -yet, we lack a better understanding of how we may implement computer vision algorithms on such hardware. As a conclusion, this drives the need for more biologically-driven computer vision algorithm and of a better understanding of V1. 
However, such knowledge is still largely incomplete (Olshausen and Field, 2005), and we need a better understanding of electrophysiological results. A promising approach in that direction is to use model-driven stimulation in physiological studies (Sanz-Leon et al., 2012; Simoncini et al., 2012), as such protocols systematically probe the neural computations underlying a given visual task.

Symmetry in the Painlevé Systems and Their Extensions to Four-Dimensional Systems We give a new approach to the symmetries of the Painlev\'e equations $P_{V},P_{IV},P_{III}$ and $P_{II}$, respectively. Moreover, we make natural extensions to fourth-order analogues for each of the Painlev\'e equations $P_{V}$ and $P_{III}$, respectively, which are natural in the sense that they preserve the symmetries. Introduction This is the third paper in a series of four papers (see [13,14]), aimed at giving a complete study of the following problem: Problem 0.1. For each affine root system A with affine Weyl group W (A), find a system of differential equations for which W (A) acts as its Bäcklund transformations. At first, let us summarize the results obtained up to now in the following list. Here the symbol R denotes the interaction term for each system. Our idea is to find a system in the following way: (1) We make a set of invariant divisors given by connecting two copies of them given in the case of the Painlevé systems by adding the term with invariant divisor x − z. (2) We make the symmetry associated with a set of invariant divisors given by 1. (3) We make the holomorphy conditions r i associated with the symmetry in 2. 1 (4) We look for a polynomial Hamiltonian system with the holomorphy conditions given by 3. The crucial idea of this work is to use the holomorphy characterization of each system, which can be considered as a generalization of Takano's theory [3,16]. In the next stage, following the above results, we try to seek a system with W (B (1) 3 )-symmetry. At first, one might try to seek a system in dimension four with its symmetry. However, such a system can not be obtained. In this paper, we will change our viewpoint, by seeking a system not in dimension four but in dimension two. In dimension two, it is well-known that the Painlevé systems P J , (J = V I, V, IV, III, II, I) have the affine Weyl group symmetries explicitly given in the following table. P J P I P II P IIII P IV P V P V I Symmetry none W (A This paper is the stage in this project where we find a new viewpoint for the symmetries of the Painlevé equations P V , P IV , P III and P II , that is, we will show that each of the Painlevé equations P V , P IV , P III and P II has hidden affine Weyl group symmetry of types B 3 and A (2) 2 , respectively. We seek these symmetries for a Hamiltonian system in charts other than the original chart in each space of initial conditions constructed by K. Okamoto. In other charts, we can find hidden symmetries different from the ones in the original charts. Furthermore, in the case Painlevé equations Figure 1. Symmetries of the Painlevé equations of dimension four we make natural extensions for each of the Painlevé equations P V and P III , natural in the sense that they preserve the symmetries. This paper is organized as follows. In Sections 1 through 4, we present twodimensional polynomial Hamiltonian systems with W (B (1) 3 ), W (G (1) 2 ), W (D (2) 3 ) and W (A (2) 2 )-symmetry, respectively. We will show that each system coincides with the Painlevé V (resp. IV,III,II) system. We also give an explicit confluence process from the system of type D 2 ) to the system of type B 2 ). In Sections 5 and 6, we present a family of coupled Painlevé V (resp. III) systems in dimension four with W (B (1) 5 ) (resp. W (D (2) 5 ))-symmetry. We also show that this system coincides with a family of coupled Painlevé V (resp. III) systems in dimension four with W (D (1) 5 ) (resp. 
W (B (1) 4 )-symmetry (see [14]). In the final section, we propose further problems on Problem 0.1. In order to prove Theorem 1.1, we recall the definition of a symplectic transformation and its properties (see [3,16]). Let We say that the mapping is symplectic if where t is considered as a constant or a parameter, namely, if, for t = t 0 , ϕ t 0 = ϕ| t=t 0 is a symplectic mapping from the t 0 -section D t 0 of D to ϕ(D t 0 ). Suppose that the mapping is symplectic. Then any Hamiltonian system dx/dt = ∂H/∂y, dy/dt = −∂H/∂x is transformed to Here t is considered as a variable. By this equation, the function K is determined by H uniquely modulo functions of t, namely, modulo functions independent of X and Y . Proof of Theorem 1.1. At first, we consider the case of the transformation s 0 . Set By resolving in x, y, t, α 0 , . . . , α 3 , we obtain S 0 : By S 0 , we obtain the polynomial Hamiltonian S 0 (H 1 ), and we see that Since H 1 is modulo functions of t, we can check in the case of s 0 . The cases of s 1 , s 2 are similar. We note the relation between H 1 and the transformed Hamiltonian K i (i = 1, 2), respectively: with the notation res : Next, we consider the case of s 3 . Setting Applying the transformation in t and the transformation of the symplectic 2-form: we obtain the rational Hamiltonian S 3 (H 1 ), and we see that Then we can check in the case of s 3 . The case of π is similar. We note the relation between H 1 and the transformed Hamiltonian Π(H 1 ) is given as follows: This completes the proof. Consider the following birational and symplectic transformations r i (cf. [3,16]): These transformations are appeared as the patching data in the space of initial conditions of the system (2). The fact that the space of initial conditions of the system (2) is covered by this data will be cleared in the following paper. Since each transformation r i is symplectic, the system (2) is transformed into a Hamiltonian system, whose Hamiltonian may have poles. It is remarkable that the transformed system becomes again a polynomial system for any i = 0, 1, 2, 3. Furthermore, this holomorphy property uniquely characterizes the system (2). (A2) This system becomes again a polynomial Hamiltonian system in each coordinate r i (i = 0, 1, 2, 3). Then such a system coincides with the system (2). We remark that if we look for a polynomial Hamiltonian system which admits the symmetry (5), we must consider cumbersome polynomials in variables x, y, t, α i . On the other hand, in the holomorphy requirement (7), we only need to consider polynomials in x, y. This reduces the number of unknown coefficients drastically. On relations between s i and r i , see [15]. Proof of Theorem 1.2. At first, resolving the coordinate r 0 in the variables x, y, we obtain The polynomial H satisfying (A1) has 20 unknown coefficients in C(t). By R 0 , we transform H into R 0 (H), which has poles in only y 0 . For R 0 (H), we only have to determine the unknown coefficients so that they cancel the poles of R 0 (H). In this way, we can obtain the Hamiltonian H 1 . By proving the following theorem, we see how the degeneration process given in Theorem 1.3 works on the Bäcklund transformation group W (D 3 ) of the system (2) as ε → 0. Proof of Theorem 1.4. 
Notice that Let us see the actions of the generators w i , i = 0, 1, 2, 3, 4 on the parameters A i , i = 0, 1, 2, 3 and ε where By a direct calculation, we have Observing these relations, we take a subgroup W D (1) We can easily check and the generators satisfy the following relations: In short, the group W D (1) be considered to be an affine Weyl group of the affine Lie algebra of type B act on T, X and Y . We can verify Here S 3 (T ) = −T can be understood as follows: by using the relation T = α 0 (1 − t), the action of S 3 on T is obtained as By comparing (12), (13) with s i (i = 0, 1, 2, 3) given in Theorem 1.1, we see that our theorem holds. By the following theorem, we will show that the system (2) coincides with the system of type A 3 (see [4,5,6]). Theorem 1.5. For the system (2), we make the change of parameters and variables x, y to β 0 , β 1 , β 2 , β 3 , T, X, Y . Then the system (2) can also be written in the new variables T, X, Y and parameters β 0 , β 1 , β 2 , β 3 as a Hamiltonian system. This new system tends to the system A (1) 3 : with the Hamiltonian By putting q = 1 − 1 X , we have the Painlevé V equation (see [20]): Proof of Theorem 1.5. Notice that and the change of variables from (x, y, t) to (X, Y, T ) in Theorem 1.5 is symplectic. Choose S i as The transformations S 0 , S 1 , S 2 , S 3 are reflections of We can verify The proof has thus been completed. Proposition 1.1. The system (2) admits the following transformation ϕ as its Bäcklund transformation: We note that this transformation ϕ is pulled back the diagram automorphism π of the system (16) by transformations (14) and (15). The system of type G (1) 2 In this section, we present a 2-parameter family of two-dimensional polynomial Hamiltonian systems given by with the polynomial Hamiltonian Here x and y denote unknown complex variables and α 0 , α 1 , α 2 are complex parameters satisfying the relation: x − ∞ y 2 as the group of its Bäcklund transformations (cf. [6]), whose generators are explicitly given as follows: Theorem 2.2. Let us consider a polynomial Hamiltonian system with Hamiltonian (A2) This system becomes again a polynomial Hamiltonian system in each coordinate r i (i = 0, 1, 2) (cf. [3]): . Then such a system coincides with the system (18). (2), we make the change of parameters and variables from α 0 , α 1 , α 2 , α 3 , t, x, y to A 0 , A 1 , A 2 , ε, T, X, Y . Then the system (2) can also be written in the new variables T, X, Y and parameters A 0 , A 1 , A 2 , ε as a Hamiltonian system. This new system tends to the system (18) as ε → 0. By proving the following theorem, we see how the degeneration process given in Theorem 2.3 works on the Bäcklund transformation group W (B 3 ) (cf. [17]). Proof of Theorem 2.4. Notice that A 0 + 2A 1 + 3A 2 = α 0 + α 1 + 2α 2 + 2α 3 = 1 and the change of variables from (x, y) to (X, Y ) is symplectic, however the change of parameters (21) is not one to one differently from the case of P V I → P V . Choose S i (i = 0, 1, 2) as S 0 := s 1 , S 1 := s 2 , S 2 := s 0 s 3 , However, we see that S i (ε) have ambiguities of signature. For example, since we can choose any one of the two branches as S 2 (ε). 
Among such possibilities, we take a choice as where (1 − 2A 1 ε 2 ) −1/2 = 1 at A 1 ε 2 = 0, or considering in the category of formal power series, we make a convention that (1 − 2A 1 ε 2 ) −1/2 is formal power series of A 1 ε 2 with constant term 1 according to We notice that the generators acting on parameters A 0 , A 1 , A 2 , ε satisfy the following relations: S 2 i = 1, (S 0 S 2 ) 2 = 1, (S 0 S 1 ) 3 = 1, (S 1 S 2 ) 6 = 1. Now we observe the actions of S i , i = 0, 1, 2 on the variables X, Y, T . By means of (22),(23) and we can easily check By (21),(22),(23) and the actions of s 1 , s 2 on x, y, we can easily verify The form of the actions S 2 = s 0 s 3 on X and Y are complicated, but we can see that . The proof has thus been completed. By the following theorem, we will show that the system (18) coincides with the system of type A from α 0 , α 1 , α 2 , t, x, y to β 0 , β 1 , β 2 , T, X, Y . Then the system (18) can also be written in the new variables T, X, Y and parameters β 0 , β 1 , β 2 as a Hamiltonian system. This new system tends to the system A (1) 2 : with the Hamiltonian By putting q = X, we have the Painlevé IV equation: The transformations S 0 , S 1 , S 2 are reflections of We can verify The proof has thus been completed. The system of D (2) 3 In this section, we present a 2-parameter family of polynomial Hamiltonian systems given by with the polynomial Hamiltonian Here x and y denote unknown complex variables and α 0 , α 1 , α 2 are complex parameters satisfying the relation: Theorem 3.1. The system (28) admits extended affine Weyl group symmetry of type D (2) 3 as the group of its Bäcklund transformations (cf. [6]), whose generators are explicitly given as follows: Theorem 3.2. Let us consider a polynomial Hamiltonian system with Hamiltonian H ∈ C(t)[x, y]. We assume that (A1) deg(H) = 5 with respect to x, y. Theorem 3.3. For the system (2), we make the change of parameters and variables: from x, y, t, α 0 , α 1 , α 2 , α 3 to X, Y, T, A 0 , A 1 , A 2 , ε. Then the system (2) can also be written in the new variables T, X, Y and parameters A 0 , A 1 , A 2 , ε as a Hamiltonian system. This new system tends to the system (28) as ε → 0. By proving the following theorem, we see how the degeneration process given in Theorem 3.3 works on the Bäcklund transformation group W (B 3 ) (cf. [17]). Theorem 3.4. For the degeneration process in Theorem 3.3, we can choose a sub- converges to the Bäcklund transformation group W (D (2) 3 ) of the system (28) as ε → 0. By the following theorem, we will show that the system (28) coincides with the system of type C (1) 2 (see [6,20]). Theorem 3.5. For the system (28), we make the change of parameters and variables: from x, y, t, α 0 , α 1 , α 2 to X, Y, T, β 0 , β 1 , β 2 . Then this new system coincides with the system of type C (1) 2 (see [6]): with the Hamiltonian By putting q = x τ , T = τ 2 , we will see that this system (35) is equivalent to the third Painlevé equation: Dynkin diagram of type C The transformations S 0 , S 1 , S 2 are reflections of We can verify The proof has thus been completed. The system of A (2) 2 In this section, we present a 1-parameter family of polynomial Hamiltonian systems given by with the polynomial Hamiltonian Here x and y denote unknown complex variables and α 0 , α 1 are complex parameters satisfying the relation: By putting q := x, we obtain the following equation: Theorem 4.1. The system (37) admits affine Weyl group symmetry of type A 2 as the group of its Bäcklund transformations (cf. 
[6]), whose generators are explicitly given as follows: s 0 :(x, y, t; α 0 , α 1 ) → (x + α 0 y , y, t; −α 0 , α 1 + 4α 0 ), (A2) This system becomes again a polynomial Hamiltonian system in each coordinate r i (i = 0, 1)(cf. [3]): . Then such a system coincides with the system (37). from α 0 , α 1 , α 2 , t, x, y to A 0 , A 1 , ε, T, X, Y . Then the system (18) can also be written in the new variables T, X, Y and parameters A 0 , A 1 , ε as a Hamiltonian system. This new system tends to the system (37) as ε → 0. By proving the following theorem, we see how the degeneration process given in Theorem 4.3 works on the Bäcklund transformation group W (G 2 ) (cf. [17]). Proof of Theorem 4.4. Notice that 2A 0 + A 1 = α 0 + 2α 1 + 3α 2 = 1 and the change of variables from (x, y) to (X, Y ) is symplectic. Since the change of parameters (40) is not one to one, we consider the degeneration process by introducing formal power series of the new parameters A 0 , A 1 , ε. We choose S 0 , S 1 as Notice the S 0 , S 1 are reflections of the parameters A 0 , A 1 , respectively. Then we can obtain Here, we make the same convention as in Section 2 that (1 + 4A 0 ε 6 ) −1/6 means formal power series of A 0 ε 6 with 1 as constant term. Then we can verify The proof has thus been completed. By the following theorem, we will show that the system (37) coincides with the system of type A 1 . Theorem 4.5. For the system (37), we make the change of parameters and variables from x, y, t, α 0 , α 1 to X, Y, t, β 0 , β 1 . Then this new system coincides with the system of type A (1) 1 (see [6]): with the Hamiltonian By putting q = X, we have the second Painlevé equation (see [6]): Proof of Theorem 4.5. Notice that 2α 0 + α 1 = β 0 + β 1 = 1 (46) and the change of variables from (x, y, t) to (X, Y, t) in Theorem 4.5 is symplectic. Choose S 1 and π as S 1 := s 0 , π := s 1 . We can verify The proof has thus been completed. In this section and next section, we present polynomial Hamiltonian systems in dimension four with affine Weyl group symmetry of types B (1) 5 and D (2) 5 . Our idea is the following way: (1) We make a dynkin diagram given by connecting two dynkin diagrams of type B 3 (resp. D 3 ) by adding the term with invariant divisor x − z. (2) We make the symmetry associated with the dynkin diagram given by 1. (3) We make the holomorphy conditions r i associated with the symmetry given by 2. (4) We look for a polynomial Hamiltonian system with the holomorphy conditions given by 3, that is, Problem 5.1. Let us consider a polynomial Hamiltonian system with Hamiltonian H ∈ C(t)[x, y, z, w]. We assume that (A1) deg(H) = 5 with respect to x, y, z, w. (A2) This system becomes again a polynomial Hamiltonian system in each coordinate r i given in 4. To solve Problem 5.1, for the Hamiltonian satisfying the assumption (A1) we only have to determine unknown coefficients so that they cancel the poles of Hamiltonian transformed by each r i . By using the notation we can verify Figure 12. Dynkin diagram of type B (1) 5 5 , that is, they satisfy the following relations: The proof has thus been completed.
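For ease of reference, since the displayed equations invoked above ("we have the Painlevé V/IV/III/II equation") did not survive the text extraction, their standard forms are recalled below. The identification of the constants (α, β, γ, δ) with the parameters α_i, β_i appearing in the theorems above is the one given in the cited references and is not rederived here.

```latex
% Painlevé V (obtained via q = 1 - 1/X above):
\frac{d^{2}q}{dt^{2}} = \left(\frac{1}{2q}+\frac{1}{q-1}\right)\left(\frac{dq}{dt}\right)^{2}
  -\frac{1}{t}\frac{dq}{dt}
  +\frac{(q-1)^{2}}{t^{2}}\left(\alpha q+\frac{\beta}{q}\right)
  +\frac{\gamma q}{t}+\frac{\delta\,q(q+1)}{q-1}

% Painlevé IV (via q = X):
\frac{d^{2}q}{dt^{2}} = \frac{1}{2q}\left(\frac{dq}{dt}\right)^{2}
  +\frac{3}{2}q^{3}+4tq^{2}+2(t^{2}-\alpha)q+\frac{\beta}{q}

% Painlevé III (via q = x/\tau, T = \tau^{2}):
\frac{d^{2}q}{d\tau^{2}} = \frac{1}{q}\left(\frac{dq}{d\tau}\right)^{2}
  -\frac{1}{\tau}\frac{dq}{d\tau}
  +\frac{\alpha q^{2}+\beta}{\tau}+\gamma q^{3}+\frac{\delta}{q}

% Painlevé II (via q = X):
\frac{d^{2}q}{dt^{2}} = 2q^{3}+tq+\alpha
```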
Constructing a highly bioactive tendon-regenerative scaffold by surface modification of tissue-specific stem cell-derived extracellular matrix Abstract Developing highly bioactive scaffold materials to promote stem cell migration, proliferation and tissue-specific differentiation is a crucial requirement in current tissue engineering and regenerative medicine. Our previous work has demonstrated that the decellularized tendon slices (DTSs) are able to promote stem cell proliferation and tenogenic differentiation in vitro and show certain pro-regenerative capacity for rotator cuff tendon regeneration in vivo. In this study, we present a strategy to further improve the bioactivity of the DTSs for constructing a novel highly bioactive tendon-regenerative scaffold by surface modification of tendon-specific stem cell-derived extracellular matrix (tECM), which is expected to greatly enhance the capacity of scaffold material in regulating stem cell behavior, including migration, proliferation and tenogenic differentiation. We prove that the modification of tECM could change the highly aligned surface topographical cues of the DTSs, retain the surface stiffness of the DTSs and significantly increase the content of multiple ECM components in the tECM-DTSs. As a result, the tECM-DTSs dramatically enhance the migration, proliferation as well as tenogenic differentiation of rat bone marrow-derived stem cells compared with the DTSs. Collectively, this strategy would provide a new way for constructing ECM-based biomaterials with enhanced bioactivity for in situ tendon regeneration applications. Introduction The regeneration of damaged tendons represents a grand challenge in orthopedics because of their limited ability for selfrepair. Tissue engineering has become an attractive approach for the treatment of damaged tendons. The classical tissue engineering strategy relies on the use of culture-expanded patient's own cells and natural and/or synthetic biomaterial scaffolds to produce cell-laden tissue constructs for implantation [1]. However, this approach shows notable limitations, such as the donortissue morbidity, the requisite for large number of immuneacceptable cells [2], the long production cycle of engineered tissues in vitro as well as the challenges owing to long-term storage and preservation of engineered tissues [3]. Such disadvantages have hindered the clinical application of engineered tendon constructs to repair damaged tendons by the classical tissue engineering strategy. Latest advances in tissue engineering and regenerative medicine have employed a new strategy to harness the potential of endogenous stem/progenitor cells for in situ tissue repair and regeneration [1,4,5]. Much attention has been focused on the design of biomaterials for in situ tissue regeneration to recruit endogenous stem cells to the injury site. Several studies have proved that the incorporation of stromal cell-derived factor-1 (SDF-1) into scaffold materials via factor adsorption, mini-osmotic pump delivery or genetic engineering method of collagen-binding domain could enhance the recruitment of endogenous stem cells to the injury site [6,7]. In another study, Kim and colleagues demonstrated modifying self-assembling peptide nanofiber using substance P sequence was able to recruit endogenous mesenchymal stem cells (MSCs) [8]. 
Nair and colleagues found that the biomaterials with varying degrees of pro-inflammatory properties triggered different extents of endogenous stem cell recruitment, and these recruited cells arriving at the implant sites were multipotent [9]. This reminds us that the scaffold material with the capacity to recruitment of stem cells alone was not enough. The success of in situ tissue regeneration not only depends on efficient recruitment of host stem/progenitor cells into the implanted scaffold materials but also needs to effectually induce the recruited stem cells into tissue-specific cell lineages [1]. Lu et al. [10] reported that the oriented acellular cartilage matrix scaffold modified by bone marrow homing peptide could increase the recruitment of endogenous stem cells and chondrogenic differentiation, resulting in a significant improvement in the repair of chondral defects. These previous findings highlighted the necessity of recruiting abundant endogenous stem cells and inducing them to differentiate into tissue-specific cell lineages for tissue regeneration. Recently, cell-derived extracellular matrix (ECM), especially stem cell-derived ECM, attracted increasing attention in the area of tissue engineering and regenerative medicine [11][12][13][14]. Decellularized ECM from in vitro stem cell cultures has been proved to provide an instructive stem cell microenvironment that can rejuvenate aged progenitor cells, promote stem cell expansion and direct stem cell differentiation [13,15,16]. Either on its own or integrated with other scaffold materials, stem cell-derived ECM also can be used as biomaterials to produce tissues de novo or promote endogenous regeneration [17][18][19]. To date, multiple stem cellderived ECM, including pluripotent stem cells [20,21], bone marrow-derived stem cells (BMSCs) [16,18,22], synovium-derived stem cells (SDSCs) [13,15], adipose tissue-derived stem cells [23,24], dental pulp stem cells (DPSCs) [25], umbilical cord MSCs [26] and so forth, have been extensively studied over the past decades. Our recent studies demonstrated that the scaffolds modified with ECM of tendon-derived stem cells (TDSCs) markedly improved BMSCs migration in vitro and could recruit more endogenous stromal cells for accelerating healing of the tendon-bone interface in vivo [27,28]. Nevertheless, the information on the tendonspecific stem cell-derived ECM (tECM) is still scarce and no studies have systematically investigated the use of tECM for constructing a highly bioactive tendon-regenerative scaffold. In our previous studies, we have proved that the decellularized tendon slices (DTSs) that retained the native tendon ECM microenvironment cues are able to promote stem cell proliferation and tenogenic differentiation in vitro and show certain proregenerative capacity for rotator cuff tendon regeneration in vivo [29][30][31]. In the present study, we present a strategy to further improve the bioactivity of the DTSs for constructing a novel highly bioactive tendon-regenerative scaffold by surface modification of the tECM (i.e. tECM-DTSs), which is expected to greatly enhance the capacity of scaffold material in regulating stem cell behavior, including migration, proliferation and tenogenic differentiation. In detail, the surface topography, and surface nanomechanical properties and biochemical components of the tECM-DTSs were first characterized, and then the regulatory capacity of the tECM-DTSs to the migration, proliferation and tenogenic differentiation of rat BMSCs was investigated. 
It was hypothesized that tECM could confer higher bioactivity to the DTSs so as to endow the tECM-DTSs with a greater capacity to enhance the migration, proliferation as well as tenogenic differentiation of rat BMSCs. Cell isolation and culture We used male Sprague Dawley rats (4-5 weeks old, 100-120 g weight) for the isolation and culture of TDSCs and BMSCs with approval from the Animal Care and Use Committee of Sichuan University. The procedures for the isolation and culture of TDSCs and BMSCs were same as our previously published protocols [30]. Fabrication of the tECM-DTSs A typical process for the fabrication of the tECM-DTSs is presented in Fig. 1. First, the DTSs substrate was fabricated using our previously published protocol [29]. In short, the Achilles tendons of adult beagle dogs were decellularized through the following procedures: repetitive freeze/thaw treatment, frozen section with a thickness of 300 lm and nuclease treatment (including DNase 150 IU/ml and RNase 100 lg/ml) for 12 h at 37 C. Following washing in 50 ml of 0.1 M PBS (3 Â 30 min), the DTSs were lyophilized and sterilized with ethylene oxide (EO). Then, TDSCs were seeded on the top surface of DTSs substrate at 1 Â 10 5 cells per cm 2 and cultured in complete medium supplemented with 20% fetal bovine serum (FBS). After reaching 90% confluence, 50 lM of L-ascorbic acid phosphate (Sigma) was added for additional culture period of 8 days. At the end of 15-day culture period, the composites of TDSCs-DTSs were re-decellularized as described previously with minor alteration [15], using 0.5% Triton X-100 supplemented with 20 mM ammonium hydroxide (NH 4 OH) at 37 C for 15 min, followed by 100 U/ml DNase I at 37 C for 2 h. Finally, the tECM modified DTSs (hereafter referred to as tECM-DTSs) were washed in 50 ml of 0.1 M PBS (6 Â 30 min), frozen at -80 C or lyophilized and sterilized by EO for subsequent use. Evaluation of redecellularization For DNA quantification, lysates of the lyophilized samples (n ¼ 10 for each group) were prepared by digestion in Proteinase K solution (1 mg/ml, Sigma) at 50 C for 24 h. Residual DNA in the lysates was extracted using our previously published protocol [32], and then measured using the PicoGreen assay according to the manufacture instructions (Invitrogen). Figure 1. Schematic illustration of the fabrication of the tECM-DTSs. The Achilles tendons of adult beagle dogs were decellularized to prepare the DTSs substrate, and TDSCs were seeded on the top surface of DTSs substrate to construct the composites of TDSCs-DTSs, and then these composites were redecellularized to fabricate the tECM-DTSs For histological analysis, the frozen samples (n ¼ 4 for each group) were fixed, embedded, and stained with hematoxylin and eosin (H&E), Masson or 4,6-diamidino-2-phenylindole (DAPI). Scanning electron microscopy For surface topography characterization, the frozen samples (n ¼ 3 for each group) were fixed, sputter coated with gold and examined under scanning electron microscopy (SEM) (FEI Inspect F50) at an accelerating voltage of 30 kV. Atomic force microscopy assay To characterize the nanomechanical properties of the microenvironment provided by the DTSs and tECM-DTSs respectively, the surface stiffness of these specimens (n ¼ 5 for each group) was measured using atomic force microscopy (AFM) as our previously published protocol [30]. 
Western blot analysis For western blot analysis of critical tendon ECM components in the DTSs and tECM-DTSs, the lyophilized samples (n ¼ 3 for each group) were minced and homogenized using the RIPA Lysis Buffer (Beyotime, China) supplemented with 1% PMSF. Total proteins were quantified using the BCA Protein Quantification kit (Beyotime Biotechnology, China). Thirty micrograms of protein from each sample was loaded onto SDS-PAGE gel for electrophoresis, and then transferred to 0.2 lm polyvinylidene fluoride (PVDF) membranes (Millipore) by wet electroblotting. The membrane was incubated with the following primary antibodies: rabbit anti-biglycan (1:1000, Abcam), rabbit anti-fibromodulin (1:1000, GeneTex), mouse anti-fibronectin (1:1000, Abcam), rabbit anti-vitronectin (1:1000, Abcam) or rabbit anti-glyceraldehyde-3phosphate dehydrogenase (GAPDH, 1:1000, Abcam) at 4 C overnight. Then, the membranes were washed in TBST buffer for three times and incubated with corresponding secondary antibodies of horseradish peroxidase (HRP) conjugated goat antirabbit or goat anti-mouse IgG (Western Biotechnology, China) for 1.5 h at room temperature. Finally, these membranes were incubated with chemiluminescence substrate (Shanghai ShineGene Molecular Biotech., China), and exposed to two stacked blue xray films (Kodak) in a cassette. After scanning the film, semiquantification of band intensity was performed with UVP gel image processing system Labworks 4.6 software, and the relative protein expression level was normalized to the band intensity of GAPDH. For western blot analysis of differentiation-related proteins expression in BMSCs induced by the DTSs and tECM-DTSs, the expression of tendon-specific markers on the protein level was examined in BMSCs cultured on the DTSs and tECM-DTSs in complete culture media (10% FBS) for 3, 7 and 14 days. At the designated time points, total proteins (n ¼ 3 for each group) were extracted and quantified. After protein transfer, the PVDF membranes were incubated using the following primary antibodies: rabbit anti-scleraxis (SCX, 1:1000, Bioss), rabbit anti-tenomodulin (TNMD, 1:1000, Abcam), rabbit anti-thrombospondin-4 (THBS4, 1:1000, Abcam) or mouse anti-b-actin (1:2000, Servicebio), followed by incubation with the HRP-conjugated secondary antibodies (Servicebio, China). Then, these membranes were incubated with enhanced chemiluminescence solutions (ECL, Servicebio, China) and the target protein bands were imaged with a chemiluminescence imaging system (ChemiScope 6300, Clinx, China). Semiquantification of band intensity was performed with AlphaEaseFC software (Alpha Innotech, USA), and the relative protein expression level was normalized to the band intensity of b-actin. Cell migration assay For cell migration assay, the conditioned medium of the DTSs and tECM-DTSs was prepared as previously described with some alteration [33]. Briefly, the DTSs or tECM-DTSs samples were incubated in 1% W/V of DMEM containing 5% FBS for 72 h at 37 C to make the conditioned medium for each material. Transwell migration chambers (Corning, USA) with 8 lm pore size were used to evaluate the migration ability of BMSCs regulated by the DTSs and tECM-DTSs. After serum-starvation overnight, BMSCs were harvested and counted, and 1 Â 10 4 cells were resuspended in 200 ll of medium with 5% FBS and added into the upper chambers. To induce chemotaxis, 1 ml of the conditioned medium from the DTSs or tECM-DTSs was added to the lower chambers. 
After incubation at 37 C for 48 h, the cells that migrated to the lower side of the membrane were fixed in 4% paraformaldehyde, stained with DAPI and quantified with ImageJ software (NIH). Five randomly selected fields of each sample (n ¼ 4 for each group) were counted at 200Â magnification under an inverted fluorescence microscope (Nikon, Japan). Cell proliferation assay To investigate the effect of soluble factors released from the DTSs and tECM-DTSs on cell proliferation, the conditioned medium was prepared as described above. BMSCs were seeded in wells of 96-well plates at a density of 5 Â 10 3 cells per well. After the cells had attached, the medium was replaced with the conditioned medium from the DTSs or tECM-DTSs. The wells with non-conditioned medium only served as blank control. After 1, 2 and 3 days of incubation, cell viability (n ¼ 4) was measured using the alamarBlue assay following the manufacturer's protocol (Invitrogen). To further investigate the effect of DTSs and tECM-DTSs themselves on cell proliferation, BMSCs were directly seeded on the DTSs and tECM-DTSs at 2 Â 10 5 cells per cm 2 and incubated for a period of 3 days. The cell viability was qualitatively assessed using LIVE/DEAD cell staining assay as described previously [30]. Images of live and dead cells were acquired under an inverted fluorescence microscope (Nikon, Japan). Subsequently the cell morphology and alignment from these samples were observed using SEM. Real-time quantitative reverse transcription PCR For real-time quantitative reverse transcription PCR (RT-qPCR) analysis, total cellular RNA (n ¼ 6 for each group) was extracted at the designated time points (3, 7 or 14 days) using TRIzol (Invitrogen, Carlsbad, CA). Reverse transcription was achieved using the First Strand cDNA kit according to the manufacturer's protocol (Promega, Madison, WI, USA). qPCR was performed using the SYBR Green PCR master mix (TakaRa, Japan) with specific primers on a Light Cycler system (Roche, Switzerland). Rat-specific primers for tendon-specific genes, including SCX, TNMD, THBS4, and tendon-related genes, including TNC, COL I and COL III, and the housekeeping gene, GAPDH, were synthesized by Sango Biotech (Shanghai, China). The primer sequences for the tested genes are listed in Table 1. The cycling conditions were as follows: denaturation at 95 C for 2 min, 45 cycles at 95 C for 10 s, optimal annealing temperature (shown in Table 1) for 10 s and 72 C for 10 s. The relative expression level of each target gene was determined using the 2 -DDCt method. Statistical analysis All data were statistically analyzed using SPSS 16.0 software and presented as mean 6SD. For multiple-group comparisons, the data were analyzed using one-way analysis of variance followed by Dunnett's T3 post hoc test. For two-group comparisons, the data were analyzed using the unpaired Student's t-test. A value of P < 0.05 was considered statistically significant. Confirmation of redecellularization effectiveness The protocol for redecellularization of the composites of TDSCs and DTSs substrate was effective in removal of the cellular and nuclear components. The PicoGreen assay indicated the residual DNA content was significantly decreased after redecellularization (Fig. 2). Before redecellularization, the composites of TDSCs and DTSs substrate had 460.83 6 62.15 ng/mg of DNA, which was decreased to 20.60 6 7.84 ng/mg after redecellularization (Fig. 2). As shown in Fig. 
Confirmation of redecellularization effectiveness
The protocol for redecellularization of the composites of TDSCs and DTSs substrate was effective in removing the cellular and nuclear components. The PicoGreen assay indicated that the residual DNA content was significantly decreased after redecellularization (Fig. 2). Before redecellularization, the composites of TDSCs and DTSs substrate contained 460.83 ± 62.15 ng/mg of DNA, which decreased to 20.60 ± 7.84 ng/mg after redecellularization (Fig. 2). As shown in Fig. 3, histological analysis further confirmed that TDSCs had formed dense cell sheets on the top surface of the DTSs by the end of 15 days of culture before redecellularization (Fig. 3B, E and H), whereas the cellular and nuclear material was efficiently removed and tECM was effectively deposited on the DTSs substrate after redecellularization (Fig. 3C, F and I).

Surface topography, stiffness and biochemical components of the tECM-DTSs
SEM observation showed obvious changes in surface topography before and after modification with tECM (Fig. 4). Before modification, the surfaces of the DTSs consisted of well-aligned collagen fibers (Fig. 4A) and showed the typical banding pattern under high magnification (Fig. 4B). When cultured with TDSCs for 15 days, the top surfaces of the DTSs were entirely covered by the dense cell sheets formed by TDSCs before redecellularization (Fig. 4C and D), which is more evident in the higher-magnification SEM micrographs (Fig. 4B and D). After redecellularization, a large amount of tECM was deposited on the top surface of the DTSs, so that the tECM-DTSs lost the highly aligned surface topographical cues of the DTSs and displayed an intricate, fibrillar ultrastructure (Fig. 4E and F). The results of the AFM assay indicated that the surface stiffness of the tECM-DTSs was 1.06 ± 0.71 MPa, which was close to that of the DTSs at 1.19 ± 0.72 MPa (P > 0.05, Fig. 5). ELISA measurements revealed that the levels of multiple cytokines, including TGF-β1 (Fig. 6A), VEGF (Fig. 6B), IGF-1 (Fig. 6C) and SDF-1 (Fig. 6D), in the tECM-DTSs were significantly higher than those in the DTSs (P < 0.05). Compared with the DTSs, the content of TGF-β1 in the tECM-DTSs increased by 1.81-fold, VEGF by 7.34-fold, IGF-1 by 7.78-fold and SDF-1 by 11.23-fold. Western blot analysis indicated that the levels of four critical tendon ECM components (biglycan, fibromodulin, fibronectin and vitronectin) in the tECM-DTSs were significantly higher than those in the DTSs (P < 0.05, Fig. 7A and B).

Enhanced cell migration induced by the tECM-DTSs
Enhanced bioactivity was first evidenced by the enhanced migratory response of BMSCs to factors released from the tECM-DTSs. As shown in Fig. 8A and B, DAPI staining of the migrated BMSCs in the two groups and the quantitative analyses revealed that the number of BMSCs that migrated toward the conditioned medium of the tECM-DTSs was significantly higher than that toward the conditioned medium of the DTSs (P < 0.05).

Enhanced cell proliferation induced by the tECM-DTSs
The alamarBlue assay revealed higher, though not statistically significant, cell viability in the tECM-DTSs group compared with the DTSs group on Day 1. On Days 2 and 3, the conditioned medium from the tECM-DTSs significantly promoted the proliferation of BMSCs compared with that from the DTSs (Fig. 9). When BMSCs were seeded directly on the surface of the tECM-DTSs or DTSs at a moderate cell density, the cells grew robustly on these materials from 1 to 3 days and showed excellent viability, as indicated by the results of live/dead staining (Fig. 10A-D). The SEM images showed that BMSCs were firmly attached to the surface of the DTSs and tECM-DTSs and displayed an elongated spindle or spherical morphology after 1 day of culture (Fig. 11A, B, E and F). Specifically, the cells on the DTSs were aligned along the collagen fibrils, whereas the cells on the tECM-DTSs showed random orientation. By 3 days, the cells had formed dense confluent cell layers on the surface of the DTSs and tECM-DTSs (Fig.
11C, D, G and H), indicating clear cell proliferation over time. Notably, the cell layers on the tECM-DTSs appeared denser than those on the DTSs, which was more prominent in the higher-magnification images (Fig. 11D and H). Overall, these results revealed that the surfaces of the tECM-DTSs are more conducive to BMSC growth and proliferation than the DTSs.

Enhanced tenogenic differentiation induced by the tECM-DTSs
Tenogenic differentiation of BMSCs cultured on the DTSs and tECM-DTSs at the 3-, 7- and 14-day time points was studied using RT-qPCR and western blot analysis. At the gene expression level, the expression of SCX, TNMD and TNC was significantly up-regulated in BMSCs cultured on the tECM-DTSs compared with those on the DTSs at all three time points (Fig. 12A, B and D). Although there was no significant difference between the two groups at 3 days, the expression of THBS4 and COL III was elevated significantly in BMSCs cultured on the tECM-DTSs at 7 or 14 days (Fig. 12C and F). The expression of COL I was significantly enhanced in BMSCs cultured on the tECM-DTSs at 3 or 14 days compared with those on the DTSs, and no significant difference was found at 7 days (Fig. 12E). At the protein expression level, SCX exhibited relatively higher levels in the tECM-DTSs group than in the DTSs group at all three time points, though no significant difference was found between the two groups (supplementary Fig. S1A and B). TNMD expression was significantly higher in the tECM-DTSs group than in the DTSs group at 3 and 7 days, but the difference between the two groups was negligible at 14 days (supplementary Fig. S1A and B). Unexpectedly, the BMSCs cultured on the DTSs and tECM-DTSs showed detectable but low expression levels of THBS4 at all three time points, and no significant difference was observed between the two groups (supplementary Fig. S1A and B). As a whole, these data suggested that the tECM-DTSs displayed a greater ability to promote tenogenic differentiation of stem cells than the DTSs.

Discussion
The goal of the current study was to develop a novel, highly bioactive tendon-regenerative scaffold (i.e. the tECM-DTSs) by surface modification of the DTSs with tissue-specific stem cell-derived ECM. Such a scaffold is expected to have a greater capacity to regulate stem cell behavior, with the ultimate purpose of recruiting abundant endogenous stem cells and inducing them toward tenogenic differentiation to promote in situ tendon regeneration. The results presented here demonstrated that the tECM-DTSs, with similar surface stiffness and higher contents of multiple ECM components, showed higher bioactivity in inducing the migration, proliferation and tenogenic differentiation of rat BMSCs compared with the DTSs. TDSCs, as tendon tissue-specific stem cells, offer more advantages than other MSCs for musculoskeletal tissue regeneration [34,35]. Hence, in the current study, TDSCs were chosen to develop the stem cell-derived ECM-modified scaffold. TDSCs were seeded on the top surface of the DTSs substrate to form a dense cell sheet, and the TDSCs-DTSs composites were then redecellularized to produce the tECM-DTSs. It is worth noting that ascorbic acid-2-phosphate, an essential supplement for robust ECM deposition [13,16], was added after the TDSCs reached close to 100% confluence on the surface of the DTSs.
The results of the PicoGreen assay indicated that the average DNA content before redecellularization was significantly increased to 460.83 ± 62.15 ng/mg compared with 3.90 ± 1.70 ng/mg for the DTSs, suggesting that TDSCs were successfully seeded on the DTSs substrate and grew well. After redecellularization, the average DNA content of the tECM-DTSs decreased to 20.60 ± 7.84 ng/mg, though it remained significantly higher than that of the DTSs (3.90 ± 1.70 ng/mg). It is currently well accepted that a DNA content below 50 ng per mg dry weight is an acceptable range for decellularized ECM scaffold materials [36]. Our redecellularization protocol was modified from a published protocol that has been widely used in the preparation of SDSC- or BMSC-derived ECM [13,37]. In the pre-experiment phase, the published protocol (Step 1: 0.5% Triton X-100 containing 20 mM NH4OH at 37 °C for 5 min; Step 2: 100 U/ml DNase at 37 °C for 1 h) was attempted for redecellularization of the TDSCs-DTSs composites. Unexpectedly, this protocol did not markedly decrease the DNA content after redecellularization (data not shown). Therefore, we modified this protocol by extending the treatment periods of Triton X-100/NH4OH as well as DNase, and confirmed the efficiency of the modified protocol. In addition to the PicoGreen assay, the results of histological staining, including H&E, Masson and DAPI staining, also confirmed that the modified protocol could effectively remove the cellular components and showed that visible tECM was present on the DTSs surface. The results of SEM analysis further verified that a large amount of tECM was indeed deposited on the surface of the DTSs after redecellularization. Notably, the tECM-DTSs displayed a different surface topography, no longer showing the well-aligned collagen fibrils and typical banding pattern of the DTSs. The AFM assay showed that the surface stiffness of the tECM-DTSs was close to that of the DTSs, meaning that the tECM-DTSs also had a stiffness similar to that of native tendon [36]. The results of the ELISA and western blot assays showed that four important cytokines (TGF-β1, VEGF, IGF-1 and SDF-1) and four crucial ECM proteins (biglycan, fibromodulin, fibronectin and vitronectin) were present in the tECM-DTSs, and the content of all these ECM components was significantly higher than that in the DTSs. Although TGF-β1 has been reported to have no direct effect on BMSC recruitment in a previous study by Zhang et al. [38], several other studies demonstrated that the expression of TGF-β1 is increased at the site of tissue injury, which facilitates the homing of BMSCs in vivo [39-41]. Dubon et al. [42] found that TGF-β1 induced BMSC migration through N-cadherin and noncanonical TGF-β signals. In addition, TGF-β1 can also promote the proliferation of BMSCs via activation of the Wnt/β-catenin pathway and/or the FAK-Akt-mTOR pathway [43,44]. VEGF has been shown to regulate BMSC migration and proliferation through stimulating platelet-derived growth factor receptors [45]. IGF-1 was found to promote stem cell recruitment via paracrine release of SDF-1 [46], and SDF-1 has been widely demonstrated to regulate stem cell homing, which plays a crucial role in tissue repair and regeneration [6,47,48]. Biglycan and fibromodulin are two critical components that organize the TDSC niche; their absence can divert TDSC fate from tenogenesis to osteogenesis [34]. Fibronectin and vitronectin have also been confirmed to induce chemotaxis and mitogenic activity of human and rabbit BMSCs [49].
In line with our findings, another group has demonstrated that these four ECM proteins are also preserved in BMSC-derived ECM [16]. In the present study, only eight representative ECM components were selected for detection. There are likely many other yet-to-be-detected bioactive components in the tECM-DTSs, which may also participate in regulating stem cell behavior. BMSCs, the most intensively used stem cells in tissue repair [48], have been proven to contribute to the regeneration of various tissues, including tendon tissue [1,50,51]. Therefore, in the current study, BMSCs were selected as a test population to investigate the regulatory capacity of the tECM-DTSs with respect to stem cell migration, proliferation and tenogenic differentiation. Encouragingly, the tECM-DTSs significantly promoted the migration of BMSCs. Our findings are in accordance with Lin's report that coating with the urea-extracted fraction of human BMSC-derived ECM dramatically enhanced BMSC migration in comparison with coating with type I collagen [52]. In addition, our recent work demonstrated that both BMSC-derived ECM-modified DTSs (bECM-DTSs) and tECM-DTSs markedly improved BMSC migration compared with the DTSs, and that the tECM-DTSs were significantly superior to the bECM-DTSs, probably because significantly higher levels of chemokines were released in the extracts from the tECM-DTSs [27]. Unfortunately, only two chemokines, SDF-1 and monocyte chemotactic protein 1, were verified in these ECM-modified DTSs. In fact, besides these chemokines, multiple growth factors, such as TGF-β1, VEGF and IGF-1 [38-41, 45, 53], as well as some ECM proteins, such as fibronectin and vitronectin [49], have also been confirmed to play considerable roles in promoting stem cell migration and recruitment. In addition to promoting the migration of BMSCs, it is also encouraging that the tECM-DTSs significantly promoted the proliferation of BMSCs. Previous studies reported that ECM deposited by SDSCs could serve as a cell expansion system with the dual function of improving the proliferation of the seeded cells and enhancing the chondrogenic potential of the expanded cells [13,15,54,55]. DPSC-derived ECM for dental pulp regeneration has also been shown to promote the proliferation of DPSCs in vitro [25]. In the current study, the alamarBlue assay revealed that the conditioned medium from the tECM-DTSs significantly promoted the proliferation of BMSCs in comparison with that from the DTSs. Although the soluble factors released into the conditioned medium were not measured in this study, we believe that the tECM-DTSs release higher levels of cytokines than the DTSs, which play a critical role in facilitating BMSC proliferation. Because the DTSs themselves have an excellent ability to promote stem cell proliferation [30], the seeded BMSCs grew robustly on both the DTSs and tECM-DTSs and maintained high viability from 1 to 3 days, as indicated by the results of live/dead staining. Interestingly, the SEM images also showed that the tECM-DTSs remarkably promoted BMSC proliferation. Within the 3-day time frame, BMSCs on the tECM-DTSs, rather than on the DTSs, proliferated faster and completely covered the surface of the scaffold material. Most strikingly, BMSCs could sense the surface topographic differences between the DTSs and tECM-DTSs and displayed random orientation on the tECM-DTSs, which lack the highly aligned surface topographical cues.
Moreover, for an ideal, highly bioactive scaffold material for in situ tendon regeneration, recruiting abundant endogenous stem cells to the injury site and providing a suitable microenvironment to promote cell proliferation are still not enough; further inducing the tenogenic differentiation of these recruited stem cells is also essential and plays a critical role in tendon regeneration. Therefore, a scaffold with a greater capacity to induce tenogenic differentiation of stem cells is highly desirable. In our previous study, we verified that the DTSs themselves, as a scaffold, enhanced the tenogenic differentiation of rat TDSCs and BMSCs [30]. Promisingly, in the current study, the tECM-DTSs showed a greater capacity to induce BMSCs toward tenogenic differentiation than the DTSs, as evidenced by the results of RT-qPCR and western blot analysis. This finding strongly supports the view that ECM derived from stem cells maintains the functional properties of the native microenvironment and exhibits unique signaling that regulates stem cell self-renewal and lineage differentiation [14]. Indeed, in addition to serving as a cell expansion system, stem cell-derived ECM can also act as a cell differentiation inducer [14]. A previous study reported that differentiated BMSCs exhibited a rapid regression of osteoblastic markers upon removal of the osteogenic cocktail, whereas BMSC-derived ECM promoted the osteogenic potential of differentiated BMSCs in the absence of soluble osteoinductive cues, indicating the superiority of stem cell-derived ECM in inducing stem cell differentiation [22]. Though the intrinsic mechanisms are not fully understood, it is currently well accepted that ECM microenvironment cues, including but not limited to biochemical, topographical and biomechanical cues, play crucial roles in modulating stem cell fate. Interestingly, the tECM-DTSs, with the modification of tECM on the DTSs substrate, were found to change the highly aligned surface topographical cues of the DTSs and display an intricate, fibrillar ultrastructure. Although topographical cues of scaffold materials that mimic the aligned architecture of collagen fibers in tendons have been demonstrated to induce tenogenic differentiation of human TDSCs and human MSCs [56,57], we cannot assert that the surface topographical change caused by the modification with tECM will compromise the tenogenic differentiation of stem cells. Several studies have revealed that induction of stem cells into a specific cell shape and arrangement is not necessarily accompanied by lineage-specific differentiation [56,58]. The role of the fibrillar ultrastructure of tECM in stem cell fate decisions remains a subject for further investigation. As expected, the modification with tECM retained the surface stiffness of the DTSs, which was about 1.2 MPa. Indeed, the stiffness of cell-derived ECM, including MSC-derived ECM, is only ~0.1-1 kPa, as reported by Prewitz et al. [16]. Thus, the tECM-DTSs also had a stiffness similar to that of native tendon, which may contribute to the tenogenic differentiation of BMSCs. Most encouragingly, the modification with tECM significantly enhanced the content of multiple ECM components, including the two critical components (i.e. biglycan and fibromodulin) that control the tenogenic differentiation fate of TDSCs; this conferred higher bioactivity on the DTSs, so that the tECM-DTSs had a greater capacity to induce the tenogenic differentiation of BMSCs.
In sum, these observations reveal the marked superiority of scaffold materials consisting of tendon-specific tissue-derived ECM and stem cell-derived ECM in inducing the migration, proliferation and tenogenic differentiation of stem cells, effects that can hardly be reproduced using single ECM proteins or synthetic scaffolds. There are a few limitations to this study. First, only a restricted number of biochemical components in the tECM-DTSs were investigated. Ongoing work will address this issue through comprehensive characterization of the critical bioactive components in the tECM-DTSs using mass spectrometry-based proteomics. Second, the exact mechanism by which the tECM-DTSs enhance stem cell migration, proliferation and differentiation is not well understood. Further studies will focus on determining which of these ECM components are crucial for regulating stem cell behavior and on analyzing the key signaling pathways to decipher how ECM components regulate stem cell function. Third, since the tECM-DTSs showed a greater capacity to enhance the migration, proliferation and tenogenic differentiation of rat BMSCs compared with the DTSs, further studies are needed to investigate whether the tECM-DTSs are capable of recruiting abundant endogenous stem cells and inducing them toward tenogenic differentiation to promote in situ tendon regeneration.

Conclusions
In summary, we developed a highly bioactive tendon-regenerative scaffold (i.e. the tECM-DTSs) by surface modification of the DTSs with tissue-specific stem cell-derived ECM. The tECM-DTSs were found to change the highly aligned surface topographical cues of the DTSs, retain the stiffness of the DTSs and significantly increase the content of multiple ECM components. As a result, the tECM-DTSs dramatically enhanced the migration, proliferation and tenogenic differentiation of rat BMSCs compared with the DTSs. These findings further support the utilization of tissue-specific stem cell-derived ECM as a promising strategy to recapitulate the instructive stem cell microenvironment and enhance the bioactivity of scaffold materials.

Supplementary data
Supplementary data are available at REGBIO online.
Dynamic polygonal spreading of a droplet on a lyophilic pillar-arrayed surface

Abstract
We experimentally investigated the dynamic polygonal spreading of droplets on lyophilic pillar-arrayed substrates. When deposited on lyophilic rough surfaces, droplets undergo a dynamic evolution of their projected shapes from initial circles to final bilayer polygons. These dynamic processes fall into two regimes on the different substrates. The bilayer structure of a droplet, induced by the micropillars on the surface, is explained by the interaction between the fringe (the liquid in the space among the micropillars) and the bulk (the upper liquid). The evolution of the polygonal shapes, which follow the symmetry of the pillar-arrayed surface, is analysed through the competition between the excess driving energy and the resistance induced by the micropillars as the solid surface area fraction increases. Although the anisotropic droplets spread in different regimes, they obey the same scaling law S ~ t^(2/3) (S being the wetted area and t the spreading time), which is derived from the molecular kinetic theory. These results may expand our knowledge of liquid dynamics on patterned surfaces and assist surface design in practical applications.

Introduction
Spreading of a droplet on a micro-structured surface is prevalent in nature, [1,2] and is of key importance in a wide range of applications, such as DNA technologies, [3,4] fog harvesting, [5,6] inkjet printing, [7-9] biomedicine [10,11] and microfluidics. [12,13] To develop these practical applications, the dynamic wetting behaviours of a droplet on a textured surface have been investigated extensively in recent years. Compared to a smooth solid surface, the topography introduces specific effects on the spreading behaviour. [14-18] The moving contact line (MCL) becomes a complex curved line and propagates in a special stepwise mode. [19,20] The scaling laws for the dynamic contact angles and spreading radius are modified. [1,21] Moreover, the droplet may spread into a polygonal bilayer structure, instead of a simple spherical cap. When a droplet is released on a lyophilic pillar-arrayed surface, the base of the liquid penetrates into the space among the pillars, forming a fringe film. The upper part of the droplet collapses onto the base of the fringe film and is named the bulk. A time series of this process is illustrated in Figure 1. Wetting of a droplet on microstructured surfaces has been investigated intensively for decades. Extrand and coworkers [22] elegantly reported that appropriate lyophilic pillar arrays could effectively drive partially wetting liquids to complete wetting. Later, by varying the geometry of the surface array and the liquid, Courbin et al. [20] found a diversity of final wetted shapes of droplets on pillar-arrayed surfaces, including polygons and circles. Raj et al. [23] then showed complete control of polygonal wetted shapes via the design of topographic or chemical heterogeneity on the surface. Jokinen et al. [24] developed a method of fabricating irregular pillars on a square-array surface and found a directional wetting property of droplets on this surface, where droplets spread into irregular square-like shapes. Vrancken et al. [7] reported that the droplet shapes depend on the array geometry, pillar shape and array spacing.
In previous studies, researchers focused on the final shapes of droplets on lyophilic pillar-arrayed surfaces, whereas the evolution of the projected shape from an initial circle to a final polygon is essentially a dynamic process. Since the bulk liquid penetrates into the pillars progressively to supply the fringe propagation, the two parts may propagate with different shapes. Thus, a study of the dynamic spreading is necessary. Courbin et al. [25] reported the dynamics of the shape evolution, but the mechanism underlying the evolution still needs to be clarified. Kim and coworkers [26] quantified the dynamics of polygonal spreading and proposed scaling laws for the spreading rates of the bulk and the fringe separately, yet the shape evolution of the bilayer structure has not been investigated. In this work, we considered the dynamic polygonal spreading of a droplet on a lyophilic pillar-arrayed surface. Deposited on a lyophilic pillar-arrayed surface, the droplet spreads with transient projected shapes, from an initial single circle, through an evolving polygonal bulk and fringe, to an equilibrium shape. We demonstrate that the combined effect of the interfacial tension and the rough surface makes the shape polygonal and bilayer-structured. Then, we theoretically analyse the scaling law of the anisotropic dynamic spreading based on the molecular kinetic theory (MKT). Although the droplets spread in distinct patterns, they obey the same scaling law derived from the MKT. Our work may help to understand the mechanism of polygonal droplet spreading on patterned surfaces and to design surface textures in practical applications.

Experiments
In our experiments, an ethanol droplet with a radius of about 0.4 mm was produced and deposited on the pillar-arrayed polydimethylsiloxane (PDMS) surface using a micropipette (Figure 2). Since the manipulation of the droplet requires a large space above the transparent substrate, an inverted microscope was needed. Since the droplet spreads rapidly on the lyophilic PDMS substrate (the advancing, receding and equilibrium contact angles of an ethanol droplet on a smooth PDMS surface were measured to be about 33°, 28° and 30°, respectively), a high-speed camera was needed to capture the dynamic spreading process. The high-speed camera (HotShot 512 sc, NAC) was connected to the inverted microscope (IX71, Olympus) beneath the specimen platform. The entire experimental set-up was placed on a vibration-damping platform. The PDMS substrates with micropillar arrays fabricated on their surfaces were made in two steps. First, the negative patterns were fabricated on silicon wafers by photolithography followed by a deep reactive ion etching process in the Institute of Microelectronics, Peking University. Then, the silicon negative patterns were used as moulds for patterning the PDMS via spin coating, curing and peeling. In this second step, the patterned silicon wafers were cleaned and silanized to facilitate the release of the elastomeric PDMS from the wafers after curing. Liquid silicone prepolymer PDMS (Sylgard 184, Dow Corning) was mixed at a base-to-crosslinker mass ratio of 10:1. After being sufficiently stirred, the liquid PDMS was poured onto the wafers and spin coated at 300 r.p.m. for 30 s using a spin coater (MODEL WS-400BZ-6NPP/LITE, Laurell). Then, the silicon wafers together with the PDMS films were degassed for 20 min and cured at 80 °C for 6 h. Finally, the PDMS films were peeled off from the silicon wafers. After the above steps, the pillar-arrayed PDMS substrates were obtained (Figure 2).
The pillars are characterized by their width d, centre-to-centre pitch p and height h. The spacing between pillars is s = p − d. The solid surface area fraction is φ_s = d²/p², and the surface roughness (the ratio of the actual solid area to the projected solid area) is r_o = 1 + 4dh/p². These parameters for each sample are listed in Table 1.

Experimental results
In our experiments, when deposited on different lyophilic pillar-arrayed surfaces, ethanol droplets exhibited two distinct spreading regimes, as shown in Figure 3: (1) in the case of the low φ_s samples (Samples 1 and 2 in Table 1), the wetted area of the spreading droplet developed from an initial circle to an octagonal shape, and the bulk and the fringe shared the same outline while spreading (electronic supplementary material, video S1); (2) in the case of the high φ_s samples (the remaining samples in Table 1), the wetted area of the spreading droplet developed from an initial circle, through a square shape, separated into a bilayer structure, and ended up with a rounded octagonal bulk and a square fringe (electronic supplementary material, video S2). In these regimes, the spreading times taken into consideration are less than 0.6% of the duration of the spreading experiments (ranging from 30 to 120 s for different samples), so we neglected the effect of evaporation on the shape evolution. All these spreading behaviours are repeatable. In these two spreading regimes, there was a remarkable phenomenon worth mentioning: the shade distribution of the wetted area was anisotropic, like the anisotropic spreading droplet itself. The dark area, which is caused by the inclined liquid-vapour interface, [19] increased with the distance from the centre of the wetted region. Moreover, the dark area along the axes was obviously larger than that along the diagonals, indicating the existence of a thickness gradient of the bulk. In these observations, an initially spherical droplet on a homogeneous surface spread anisotropically and evolved into a bilayer structure.

Fringe shapes
To explain these phenomena, it is essential to understand the multiple driving forces and resistances that act on the liquid during the dynamic spreading process. Several forces govern the spreading behaviour: gravity, interfacial tension γ, viscosity μ and inertia. [19] Since the radius of the droplet is much smaller than the capillary length of ethanol (1.69 mm), the interfacial tension takes priority over gravity. The Weber number We = ρv²l/γ_LV (where ρ is the liquid density, v is the velocity of the MCL and l is the droplet diameter), which represents the ratio of inertia to surface tension, is about 0.2, much less than the threshold of 1.1, indicating that the interfacial tension also takes priority over inertia. [27] Hence, the main driving force for the droplet is the interfacial tension. Furthermore, the forest of lyophilic pillars has two effects on the droplet. On the one hand, the pillars introduce obstacles and excess resistance to the fringe. On the other hand, the excess lyophilic solid surface provides an extra driving force to the MCL, causing the fringe liquid to accelerate at the interior corners between the pillars and the substrate, which is known as the Concus-Finn effect. [28] These two effects compete with each other. [19] The interfacial tension and the driving effects from the micropillars generate the polygonal-shaped fringe. Driven by interfacial tension, the liquid tends to adopt a spherical cap shape. However, the excess driving force and the interior corners force the fringe toward a square shape, which corresponds to the symmetry of the patterned surface, as shown on the right side of Figure 4(a).
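A small numerical sketch of the geometric and dynamical quantities defined above. The pillar dimensions, MCL velocity and droplet size below are hypothetical placeholders (not the values of Table 1); the ethanol properties are standard literature values, and square pillars of side d are assumed, consistent with the formulas above.

```python
# Illustrative pillar geometry (square pillars of side d, pitch p, height h);
# the specific values below are hypothetical, not taken from Table 1.
d, p, h = 10e-6, 40e-6, 20e-6          # m

phi_s = d**2 / p**2                     # solid surface area fraction
r_o   = 1 + 4 * d * h / p**2            # surface roughness (actual/projected solid area)
s     = p - d                           # spacing between pillars

# Weber number for the spreading droplet, We = rho*v^2*l/gamma_LV
rho, gamma_lv = 789.0, 0.022            # ethanol density (kg/m^3) and surface tension (N/m)
v, l = 0.08, 0.8e-3                     # assumed MCL velocity (m/s) and droplet diameter (m)
We = rho * v**2 * l / gamma_lv

print(f"phi_s = {phi_s:.4f}, r_o = {r_o:.2f}, s = {s*1e6:.0f} um, We = {We:.2f}")
```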
We adopted the solid area fraction φ_s to characterize the excess driving force and the interior-corner acceleration. Thus, a circular fringe can be expected when φ_s approaches 0 (a smooth surface) and a square fringe when φ_s is large enough. An intermediate shape of the fringe film, between a circle and a square, is expected as the effects of both the excess driving force and the interior-corner acceleration vary with φ_s. Considering the symmetry of a circle and a square, the intermediate shape can be inferred to be an octagon (left side of Figure 4(a)). These predictions are consistent with the experimental results: the octagonal fringe in the low φ_s regime and the square fringe in the high φ_s regime.

Bulk shapes
Spreading on the fringe, the front of the bulk actually propagates on a liquid film. How can the bulk adopt an octagonal or rounded octagonal shape? Firstly, the bulk is driven merely by surface tension, without the excess driving force from the micropillars, so the bulk prefers to adopt a circular shape. Secondly, the bulk cannot propagate beyond the fringe border. Once an initial droplet contacts the lyophilic solid surface, it collapses to a spherical cap, and the bottom liquid quickly penetrates into the pillars and forms the fringe. The fringe expands very rapidly, while the bulk liquid is strictly restricted within the fringe border. As a result, the bulk spreads together with the square or octagonal fringe in the beginning. For the next stage of propagation, we consider the high φ_s regime first. In this regime, the fringe advances faster than the bulk owing to the excess driving effects. As a result, the bulk separates from the fringe gradually in the early stage of spreading (~0.5 s after the start of spreading). The mechanism of the rounded octagonal bulk is illustrated in Figure 4(c)(iii). Suppose a square expands from the centre of a circle. As the side length of the square increases, the four corners of the square are the first parts to go beyond the circular area; conversely, the four bows of the circle lie beyond the square area. Taking the bulk (cyan) as a circle and the fringe (grey) as a square, two conclusions can be drawn. First, with the growth of the fringe, the separation between the fringe and the bulk begins in the diagonal directions. Second, the bulk is restricted by the square fringe in the axial directions. Owing to these two effects, the rounded octagonal bulk (cyan area) results. This explanation of the shape agrees with the experimental result. In the low φ_s regime, we found the separation of the bulk from the fringe to be much more difficult than in the high φ_s regime. The effects induced by the low φ_s are the main factors. A low φ_s (~0.0100) indicates a low excess driving force and weak interior-corner acceleration. As mentioned above, the micropillars also induce resistance to the spreading. Here, the pinning effect of the MCL, characterized by the aspect ratio h/(p − d), [20] provides the resistance. For the sample with the lowest φ_s (= 0.0400), the low excess driving force is offset by the pinning effect, resulting in no separation between the bulk and the fringe in the experimental observation. For the sample with the second lowest φ_s (= 0.0625), the excess driving force slightly overcomes the pinning effect, which explains the experimentally observed mild separation in the later stage of spreading (at 4.824 s after the start of spreading). As illustrated in Figure 4(c)(ii), the bulk is restricted by the octagonal fringe in both the axial and diagonal directions.
For this reason, the bulk propagates together with the fringe into an octagonal shape (cyan region), which agrees with the experimental result.

Shade distributions and surface feature simulations
Our explanations were validated by the distribution characteristics of the shade. The shade is caused by the inclined liquid-vapour interface (Figure 5(a)), and the inclined liquid-vapour interface is necessarily generated by the restriction of the bulk liquid. The bows in Figure 4(c)(ii) and (iii) represent the restricted areas of the bulk. These bows were found to correspond to the dark areas in the experiments: (1) the areas of the bows along the diagonals are smaller than those along the axes, corresponding to the experimental results in the low φ_s regime; also, the dark area decreases as the restriction relaxes gradually during the spreading process. (2) In Figure 3(b), (ii) the bulk is restricted to the square, causing a broad dark region along the border; (iii) along the diagonals the bulk is relaxed, while along the axes the bulk is restricted, resulting in a light area along the former direction and a dark area along the latter; (iv) with further spreading, the restriction of the bulk along the axes is relaxed further, leaving only a small dark region at the axial border. By using the finite element method (FEM), we also simulated the surface features of bulk droplets confined within an octagonal fringe and a square fringe, and produced three-dimensional renderings of the bulks (Figure 5(c)-(d)), respectively. [29] The FEM simulations were carried out using Surface Evolver to obtain the equilibrium shape of a droplet on the pillar-arrayed surfaces. [30] Surface Evolver is a program that computes the minimal-energy shape of a surface under constraints. The energy includes surface energies, gravity and the converted constraints. In our case, when deposited on the solid surface, the bulk droplet is driven to spread by surface tension. The equilibrium contact angle of the liquid on the solid was about 30°, obtained from the experiments. As discussed in the previous section, gravity can be neglected, and the bulk is restricted by the fringe. This restriction was expressed by edge constraints in the simulations. Evaporation can be neglected in the early stage of spreading, so the liquid volume was kept constant. We adopted a very fine mesh resolution and set the initial edge length to less than 0.02 R_b (R_b is the initial droplet radius) in each run. The liquid surface evolved towards minimal energy by a gradient descent method and reached an equilibrium shape when the total energy achieved a balance between the surface tension and the restriction. The shade distributions in the modelling results agreed well with our experimental observations and explanations.

Scaling analysis by MKT
To reveal the physical mechanism of a spreading droplet, we adopted the MKT to carry out a scaling analysis of liquid propagation on a lyophilic surface. [19] The MKT was first proposed by Glasstone, Laidler and Eyring, who viewed liquid motion as a stress-modified molecular rate process. [31] This theory assumes that the fluids are mutually saturated and that, on the solid surface, there are plenty of identical sites where liquid molecules can be adsorbed and desorbed. For a liquid that incompletely wets the solid, the adsorption at the solid-liquid interface differs from that at the solid-vapour interface.
[32,33] According to the MKT, the statistical dynamics of molecules in the three-phase region determines the motion of the MCL. When liquid spreads over a solid surface, the solid surface adsorbs liquid molecules, while liquid molecules desorb and tend to advance. The adsorption and the desorption cause the energy dissipation. In the equilibrium state, the advancing frequency of liquid molecules κ+ and the receding frequency κ− are equal to the equilibrium frequency κ0, leading to a static contact line:

κ0 = [k_B T/(μ υ_m)] exp[−λ² W_a/(k_B T)],

where k_B, T, μ, υ_m, λ and W_a are the Boltzmann constant, the absolute temperature, the liquid viscosity, the molecular flow volume, the spacing of surface sites and the work of adhesion between solid and liquid, respectively. Once a driving force is applied to the liquid molecules, the potential surface tilts and the equilibrium state is disturbed. Modified by the driving work per unit area w, the advancing and receding frequencies become, respectively,

κ± = κ0 exp[±w λ²/(2 k_B T)].

Here, the driving work per unit area w equals the interfacial energy change during liquid propagation, w = (γ_SV − γ_SL) r_o − γ_LV cos θ, where γ_SV, γ_SL and γ_LV are the solid-vapour, solid-liquid and liquid-vapour interfacial energies, respectively, θ is the instantaneous contact angle, and the surface roughness is r_o = 1 + 4dh/p². The value of w λ²/(2 k_B T) in our case is of the order of 0.01-0.1, therefore sinh[w λ²/(2 k_B T)] ∼ w λ²/(2 k_B T). The difference between the advancing and receding frequencies gives the velocity of the MCL:

v = λ(κ+ − κ−) = 2 κ0 λ sinh[w λ²/(2 k_B T)] ∼ κ0 λ³ w/(k_B T).

When the solid-liquid pair is fixed, λ, υ_m and W_a are constants, so v ∼ w/μ. Substituting Young's equation, γ_SV − γ_SL = γ_LV cos θ_eq, [34] into w, we obtain v ∼ (r_o cos θ_eq − cos θ) γ_LV/μ. Considering the constant droplet volume (liquid evaporation can be neglected in the early stage of propagation), V_0 = V_bulk + V_fringe, with V_bulk ∼ α R_b² H_b and V_fringe ∼ β (1 − φ_s) R_f² h, where R_0, R_b, H_b and R_f are the radius of the initial spherical droplet, the radius of the bulk, the height of the bulk and the radius of the fringe, respectively; α and β are area coefficients for the polygonal shapes, which are also independent of time. Taking account of the lubrication approximation, for the rough surface in our case θ can be simplified to θ ∼ (1 − φ_s) h R_f²/R_b³. [19] Considering R_b ∼ ε R_f (ε is independent of time), which is validated by previous work, [26] the velocity can be expressed in terms of R_f alone (Equation (6)). Substituting the instantaneous wetted area S ∼ R_f² into Equation (6), the dimensionless solution S/S_0 ∼ τ^(2/3) is obtained, where S_0 is the initial projected area, S/S_0 is the dimensionless wetted area and τ = t γ_LV/(μ R_0) is the dimensionless time. We compared the experiments with the MKT result, since the homogeneous PDMS surface and the ethanol used in the experiments satisfy the assumptions of the MKT. In our experiments, evaporation can be neglected in the early stage of propagation (within 0.2 s in the scaling analysis). Figure 6 shows the experimental results in logarithmic coordinates; the abscissa represents τ and the ordinate represents S/S_0. Although the droplets on the five samples follow different spreading regimes, they approximately obey the same scaling law S/S_0 ∼ τ^(2/3). The experiments are in good agreement with the MKT result, indicating that polygonal spreading follows the same scaling law as circular spreading. [19]
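To illustrate how the 2/3 exponent can be checked against wetted-area data, the sketch below generates synthetic S/S_0 values obeying the scaling law and recovers the exponent from a log-log fit; the numbers are illustrative and are not the experimental data of Figure 6.

```python
import numpy as np

# Synthetic demonstration of the S/S0 ~ tau^(2/3) scaling: generate wetted-area
# data obeying the law (with a little noise) and recover the exponent from a
# log-log fit. All numbers are illustrative, not experimental values.
rng = np.random.default_rng(0)
tau = np.logspace(0, 3, 40)                                   # dimensionless time
s_over_s0 = 1.5 * tau**(2/3) * (1 + 0.03 * rng.standard_normal(tau.size))

slope, intercept = np.polyfit(np.log(tau), np.log(s_over_s0), 1)
print(f"fitted exponent = {slope:.3f} (expected 2/3 = 0.667)")
```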
Conclusions
In this work, we investigated the dynamic polygonal spreading of a droplet on a lyophilic pillar-arrayed surface. Firstly, in the experiments with ethanol droplets on substrates of varied topology, two dynamic spreading regimes were distinguished. In these regimes, the projected shapes of the droplets evolve from initial circles to bilayer polygons. Controlled by the interfacial tension and the driving energy from the micropillars, which induce acceleration at the interior corners and an excess driving force, the fringes follow the symmetry of the patterned surface and exhibit circular, octagonal and square projected shapes. Secondly, in the dynamics of the bilayer structure, the combined evolution and the separation of the bulk and the fringe are caused by the competition between the excess driving energy and the resistance from the micropillars. These explanations were also validated by the shade distributions in both the experimental snapshots and the modelling results. Furthermore, the physical mechanism of the dynamic process was theoretically analysed using the MKT. The anisotropic spreading in the experiments agrees well with the MKT scaling S/S_0 ∼ τ^(2/3). Our results may help in understanding the complex effects caused by pillar-arrayed surfaces and expand the toolbox for producing polygonal patterns by changing the topological parameters of the substrate.

Disclosure statement
No potential conflict of interest was reported by the authors.
EFFECTS OF BY-PRODUCTS OF PEACH PALM AND GRAPE ON NUTRITIONAL, PHYSICO-CHEMICAL AND SENSORY PROPERTIES OF EXTRUDED BREAKFAST CEREALS

Extruded breakfast cereals have low nutritional and high energetic value. The aim of this study was to develop extruded breakfast cereals by replacing part of the corn meal with different proportions of grape (5-10%) and peach palm (7.5-15%) by-products. Samples were sweetened with xylitol while moisture was controlled with grape juice. Six formulations, produced in a 2² experimental design, were analyzed regarding their composition as well as their technological, mechanical, antioxidant and sensory properties. Addition of different amounts of flours produced from the wastes of grape (GF) and peach palm (PF) increased the dietary fiber and antioxidant contents of the cereal formulations. However, at the highest addition levels, there was a decrease in expansion and an increase in hardness and water solubility. Thus, the incorporation of grape and peach palm by-products at smaller proportions showed higher viability, since these formulations exhibited satisfactory sensory acceptance, technological properties and bowl life.

INTRODUCTION
Changes in the population's eating habits have increased the demand for healthy foods that are also practical to consume and have adequate sensory acceptance. Formulations of various products have been adapted to meet this niche in the market. Commercial ready-to-eat breakfast cereals are practical foods based on starch and sugars; thus, they are considered to have low nutritional value. Agroindustrial by-products may be used as alternatives in the development of nutritionally enriched and less caloric formulations, since they are rich in dietary fibers and bioactive compounds, such as antioxidants, with low impact on production costs (ELLEUCH et al., 2011). By-products of the wine industry, such as husks and seeds, account for 20% of the total volume of processed raw material (KARNOPP et al., 2017). Likewise, since only the core of peach palm stems has adequate texture to produce heart-of-palm, its processing generates 85% waste (BOLANHO et al., 2014). Replacing starch, the main component of extruded breakfast cereals, with high-fiber ingredients is a challenge for the food industry, because the behavior of starch during the extrusion process determines the desired texture. As a consequence, nutritional composition and sensory acceptance depend on the proportions of ingredients and the extrusion conditions. Another factor that has to be considered in the development of extruded foods is crispness loss, since they are consumed after having been immersed in milk (TAKEUCHI et al., 2005). Therefore, it is important to investigate the technological properties of breakfast cereals when different proportions of components are used in their formulation, to guarantee adequate sensory acceptance (RIAZ, 2000). The aim of this study was to evaluate the effect of partial replacement of corn meal with flours of peach palm and grape by-products on the nutritional, technological, mechanical and sensory properties of breakfast cereals, by using a statistical approach.

MATERIAL AND METHODS
The by-product of canned peach palm (Bactris gasipaes), whose name in Brazilian Portuguese is palmito pupunha, was donated by the Marbbel Industry (Antonina, Brazil).
Its stem, located below the edible portion, was separated, washed, cut and dried in an oven with forced air circulation (Marconi, Piracicaba, Brazil) at 60 ºC until reaching 10 ± 2 g.100g⁻¹ of moisture. Waste (peels) from Bordeaux grape juice (Vitis labrusca) was donated by Econatura (Garibaldi, Brazil) as a dehydrated product. These by-products were milled (A-11 Basic, IKA, Campinas, Brazil) and sieved to standardize their granulometry (0.65 mm). Reagents used in the characterization analyses were of certified purity, and the enzymes were donated by Prozyn® (São Paulo, Brazil). Formulations of breakfast cereals were produced using a 2² factorial design, as shown in Table 1, which also shows the extruded products. Corn meal was added to complete 100% of each formulation, while moisture contents were adjusted with grape juice up to 18 g.100g⁻¹. Mixtures were submitted to extrusion (Exteec extruder, Ribeirão Preto, Brazil) by single-screw mechanical friction at 120 ºC, with two rotating knives (60 Hz) and a circular matrix, at a feed rate of 0.2 kg.min⁻¹. After extrusion, the cereal formulations were dried at 60 °C in an air convection oven until the moisture reached 4 ± 1 g.100g⁻¹, controlled with an infrared moisture balance (Sartorius, Gottingen, Germany). Subsequently, 6.5 mL of saturated aqueous xylitol solution (68 g.L⁻¹) was sprayed per 90.25 cm² of product. This volume was divided into three applications carried out every 10 min. Between applications, the formulations were dried at 80 ºC in an oven with air circulation, and they were kept under these conditions for 1 h after the final application (OLIVEIRA et al., 2018). These procedures were defined after preliminary testing. Samples were stored in airtight packaging at 4 ºC for subsequent analysis.

Table 1. Coded and real levels (g.100g⁻¹) of grape by-product flour (GF) and peach palm by-product flour (PF) in the breakfast cereal formulations: formulation 1, GF -1 (0), PF +1 (15); formulation 2, GF +1 (10), PF -1 (0); formulation 3, GF -1 (0), PF -1 (0); formulation 4, GF +1 (10), PF +1 (15); formulation 5, GF 0 (5), PF 0 (7.5); formulation 6, GF 0 (5), PF 0 (7.5). GF = grape by-product flour, PF = peach palm by-product flour.

Flours of the by-products and the breakfast cereal formulations were analyzed according to the AOAC recommendations (HORWITZ; LATIMER, 2005) for moisture (method 925.09), ashes (method 923.03), proteins (method 920.87), lipids (method 920.85) and soluble, insoluble and total dietary fiber (method 991.43). Total carbohydrates were determined by difference. Energetic values of the breakfast cereal formulations were calculated with the Atwater conversion factors: 4 kcal.g⁻¹ for carbohydrates and proteins, 9 kcal.g⁻¹ for lipids and 2 kcal.g⁻¹ for dietary fibers. These analyses were performed in triplicate.
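A worked sketch of the Atwater calculation described above, using an invented composition (not the values of Table 3); it assumes that the fiber fraction is subtracted from the carbohydrate-by-difference value before the 4 kcal.g⁻¹ factor is applied, since fiber is counted separately at 2 kcal.g⁻¹.

```python
# Illustrative Atwater calculation for one hypothetical formulation (g per 100 g);
# all composition values are made up for demonstration.
protein, lipid, fiber = 7.0, 0.9, 8.0
moisture, ash = 4.5, 1.2

total_carbohydrate = 100 - (moisture + ash + protein + lipid)   # carbohydrate by difference
available_carbohydrate = total_carbohydrate - fiber             # fiber is counted separately below

kcal_per_100g = 4 * (available_carbohydrate + protein) + 9 * lipid + 2 * fiber
kcal_per_30g_portion = kcal_per_100g * 30 / 100
print(f"{kcal_per_100g:.0f} kcal/100 g, {kcal_per_30g_portion:.0f} kcal per 30 g portion")
```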
Bulk density (g.cm⁻³) was determined by the displacement of a millet seed mass, and results were expressed as the ratio of mass to volume (OLIVEIRA, 2018). The expansion index (EI) was determined as proposed by Alvarez-Martinez et al. (1988); the diameters (mm) of the matrix and of the extruded product were measured with a digital caliper (INSIZE 1137-150), and the ratio of the diameters was multiplied by 100, with results expressed as a percentage. Both analyses were conducted with 15 repetitions. The water solubility index (WSI) was determined by mixing 2.5 g of sample with 30 mL of water and submitting the mixture to orbital agitation (Marconi, Piracicaba, Brazil) at 100 rpm for 30 min at 25 ºC. After centrifugation at 2 × 10³ × g for 10 min (Celm Combate, Barueri, Brazil), the supernatant was dried and weighed. Results were expressed as g of soluble solids in water per 100 g of sample (g.100g⁻¹). To determine the water absorption index (WAI), the weight of each centrifugation residue was measured, and results were expressed as g of water or milk absorbed per g of sample (g.g⁻¹) (SEIBEL; BELÉIA, 2009; LEORO et al., 2010). These analyses were done in triplicate. Mechanical properties (hardness and crispness) were measured in the formulations before and after immersion in cold milk (10 to 15 ºC) for 30 s, 60 s, 120 s and 180 s. Five samples were randomly collected for each formulation. A texturometer (TA XT Plus Texture Analyzer, Stable Micro Systems, Godalming, UK) with an HDP-BS probe was used under fixed conditions: distance from the probe to the base, 45 mm; test speed, 2 mm.s⁻¹; post-test speed, 10 mm.s⁻¹. The equipment was set to measure compression force, and the results for hardness (N) and crispness (number of peaks) were based on the collected data (SACCHETTI et al., 2003). Sensory analysis was performed in white-light booths by a panel of untrained testers (n = 59) in compliance with ethics requirements (Ethics Committee COPEP, CAAE 66525917.8.0000.0104). The group was composed of individuals of both sexes, between 17 and 52 years old. Samples were coded with 3-digit random numbers and presented, one by one, in white cups containing 2 g of sample; a cup of milk was served separately. Testers tasted the milk-immersed samples and graded them on the structured hedonic scale, which ranges from 1 (disliked extremely) to 9 (liked extremely), for the following attributes: appearance, color, odor, texture, taste and overall acceptance (ISO, 2014). The acceptability index was calculated as the ratio between the average overall acceptance and the highest grade, multiplied by 100 (DAMASCENO et al., 2016). Collected data were expressed as mean followed by standard deviation. The normality test was performed on data with 6 or more replicates (n ≥ 6); data normality was assumed when n < 6, where n is the number of replicates. The Shapiro-Wilk (p ≥ 0.05; n < 30) and Levene's (p ≥ 0.05) tests were applied to assess normality and homogeneity of variances (homoscedasticity), respectively. Significant mean differences were determined by one-way analysis of variance (one-way ANOVA). Data sets with normal distribution (p ≥ 0.05) and homoscedasticity (p ≥ 0.05) were compared by Fisher's mean comparison test (p ≤ 0.05). The Welch and Kruskal-Wallis tests (both p ≤ 0.05) were applied to normally distributed data with unequal variances (heteroscedastic) and to samples without normal distribution, respectively (GRANATO et al., 2014). Evaluation of the quantitative effects of the independent variables (grape flour and peach palm flour) on the responses was performed by multiple linear regression, based on the response surface methodology (RSM). Two-dimensional contour plots were generated for each response variable from the significant regression coefficients. The Statistica software v. 13.3 (StatSoft, USA), licensed to the Postgraduate Program in Food Science and Technology of the State University of Ponta Grossa, was used for all statistical analyses.
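A minimal sketch of the test-selection logic described above, using invented acceptance scores for three formulations; the study itself used SPSS and Statistica, so scipy's Alexander-Govern test stands in here for the Welch-type comparison, and post hoc tests (Fisher, Dunnett) are omitted.

```python
from scipy import stats

# Hypothetical overall-acceptance scores for three formulations (values made up);
# the workflow mirrors the tests named in the text.
groups = {
    "F1": [7.2, 6.8, 7.5, 7.0, 6.9, 7.3],
    "F2": [7.0, 7.4, 6.7, 7.1, 7.2, 6.8],
    "F4": [5.9, 6.1, 5.5, 6.0, 5.8, 6.2],
}
data = list(groups.values())

normal = all(stats.shapiro(g).pvalue >= 0.05 for g in data)   # Shapiro-Wilk on each group
homoscedastic = stats.levene(*data).pvalue >= 0.05            # Levene's test

if not normal:
    p = stats.kruskal(*data).pvalue           # Kruskal-Wallis for non-normal data
elif homoscedastic:
    p = stats.f_oneway(*data).pvalue          # one-way ANOVA for normal, equal-variance data
else:
    # Welch-type comparison for unequal variances; scipy (>= 1.7) offers the
    # Alexander-Govern test as a comparable heteroscedastic alternative.
    p = stats.alexandergovern(*data).pvalue

print(f"significant difference among formulations: {p < 0.05} (p = {p:.4f})")
```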
RESULTS AND DISCUSSION
Flours of grape (GF) and peach palm (PF) by-products exhibited dietary fibers as their main component (>50 g.100g⁻¹), besides considerable contents of proteins (~10 g.100g⁻¹) and ashes (4-7 g.100g⁻¹) (Table 2). Thus, it can be inferred that these by-products have the potential to promote nutritional enrichment of food products (KARNOPP et al., 2017; BOLANHO et al., 2015). The results of the analyses of the extruded breakfast cereals varied considerably, which may be due to the complexity of the extrusion process and the limited control conditions of the equipment. Table 3 shows the proximate compositions of the formulations. Moisture contents ranged from 4.00 to 5.63 g.100g⁻¹; these values provide stability against possible changes caused by microorganisms and chemical reactions during shelf life. The resulting values are in accordance with current legislation (BRASIL, 2019), which establishes a maximum moisture value of 15 g.100g⁻¹ for this kind of product. The higher the contents of PF and GF added to the formulations, the higher the ash content, showing that the by-products increased the mineral content of the formulated cereals. Lipid contents ranged from 0.34 to 1.16 g.100g⁻¹, and GF was mainly responsible for the increase in the fat content of the products. On the other hand, the highest protein contents were found when the highest level of PF was added (formulation 1: GF = 0%, PF = 15%; and formulation 4: GF = 10%, PF = 15%). GF addition also contributed to increasing the content of this nutrient in comparison with formulation 3, which contained no added flour. Addition of the by-products under study increased the total dietary fiber (TDF) content of the breakfast cereals by 368 to 746% in comparison with formulation 3 (without any addition of by-products). According to the Food and Drug Administration (FDA, 2019), a product can be considered a "fiber source" when it has at least 3 g of fiber per 30 g of food. Therefore, formulation 4 (GF = 10%, PF = 15%) can be classified as a fiber source, with 3.67 g of fiber per portion. According to the American Diabetes Association (2019), for good health maintenance, the recommended daily fiber intake ranges from 25 to 30 g; however, most people do not reach this goal. Therefore, the development of food formulations with high fiber content is important. Addition of GF and PF increased both the soluble (SDF) and insoluble (IDF) dietary fiber contents. GF showed a tendency to contribute more significantly to the increase in SDF. The highest IDF content was found in formulation 4, which contained the highest levels of GF (10%) and PF (15%). An adequate balance in the consumption of soluble and insoluble fibers is important due to the different properties of each fraction. The soluble fiber fraction, composed of pectin, gums and some hemicelluloses, is related to decreases in cholesterol and postprandial glucose. Insoluble fiber is associated with regulation of intestinal transit, since it includes cellulose, lignin and most hemicelluloses (DANG; VASANTHAN, 2019). Energetic values of the formulations ranged from 355 to 406 kcal.100g⁻¹ (data not shown). Each 30 g portion provided from 106 kcal (formulation 4) to 122 kcal (formulation 3). Thus, consumption of the portion with the lowest energy leads to a caloric decrease of 12.6% in comparison with the formulation without any addition of by-products. Partial replacement of corn meal with GF and PF reflected positively on the total phenolic content (TPC) and the antioxidant capacity (DPPH) of the cereal formulations (Table 3); values ranged from 43.15 to 139.58 mg GAE.100g⁻¹ and from 3.89 to 15.25 mmol TE.100g⁻¹, respectively. The highest values of TPC and antioxidant activity (AA) were obtained when the highest contents of GF and PF were used (formulation 4), followed by formulation 2, which contained only GF.
When these formulations were compared with formulation 3 (without any by-product addition), there was an increase of over 300% in TPC and AA. Danesi et al. (2018) and Kuck and Noreña (2016) reported high TPC values in by-products of peach palm (130 mg GAE.100g⁻¹) and grape juice (2626 mg GAE.100g⁻¹), respectively, and showed the importance of using this waste in food formulations. The effect of the independent variables, GF and PF, on the TPC and AA of the formulations can be observed in Figure 1, which shows a positive effect (p ≤ 0.20). The highest values of these parameters coincide with the highest level of corn meal replacement with PF and GF, indicating that, even after the extrusion process, the antioxidant compounds of the by-products were retained in the final products. In the production of breakfast cereals enriched with apple waste, Leyva-Corral et al. observed an increase in antioxidant activity; the extrusion process did not affect these compounds, a fact that corroborates the findings of this study. Quiles et al. (2018) also reported that the inclusion of by-products in extruded products contributes to improving their nutritional value by increasing the contents of dietary fiber and antioxidant compounds. Antioxidants play a fundamental role in oxidation reactions by neutralizing reactive oxygen species and chelating pro-oxidant transition metals. The color of extruded products is the result of non-enzymatic reactions and of pigment degradation caused by the processing conditions. Values of this parameter are shown in Table 4, and the color of the formulations can also be observed in Table 1. Breakfast cereals containing GF (formulations 2, 4, 5 and 6) were darker (lower luminosity, L*) and tended more to blue (lower chromaticity b*) than the other formulations, a fact that is related to the anthocyanin pigments found in this residue. On the other hand, the formulation without any addition of by-products had the highest values of L* and b*, due to the high percentage of corn flour, a carotenoid-containing raw material, which makes the product yellow (7). The technological parameters (Table 4) showed that the water absorption index (WAI) ranged from 3.81 to 5.56 g.g⁻¹. These values are compatible with those found by Carvalho et al. (2009), i.e., from 5.01 to 6.48 g.g⁻¹, in fried extruded products obtained from a mixture of cassava and peach palm flours. Addition of the by-products under study (GF and PF) at high proportions reduced the amount of starch in the formulations introduced into the extruder and decreased the WAI values, since this parameter is associated with the amount of water absorbed by the starch (MERCIER et al., 1998). This result is interesting because the amount of water absorbed by the breakfast cereal is associated with the time it remains crisp; the gain in moisture depends on the absorption capacity of each cereal and determines the critical time related to the texture change from a crunchy to a softened product (TAKEUCHI et al., 2005). Regarding the water solubility index (WSI), the highest values were found in formulations 2 (10% GF, 0% PF) and 3 (100% corn meal), showing that these formulations had high contents of soluble, low-molecular-weight compounds. The WSI is related to the number of soluble molecules and the degree of compound degradation during the extrusion process (QUILES et al., 2018). The milk solubility (MSI) and milk absorption (MAI) indexes were found to be higher than the corresponding values obtained in water (WSI and WAI).
A similar effect was verified by Leoro et al. (2010) in breakfast cereals containing passion fruit waste. The lowest MAI value was obtained in the formulation with the highest content of by-products (formulation 4, 10% GF and 15% PF). These results are favorable to the bowl life of breakfast cereals, which are commonly consumed with milk. Extruded products are aerated and have pores formed by the expansion of their component matrices (RIAZ, 2000). The combined use of GF and PF significantly reduced the expansion index (EI) and increased the bulk density (BD). This can be explained by the rupture of the walls of the bubbles formed during extrusion, due to the presence of fibers, which interfere with gas retention and expansion. Intermediate EI and BD results were found in formulations 1 and 2, which contained only one of the by-products (either PF or GF). Regarding the mechanical properties of the breakfast cereals (Table 5), the formulations showed a significant difference in the crispness parameter, whose minimum value was observed in formulation 4 (10% GF and 15% PF); this result may be due to its high fiber content (12.23 ± 0.62 g.100 g⁻¹). Differences among formulations were maintained along the immersion time, and a reduction in crispness was observed, except in formulation 4, which showed an increase in the number of peaks up to 180 s. Mechanical properties such as hardness and crispness are related to sensory quality and are associated with the expansion characteristics of breakfast cereals (DING et al., 2005). Since this is a product consumed with milk, soaking time is important in determining product quality. Evaluation of the maximum compressive strength (hardness) of the breakfast cereals before milk soaking showed that formulations 1 (15% PF), 2 (10% GF) and 3 (100% corn meal) exhibited lower hardness values than the other samples. This result is positive because formulations 1 and 2 had increased dietary fiber contents, which favors the application of grape and peach palm by-products to breakfast cereals. In general, hardness was reduced over the immersion time in the samples under analysis; the highest change in values, from before to after milk immersion, was observed when the highest amounts of flour were used (formulation 4, 10% GF and 15% PF); this formulation also exhibited the highest hardness as a dry product. Table 4. Color and technological parameters of breakfast cereal formulations produced with different amounts of corn meal and flours of grape and peach palm by-products (L*, a*, b*, WAI in g.g⁻¹, WSI in g.100 g⁻¹, MAI in g.g⁻¹, MSI in g.100 g⁻¹, BD in g.L⁻¹; tabulated data not reproduced here). Table 5. Mechanical properties of breakfast cereal formulations produced with different amounts of corn meal and flours of grape and peach palm by-products: dry products and milk-immersed ones at 30 s, 60 s, 120 s and 180 s (Dry = dry product; MI30s to MI180s = immersed in milk for 30 to 180 s; tabulated data not reproduced here). Table 6 shows the results of the sensory analysis of the milk-immersed breakfast cereals and the acceptance rates. Among the formulations under evaluation, the one with the highest percentages of replacement of corn meal with waste flours (sample 4, 10% GF and 15% PF) had the lowest means for all parameters. It was the only one that differed (p<0.05) from the control formulation (without any addition of by-products) concerning appearance, color and aroma.
Replacement of corn meal with either 15% PF (formulation 1) or 10% GF (formulation 2) resulted in values similar (p>0.05) to those of formulation 3 (without any addition of PF and GF) for texture, flavor and overall acceptance. Thus, these formulations had the highest acceptability indices (AI) (≥75%); AI values higher than 70% indicate that a formulation is accepted from the sensory point of view (16). These results are associated with their lower hardness and higher crispness values (Table 5), as well as with their lower fiber contents (Table 6), in comparison with those found in the other formulations (4, 5 and 6). According to Onwulata et al. (2001), fiber content influences texture by increasing the hardness of breakfast cereals. As a result, acceptability indices tend to decrease, since the habit of fiber ingestion is still limited among consumers. This explains the lowest scores obtained for all attributes under evaluation in formulation 4 (10% GF and 15% PF), which had the highest fiber content (~11%). On the other hand, the combination of the lowest levels of GF (5%) and PF (7.5%), formulations 5 and 6, led to scores for appearance, color, aroma and overall acceptance similar to those of formulations 1, 2 and 3 (p>0.05). In these formulations, the averages were higher than those reported by Oliveira et al. (2018) for extruded breakfast cereals enriched with flours of whole grain wheat and jabuticaba skin.
CONCLUSION
Replacement of corn meal with grape and peach palm by-products led to nutritional enrichment of the breakfast cereals, especially in fiber and antioxidant compounds. Formulations containing the lowest levels of PF and GF had more favorable results from the sensory and technological points of view. Therefore, the use of the by-products under study proved to be a promising alternative to add value to them in the production of breakfast cereals, thus contributing to strengthening production chains, stimulating sustainability and offering a healthy alternative to the consumer market.
Agrobacterium-Mediated Fungal Resistance Gene Transfer Studies Pertaining to Antibiotic Sensitivity on Cultured Tissues of Lettuce (Lactuca sativa L. cv. Solan Kriti)

Abstract: Development of an efficient protocol for genetic transformation in plants requires effective shoot regeneration and antibiotic selection systems. A genetically engineered, disarmed Agrobacterium tumefaciens strain containing the binary vector pCambia with the chiII (fungal resistance) and hpt (hygromycin resistance) genes was used for genetic transformation studies. Hygromycin and cefotaxime sensitivity studies were conducted using leaf and petiole explants of lettuce (Lactuca sativa cv. Solan Kriti) to explore the aptness of hygromycin resistance as a selectable marker and of cefotaxime for controlling excessive bacterial growth during genetic transformation studies. Explants (leaf and petiole) showed a decrease in fresh weight as the concentration of hygromycin increased, resulting in full or partial inhibition of shoot regeneration. A negative correlation was observed between the concentration of hygromycin and the fresh weight of the explants at different intervals of time. The effect of different concentrations of cefotaxime on the regeneration potential of leaf and petiole explants of lettuce was also studied. PCR analysis of genomic DNA using specifically designed primers was done to detect the presence of the chiII and hpt genes in hygromycin-resistant plantlets of lettuce. Out of five randomly selected putative transgenic shoots, four were found positive for the presence/integration of the chiII and hpt genes, confirming T-DNA transfer and integration into the plant genome. The results indicate that hygromycin and cefotaxime act as effective selective agents during genetic transformation studies.

Introduction

Lettuce (Lactuca sativa L.) is a widely used leafy vegetable belonging to the family Asteraceae (2n = 18). It is nutritionally rich, has medicinal properties, and is well known for its high vitamin A content and for minerals such as calcium and iron. It is the only crop rich in Lactupicrin, acts as an anticancer agent and is a suitable candidate for the production and delivery of therapeutic proteins (Resh, 2001; Ryder, 2002; Mohebodini et al., 2011). However, this crop is severely affected by a number of biotic and abiotic stresses, which cause enormous yield losses during commercial cultivation of lettuce. Agrobacterium tumefaciens-mediated genetic transformation is the most common and feasible method to transfer genes of interest into different crop plants and is a widely used method for developing resistance against various diseases (Srivastava, 2003). When gene transfer is attempted, an efficient selection system is required whereby transformed cells can be separated from untransformed cells. Several studies support the concept that most of the foreign genes introduced by Agrobacterium are normally transmitted to the progeny (Gelvin, 1998). Many groups have reported transformation efficiencies ranging from 8% to 20% in lettuce (Dias et al., 2005; Ziarani et al., 2014). Ideal transformants can be found only with difficulty, depending upon the plant material to be transformed and, to some extent, on the nature and complexity of the transgene. The establishment of a transformation procedure requires the use of a selectable marker gene, which allows the preferential growth of transformed cells in the presence of a selective agent. Selection efficiency depends on the size of the exposed tissue, the developmental stage of the plant cells, the regeneration response and the concentration of the selective agent. In the transformation of lettuce, the two most popular aminoglycoside antibiotic resistance marker genes are npt-II (neomycin phosphotransferase II) for kanamycin resistance and hpt (hygromycin phosphotransferase) for hygromycin B resistance. Hygromycin B is an aminoglycoside antibiotic used for selection; it is inactivated by the product of hpt genes isolated from the soil bacterium Streptomyces hygroscopicus and from E. coli. Aminoglycoside antibiotics combine with the 70S ribosomal subunit in the chloroplast and mitochondria and interfere with protein translation by causing mistranslation, finally leading to etiolation and death of the plant. The enzyme hygromycin phosphotransferase produced by the hpt gene phosphorylates hygromycin B and therefore inactivates it. A first step is therefore knowledge of the relative tolerance of the plant cells to the antibiotics for which resistance markers exist. The sensitivity of plant cells to the selection agents depends upon the genotype, the explant type, the developmental stage and the tissue culture conditions and should therefore be determined under the actual conditions of the genetic transformation and regeneration processes (Koronfel, 1998). Leaf and petiole explants of lettuce were subjected to increasing doses of hygromycin to identify the lowest concentration required to completely inhibit callus growth and adventitious shoot differentiation. If growth in the presence of a normally inhibitory concentration is taken as an indicator of antibiotic activity, then that concentration would represent the lowest one appropriate for selection of resistant tissue/callus/shoots in Agrobacterium co-cultivated explants. Another antibiotic commonly used in genetic transformation is cefotaxime, which is required after the co-cultivation step. Cefotaxime has a broad spectrum of activity against both gram-positive and gram-negative bacteria; it blocks cell wall mucopeptide biosynthesis by inhibiting the cross-linking of peptidoglycan through binding and inactivation of transpeptidases, thus inhibiting cell wall biosynthesis. Efficient knowledge of the relative tolerance of the plant material to the antibiotics for which a resistance marker exists is therefore required.
However, this depends on many factors, such as the genotype, the explant type, the developmental stage and the regeneration ability of the explants, as well as on the impact of the antibiotics used during transformation to eliminate A. tumefaciens. The objective of the present investigation was to study the effect of hygromycin and cefotaxime on cultured tissues of lettuce and, based on these results, to develop an efficient in vitro selection system for further transformation work.
Plant material and culture medium
The certified seeds of lettuce (Lactuca sativa cv. Solan Kriti) were procured from the Department of Vegetable Science, Dr Y. S. Parmar University of Horticulture and Forestry, Nauni, Solan. The leaf and petiole explants were obtained from fifteen- to twenty-day-old glasshouse-grown seedlings. The explants were washed thoroughly under running tap water for half an hour, treated with 0.2% Bavistin solution for 2 minutes and 1% HgCl2 for 1 minute, and then washed 3-4 times with sterilized distilled water to remove any traces of mercuric chloride.
Agrobacterium strain and plasmid vector
A genetically engineered Agrobacterium tumefaciens strain harbouring the plasmid pCAMBIA bar-ubi-chi 11, which contained hpt as a plant selectable marker and the gene of interest chi 11 (rice chitinase gene), obtained from Dr S. Muthukrishnan, Kansas State University, USA, was utilized for Agrobacterium-mediated gene transfer studies. In this construct, the expression of the chi 11 gene was driven by the Ubi promoter and the hpt gene was under the control of the CaMV 35S promoter. The hpt gene was located close to the T-DNA left border and downstream of the chi 11 gene (Fig. 1).
Effect of hygromycin on the growth of callus and shoot regeneration
The leaf and petiole explants excised from 15-20-day-old glasshouse-grown seedlings were cut into small pieces, weighed on a Mettler balance under aseptic conditions in a laminar air flow cabinet, and cultured on MS shoot regeneration medium without (control) and with different concentrations of hygromycin in separate Petri plates. The initial fresh weight of the explants was recorded. The selective medium for the hygromycin sensitivity studies was prepared by adding different concentrations of hygromycin, filter-sterilized through a 0.22 µm pore size Millipore membrane filter, into pre-sterilized molten MS basal medium (Murashige and Skoog, 1962) containing 0.25 mg/l BAP and 0.10 mg/l NAA for leaf explants or 0.75 mg/l Kn and 0.10 mg/l NAA for petiole explants, under aseptic conditions. Different concentrations of hygromycin (2.5, 5, 7.5, 10, 12.5 and 15 mg/l) were used to study the effect of the antibiotic on the relative growth (fresh weight) of the cultured explants. The cultured leaf and petiole explants were observed for callus formation/adventitious shoot regeneration and eventual changes in fresh weight. Morphological changes were observed in these tissues from 0 to 35 days in culture. The relative growth (fresh weight) of the explants was calculated at intervals of seven days. Each treatment consisted of six replications, each with five leaf and petiole explants.
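The relative-growth measure and the concentration versus fresh-weight correlation reported later in the Results can be computed as sketched below. The fresh-weight numbers are hypothetical placeholders (the actual data are in Tables 1 and 2 of the study), and `statistics.correlation` requires Python 3.10 or later.

```python
# Sketch of the relative-growth and correlation computations described above.
# The fresh-weight values below are hypothetical placeholders; only the
# calculation pattern is illustrated.
import statistics

hygromycin_mg_l = [0, 2.5, 5, 7.5, 10, 12.5, 15]

# Mean fresh weight (g) per treatment after 35 days, assuming 0.10 g initial weight
final_fresh_weight_g = [0.38, 0.21, 0.17, 0.14, 0.12, 0.11, 0.10]   # hypothetical
initial_fresh_weight_g = 0.10

# Relative growth = (final - initial) / initial
relative_growth = [(f - initial_fresh_weight_g) / initial_fresh_weight_g
                   for f in final_fresh_weight_g]

# Pearson correlation between hygromycin concentration and final fresh weight;
# a negative value reflects the inhibitory effect reported in the text.
r = statistics.correlation(hygromycin_mg_l, final_fresh_weight_g)
print([round(g, 2) for g in relative_growth])
print(f"Pearson r = {r:.2f}")
```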
Effect of cefotaxime on the regeneration potential
The leaf and petiole explants were excised and cultured on selective shoot regeneration medium prepared by adding different concentrations of cefotaxime (100, 200, 300, 400 and 500 mg/l), filter-sterilized through a 0.22 µm pore size Millipore membrane filter, into pre-sterilized molten shoot regeneration medium for leaf (MS basal medium containing 0.25 mg/l BAP and 0.10 mg/l NAA) and petiole (MS basal medium containing 0.75 mg/l Kn and 0.25 mg/l NAA) explants under aseptic conditions, in order to study its effect on the regeneration potential of the cultured explants. Morphological changes were observed in these explants for callus formation and adventitious shoot regeneration.
Agrobacterium tumefaciens-mediated genetic transformation and callus induction
Prior to infection of the explants with Agrobacterium, four fresh colonies of A. tumefaciens harbouring the plasmid pCAMBIA bar-ubi-chi 11 grown on a YMB medium plate (1% mannitol, 0.04% yeast extract, 0.01% NaCl, 0.02% MgSO4.7H2O, 0.05% K2HPO4, 1.5% agar, pH 7.0) were inoculated into liquid YMB medium supplemented with 50 mg/l kanamycin and 25 mg/l streptomycin and incubated overnight at 28 °C with continuous shaking at 250 rpm. Agrobacterium cells were harvested by centrifugation at 10,000 rpm for 10 minutes and resuspended in liquid MS basal medium containing 30 g/l sucrose to a final density of 10⁸ cells/ml (OD of 0.521 at 540 nm), which was fixed for the genetic transformation experiments. The explants were pre-cultured on shoot regeneration medium for 72 hours and then infected by immersion in the Agrobacterium suspension for 1 minute, with gentle shaking three to five times during the infection process. Subsequently, the infected explants were blotted dry on sterile Whatman filter paper and transferred onto the same pre-culturing medium for 72 hours of co-cultivation at 26 ± 2 °C. Following co-cultivation, the infected explants were transferred onto selective shoot regeneration medium containing 300 mg/l cefotaxime and 7.5 mg/l hygromycin and kept at 26 ± 2 °C under a 16-h light/8-h dark cycle in the culture room for callus induction. To prevent non-transformed callus from escaping selection, the infected explants were maintained under hygromycin selection.
DNA isolation and PCR analysis
Genomic DNA was isolated from non-transformed shoots (control) and hygromycin-resistant shoots/plantlets according to the cetyl-trimethyl ammonium bromide method (Doyle and Doyle, 1990) with minor modifications. Plasmid DNA was isolated using a plasmid isolation kit. PCR analysis was carried out to detect the presence of the hpt and chitinase genes using the primers HPT-F 5'-ATGAAAAAGCCTGAACTCACCGCGA-3' and HPT-R 5'-TCCATCACAGTTTGCCAGTGATACA-3', and CHI-F 5'-GGACGCAGTCTCCTTCAAGA-3' and CHI-R 5'-ATGTCGCAGTAGCGCTTGTA-3'. Linkage analysis of these two genes was conducted using PCR amplification.
Results and Discussion
Lettuce (Lactuca sativa L. cv. Solan Kriti) is an agronomically important leafy vegetable that can be grown worldwide. The production of lettuce is challenged by many stresses; these stresses, either alone or in combination with abiotic stresses, cause heavy losses. Plant diseases seriously affect the quality and yield of lettuce.
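As a side note on the PCR assay described in the Methods above, the listed primers can be checked for basic properties (length, GC content, approximate melting temperature). The sketch below uses the rough Wallace rule and is purely illustrative; it is not how the authors designed or validated their primers.

```python
# Quick sanity check on the PCR primers listed above: length, GC content and a
# rough melting-temperature estimate using the Wallace rule (2*(A+T) + 4*(G+C)).
# The Wallace rule is approximate for 20-25-mers; this is an illustration only.

PRIMERS = {
    "HPT-F": "ATGAAAAAGCCTGAACTCACCGCGA",
    "HPT-R": "TCCATCACAGTTTGCCAGTGATACA",
    "CHI-F": "GGACGCAGTCTCCTTCAAGA",
    "CHI-R": "ATGTCGCAGTAGCGCTTGTA",
}

for name, seq in PRIMERS.items():
    gc = sum(seq.count(base) for base in "GC")
    at = len(seq) - gc
    tm_wallace = 2 * at + 4 * gc          # degrees Celsius, rough estimate
    print(f"{name}: {len(seq)} nt, GC {100 * gc / len(seq):.0f}%, Tm ~{tm_wallace} C")
```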
The conventional methods for the control of post-harvest fungal diseases depend mainly on the intensive and extensive use of chemical fungicides, which have drawbacks such as damage to the ecological system and residual poisoning of humans and animals. Therefore, it is desirable to develop fungus-resistant plants through plant genetic engineering. Among fungicidal genes, chitinase genes have proven effective in controlling fungal pathogens such as Pythium spp. (e.g., Pythium ultimum), Bremia lactucae and Sclerotinia/Sclerotium species in many crop plants. A genetically engineered, disarmed Agrobacterium tumefaciens strain containing the binary vector pCAMBIA bar-ubi with the chi (fungal resistance) and hpt (hygromycin phosphotransferase) genes was used for the genetic transformation studies. All transformation systems for creating transgenic plants entail separate processes for introducing cloned DNA into living plant cells, for identifying or selecting those cells that have integrated the DNA into the appropriate plant genome (nuclear or plastid), and for regenerating or recovering fully developed plants from the transformed cells. Selectable marker genes have been pivotal to the development of plant transformation technologies because they allow scientists to identify or isolate the cells that are expressing the cloned DNA and to monitor and select the transformed progeny. As only a very small proportion of cells are transformed in most experiments, the chances of recovering transgenic lines without selection are usually low, and the selectable marker gene is expected to function in a range of cell types. The hygromycin resistance gene is the most widely used selectable marker for plant cell transformation, and the sensitivity of a particular species to hygromycin is a key element in the development of any new transformation system in which a hygromycin resistance gene will be employed. The sensitivity of leaf and petiole explants to hygromycin was assessed according to the fresh weight of the explant/callus and the percentage of shoot regeneration. Both leaf and petiole explants showed appropriate growth, i.e. callus initiation and shoot formation, on shoot regeneration medium devoid of hygromycin, and both explant types proved highly sensitive to hygromycin at concentrations as low as 2.5 mg/l; non-transformed tissue did not survive on the selective medium containing hygromycin during the transformation experiment. On the selective media containing low concentrations of hygromycin (2.5 mg/l and 5.0 mg/l), the colour of the explants/tissues changed to pale greenish yellow and finally turned brown after 35 days of culture. An increase in concentration (7.5 mg/l and 10 mg/l) hastened the colour change of the explants to pale yellow and was sufficient to differentiate them from the control. At much higher concentrations (12.5 mg/l and 15.0 mg/l), colour change and browning of the explants within 2 weeks resulted in their complete necrosis.
No shoot regeneration or shoot bud formation was observed from either explant type even after 5 weeks of culturing on selective shoot regeneration medium containing the different concentrations of hygromycin. In the control experiment, adventitious shoot bud regeneration was observed within 35 days on the culture medium from both the leaf and petiole explants. A gradual decline in the fresh weight of leaf and petiole explants was recorded with increasing concentration of hygromycin (2.5 to 15 mg/l) up to 35 days. Concentrations above 10.0 mg/l hygromycin inhibited shoot growth and differentiation in cultured explants of lettuce; similar results were reported by Dias et al. (2006). The maximum decline in fresh weight was observed at 15 mg/l hygromycin in both explant types, whereas in the control (without hygromycin) callus was induced from the cut edges of both explants and a gradual increase in fresh weight was observed. Dias et al. (2006) also used 10 mg/l hygromycin as the lethal dose for the selection of transgenic shoots carrying a gene for fungal tolerance in lettuce cv. Veronica. Statistical analysis of the above data showed a significant difference between the fresh weights of leaf and petiole explants/callus at different intervals in the control and at the six different concentrations of hygromycin (Tables 1 and 2). A negative correlation coefficient was observed between the hygromycin concentration and the fresh weight of the explants/tissue/callus at different intervals of time for both explant types. These results indicate that hygromycin has an inhibitory effect on the growth of cultured tissues, as it is a potent inhibitor of protein synthesis. Therefore, based on the above results, 10.0 mg/l hygromycin was used for the selection of transformed cells and demonstrated to be an effective selection agent in Agrobacterium-mediated genetic transformation studies in lettuce. Hygromycin has been reported for the selection of transformed and non-transformed explants with a very low frequency of selection escape in lettuce (Enkhchimeg et al., 2005; Dias et al., 2006; Deng et al., 2007) and also in many other crops such as banana (Maziah et al., 2007), pigeon pea (Kumar et al., 2004) and American ginseng (Chen and Punja, 2002). For successful Agrobacterium-mediated transformation, elimination of the bacteria from the culture is necessary after the co-cultivation period. This is achieved by the addition of antibiotics into the culture medium; cefotaxime is an antibiotic commonly used to kill Agrobacterium after co-cultivation with the plant material. However, antibiotics that are commonly used to eliminate A. tumefaciens from plant tissues have also been shown to influence morphogenesis and regeneration potential either positively or negatively (Ling et al., 1988; Ahmed et al., 2007). Ahmed and co-workers reported that cefotaxime severely inhibited regeneration from Agrobacterium-infected leaf explants of the lettuce cultivar 'Evola' and found 50 mg/l cefotaxime to be optimum for the suppression of Agrobacterium. In the case of lettuce (Lactuca sativa cv. Solan Kriti), increasing the cefotaxime concentration (100 mg/l to 500 mg/l) caused a gradual decrease in the percent shoot regeneration in both explant types.
The maximum percent shoot regeneration (71.99%), with an average of 0.98 shoots per explant, was obtained on shoot regeneration medium with 100 mg/l cefotaxime for leaf explants, whereas for petiole explants the maximum percent shoot regeneration (59.99%), with an average of 0.96 shoots per explant, was likewise obtained with 100 mg/l cefotaxime. In contrast, the maximum decline was observed at the higher concentration of 500 mg/l cefotaxime (Tables 5 and 6), showing a negative effect on the shoot regeneration potential. Five hygromycin-resistant callus lines, together with a non-transformed control, were subjected to PCR amplification to detect the presence of the hpt and chi genes. The results showed that a 500 bp fragment of the hpt gene was amplified in all five samples of transformed DNA and was absent in the non-transformed control callus (Fig. 2a). However, a 237 bp fragment of the chi gene was amplified in only four out of the five callus lines (Fig. 2b). The PCR results indicated that hygromycin is an effective selective agent and that the selection protocol applied for lettuce transformants was effective, with no non-transformant escaping hygromycin selection. Selection and identification of transformed cells and tissues are crucial steps of genetic transformation and help to improve the selection and transformation efficiency. This study thus reports an efficient hygromycin selection protocol for Agrobacterium-mediated lettuce transformation.
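For completeness, the two regeneration metrics reported above (percent shoot regeneration and average number of shoots per explant) are simple ratios of raw culture counts. The sketch below illustrates the calculation with hypothetical counts, since the paper reports only the derived values.

```python
# How the regeneration metrics quoted above are typically derived from raw
# counts. The counts used here are hypothetical placeholders, not the authors' data.

def regeneration_metrics(explants_cultured: int,
                         explants_regenerating: int,
                         total_shoots: int):
    percent_regeneration = explants_regenerating / explants_cultured * 100.0
    shoots_per_explant = total_shoots / explants_cultured
    return percent_regeneration, shoots_per_explant

# Hypothetical example: 30 leaf explants cultured on 100 mg/l cefotaxime
pct, spe = regeneration_metrics(explants_cultured=30,
                                explants_regenerating=22,
                                total_shoots=29)
print(f"{pct:.2f}% regeneration, {spe:.2f} shoots per explant")
```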
Structural Control on the Formation of Pb-Zn Deposits: An Example from the Pyrenean Axial Zone

Abstract: Pb-Zn deposits, and specifically Sedimentary-Exhalative (SEDEX) deposits, are frequently found in deformed and/or metamorphosed geological terranes. The structure of ore bodies is generally difficult to observe, and documentation of its relationships to the regional structural framework is often lacking. In the Pyrenean Axial Zone (PAZ), the main Pb-Zn mineralizations are commonly considered in the literature as Ordovician SEDEX deposits. New structural field analyses focusing on the relations between mineralization and regional structures allowed us to classify these Pb-Zn mineralizations into three types: (I) Type 1 corresponds to minor disseminated mineralization, probably syngenetic and from an exhalative source; (II) Type 2a is a stratabound mineralization, epigenetic and synchronous with the Variscan D1 regional deformation event; and (III) Type 2b is a vein mineralization, epigenetic and synchronous with the late Variscan D2 regional deformation event. Structural control appears to be a key parameter in concentrating Pb-Zn in the PAZ, as mineralizations occur associated with fold hinges, cleavage, and/or faults. Here we show that the main exploited type 2a and type 2b Pb-Zn mineralizations are intimately controlled by Variscan tectonics. This study demonstrates the predominant role of structural analysis in unraveling the formation of Pb-Zn deposits, especially in deformed/metamorphosed terranes.

Introduction

The world's most important Pb-Zn resources consist of Sedimentary-Exhalative (SEDEX) mineralizations [1]. These types of ore deposits are syngenetic, sedimentary to diagenetic. The occurrence of laminated sulfides parallel to bedding, associated with sedimentary features (graded beds, etc.), is the key geological argument [2]. These important deposits often occur in ancient metamorphosed and highly deformed terranes, for example Red Dog, Alaska [3,4]; Rampura, India [5]; or Broken Hill, Australia [6]. In these cases, the processes of ore formation are still largely debated. Consequently, unraveling the relationships between mineralization and orogenic remobilization(s) is essential in order to understand the genesis of Pb-Zn deposits in deformed and metamorphosed environments. For example, in the Broken Hill [6-8] and Cannington [9] deposits in Australia, some authors argued for a metamorphogenic and epigenetic mineralization, as large metasomatic zones may have refined pre-existing Pb-Zn-rich rocks. Other authors consider a pre-metamorphic and syngenetic origin with only limited remobilization linked to tectonic events [10-12]. In the world-class Jinding Pb-Zn deposit, some authors proposed a syngenetic origin of the deposit [14,15], whereas others argued for an epigenetic genesis based on field study, textural evidence [16-19], fluid inclusions [19,20], and paleomagnetic age [13]. Nowadays, these high-tonnage Pb-Zn deposits are the preferential target of numerous academic and industrial studies, also because of the presence of rare metals like Ge, Ga, In, or Cd associated with the sulfides. The Pb-Zn deposits hosted in the Pyrenean Axial Zone (PAZ), an area that experienced Variscan tectonics [21-23], are usually considered to be SEDEX. As an example, due to their geometry and the presence of distal volcanic rocks, Bois et al. [24] and Pouit et al.
[25] considered the Pb-Zn mineralizations located in the Pierrefitte anticlinorium to be SEDEX. In the Bentaillou area, Fert [26] and Pouit [27,28] argued that stratigraphic and sedimentary controls were the dominant processes during the genesis of these mineralizations. In the Aran Valley, the deposits (Liat, Victoria-Solitaria, and Margalida) have been studied by Pujals [29] and Cardellach et al. [30,31]. These authors concluded that the Pb-Zn mineralizations formed as stratiform and possibly exhalative bodies with only limited remobilization during Variscan deformation. Only a few authors have documented the impact of Variscan tectonics on the genesis of these mineralizations: Alonso [32] for the Liat, Urets, and Horcalh deposits, and Nicol [33] for the Pierrefitte anticlinorium deposits. In the Benasque Pass area, south of the Bossòst anticlinorium, Garcia-Sansegundo et al. [34] indicated probable Ordovician stratiform or stratabound Pb-Zn mineralizations intensely reworked during Variscan tectonics. The Pb isotope study carried out by Marcoux [35] showed a single major event of Pb-Zn mineralization, interpreted as sedimentary-controlled and Ordovician or Devonian in age. Remobilization of Pb isotopes remains, however, poorly constrained, and a complete structural study related to these analyses is lacking. The Pyrenean sulfide mineralizations are an excellent target for investigating the links between orogenic deformation(s) and the genesis of the associated mineralization(s), as well as for finding key arguments to distinguish between strictly syngenetic or rather epigenetic mineralizations and structurally remobilized mineralizations. In this work we demonstrate that Pb-Zn deposits from five districts in the PAZ, previously largely considered SEDEX, were actually formed through processes involving a strong structural control.

Geological Setting

The Pyrenean Axial Zone (PAZ, Figure 1) is the result of the collision between the Iberian and Eurasian plates since the Lower Cretaceous. Deep parts of the crust were exhumed during this orogeny. The PAZ is composed of Paleozoic metasedimentary rocks locally intruded by Ordovician granites, deformed and metamorphosed during the Variscan orogeny, like the Aston or Canigou gneiss domes [23,36]. The PAZ is generally divided into two domains [21,36-39]: (i) a deep-seated domain called the Infrastructure, which contains medium- to high-grade metamorphic rocks, and (ii) a shallow-seated domain called the Superstructure, which is composed of low-grade metamorphic rocks. The Infrastructure presents flat-lying foliations, but highly deformed domains appear locally with steep and penetrative crenulation foliations. Alternatively, the Superstructure presents moderate deformation associated with a slaty cleavage [40,41]. These two domains are intruded by Late-Carboniferous granites, like the Bossòst and the Lys-Caillaouas granites [37,42,43]. In the PAZ, several deformation phases, essentially Variscan in age (325-290 Ma), are recognized. The first deformation event (D1) is marked by a cleavage (S1) that is often parallel to the stratification (S0). The regional M1 metamorphism is of Medium-Pressure/Low-Temperature (MP/LT) type and synchronous with this first D1 deformation [22]. The second deformation event (D2) is expressed by a moderate to steep axial-planar (S2) cleavage. M2 is a Low-Pressure/High-Temperature (LP/HT) metamorphism linked to the Late-Variscan granitic intrusions, and it is superimposed on the M1 metamorphism [44,45].
Late-Variscan and/or Pyrenean-Alpine D3 deformations are locally expressed as folds and shear zones, like the Merens and/or probably the Bossòst faults [41,46,47]. The Pyrenean Pb-Zn regional district is the second largest in France, with ~400,000 t Zn and ~180,000 t Pb extracted [48,49]. These sulfide deposits are localized in the PAZ in the Pierrefitte and Bossòst anticlinoriums (Figure 1b). Sphalerite (ZnS) and galena (PbS) are essentially present in Ordovician and Devonian metasediments. Few Pb-Zn deposits are hosted in granitic rocks [50]. This study focuses on Pb-Zn deposits located in the Bossòst anticlinorium (Figure 1) [42,44,51] and includes a comparison with Pb-Zn deposits occurring in the Pierrefitte anticlinorium. The southern part of the Bossòst anticlinorium forms the Aran Valley synclinorium. The northern part is limited by the North Pyrenean fault (Figure 2a). It is mostly composed of Cambrian to Devonian rocks and an intruding Late-Variscan leucocratic granite named the Bossòst granite. Figure 2 (caption, partial): (a) Geological map after BRGM maps (France [52-54]) and IGME maps (Spain, Aran Valley; Garcia-Sansegundo et al. [55]); metamorphic dome boundaries correspond to the andalusite isograd presented by Zwart. (b) Structural map with foliation trajectories of S0-S1, subvertical S2, and related F2 folds; note the preferential occurrence of Pb-Zn deposits where the S2 cleavage is well expressed. (c) Schmidt stereographic projections (lower hemisphere) of poles to S0-S1 and subvertical S2 foliation planes. Three main Pb-Zn districts are recognized in the Bossòst anticlinorium (Figure 2): (I) The Bentaillou-Liat-Urets district is located in the eastern part of the anticlinorium and was the most productive in the Bossòst anticlinorium, with ~1.4 Mt of ore at 9% Zn and 2% Pb [32,33]. (II) The Margalida-Victoria-Solitaria district is located in the southern part of the anticlinorium, close to the Bossòst granite; production reached ~555,000 t at 11% Zn and 0.1% Pb [49]. (III) The Pale Bidau-Argut-Pale de Rase district is located in the northern part of the anticlinorium; its Pb-Zn production did not exceed ~7000 t of Zn and ~3000 t of Pb [57]. Figure 3 (caption, partial): after [56]; note the presence of Pb-Zn mineralization at rock competence interfaces and close to the F1 fold hinge in the Bentaillou mine. The Pierrefitte anticlinorium is located north of the Cauteret granite and is intersected by the Eaux-Chaudes thrust (ECT; Figure 1). It is essentially composed of Ordovician rocks in the west and Devonian terranes in the east. Two districts are studied: (I) the Pierrefitte mines, the largest district in the PAZ, which produced ~180,000 t of Zn, ~100,000 t of Pb and ~150 t of Ag [48]; and (II) the Arre and Anglas mines, located west of the Pierrefitte mines, whose Pb-Zn production did not exceed ~6500 t of Zn [48].

Structural Analysis of Three Pb-Zn Districts in the Bossòst Anticlinorium

The Bossòst anticlinorium is a 30 × 20 km E-W-trending asymmetric antiform hosting a metamorphic dome (Figure 2a). Pre-Silurian lithologies are dominated by Cambro-Ordovician schists. Locally, other lithologies are present, like the Cambro-Ordovician Bentaillou marble or the Late Ordovician microconglomerate and limestone (Figure 2b). Two distinct cleavages can be observed in the Bossòst anticlinorium.
S1 transposes the S0 stratification and is roughly oriented N090-N120°E, with varied dip angles both to the north and to the south (Figure 2b,c). S0-S1 dip angles are low in the metamorphic dome (Figure 2b), but this pattern is not restricted to the core of the anticlinorium. In the eastern part of the anticlinorium, the foliation is generally low to moderately dipping (0-45° N or S, Figure 2b), and Garcia-Sansegundo and Alonso [54] inferred the presence of large recumbent F1 folds in the Bentaillou and Horcalh-Malh de Bolard areas. The presence of a Late Ordovician microconglomerate at the base of the Bentaillou limestone is described by Garcia-Sansegundo and Alonso [56] and supports this hypothesis. Furthermore, the presence of these folds is inferred from the observation of dm- to pluri-m-scale, north-verging recumbent F1 folds in the Bentaillou marble in the underground levels of the mine, and also from their presence in the Devonian schists. Close to the southern boundary of the Bossòst granite, the S0-S1 foliation in high-grade schists is steeply dipping (Figure 2b). The S2 cleavage trends N080-120°E and is generally sub-vertical (Figure 2c), forming the axial plane of F2 south-verging folds. The S2 cleavage and related F2 folds are particularly well developed in the southern part of the Bossòst anticlinorium (Figure 2b). In the PAZ districts, three Pb-Zn mineralization types are commonly observed, two of which are described below: stratabound mineralization is subparallel to S0-S1, and vein mineralization is parallel to S2. Disseminated mineralization is not a key mineralization type and is dispersed in the host rocks.

District of Bentaillou-Liat-Urets

This district is located in the southeastern part of the Bossòst anticlinorium. Three main extraction areas are present in this district: (i) the Bentaillou mine is located in the north of the district (Figure 3a); exploitation finished in 1953 and produced ~110,000 t of Zn and ~40,000 t of Pb, and at that time it was the second largest mine in the Pyrenees [58]; (ii) the Liat mine lies southwest of the district; and (iii) Urets is located southeast of the district (Figure 3a). Both produced ~60,000 t of Zn [49]. The mineralization occurrences are described in the following sections.

Bentaillou Area

The mineralization lies close to the hinge of a N090-110°E, kilometer-size F1 recumbent fold (Figure 3b) and is essentially located at the top of the Cambro-Ordovician marble, below the Late Ordovician schists (Figure 4a). Mineralized stratabound bodies are broadly parallel to S0-S1, which is sub-horizontal with a progressive increase of the dip from 45°N to 80°N towards the lowest underground mine levels in the north (Figure 3b). Relict axial-planar S1 surfaces of F1 recumbent isoclinal folds are locally underlined by recrystallized calcite in N090-100°E axial planes (Figure 4b). Pb-Zn stratabound mineralizations are present in cm- to pluri-m, N-S open-space-filling structures that can be assimilated to pull-apart features (Figure 4c) formed in association with a dextral, top-to-the-north kinematics. These mineralized bodies show typical impregnation textures (Figure 4d), and sphalerite presents mm to cm grain sizes. Pb-Zn mineralization is absent in weakly D1-deformed areas, whereas it occurs in highly deformed domains associated with the appearance of the S1 cleavage in F1 fold hinges (Figure 4e,f).
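The orientation data quoted above (foliations trending roughly N090-120°E with dips of a few tens of degrees to subvertical) are the kind of measurements plotted as poles on the lower-hemisphere Schmidt net of Figure 2c. The conversion from a plane's dip direction and dip to its pole is standard; the sketch below is illustrative and uses example values, not the authors' measurement database.

```python
# Convert a planar fabric (dip direction / dip) to the trend and plunge of its
# pole, as would be plotted on a lower-hemisphere Schmidt (equal-area) net.
# Example values only; not the study's structural dataset.

def pole_of_plane(dip_direction_deg: float, dip_deg: float):
    """Return (trend, plunge) in degrees of the pole to a plane."""
    trend = (dip_direction_deg + 180.0) % 360.0
    plunge = 90.0 - dip_deg
    return trend, plunge

# A foliation striking N090°E and dipping 60° toward the north has a dip
# direction of 000°; its pole therefore trends 180° and plunges 30°.
print(pole_of_plane(dip_direction_deg=0.0, dip_deg=60.0))   # -> (180.0, 30.0)

# A subvertical S2 plane striking N100°E (dip direction 010°, dip 85°)
print(pole_of_plane(dip_direction_deg=10.0, dip_deg=85.0))  # -> (190.0, 5.0)
```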
Liat Area

Pb-Zn mineralization is located at the rock interface (Figure 4g) and can be hosted in the Bentaillou marble, especially on top of the marble, between the microconglomerate and the Liat beds, or between the Liat beds and the Silurian black shale. The large hm-size open F2 fold is bordered to the south by a Silurian synclinorium (Figure 3a,b). The S1 cleavage is strictly parallel to S0 in the area. D2 deformation is well expressed in the south at the contact between the Silurian black shale and the Late Ordovician schists. Mineralized stratabound bodies with pluri-dm to m thickness appear parallel to the shallow-dipping S0-S1. Folds in the Liat schists are present locally at the base of the mineralization (Figure 4h). The mineralization presents a brecciated texture (Figure 4i) with clasts of quartz and schists. Sphalerite presents cm grain sizes. At the contact with the Silurian black shale, the dip of the Late Ordovician schists increases and a normal fault is inferred. Vertical Pb-Zn vein mineralization parallel to S2 is present in this fault. It intersects the S0 stratification and the S1 cleavage, as well as the stratabound mineralizations (Figure 4j,k). The vein mineralization also presents a brecciated texture, and sulfide grains are oriented parallel to S2. Sphalerite presents an infra-mm grain size.

Urets Area

This Pb-Zn mineralization is hosted in the Liat schist. D2 deformation is intense in this area, with numerous N100-130°E F2 open to isoclinal folds associated with a subvertical N90-120°E S2 cleavage. Stratabound, pluri-dm to m Pb-Zn mineralization is mainly located in F2 fold hinges (Figure 4l) and can locally intersect the S0 stratification (Figure 4m). The Pb-Zn mineralization has a brecciated texture with mm sphalerite grains and mm to cm quartz clasts.

District of Margalida-Victoria-Solitaria

This district is located south of the Bossòst anticlinorium (Figure 2a). Three main extraction areas are present in this district, from north to south (Figure 5a): (i) the Margalida mine is located close to the Bossòst granite, next to the Bossòst fault; (ii) the Victoria mine and (iii) the Solitaria mine lie south of the granite, north and west of Arres village. The Margalida and Solitaria mines produced less than 50,000 t of ore at ~10% Zn and 1% Pb [49]. Victoria produced ~504,000 t at 11% Zn and 1% Pb [49].
Margalida Area

Pb-Zn mineralization is located in the Late Ordovician sandwich limestone (Figure 5a,b), which forms the core of an anticline with a vertical N100°E-trending axial plane (a supposed F2 fold). The mineralization is located in the damage zone of the N090°E-trending Bossòst fault. Mineralization appears as pluri-dm lenses generally parallel to S0-S1; still, the mineralization is not always concordant with S0-S1 (Figure 5c). The texture of the sulfide mineralization in the Margalida area differs from that in the Victoria-Solitaria area, as the sulfide grain size is infra-mm.

Victoria-Solitaria Areas

Pb-Zn mineralization is hosted by Late Ordovician schists (Figure 5a,b,d) and is generally parallel to S0-S1. Locally, S0-S1 is intensely folded by F2 asymmetrical isoclinal N090-N120°E folds, and a vertical S2 N070-110°E axial-planar cleavage can be observed. Stratabound mineralization appears only in domains where the F2 fold imprint is intense (Figure 5e). Furthermore, in the Victoria and Solitaria mines, exploitation was preferentially undertaken in F2 fold hinges. The Pb-Zn mineralization is thicker in fold hinges (dm to m in thickness) and was probably reworked during this D2 deformation phase (Figure 5e,f). Sphalerite grains are often sub-millimetric. The presence of vein mineralization cannot be completely excluded, as vertical galleries are present.

District of Pale Bidau-Argut-Pale de Rase

The general structural description of the district is given in [53]. In this section, more details are given on the structural features of the Pale Bidau area (see location in Figure 2a). Two different Pb-Zn mineralization geometries appear: a first, stratabound mineralization is hosted only in F2-folded pelitic levels, is concordant with S0-S1, and is marked by a cm to pluri-m box-work texture. The second mineralization consists of dm to m veins oriented N090-120°E that are largely developed where D2 deformation is important. Various dips are observed for this mineralization, but it is mainly subvertical. The geometry of this mineralization can be interpreted as pull-aparts (Figure 6a) opened in a dextral, top-to-the-north movement and controlled by the S2 cleavage. Where the S2 cleavage is less pronounced, the mineralization is thinner and seems to lie in the sub-horizontal to 45°N-dipping S0-S1 cleavage (Figure 6b,c). Sphalerite crystals did not reach mm grain size.
Comparison with the Pierrefitte Anticlinorium: Pierrefitte and Arre-Anglas-Uzious Districts

The Pierrefitte anticlinorium is a 25 × 10 km NNW-SSE anticlinorium located in the western part of the PAZ (Figures 1b and 7a). Its core is composed of Ordovician schists and Late Ordovician carbonated breccias. The upper stratigraphic levels are made of Silurian black shales and Devonian rocks. In the western part, the km-scale Valentin NNW-SSE anticline is included in the Pierrefitte anticlinorium. Compared to the Bossòst anticlinorium, the volume of outcropping Late-Variscan granite or pegmatitic rocks is smaller and there is no metamorphic dome in the core (Figure 7a). The Pierrefitte anticlinorium is structured by several thrusts within the Silurian levels (Figure 7b,c) associated with D1 deformation. The S2 vertical N090-100°E cleavage is well expressed in the Devonian levels at the rim of the anticlinorium but is less visible in the Ordovician core. Numerous Pb-Zn mines are present in the Late Ordovician and Devonian terranes. These have produced ~3 Mt of ore (averaging 9% Zn and 5% Pb).
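The production figures quoted throughout the text translate into contained metal by a simple tonnage times grade product; the sketch below applies this to the ~3 Mt figure above. This is only an order-of-magnitude illustration of the arithmetic, not a formal resource estimate.

```python
# Contained metal for a given ore tonnage and grade (tonnage x grade).
# Uses the ~3 Mt at 9% Zn and 5% Pb quoted for the Pierrefitte anticlinorium.

def contained_metal_t(ore_tonnes: float, grade_percent: float) -> float:
    """Contained metal (t) for an ore tonnage and a grade expressed in percent."""
    return ore_tonnes * grade_percent / 100.0

ore_t = 3_000_000          # ~3 Mt
print(f"Zn: ~{contained_metal_t(ore_t, 9):,.0f} t")   # ~270,000 t
print(f"Pb: ~{contained_metal_t(ore_t, 5):,.0f} t")   # ~150,000 t
```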
Pierrefitte District

The Pierrefitte mines (Garaoulere, Couledous, Vieille-Mine) are located at the contact with the Late Ordovician rocks, mainly carbonate breccia. The N100-110°E S0-S1 foliation dips moderately (20° to 60°) to the south (Figure 7d). Stratabound mineralization lies at the top of the Late Ordovician series, at the contact with or within the Silurian black shales (Figure 7c), and follows a regional thrust parallel to S0-S1. The presence of a thrust in the Pierrefitte area is reported in [21,60,61], and this observation is supported in the galleries by the occurrence of dm-scale dextral shear bands with a top-to-the-northeast kinematics. The mine galleries and the main exploited ore follow this regional thrust zone. The S1 cleavage often transposes the S0 stratification and corresponds to the axial planes of isoclinal recumbent F1 N090-120°E folds.

Arre-Anglas-Uzious District

The Arre and Anglas-Uzious mines are hosted by Devonian schists and Lower Devonian limestone, respectively (Figure 7e). The S2 cleavage is well expressed in the area, even in the Devonian limestone, and is subvertical with a N090-100°E trend. The Arre mine is located in the western hinge of the Pierrefitte anticlinorium, close to the contact between the limestone and the schistose rocks. The mineralization is composed of two ore bodies with a trend of N040-090°E and a dip of 70°N to 90°N. The mineralization appears parallel to the S2 cleavage and discordant to S0-S1 (Figure 7f), which is typical of a vein mineralization. The Anglas and Uzious mines are located in the northern part of the Pierrefitte anticlinorium. The mineralization consists of multiple pluri-centimeter to m vein orebodies, with several orientations from N060° to N100°E and subvertical dips. The Uzious veins intersect a magmatic aplite with a N050°E trend and have a pull-apart geometry (Figure 7g) linked to the presence of N090-100°E S2 weak structures (Figure 7h). Many conjugate N030-50°E fractures with various dips are filled with mineralization close to the veins, but their extension is limited to a few dm.

Ore Petrology and Microstructures

A synthetic paragenetic sequence of the three Pb-Zn mineralization geometries investigated in this study is presented in Figure 8. Disseminated mineralization represents the primary layered ore, essentially composed of sparsely disseminated pluri-µm to mm grains of sphalerite, pyrite, magnetite, and galena. In all the studied deposits this mineralization is minor and does not constitute the exploited ore. Sulfides may appear in graded beds or have a typical framboidal appearance (Figure 9a). Stratabound and vein mineralizations constitute the main sulfide mineralizations. Sphalerite is the most widespread sulfide in these two mineralization types. Pyrite, galena, pyrrhotite, chalcopyrite, and arsenopyrite are present in minor amounts. Metamorphic muscovite, chlorite, or biotite are intimately associated with the sulfide mineralization. In Victoria-Solitaria, metamorphic Zn-spinel (gahnite) is present in the host rocks and in breccia clasts within the stratabound sulfide mineralization. The presence of gahnite in Victoria was previously reported [62].
In the host rock, gahnite is elongated parallel to S1 and is intersected by the stratabound mineralization (Figure 9b). The stratabound Pb-Zn mineralization post-dates the disseminated mineralization: SEM images from the Bentaillou mine show primary framboidal galena intersected by a secondary stratabound pull-apart mineralization (Figure 9a). In the Pierrefitte anticlinorium, stratabound magnetite is abundant, especially in the Pierrefitte mine, and crystallized prior to sphalerite. In the Pierrefitte mine, syn-kinematic sphalerite crystallizes in asymmetric pressure shadows around magnetite clasts (Figure 9c). Sphalerite appears parallel to the S1 cleavage and intersects the S0 stratification in an isoclinal F1 fold hinge (Figure 9d). In the Bossòst anticlinorium, and especially in the Liat mine, sphalerite and quartz mineralization intersects F2-folded pelitic rocks (Figure 9e). The same quartz associated with sphalerite is present in crack-and-seal veins (Figure 9e). In Margalida, a typical durchbewegung texture with quartz spheroids in a sphalerite matrix shows a deformational imprint on this mineralization. The stratabound mineralization contains apatite, ilmenite, and tourmaline, minerals that are only observed in this mineralization type. In the Pierrefitte mineralization, the abundance of chlorite and muscovite associated with the ore is remarkable compared to other Pyrenean deposits. The vein mineralization intersects S0 at the micron scale (Figure 9f). In the Anglas deposit, the vein mineralization is essentially composed of sphalerite, galena, quartz, and calcite. The hanging wall of the vein is parallel to the S2 foliation and is marked by cordierite crystallization. Sphalerite in the vein mineralization appears highly deformed and recrystallized, with mm relictual grains and recrystallized µm-size crystals (as in the Arre deposit, see Figure 9g). In the Pale Bidau deposit, vein mineralization is only present in domains where the S2 cleavage is well marked. Note that Ge-minerals are exclusively present in the vein mineralization (Figure 8).

Types of Pb-Zn Mineralizations in the PAZ
The presence of three major types of Pb-Zn mineralization is demonstrated in this study. Disseminated but layered mineralization, now defined as Type 1, appears with graded beds and a framboidal appearance (Figure 9a). Stratabound mineralization (now defined as Type 2a) is a syn-D1 mineralization concordant with the S1 foliation. Vein mineralization (now defined as Type 2b) is a syn- to post-D2 vein-type mineralization, parallel to the subvertical S2 foliation.
Type 2a and Type 2b are undoubtedly epigenetic and were formed as a consequence of Variscan tectonics. The first and earliest Type 1 mineralization (Figure 10) is recognized in all the studied deposits of the Bossòst and Pierrefitte anticlinoriums, but it does not constitute the main exploited resource. Its formation may be linked to the early Ordovician or Devonian volcanic events, as proposed by Pouit [27] and Reyx [63]. In the Pierrefitte anticlinorium, Nicol [55] proposed a single Devonian source for the Pb-Zn mineralizations. A syngenetic formation is preferred for the Type 1 mineralization, as the sulfides appear layered and show sedimentary affinities. Nevertheless, framboidal textures may also form in a post-sedimentation environment, such as in hydrothermal veins [64]. The second, stratabound Type 2a mineralization (Figure 10) is deposited parallel to S0-S1. It corresponds to the main Pb-Zn mineralization episode in the PAZ (~95% of the total exploited ore volume). In the Bentaillou area, the Type 2a mineralization intersects the S0 stratification and is hosted by the S1 cleavage (Figure 4e), which is axial planar to isoclinal F1 folds. Fert [26] and Pouit [27,28] proposed a syngenetic model for the Bentaillou deposit and described a normal stratigraphic succession later folded by F2 folds.
F1 isoclinal recumbent N090°E folds are absent in their model. Here, we observe that the Bentaillou Pb-Zn mineralization is localized essentially close to F1 fold hinges, at the interface between marble and schist or microconglomerate (Figure 4c). The source of the Type 2a sulfides may be related to the layered and supposedly syngenetic Type 1 sulfides disseminated in the neighboring Ordovician and Devonian metasediments, or to the late-Variscan granitic intrusions, which are probably at least temporally close to the Type 2a mineralizations. The opening of top-to-the-north, cm to pluri-m pull-apart-type structures (Figure 4c) enabled the formation of the large volume of mineralization at Bentaillou. Pb-Zn ore is not observed at the base of the Bentaillou marbles because of important karstification (Cigalere cave, Figure 3a); however, at Bularic [65] it was deposited both above and below this marble level. In the Liat area, Pujals [29] described a syngenetic or diagenetic mineralization with apparently limited reworking. Our model shows that the Type 2a stratabound mineralization is linked to the Variscan D1 deformation. In the Victoria-Solitaria area, the Type 2a stratabound mineralization occurs where D2-related structures are present and can be locally remobilized in fold hinges. These thicker mineralizations in fold hinges may be linked to the saddle-reef process [66-68] associated with the formation of dilatation zones during folding. These deposits have been studied by Pujals [29], Cardellach et al. [30,69], Alvarez-Perez et al. [70], and Ovejero-Zappino [49,71]. These authors argued for a SEDEX origin based on syngenetic mineralization associated with the presence of syn-sedimentary faults. These models differ from our hypothesis: here we report that the S1 cleavage is parallel to the axial planes of recumbent km-size isoclinal folds and transposes the S0 stratification. The F2-folded Type 2a stratabound mineralization is thicker in fold hinges and intersects metamorphic minerals such as gahnite. The presence of this Zn-spinel may be linked to a primary minor sulfide mineralization (Type 1, Figure 10) or to a Zn-rich D1 metamorphic fluid. The chemistry of gahnite was analyzed by Pujals [29], and its composition is typical of metamorphosed zinc deposits. This attests that the Type 2a Pb-Zn mineralization is syn- to post-M1 metamorphism. Alonso [32] demonstrated a predominant role of mechanical remobilization associated with deformation in the Bossòst anticlinorium, especially along the F2 folds at Horcalh and the fault at Liat. Our model is similar, as we consider that the Variscan D2 deformation locally remobilized the Type 2a mineralization. The Margalida deposit records an additional deformational event compared to Victoria and Solitaria. Hosted in a ductilely deformed marble close to the Bossòst ductile fault, the Type 2a mineralization appears largely deformed, with a typical durchbewegung texture. No sedimentary structure is recognized in the marble [70]. This attests to a late-Hercynian and/or Pyrenean deformation affecting the mineralization, associated with the fault. Comparison with the Pierrefitte anticlinorium shows the same syn-D1 Type 2a mineralization associated with regional thrust tectonics. The main exploited Pb-Zn mineralization in the Pierrefitte mine occurred as pluri-m-scale levels parallel to S0-S1 and to the regional thrust (Figure 10). Our work supports the study of Nicol [60], which showed an important remobilization of the mineralization in the Ordovician and Devonian metasediments linked to D1 deformation. On the contrary, Bois et al.
[24] proposed a syngenetic deposition related to the activity of Late Ordovician syn-sedimentary faults and volcanism that may have induced these mineralizations. In this view, remobilization is weak and the sulfides crystallized prior to Variscan metamorphism [24]. However, the presence of sphalerite parallel to the S1 cleavage and in pressure shadows around magnetite clasts concordant with S1 rather attests to a syn-D1 mineralization event. The third, Type 2b vein mineralization (Figure 10) is parallel to the S2 cleavage. It intersects the S0-S1 cleavage and the former Type 2a stratabound mineralization. It has been recognized in the Pale Bidau-Argut-Pale de Rase districts [57] and in the Arre-Uzious-Anglas districts, and appears in a limited number of deposits in the PAZ. The Type 2b mineralization occurs in pluri-dm veins of restricted extension and differs strongly from the other types both structurally and mineralogically. The presence of Ge-minerals and the absence of apatite, tourmaline, or ilmenite are remarkable here. Nonetheless, a possible remobilization of Type 2a material with an external contribution cannot be excluded in the formation of the Type 2b veins. In the Uzious mine, the mineralization intersects a magmatic aplite. It was therefore probably emplaced syn- or post-Cauteret granite and is certainly late-Variscan in age (the aplite comes from the late-Variscan Cauteret granite), as supposed by Reyx [63]. The deformation of sphalerite, which is supposed to be syn-D2 and/or syn-D3, and the unusual sulfide paragenesis are inconsistent with a Mesozoic age, as described for the undeformed sphalerite of Aulus-Les Argentieres [72]. Other Pb-Zn deposits, such as the La Gela deposit [73] or the Carboire deposit, could be attached to this third type, as they are characterized by vertical Pb-Zn veins and the presence of Ge-minerals. Such late-Variscan Pb-Zn deposits have also been recognized at Saint-Salvy (cf. the M2 mineralization), even if the main Pb-Zn mineralization event there is Mesozoic [74].

Genetic Model of PAZ Pb-Zn Deposits Formation Linked to Regional Geology
The genetic model comprises four stages (Figure 11), based on the regional tectonic event model of Mezger and Passchier [22] and Garcia-Sansegundo and Alonso [56]. Stage 1 represents the syn-sedimentary layered mineralization (SEDEX deposit, Pb-Zn Type 1 disseminated mineralization). Primary sulfides were recognized throughout the pre-Silurian stratigraphic succession in the Bossòst area (Figure 11) and in Devonian rocks in the Anglas-Uzious-Arre district.
In the Pierrefitte area, primary sphalerite is absent, which is probably linked to important low-grade hydrothermal alteration and to the D1 overprint. Stage 2a starts during the D1 Variscan deformation and produces the Type 2a stratabound mineralization. This mineralization occurs preferentially where a rheological contrast exists between two lithologies (e.g., marble-schist; schist-microconglomerate) and in highly D1-deformed areas (Figure 11). Stage 2a continues with the D2 Variscan deformation and the formation of N090-110°E F2 upright folds. Granitic intrusions occur at that stage (Figure 11). The D2 deformation locally reworked the mineralization, as in the Victoria mines where it is remobilized in fold hinges. The Horcalh mineralized fault [32] is interpreted as synchronous with the D2 deformation. Stage 2b occurs during the doming phase and produces the late-Variscan Type 2b vein mineralizations (Figure 11). This mineralization type preferentially occurs parallel to the vertical S2 cleavage and is mostly observed in the Pierrefitte and Bossòst anticlinoriums. Pull-apart-type structures are observed in the Pale Bidau and Uzious mines. A late D3 deformation corresponds to faults such as the Bossòst mylonitic fault close to the Margalida district. We have shown that the Pb-Zn deposits in the PAZ are polyphased and closely linked to Variscan tectonics. There are at least three Pb-Zn mineralization-forming events, and two of them are clearly structurally controlled. Type 1 may be syngenetic, but little ore is present. The main exploited ores are Type 2a and Type 2b, which were emplaced under marked structural control: associated with S1 and trapped in F1 fold hinges, at lithological interfaces, or in highly D1- or D2-deformed areas.

Is Pb-Zn Deposit Emplacement Sedimentary- or Structurally-Controlled?
SEDEX deposits are sedimentary-controlled and syn- to diagenetic, and their sulfides are laminated and included in the bedding [2]. In our study area, the Pyrenean Pb-Zn mineralizations have previously been described as SEDEX by many authors [24,28-30,75,76]. The origin of several world-class Pb-Zn deposits is debated as well: for example, the geneses of the Broken Hill-type deposits [6-12] and of the Jinding deposit [14-20] are still not understood, and the respective authors have not yet decided between syngenetic and epigenetic models. In the Pyrenees, authors interpreted stratiform and lens-shaped ore bodies. The stratiform argument is not decisive, because the S0 stratification is frequently parallel to the S1 axial planes of isoclinal recumbent folds, which is typical of intensely deformed areas. The crystallization of sphalerite secant to isoclinal recumbent fold hinges attests that the main mineralization is parallel to S1 and not to S0. The structural observations are supported by the mineralogical study. The three PAZ Pb-Zn mineralization types contain the same constitutive minerals, such as sphalerite, galena, and pyrite, but different trace minerals are present according to the type. These mineralogical differences are key parameters for distinguishing between different Pb-Zn mineralization events in a single deposit. In intensely deformed and metamorphosed terranes, the simple geometric link between mineralization and stratification is not sufficient to distinguish between sedimentary and structural control. Structures are often parallelized by pervasive tectonic events, which makes the structural analysis difficult.
Reworking of the ore bodies during deformation may have obliterated geochemical tracers such as isotopic signatures, especially Pb isotopes [77-79]. Consequently, a detailed structural study from the regional to the micro-scale, focusing on the relationships between mineralization and cleavages, is crucial. Pinpointing locations where structures such as cleavage are secant (fold hinges), as well as deciphering the textural relations between metamorphic minerals and mineralization, will lead to a better understanding of ore-body genesis.

Conclusions
Three main types of Pb-Zn mineralization have been distinguished in the Pyrenean Axial Zone. A minor type (Type 1) is a stratiform disseminated mineralization that presents syngenetic characteristics. The two other mineralization types, previously described as SEDEX, are in fact post-sedimentation and formed as a result of polyphased Variscan tectonics: Type 2a is a syn-D1 stratabound mineralization parallel to the S1 foliation, and Type 2b is a syn- to post-D2 vein-type mineralization parallel to the subvertical S2 cleavage. Structural control is thus a key parameter for the remobilization of Pb-Zn mineralization in this area, namely in (D1 and D2) fold hinges (saddle reefs), highly D1-deformed zones, rheological contrast interfaces, and along S2 cleavages. A multiscale, detailed structural study is essential for unraveling the formation of Pb-Zn deposits, especially in deformed and/or metamorphosed terranes.
Molluscs and echinoderms aquaculture: biological aspects, current status, technical progress and future perspectives for the most promising species in Italy

Shellfish aquaculture is a widespread activity in the Italian peninsula. However, only two bivalve species are mainly cultured along the coastline of the country: the Mediterranean mussel Mytilus galloprovincialis and the Manila clam Venerupis philippinarum (Ruditapes philippinarum). By contrast, just a few other mollusc species of commercial interest are reared, and only at a small-scale level. After analysing the current status of Italian shellfish production, this paper reports and discusses the potential for culturing several different invertebrate species [i.e., the European flat oyster Ostrea edulis, the grooved carpet shell Venerupis decussata (Ruditapes decussatus), the razor clams Ensis minor and Solen marginatus, the cephalopod Octopus vulgaris, and the purple sea urchin Paracentrotus lividus] in this country. In addition, a detailed overview of the progress made in aquacultural techniques for these species in the Mediterranean basin is presented, highlighting the most relevant bottlenecks and the way forward to shift from the experimental to the aquaculture phase. Finally, an outlook on the main economic and environmental benefits arising from these shellfish culture practices is also given.

Introduction
Current status of the Italian shellfish aquaculture
The Italian shellfish production amounted to 181,455 t in 2008 (FAO, 2011), corresponding to 68% of the total Italian aquaculture production and ranking Italy in the 3rd position in Europe, after France and Spain (189,070 and 185,153 t, respectively). Similarly to what happens for finfish production, and unlike the two leading shellfish producers in Europe, shellfish aquacultural practices are very scarcely diversified in Italy, since the only two species predominantly cultured are the Mediterranean mussel (Mytilus galloprovincialis; 123,010 t) and the Manila clam [Venerupis philippinarum (Ruditapes philippinarum); 58,445 t]. Only sporadic rearing activities can be listed for other species, currently at a small-scale aquaculture level, as in the case of the oysters Crassostrea gigas and Ostrea edulis and the grooved carpet shell Venerupis decussata (Ruditapes decussatus), or still at an experimental aquaculture stage (Modiolus barbatus). In Italian mollusc culture, old traditions coexist with modern intensive farming techniques (Prioli, 2004). Mussel culture covers a total surface of about 20,000 ha and is run by 263 companies that employ 1,400 people (Prioli, 2008). The cycle is based on the recruitment of wild spat, largely available in many areas (namely, the Apulia, Veneto and Emilia-Romagna regions) where the grow-out plants are located. The culture of Mytilus galloprovincialis is widely diffused along the coasts of the country (including those of Sardinia and Sicily, but not those of Basilicata, Calabria and Tuscany), where different kinds of rearing techniques are applied. The traditional ones (fixed systems) are mostly located in sheltered coastal and lagoon areas (gulf of Trieste, gulf of Taranto, Veneto lagoon, etc.), while the newer long-line rearing systems (single long-line ventia and Trieste or multi-ventia long-line in open sea and in partially or fully sheltered areas, respectively) have become increasingly widespread in offshore farms.
The single ventia plants have been in use for 20 years and currently account for around 75% of the total linear metres (2,000,000) of long-lines reported in Italy. The culture of the Manila clam started in Italy in the 1980s, when 200,000 juveniles from a northern European hatchery were introduced into the Venice lagoon (southern basin) (Cesari and Pellizzato, 1985). Afterwards, this species was introduced into other areas of the Po river delta [Sacca di Goro (Ferrara), Sacca del Canarin-Porto Tolle (Rovigo) and the Grado-Marano lagoon (Udine)]. Currently, the Venice lagoon is the most important production site for this bivalve, accounting for 50% of the total Italian production (Zentilin et al., 2008), followed by the Po river delta area in the Emilia-Romagna region (mainly Sacca di Goro) (28% of the total production) and in the Veneto region (21%). The Grado-Marano lagoon contributes very little to the total production (1%). Thanks to the optimal conditions found in the northern Adriatic areas, this species spreads spontaneously, and farmers are now moving from rearing practices to a more or less controlled management of the production areas. Manila clam culture covers a surface of about 940,000 ha and employs 4000 to 5000 people (Turolla, 2008). Similarly to mussel culture, the spat is mainly collected from the wild (95% of the total utilised), but hatchery seed production is also carried out. Only two hatcheries (i.e., ALMAR and TURBOT, the latter producing 10 to 50 million clam seeds year-1) are currently operating in Italy, while a total of 34 mollusc hatcheries were recently listed in Europe (Robert, 2009). In compliance with the EU Regulations for the marketing of shellfish [EC Reg. 2073/2005 (European Commission, 2005) and EC Reg. 853/2004 (European Commission, 2004)], 125 shellfish depuration centres and 320 shellfish shipping centres (20 of which are located on boats serving the farming plants) are operational in Italy (Prioli, 2008). A third cultured species is the grooved carpet shell Venerupis decussata, whose production amounted to about 100 t in 2008 (FAO Fisheries and Aquaculture Department, 2011). Interest in the cultivation of this species has significantly increased in recent years due to its higher commercial value compared with that of the non-native Manila clam. Also for oysters, despite their high market demand, the amount produced is currently negligible, which puts Italy in first position among the importing countries (with a stable quantity of around 7400 t in the years 2006-2007: http://www.globefish.org/oysters-may-2008.html). The main problems affecting traditional shellfish culture are related to the seasonality of the production (for mussels, mainly concentrated between May and September), the lack of traditional processes to obtain a product with a higher added value (currently the amount transferred to industry for processing is lower than 1% of the total mussel production) (Prioli, 2004), the extensive seaweed blooms, and the algal blooms responsible for biotoxin risks. The lack of experimentation on the actual possibility of developing effective and economic detoxification systems is a critical point for the future of this important aquaculture sector, because of the serious economic losses incurred during the closure of commercial shellfisheries caused by periodic harmful algal blooms.
However, some results recently documented in the literature seem very promising regarding the possibility of detoxifying molluscs once contaminated (Marcaillou et al., 2010; Medhioub et al., 2010), or of preventing contamination through innovative rearing systems (Serratore, 2011). The current prospects for traditional mussel culture are: i) a positive trend towards the settlement of new production sites along the coastline of those Italian regions offering more favourable conditions for mussel growing; ii) a marked interest in finding new, simplified rearing techniques. Also in the case of clam culture, some priorities can be listed for promoting further development, such as: i) an adequate management of the lagoons where this species is cultured; indeed, during summer anoxic crises and excessive seaweed growth often occur due to the peculiar eutrophic conditions; ii) a proper management of the nursery areas devoted to the production of the seed, which is mainly obtained (95%) from the wild (Turolla, 2008). The high consumption of shellfish, together with the seasonality of the national production (mainly in the case of mussels) and the lack of diversification typical of Italian shellfish aquaculture, generates large volumes of imported products, as shown in Table 1. In 2011, the import of mussels (which rank first among the seafood consumed in Italy) dramatically increased (58,300 t, +50% compared to 2010) (http://www.globefish.org/bivalves-june-2012.html), partly due to the current economic crisis, which has pushed consumption towards products of lower commercial value. To give impetus to the sector, some interventions on traditional species and a greater diversification of farmed species are essential. As far as the first aspect is concerned, appropriate business strategies need to be adopted in order to maintain a functional product price and to create new market niches through appropriate processing and storage (freezing) techniques. At national and regional level, some research has recently been carried out to stimulate the culture of new mollusc species. Most of the results are documented in grey literature and in final reports written in Italian, and have had a very scarce impact on the international literature. Although much research has been carried out since the 1990s, no significant progress has been made in the Italian aquaculture landscape; as a matter of fact, the range of cultured species has barely changed over the last 20 years. The main issues tackled in the studies of mollusc species deemed promising for Italian aquaculture are summarised in Table 2. In the following chapters, a detailed analysis of the most promising species for the diversification of Italian shellfish culture is reported. Precisely, our attention will focus on some species that are expected to change the landscape of Italian aquaculture in: i) the short term, as rearing techniques are already established (cf. the oyster Ostrea edulis and the grooved carpet shell Venerupis decussata); or ii) the medium-long term, as much research has been done, or is under way, to develop the different phases of the breeding cycle (cf. the razor clams Ensis minor and Solen marginatus, and the common octopus Octopus vulgaris). Lastly, the state of the art for the purple sea urchin Paracentrotus lividus is also given.
Indeed, because of its significant marketing potential, aquacultural activities for this species are undergoing noteworthy improvement, and work on its gonad quality is currently under way in many European regions.

Perspectives of Italian shellfish aquaculture
European flat oyster
Distribution, habitat and exploitation
The European flat oyster Ostrea edulis (Linnaeus, 1758) belongs to the Ostreidae family (Rafinesque, 1815) and is native to Europe. This species naturally lives in a range extending from the Norwegian fjords to Morocco (north-eastern Atlantic coasts) and in the Mediterranean sea (Jaziri, 1990) up to the Black sea coasts (Alcaraz and Dominguez, 1985). It is also found in South Africa, north-eastern America (from Maine to Rhode Island), Canada, Nova Scotia, New Brunswick and British Columbia, probably imported from populations whose ancestors came from the Netherlands (Vercaemer et al., 2006). Ostrea edulis is a typical filter feeder, filtering phytoplankton, copepod larvae, protozoans and detritus as food. Being a sessile organism, it lives fixed to a hard substrate and its feeding depends entirely on the resources naturally present in the surrounding environment. As a matter of fact, food is pumped in with the seawater and removed by the gills (Laing et al., 2005), with filtration rates of up to 25 L h-1 depending on animal size and temperature (Korringa, 1952). This species is typical of coastal, estuarine and marine environments and sheltered areas, preferring hard substrates such as rocks or artificial structures, but also muddy sand, muddy gravel with shells, and hard silt. It lives in brackish and marine waters, with an optimum salinity ranging between 17 and 26 PSU (Blanco et al., 1951), down to depths of 40 metres. Ostrea edulis is similar to other oyster species widely cultivated in many regions of the world, such as the Pacific cupped oyster Crassostrea gigas (Thunberg, 1793). The latter, however, has a more elongated, distorted and irregular shell and, above all, is characterised by a different sexuality. Oysters are preyed upon by several organisms, including fish, crabs, snails, starfish and flatworms, but also by boring sponges, seaworms, molluscs, pea crabs and fouling organisms in general, which can cause irritation or compete for food. With regard to disease, the protist Bonamia ostreae is one of the most dangerous pathogens: in 1920 it caused massive mortality events among flat oyster populations (da Silva et al., 2005). These populations were then reintroduced into Europe, where the disease was transferred to other established populations.

Reproduction
The European flat oyster is a protandric hermaphrodite (da Silva et al., 2005) and shows an alternation of sexuality within one spawning season: early in the reproductive period it is male but, once it reaches sexual maturity, it can alternate between the female and male stages for the rest of its life (Laing et al., 2005). Males are mature at about one year of age, when they release sperm into the water depending on temperature (with a minimum of 14°C to 16°C) (Walne, 1979). Females collect the sperm through their feeding and respiration system (Laing et al., 2005). Oogenesis can produce up to 1 million eggs per spawning event; the eggs are released from the gonad and retained in the mantle cavity, where they can be fertilised by externally released sperm (i.e., a larviparous species).
After an incubation period of about 8 to 10 days, the larvae develop a formed shell, a digestive system and the ciliated swimming and feeding organ (i.e., the velum), reaching about 160 μm in size. At this point, they are released into the open sea water, where they live at the pelagic stage (8 to 10 days), feeding on phytoplankton for 2 to 3 weeks before settlement (Korringa, 1941, 1952; Laing et al., 2005). The number of larvae released into the seawater is correlated with parent size, ranging between 1.1 and 1.5 million for oysters from 4 to 7 years old (Walne, 1979). By contrast, Crassostrea gigas inverts its sex after one spawning season and releases its gametes (eggs or sperm) into the environment, either at one time or in small amounts over a long period (i.e., an oviparous species). Thus, fertilisation occurs externally and the resulting larvae develop in the seawater. During the larval stage, life is typically planktonic and, as metamorphosis progresses, the oyster moves with an extensible foot in search of a suitable substrate. Once it finds one, it attaches itself first by byssus formation and then by cementation (with a physiological and morphological metamorphosis lasting 3 to 4 days) and starts its sessile life as a juvenile, thus becoming spat (Laing et al., 2005). From this event, growth is quite quick for about 18 months, then it stabilises at about 20 g of fresh weight per year and finally slows down after 5 years (Laing et al., 2005). Depending on environmental conditions, these bivalves can achieve the marketable size of 7 cm in shell length in 4 to 5 years and can live in natural beds for up to 20 years, growing up to 20 cm in size.

Aquacultural activities
Oyster spat can be obtained both from wild stocks and from hatchery production. As in other bivalves, sexual maturation and subsequent reproduction are obtained by modifying (i.e., increasing) the water temperature and by administering phytoplankton ad libitum, thus imitating the natural reproductive cycle. Compared to the conditioning of other species, the fertilisation of O. edulis specimens is more difficult due to a lower larval survival rate, so a period of incubation is necessary. In general, spat is cultured using traditional bivalve techniques at the nursery stage and, when it reaches a size of 5-6 mm, it can be moved to open water to grow. By contrast, natural spat harvesting relies on the use of collectors. Some examples are mussel shells sown at a density of about 30 to 60 m3 ha-1 (the Netherlands), or tubular nets containing mussel shells (about 600) suspended under steel frames in shallow waters (France). More recently, PVC dishes have been used in intertidal areas. The seed can then be transferred to the growing or fattening area; yet this is not always necessary, since in some facilities the seeding area can also become the growing and fattening area. Breeding methods are generally categorised into on-bottom and off-bottom, each having advantages and disadvantages, so the method best suited to the selected site and to the available budget should be chosen. On-bottom techniques require that oysters are seeded directly onto subtidal or intertidal grounds with a stable, non-shifting bottom (Quayle, 1980), at a density of about 50-100 kg ha-1.
Seeding is generally carried out between May and June, when the molluscs are about 1 cm long (1 year old), and here they reach the marketable size. Traditionally, cotton nets or steel frames are used to protect the culture from predators. The on-bottom method is certainly the simplest and cheapest one, but mortality, stock loss caused by predation, and siltation events are highest, and even harvesting is difficult. On the other hand, off-bottom techniques allow oysters to be cultured in suspension. This method is certainly more expensive than the first and requires more maintenance, but this is compensated by the rapid growth and high quality of the cultured oysters. The technique consists of using floating structures, rafts, long-line systems, suspended ropes, lanterns or plastic baskets hanging from a raft or rope, in which the oysters are placed. The product is thinned out as it grows. Harvesting should be scheduled when the oysters are in their best condition, with full and creamy meat. In on-bottom cultures, molluscs can be dredged or collected by hand, whereas in off-bottom ones they can be hand-picked. Finally, before being marketed, they are temporarily stored in clean water and subjected to depuration procedures, like all other bivalve molluscs.

Rearing in Europe
In Europe, several rearing experiments with this species have been carried out in the last decades. In particular, much attention has been paid both to the survival and growth of experimental batches of hatchery-reared O. edulis larvae and spat (Davis and Calabrese, 1969; Laing and Millican, 1986; Spencer and Gough, 1978; Utting, 1988; Berntsson et al., 1997; Rodstrom and Jonsson, 2000), and to the biochemical composition of larvae fed on different food regimes (Ferreiro et al., 1990; Millican and Helm, 1994).

Grooved carpet shell
Distribution, habitat and exploitation
The grooved carpet shell Venerupis decussata (Linnaeus, 1758) is a bivalve belonging to the Veneridae family (Rafinesque, 1815). This species is found throughout the Mediterranean sea, is widely distributed along the eastern Atlantic coasts from Norway to Congo, and also occurs in the northern part of the Red sea, where it migrated through the Suez Canal. It typically lives buried in sandy and silty-muddy bottoms, inhabiting the areas near and below the mean sea level (intertidal and subtidal zones, respectively), buried 15 to 20 cm into the sediment. It continuously filters the surrounding water through its two siphons protruded from the substrate, picking up organic particles and phytoplanktonic cells as nourishment and allowing the oxygen-carbon dioxide exchange associated with respiration. Overall, clams tolerate quite well the variations in chemical and physical water parameters, such as temperature, salinity, dissolved oxygen and turbidity, typical of the lagoon or estuarine environments where they live. Their favourite sites are generally located away from areas with strong hydrodynamics and from windy areas where the substrate in which they are buried can be destabilised. Nevertheless, a slight and constant current, allowing good water exchange and a constant flow of food, is important. For this reason, clams can live on a variety of substrates, although a mixture of sand, silt and granules is the most suitable composition, allowing good oxygenation and a comfortable softness of the bottom. It is important to emphasise that other filter-feeding species (e.g.
bivalves, hydroids, bryozoans, serpulids, etc.) can compete with a clam population for food. At the same time, another form of competition can take place during recruitment, depending on the availability of suitable substrates (Paesanti and Pellizzato, 2000; FAO, 2004). In Europe, the harvesting of Venerupis decussata mainly occurs in countries like Spain and France, but also in Italy, especially in Sardinia, where the semi-extensive culture of the allochthonous species Venerupis philippinarum has been banned by the Regional Government in order to protect the native Mediterranean carpet clam (Chessa et al., 2005; Pais et al., 2006b).

Reproduction
Even though occasional cases of hermaphroditism can be observed (Delgado and Pérez Camacho, 2002), especially in juvenile forms (Lucas, 1975), this clam is strictly gonochoristic and reproduction takes place externally in the aqueous medium, mainly in summer when the temperature is higher and food is abundant. The resulting larvae float freely for 10 to 15 days until they settle as spat (about 0.5 mm in length) and, once they have found a suitable substrate, continue their growth to the adult form. Like most other marine bivalves, Venerupis decussata is characterised by a cyclical pattern of reproduction, which can be divided into different phases: gametogenesis and vitellogenesis, spawning and fertilisation, larval development and growth. Each bivalve species has evolved a number of adaptive strategies (genetic or not) to coordinate these events with the environment in order to maximise the reproductive process (Newell et al., 1982). In this regard, numerous studies show that the gametogenic cycle in marine invertebrates is strictly conditioned by the interaction between exogenous factors (i.e., temperature, salinity, light, availability of food, parasitic infestations) and internal factors (Rodríguez-Moscoso and Arnaiz, 1998). Temperature is certainly one of the most important factors influencing the reproductive cycle (Sastry, 1975), defining both the starting point and the rate of gonadal development, whereas food availability can determine the extent of the reproductive process (Lubet, 1959). These two factors are subject to natural seasonal fluctuations, and their variability is closely related to the energy available for growth and reproduction. In particular, clam reproduction requires abundant energy to support suitable gonadal development, so that its success directly depends on ingested food or on previously stored reserves (Delgado and Pérez Camacho, 2005). In general, when food is abundant, the reserves accumulated before and after gametogenesis (i.e., glycogen, lipids and proteins) are utilised to produce gametes when the metabolic demand is high (Bayne, 1976). As a consequence, gametogenesis can differ from location to location depending on the geographic area considered: for example, in adult clams from southern Europe the cycle generally starts in March, gonads become ripe in May-June, and spawning occurs in summer, after a phase of inactivity in winter (Shafee and Daoudi, 1991).

Aquacultural activities
Until a few decades ago, the management of this species was exclusively based on the availability of natural seed. Nowadays, however, the manipulation of its gonadal cycle is quite a common practice; in fact, artificial spawning techniques and larval rearing programs have recently been developed.
These methods are applied in highly specialised systems, the hatcheries, where breeders (previously selected from natural beds on the basis of their appearance, size and shape) are stocked in tanks for 30 to 40 days at a temperature of 20°C and richly fed with phytoplanktonic algae. In order to guarantee the continuous availability of this nourishment for breeders and future larvae, hatcheries must possess algal culture systems. The selected specimens are abundantly fed to maximise their gonadal maturation until they are ready for reproduction. At this phase, the release of gametes is induced by a thermal shock of the water of about 10°C (from 18 to 28°C), repeated for one or more cycles of about 30 min each. Generally, males spawn before females, and fertilisation occurs in small containers. The eggs obtained are counted, filtered and placed into small aquaria (about 10 L in volume), where veliger larvae appear after 8 days. Subsequently, they are filtered through a 100 μm mesh and fed with phytoplankton daily for the first week and every 2 days thereafter. At the pediveliger stage, clams have a diameter of about 180 to 220 μm; they already have a foot, but the velum is still present. Indeed, they spend most of their time swimming and are sometimes fixed to the container surfaces. After about 3 weeks, the metamorphosis is completed and the spat stage is reached (about 250 μm in size). The little molluscs can now be reared in greenhouses, fed with phytoplankton or with environmental water pumped into inland tanks, where they are placed inside small containers with a rigid mesh bottom (i.e., the nursery). From this stage onwards, the farming methods may differ depending on the features of the hatchery (e.g., standing water, constant water flow, downwelling and upwelling forced water flow). As said above, spat can be obtained both from natural populations in the vernal period (digging it out with the sand using a small rake and sieving the sediment to retain the seed) and from hatcheries. When a size of about 1 mm is reached, a new rearing phase can start using a controlled system: the so-called pre-fattening. Clams grow to about 10 to 15 mm in 2 to 4 months, and it is convenient to complete their weaning period outside, pumping natural seawater or brackish water, since their maintenance in the hatchery is quite difficult for both management and economic reasons. Once they have reached this size, depending on the preferences and possibilities of the farmer, the molluscs can be transferred to the ground (at a density of about 5000 individuals m-2) or to special facilities that allow their growth in suspension, such as net bags (pôches) or stacked baskets (at lower density). Moreover, if they are sown directly on the substrate, it is advisable to protect the seed from predation with plastic nets. In this way, clams are able to attain a size of 20 to 25 mm in about 2 months. At this phase, management only concerns the preparation and maintenance of the breeding substrate (i.e., cleaning and removal of algae or predators) or the control of the suspension systems (i.e., attachment and cleaning of encrusting organisms or fouling). The last step of the production cycle is fattening, i.e., when the carpet shells grow on the bottom within the sediment. In this way, the molluscs live following their natural pattern, filtering water and feeding until they achieve the commercial size of at least 25 mm in length.
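The hatchery and grow-out sequence just described lends itself to a compact summary. The Python sketch below encodes the main stages reported in the text (broodstock conditioning, thermal-shock spawning induction, larval rearing, nursery, pre-fattening and fattening) as a simple data structure; the stage names, the helper function and the idea of summing a cumulative duration are illustrative additions, and the figures are the indicative ranges given in the text rather than a validated production schedule.

```python
# Illustrative summary of the V. decussata hatchery/grow-out cycle described above.
# Stage names and the helper are assumptions added for clarity; durations are the
# indicative ranges reported in the text (None where the text gives no figure).

CARPET_SHELL_CYCLE = [
    # (stage, duration in days as (min, max) or None, key conditions / milestone)
    ("broodstock conditioning", (30, 40),   "20 degC, fed phytoplankton ad libitum"),
    ("spawning induction",      None,       "thermal shock of ~10 degC (18->28 degC), cycles of ~30 min"),
    ("larval rearing",          (8, 21),    "veligers in ~10 L aquaria; metamorphosis to ~250 um spat"),
    ("nursery",                 None,       "rigid-mesh containers until the seed reaches ~1 mm"),
    ("pre-fattening",           (60, 120),  "controlled system; seed grows to 10-15 mm"),
    ("fattening",               (365, 850), "on-bottom in the sediment, ~12-28 months, to >=25 mm"),
]

def documented_duration_days(cycle):
    """Sum only the stages whose duration is reported in the text."""
    known = [d for _, d, _ in cycle if d is not None]
    return sum(lo for lo, _ in known), sum(hi for _, hi in known)

if __name__ == "__main__":
    lo, hi = documented_duration_days(CARPET_SHELL_CYCLE)
    print(f"Documented stages alone span roughly {lo}-{hi} days")
```

A structure of this kind is only meant to make the ordering of the stages and their very different time scales easier to compare at a glance.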
Depending on the environmental and rearing conditions, the fattening stage can be completed in a period of 12 to 28 months. After reaching the commercial size, clams can be gathered in different ways depending on the type of farming. When and where possible, fishermen collect the bivalves manually, walking with a rake equipped with an appropriate net whose mesh is sized to hold the molluscs and allow the sediment to escape. Alternatively, harvesting can be carried out from boats (with oars or engines) equipped with an extended rake.

Rearing in Europe
During the last decades, clam aquaculture has developed conspicuously in Europe, and particularly in Italy where, after its introduction into the northern Adriatic lagoons, the Pacific carpet clam Venerupis philippinarum (Adams and Reeve, 1850) has been intensively exploited due to its rapid growth and propagation (Paesanti and Pellizzato, 2000). By contrast, the culture of the grooved carpet shell remains scarcely developed at present, although intensive research on the rearing of this species has been carried out throughout the continent. In particular, several studies have addressed both the gametogenesis (Xie and Burnell, 1994; Rodríguez-Moscoso and Arnaiz, 1998; Ojea et al., 2004; Serdar and Lök, 2009) and the reproductive cycle (Breber, 1980; Beninger and Lucas, 1984; Laruelle et al., 1994; Urrutia et al., 1999; Pérez Camacho, 2003, 2007) of this species.

Razor clams
Only two genera of razor clams are commercially exploited in Europe: the genus Ensis (E. arcuatus, E. minor, and E. siliqua), belonging to the Pharidae family, and the genus Solen (S. marginatus), belonging to the Solenidae family. Razor clams have a high and increasing commercial value due to the high prices reached on European and international markets (Barón et al., 2004). Spain, Italy, France, Portugal and the Netherlands are considered the most important countries involved in this market. In 2004, the import value of the razor clam market within the EU25 was 550 million € (BIM, 2005). The world landings of these species are low compared with other traditional shellfish species (i.e., oysters, scallops, or clams), and the fishing pressure on wild populations is increasing. Signs of severe exploitation of natural stocks are documented (Gaspar et al., 1998; Tuck et al., 2000; Fahy and Gaffney, 2001), also in Italy, where razor clams (E. minor and S. marginatus, the latter a less valuable species) are widely distributed and the quantity harvested from the wild is decreasing. These species are of interest for aquaculture both for improving natural stocks and for producing food. The aquaculture potential of E. arcuatus and S. marginatus has recently been assessed by a number of specific trials carried out mainly in Spain, a country importing large quantities of this seafood (47% of the European import value, according to data obtained from the Eurostat information database). The experiments carried out on the production of the razor clam species occurring in Spain, using hatchery and semi-intensive aquaculture techniques, are thoroughly summarised in a recently published report (Guerra Diaz et al., 2011). Despite the interest in razor clam aquaculture, little is known about their growth and reproduction, even though studies on the cultivation of three razor clams (S. marginatus, E. siliqua and E. arcuatus) date back to 1990. The available literature is scarce and mainly represented by documents for internal use (reports, MSc or PhD theses), or by posters and short presentations documented as short abstracts in international or, more often, national meetings.
The availability of these documents is further limited by the use of native languages other than English. Some exist only as grey literature, which reduces the possibility of exchanging results among researchers. Currently, no information is available on the aquaculture potential of E. minor, which occurs exclusively in the Mediterranean basin.

Reproduction, larval and post-larval rearing
In Solen marginatus, spawning takes place over a few weeks during spring; in Ensis minor it occurs in March-April in the southern Adriatic sea, while in E. siliqua there is only one spawning period (May-June) (Guerra Diaz et al., 2011). Increasing the seawater temperature during broodstock conditioning helps the maturation process in most species, except for E. arcuatus, which is conditioned to ripeness at low temperature. In some species (S. marginatus, E. siliqua), ripe adults can be successfully induced to spawn using thermal shock (Loosanoff and Davies, 1963; Martinez-Patiño et al., 2007; da Costa and Martinez-Patiño, 2009), while in E. arcuatus changing the water level to simulate tides is the only effective method (da Costa et al., 2008). The management of eggs for fecundation and of larvae during rearing can be carried out according to the same protocol used for other bivalve species. The larval culture period is very short in S. marginatus (8 days), due to the high levels of stored reserves in the eggs (da Costa et al., 2011b), and a larval survival ranging from 28 to 81% (53% on average) was recently achieved by da Costa and Martinez-Patiño (2009) in specimens obtained from adults induced to spawn in the hatchery. As a consequence of the large size of the eggs and the short larval stage, a pattern in the use of gross biochemical and fatty acid reserves during larval development different from that of other razor clams and bivalve species was recently found (da Costa et al., 2011b). A dramatic reduction in survival was obtained by the same authors in 1-month-old spat (8.6%), the bottleneck being the post-larval stage at about 1 mm in length (15 to 22 days from settlement). Using seed 1.3 mm in length, da Costa and Martinez-Patiño (2009) achieved better survival when rearing was carried out without substrate than with either of two types of sand. In E. arcuatus, da Costa et al. (2011a) recently achieved larval settlement on day 20, a survival from egg to newly settled postlarvae ranging between 4.8 and 24.8% (14.35% on average), and a very low (4.8%) survival from settlement (day 20) to 3-month-old spat (15.5 mm in length).

Grow-out
The effect of substrate (fine or coarse sand, 150 to 600 or 300 to 1200 μm grain diameter, respectively), in comparison with the absence of substrate, was tested by da Costa et al. (2011a) in nursery seed culture of E. arcuatus (3.76 mm in length) during an experiment lasting 30 days. No differences among treatments were found for length, while fine sand resulted in lower weight and the absence of substrate in lower survival. Substrate and stocking density influenced the performance (length and survival) of E. arcuatus juveniles, which was worst at high density (36 g per 5-L bottle) and with fine sand (vs coarse sand) as substratum. On-growing in cages buried in sand in the natural environment, from an initial size of 60 to 80 mm, revealed high mortality and high sensitivity to changes in salinity.
For these reasons, the choice of the site strongly affects juvenile grow-out and performance. For S. marginatus broodstock collected from the wild, da Costa and Martinez-Patiño (2009) managed spawning induction and fecundation, larval and spat rearing in the hatchery, and assessed the growth performance of hatchery-produced juveniles transferred to natural beds. Two-year-old razor clams showed a survival ranging from 50 to 83%, and 2 to 3 years after seeding they reached the commercial size (80 mm). Since some of those specimens were used as broodstock, the culture cycle of S. marginatus was closed. In this species, a greater tolerance to salinity variations was found in comparison with E. arcuatus. Froglia (1975) found that a size of 100 mm was reached in 2-year-old individuals. After 2 years, growth is reduced as a consequence of gametogenic activity in both E. arcuatus and S. marginatus.

Critical points
Among the limiting factors for the aquaculture of razor clams, the following can be listed: a short reproduction period; the requirement for an appropriate substrate and, for the on-growing phase, appropriate seed-holding systems; low survival in the absence of soft substrate; and difficulties in checking growth due to the burying behaviour. At the moment, the post-larval and seed survival obtained is very low for S. marginatus and, above all, for E. arcuatus. For S. marginatus, low survival has also been recorded during the on-growing of juveniles. The information on larval and post-larval nutrition and its influence on growth and survival is presently very scarce. In addition to these aspects, which constitute a major bottleneck for reproduction management, the requirement for an appropriate substrate during the culture process can also be seen as an obstacle to the development of hatchery culture. Therefore, further investigations on the specific requirements of the different species at the different developmental stages could improve the results and the culture potential of razor clams.

Octopus
Statistics
The benthic octopuses of the family Octopodidae (order Octopoda, Leach, 1818) are one of the most familiar groups amongst the cephalopods. This family comprises over 200 species, ranging in size from pygmy taxa mature at <1 g (e.g., Octopus wolfi) to giant forms exceeding 100 kg (e.g., Enteroctopus dofleini). These species inhabit all marine habitats, from tropical intertidal reefs to polar latitudes and the deep sea down to nearly 4000 m (Villanueva and Norman, 2008). Only a few cephalopods are commercially fished on a large scale (Kreuzer, 1984): squid (genus Loligo) is by far the main group, representing 73% of world cephalopod catches; cuttlefish (genus Sepia) is second with 15%; and octopus third with 8.8%. Octopus vulgaris Cuvier, 1797 (order Octopoda, suborder Incirrata) is one of the most important species in terms of commercial value and landings. Thirty-one per cent of the total production comes from Mexico (11,855 t), the world leader, followed by Portugal (9,965 t; 27%), Spain (5,792 t; 16%) and Italy (3,018 t; 8%) (FAO, 2010).
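As a quick arithmetic cross-check on the landing figures quoted above, the short sketch below (an illustrative addition, not part of the original analysis) derives the world total implied by Mexico's 31% share and recomputes the percentages for the other countries from the reported tonnages; small rounding differences with respect to the reported shares are expected.

```python
# Illustrative cross-check of the O. vulgaris landing figures quoted above (FAO, 2010).
# The dictionary and the check itself are additions for clarity, not source data.

landings_t = {"Mexico": 11855, "Portugal": 9965, "Spain": 5792, "Italy": 3018}
reported_share = {"Mexico": 31, "Portugal": 27, "Spain": 16, "Italy": 8}  # per cent

# World total implied by Mexico producing 31% of the total (~38,200 t).
implied_world_total = landings_t["Mexico"] / 0.31

for country, tonnes in landings_t.items():
    share = 100 * tonnes / implied_world_total
    print(f"{country}: {tonnes:,} t -> {share:.1f}% (reported {reported_share[country]}%)")
```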
Potential for aquaculture
Octopus vulgaris presents many favourable characteristics: easy adaptation to captivity, due to its benthic mode of life, reclusive behaviour and low swimming activity; high growth rates (Aguado-Giménez and García García, 2002; Iglesias et al., 2006, 2007) of between 3 and 15% body weight day-1; high food conversion rates (incorporating 40 to 60% of ingested food into tissue) (Mangold and Boletzky, 1973); high fecundity (from 100 to 500 thousand eggs per female), with well-developed hatchlings compared to other molluscs; large market size and high price in areas where cephalopod consumption is high; and a market demand that is not geographically limited (Berger, 2010). The life cycle of this octopus species was completed for the first time in captivity by Iglesias et al. (2002) using Artemia and spider crab (Maja squinado) zoeae as live feed. The authors obtained a survival of 31.5% day-1 post-hatching, and weights of 0.5 to 0.6 kg in 6-month-old animals and of 1.6 kg in 8-month-old animals.

Reproduction
Octopuses are dioecious. Octopus vulgaris, like other benthic octopuses (e.g., O. mimus), produces numerous small eggs (2.7 mm in length) that hatch into planktonic, free-swimming hatchlings (1.2 mg) that are very different in morphology, physiology, ecology and behaviour from the adult stage. Other octopuses (e.g., O. maya, O. bimaculoides), by contrast, produce relatively few, large eggs resulting in better-developed hatchlings with a benthic habit resembling the adult (Villanueva and Norman, 2008). They readily mate and spawn in captivity, and females reared to sexual maturity can produce viable spawns (Iglesias et al., 2004). No further information is available about reproduction in captivity. However, reproduction is not considered a limiting factor for this species, and Iglesias et al. (2000) recommend correct feeding of the broodstock before mating and correct management of the females after egg deposition. The optimal water conditions for broodstock (maintained at a 1:1 or 1:3 male to female ratio, according to Iglesias et al., 2007) are a temperature ranging from 13°C to 20°C and a salinity of around 32-35 PSU. Rectangular tanks (5 to 10 m3), kept at low light levels to obtain spawning as swiftly as possible, have normally been used (De Wolf et al., 2011), and a diet based on frozen crustaceans (80%), fish (15%) and bivalve molluscs (5%) seemed favourable for obtaining highly viable spawns. Males show copulatory activity, and egg deposition occurs on the walls and roof of the dens among which the broodstock move. The eggs, cared for by the females, hatch after 34 days at 20±1°C in the Mediterranean area (Villanueva, 1995), while hatching takes longer in captivity, as shown by the results obtained by Iglesias et al. (2000). Currently, the rearing of paralarvae is not feasible, and the impossibility of obtaining benthic juveniles on a commercial scale in captivity (Villanueva, 2000, 2003; Iglesias et al., 2004, 2007) forces farmers to catch from the wild the juveniles used for industrial on-growing. This practice is highly objectionable from an ethical point of view because it potentially increases the fishing pressure on octopus stocks.

Larval rearing and feeding
Many rearing systems have been tested, differing in tank colour, size and shape, larval and prey densities, and environmental factors (light, water flow and temperature), and resulting in different paralarval survival.
However, the phase of paralarvae rearing is considered inadequately explored and survival of paralarvae older than 50 days posthatch in captivity conditions is very rare, and referred only by Iglesias et al. (2002Iglesias et al. ( , 2004) that reared octopus until 8 months by Artemia and zoeae. In Italy, De Wolf et al. (2011) carried out many trials fom 2003 to 2007. They produced eggs and paralarvae in captivity and reared juveniles that reached the age of 160 days, by using standard aquaculture procedures for feeding (based on Artemia and rotifers as live prey) and rearing paralarvae. Currently, the most recognised opinion is that nutritional aspects are the most important factor influencing the performance and mortality of paralarvae (Iglesias et al., 2007). For feeding paralarvae different prey were experimented: Palaemon serrifer zoeae (Itami et al., 1963), Artemia (Imamura, 1990), Artemia enriched with Nannochloropsis sp. (Hamazaki et al., 1991) or Tetraselmis suecica or Chlorella sp., Liocarcinus depurator and Pagurus prideaux (Villanueva, 1994(Villanueva, , 1995Villanueva et al., 1995Villanueva et al., , 1996, Artemia with spider crab Maja brachydactyla zoeae Iglesias et al., 2004) prey (species, strain, size, culture technique) and with trial, usually resulting very low after 1 or 2 months from hatching: 8.3% and 0.8 to 9% (Villanueva, 1994(Villanueva, , 1995Villanueva et al., 1995Villanueva et al., , 1996, respectively. As regards to live prey, spider crab was preferred and induced better performance, probably for its high content in the essential fatty acids (EFA), particularly in arachidonic acid (ARA) (Iglesias et al., 2007). A mix of Artemia and inert diet (Navarro and Villanueva, 2000), and millicapsules (Villanueva et al., 2002;Navarro and Villanueva, 2003) was also tested, thus obtaining reduced growth performance and a survival limited to first month. Even though much research was carried out on this topic, high mortality and scarce growth performance have been obtained. At the moment, the high diversity of the experimental protocol of the trials and the lack of standardised methodologies do not let find the most effective diet for adequately feeding paralarvae. The low performance of this stage of culture can be also due to inappropriate techniques utilised during rearing. An inadequate size of tanks and the hydrodynamic behaviour of water can be responsible for mortality or damages in paralarvae arms and mantle (Rasmussen and McLean, 2004). In large volume tanks, higher survival and longevity was obtained by De Wolf et al. (2011) and better growth by Sanchez et al. (2011), the larger volumes mitigating the fluctuations in water temperature and in other water parameters. A drastic reduction of survival (4 to 0.6%) was noticed when density increased from 3 to 15 ind. L -1 (De Wolf et al., 2011). The negative effect of high density could be explained by the releasing of paralysing substances or by the competition for space during prey location and capture. Useful information was obtained about the nutritional requirements of paralarvae by the comparison of the chemical profiles of mature ovaries, eggs at different developmental stages, fresh hatchlings and wild juveniles with the chemical profiles of paralarvae cultured for one month with different live natural prey and with those of the prey administered Villanueva, 2000, 2003;Villanueva et al., 2002Villanueva et al., , 2004Villanueva et al., , 2009Villanueva and Bustamante, 2006). 
Special attention was paid to polyunsaturated fatty acids (PUFA), especially docosahexaenoic (DHA), DHA/ eicosapentaenoic (EPA) ratio, phospholipids, cholesterol, to some essential amino acids (lysine, leucine and arginine representing about 50% of the total essential amino acids of paralarvae), mineral (copper) and vitamins (Vitamin A, but especially Vitamin E; Villanueva et al., 2009). The requirement of Vitamin E, which is higher than for other marine molluscs and fish larvae, is probably due to the high percentage of PUFA in paralarval and juvenile cephalopods, that consequently need of a strong antioxidant system. Grow-out The on-growing of octopus started in Galicia in the mid-1990s with wild sub-adults 750 g in weight. The first findings on on-growing experimental trials date at the end of the last century and have been obtained in Spain (Iglesias et al., 1997, Portugal, and Italy (Cagnetta and Sublimi, 2000). Rearing parameters Much information on the behaviour of octopus, which can be usefully used for an appropriate design of tanks and equipment for rearing, is obtained by observing octopus in laboratory or directly in nature. Even though octopus shows a rapid and easy adaptation to life in captivity in different kinds of containers [aquaria, cylindrical-conical containers, raceways, floating cages and also benthic cages, as recently showed by Estefanell et al. (2012)], the habit to attach to any surface can be a problem for handling the specimen maintained in captivity, also as a consequence of their tendency to escape from containers. To counteract the aggressive behaviour and to respect the natural habits of octopuses that prefer dark sites, inside tanks or floating cages, shelters should be put in adequate number. The feeding habits and the utilisation in captivity of live food or dead whole marine organisms need frequently cleaned tanks or the utilisation of self-cleaning tanks (Vaz-Pires et al., 2004). About water quality parameters, octopus shows a very low tolerance to low concentrations of salts, normally living in a range fluctuating from 35 to around 27 g L -1 (Boletzky and Hanlon, 1983). For on-growing, the temperature should be kept between 10°C and 20°C, and better performances are achieved at the higher temperatures in this range. Temperature above 23°C can be responsible for high mortality, so that rearing in the Mediterranean area can be critical in summer months. This parameter affects the most important zootechnical parameters (growth, food conversion and ingestion). The results obtained by Cerezo Valverde et al. (2003) highlighted that ammonia excretion is very important in this species compared to finfish, such as sea bass or sea bream. Thus, checking this parameter during rearing is fundamental. The same can be said for oxygen (Cerezo Valverde and García García, 2004), mainly in post-prandial period (from 6 to 16 hours after the meal ingestion), due to high consumption found in octopus. At a temperature of 17°C to 20°C, the optimum oxygen saturation resulted ranging from 100 to 65%, suboptimal saturation from 65 to 35%, dangerous below 35%, and lethal below 11% of saturation (Cerezo Valverde and García García, 2005). Octopus vulgaris shows a great resistance to hypoxia, similarly to other cephalopod species. The resistance to low oxygen levels was higher in small sized octopuses and at low temperature, this parameter playing an important role by increasing the lethal oxygen value. 
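To make the oxygen-saturation bands reported above easier to apply in day-to-day monitoring, a minimal sketch is given below. It simply encodes the thresholds quoted for 17°C to 20°C from Cerezo Valverde and García García (2005); the function and variable names are illustrative and not taken from the cited study.

```python
# Minimal sketch: classify dissolved-oxygen saturation for O. vulgaris held at
# 17-20 degrees C, using the bands reported by Cerezo Valverde and Garcia Garcia (2005).
# Function and label names are illustrative, not from the cited work.

def oxygen_saturation_band(saturation_pct: float) -> str:
    """Return a qualitative band for a given percentage of oxygen saturation."""
    if saturation_pct < 0 or saturation_pct > 100:
        raise ValueError("saturation must be expressed as a percentage (0-100)")
    if saturation_pct >= 65:
        return "optimal"       # 100 to 65%
    if saturation_pct >= 35:
        return "suboptimal"    # 65 to 35%
    if saturation_pct >= 11:
        return "dangerous"     # below 35%
    return "lethal"            # below 11%

# Example: flag tanks that may need aeration during the post-prandial peak.
readings = {"tank_A": 82.0, "tank_B": 41.5, "tank_C": 9.0}
for tank, sat in readings.items():
    print(tank, oxygen_saturation_band(sat))
```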
Given the relevant changes produced in the most suitable oxygen levels, temperature variations are highly critical for octopus. For this reason, the utilisation of benthic cages could reduce the abrupt changes in environmental parameters associated with rearing in floating cages (Estefanell et al., 2012). Males and females showed a different behaviour, the latter having a greater oxygen consumption, in contrast to what the same authors had previously found in immature males and females (Cerezo Valverde and García García). Separation according to sex improved culture yield, as the non-fertilised females continued to grow to commercial size without the interruption of feeding that occurs during egg care. The rearing performance of females can be further improved by using high light intensity in order to delay sexual maturation and to achieve greater somatic growth, because egg formation generates higher metabolic requirements during the sexual maturation of females (O'Dor and Wells, 1978). In other experiments, differences in growth and food intake between males and females were not found (Aguado-Giménez and García García, 2002). In relation to the hierarchical behaviour of octopus, the influence of rearing density has been studied. Domingues et al. (2010) tested growth and survival during on-growing of octopus at three densities (4, 8 and 15 kg m−3) in an experiment lasting 70 days. No differences were found for growth, but a lower survival characterised the highest-density group. Even though the groups showed similar feeding rates, the food conversion rates were lower in the medium- and high-density octopuses, probably because of their more stressful and uncomfortable conditions. Previously, Otero et al. (1999) had tested 10 and 20 kg m−3 as stocking densities, suggesting a density no higher than 10 kg m−3 to obtain the best growth performance. A modulation of density in relation to water temperature (higher density at lower temperature) and to the culture system (higher in cages than in tanks) should be considered. Feeding Lee (1994) studied the feeding behaviour of cephalopods and highlighted the importance of visual stimuli, the chemical and textural properties of food, and the nutritional quality of the diet, the latter affecting ingestion. Octopuses prefer live food, giving better performance with a mixed diet than with a monodiet. Cagnetta and Sublimi (2000) obtained better growth when crabs were used in comparison to a diet based on squid or fish, even though squid simplifies tank management and cleaning due to reduced waste. In a recent paper, Estefanell et al. (2012) showed that feeding octopus on a monodiet based on bogue (Boops boops) discarded from fish farms produced high growth (1.8 to 1.9% day−1) and high survival (91 to 97%), and the best biomass increment (178 to 212%) and food conversion rates (2.3 to 2.6) ever recorded for O. vulgaris under industrial rearing conditions. The species can easily be adapted to dead food (Boucaud-Camou and Boucher-Rodoni, 1983) and to commercial dehydrated foods, such as pelleted diets (Lee, 1994), whose acceptance and ingestion depend on their specific texture characteristics. The different digestibility of lipids (46%) and proteins (96%) explains the nutritional superiority of foods poorer in lipids and richer in protein, which is the main source of energy in octopus (Lee, 1994; O'Dor et al., 1983). 
The knowledge of aminoacid composition of food could be essential for evaluating the adequacy of protein to satisfy energetic requirements. Crabs are also preferred to fish for specific pre-ingestional (size, shape, texture, flavour) and/or post-ingestional stimuli (digestibility, assimilation or energetic benefit) that this food can generate (Lee, 1994). The high variability of natural food tested in trials referred in different papers is responsible for the large differences found in documented growth responses and for non-consistent results (Aguado-Giménez and García García, 2002). Even if research on artificial diets for cephalopods started 2 decades ago (in the early 1990s), the lack of an appropriate artificial diet for nutritional requirements of octopus still represents a limiting factor for its fattening phase. Quality The shelf life of octopus is very short, even at low positive temperatures, due to high protein deterioration which is responsible for an increase in nitrogen released during storage. Hurtado et al. (1999) reported a shelf life lasting 6 to 7 days after catch at 2.5°C and Barbosa and Vaz-Pires (2003) found a shelf life of 10 days at 0°C. Different methods and different parameters have been tested for monitoring quality and quality evolution during the shelf life, based on physical, chemical, and microbiological analysis. A sensorial scheme based on the Quality Index Method was developed by Barbosa and Vaz-Pirez (2003). In their trail, the authors found that the rejection of octopus kept in crushed ice occurs at day 8. Several attempts were carried out to increase the shelf life by an appropriate treatment (high pressure, heat combined to high pressure, gamma radiation) and to improve the textural properties inducing the softening of meat (Hurtado et al., 2001a(Hurtado et al., , 2001b(Hurtado et al., , 2001cSinanoglou et al., 2007). Recently, Mendes et al. (2011) tested the active packaging based on soluble CO2 stabilisation (SGS) methodology to obtain readyto-eat products. Even though no extension of shelf life was obtained, the bacteriostatic effect allowed an effective extension of the period of use by date. Besides the short shelf life, octopus shows other peculiarity as regards its quality, for example the high solubility of fibrillar protein, responsible for a considerable leaching when processing in water is performed, reducing nutritive value and affecting sensorial characteristics. The edible portion of octopus is very high, compared to fish or crustaceans (80 to 85 vs 40 to 75 vs 40% to 45%) (Kreuzer, 1984). Also, its composition presents very low content in fat (0.54 to 0.94%, according to the season), where the total polyunsaturated fatty acids of the n-3 species (PUFAn-3) range between 42 and 47% (Ozogul et al., 2008), and EPA and DHA represent a high percentage of total fatty acids (Zlatanos et al., 2006;Ozogul et al., 2008). Critical points The high market request in the Mediterranean, Latin America and Asian countries, the declining landings by fishery, the features of subadults and adults constitute excellent starting points for proposing octopus as a new candidate for aquaculture. Currently, the production from aquaculture is constantly increasing, even though it is still scarce (30 t). Some reviews summarising useful information for the different steps of the octopus cycle in aquaculture conditions are available. 
Unfortunately, the results achieved until now are not conclusive and some bottlenecks reduce the feasibility of farming this species in a manner completely independent from the wild. The high mortality in the paralarvae stage (Iglesias et al., 2006) and the inexistence of adequate artificial feeds for paralarvae and subadults (Domingues et al., 2005(Domingues et al., , 2006Rosas et al., 2007; currently are the two major bottlenecks for the commercial aquaculture of this species . Evaluating the economic viability of O. vulgaris on-growing, García García et al. (2004) found that juveniles represented around 41% of total costs. Even though that incidence is highly variable depending on catches, the on-growing activity is now retained a high risk business with low profits. The success of paralarval rearing in order to achieve economic and environmental sustainability for octopus aquaculture is essential. The management of this species in captivity (i.e., during the different steps of the culture until slaughtering) should take into account that, like other cephalopods, O. vulgaris has some behavioural characteristics (Mather, 2008) which highlight the needs for the animal welfare consideration. In UK, Canada and Australia an appropriate legislation is already at stake (Berger, 2010). Purple sea urchin Distribution, habitat and exploitation The purple sea urchin Paracentrotus lividus (Echinodermata: Echinoidea) is widely distributed in the Mediterranean sea and along the North-eastern Atlantic coast, from Scotland and Ireland to southern Morocco (Boudouresque and Verlaque, 2001). This sea urchin lives on rocky substrates and in seagrass meadows, from shallow waters down to about 20 m depth. It is a species of commercial importance, with a high market demand for its roe, particularly in the Mediterranean basin (Régis et al., 1986) and more recently in other European non-Mediterranean areas (Byrne, 1990;Barnes and Crook, 2001). In the last decades, its populations have shown a wide scale decline in many European countries due to overfishing (Boudouresque and Verlaque, 2001). In Italy, the harvesting of P. lividus is a widespread activity mainly exerted in southern regions (Tortonese, 1965;Guidetti et al., 2004;Gianguzza et al., 2006; and, despite the fishery of this species being regulated by a number of decrees (i.e., fishing periods, minimum size of harvestable individuals and quantity per day per fisherman), the harvesting of P. lividus is intensively practised, particularly due to high tourism demand. For this reason, shallow rocky reef populations of P. lividus are mainly exploited by both authorised fishermen and poachers equipped with SCUBA, but also by occasional collectors throughout the year (Pais et al., 2012b). Reproduction In general, Paracentrotus lividus biology has been well studied, and much research has been carried out to determine all the phases of the reproductive cycle from many European countries (Byrne, 1990;Lozano et al., 1995;Spirlet et al., 1998b;Sanchez-Espana et al., 2004;Pais et al., 2006a). This Echinoid has external fertilisation and gametes are released in the water. The development of the embryo is quite fast and is a multi-phase process. In controlled conditions, adult sea urchins are taken out from the rearing seawater (18°C to 20°C) in order to obtain gametes. To do so, 0.5 to 1 mL (according to the specimen size) of a 5×10 -1 M KCl solution are injected inside the mouth using a syringe. 
Afterwards, sea urchins are gently rotated putting them upside down on glass beakers containing sterilised and ultra-filtered (Millipore filters 0.22 μm) (Millipore Co., Billerica, MA, USA) seawater and the gonadal products (orange-red eggs from females and white emulsion from males) are emitted from the genital pores. After cleavage, the larval period can last differently, and metamorphosis may be delayed due to environmental features. In fact, the ciliary movement of the larvae can be negatively affected by several factors and, consequently, may influence their feeding ability. Generally, however, the echinopluteus stage is reached within 48 hours and if the echinoplutei are properly fed, they can undergo the metamorphosis within 3 weeks (Yokota et al., 2002). Subsequently, the newly formed juveniles can settle in the substrate and their benthic phase begins. Aquacultural activities In case of severe depletion of Paracentrotus lividus wild stocks, aquacultural practices could represent a valid alternative to fishing (Fernandez and Pergent, 1998). Actually, as several recent studies demonstrated the possibility of improving cultivated edible sea urchin gonadal quality (Spirlet et al., 2000;Shpigel et al., 2005), local and tourism roe market demand could be supplied by industrial applications of similar techniques. Furthermore, the optimisation of P. lividus gametogenesis (Shpigel et al., 2004;Luis et al., 2005), if aimed at obtaining fertilised eggs, larvae and juveniles, could be useful to test restocking practises in the most severely exploited areas. As described above, P. lividus artificial reproduction is a very easy practice. In contrast, the subsequent rearing phases are to some extent more difficult to carry out. Postmetamorphic juveniles can be fed using different algal species (Cellario and Fenaux, 1990;Grosjean et al., 1996Grosjean et al., , 1998 until their mean individual size reaches 3 to 5 mm. However, since the growth of the juveniles is not homogeneous, sea urchins are graded regularly, and those with a diameter larger than 5 mm are transferred into pre-growing nurseries. Subsequently, the subadults (i.e., specimens whose size exceeds 10 mm, but it is below the minimum marketable size of 40 to 50 mm) are positioned in rearing baskets with all sides made out of mesh. These rearing baskets are suspended in tanks to allow a good seawater circulation around and inside them, and an effective removal of solid wastes produced by the sea urchins. In general, during the growing phase sea urchins are fed ad libitum with fresh algae (or, in alternative, with artificial diets) twice a week. The cleaning of the baskets and tanks is regularly done and dead specimens are daily removed. Furthermore, sorting of sea urchins is done to divide the batches into different size classes. During the entire production cycle, a photoperiod of 12h/12h light/dark is usually used. Rearing in Europe In the past decades, several studies carried out in Europe have pointed out the possibility of successfully rearing this Echinoid. Indeed, much research has been done to reproduce the entire life cycle of Paracentrotus lividus, since aquaculture techniques have the potential for production of this species for both human consumption and for its use as model in research in developmental biology. 
Therefore, a number of studies have been conducted on the feeding preferences of juvenile and adult sea urchins and, as these echinoderms are herbivorous, they have focused on the use of several algal species (Le Gall, 1989;Frantzis and Grémare, 1992;Boudouresque et al., 1996;Lemée et al., 1996;Grosjean et al., 1996Grosjean et al., , 1998Cook and Kelly, 2007a;Cook et al., 2007). On the other hand, the use of different artificial diets has been tested in trials aimed to improve gonadal quality and somatic growth of the sea urchins (Fernandez, 1997;Basuyaux and Blin, 1998;Fernandez and Pergent, 1998;Spirlet et al., 1998bSpirlet et al., , 2001Fernandez and Boudouresque, 2000;Pantazis, 2009;Fabbrocini et al., 2011). In addition, a number of polyculture systems have been tried out in small-scale rearing experiments in order to enhance the production performances of P. lividus (Schuenhoff et al., 2003;Kelly, 2007b, 2009). At present, however, despite all the above mentioned farming experiences, no rearing activities of commercial importance are present in Europe. The only exception is represented by the Dunmanus Seafood Ltd. (Durrus, Ireland), in which hatchery-reared juveniles of this species are grown to market size mainly by ranching and are then seeded to rock pools or subtidal areas (Kelly and Chamberlain, 2010). Conclusions Shellfish culture can be regarded as a positive example of aquaculture activity. In particular, bivalve culture can be looked at as a paradigmatic example of sustainable economy safeguarding the environment. Indeed, it is at a low trophic level, it does not require the provision of feed input, it can help to reduce the level of water eutrophication, and it is suitable for integrated forms of aquaculture [i.e., integrated multi-trophic aquaculture (IMTA)]. Nevertheless, we cannot hide or leave out some criticism on this activity, such as the environmental consequences due to the translocation of non native species which in turn are vectors of coastwise introduction of non-indigenous species (protists, algae, macroalgae, invertebrates, etc.), the impact generated on benthic communities (when the intensiveness of the culture is very high), or due the tools utilised for harvesting (as in the case of burrowing species). The two sides of the same coin dictate more careful choices than those made in the past. These new choices should favour the cultivation of native species and avoiding the translocation of seed, juveniles and adults. This approach can be an economic driver, stimulating the development of a complete production chain, enhancing the creation of hatchery for spat production and reducing the potential dependence from other countries for seed supply. Although significant improvements could be achieved through specific actions on the species traditionally cultured, the way forward for the future is the culture of new species. The above survey, focused on the perspectives offered by some mollusc native species and the echinoderm Paracentrotus lividus, shows that we can make a virtue of necessity. 
This can be done thanks to research and experimentations carried out primarily in other countries that insist on the Mediterranean, but also in Italy where the transition from the experimental trials to the effective culture could be immediate for some species (i.e., flat oyster and grooved carpet shell), or less immediate but certainly feasible for other species (i.e., razor clams, octopus and purple sea urchin) for which we are still in a more or less advanced development of the whole culture process. Therefore, if we move from theory to practice, enormous economic and environmental benefits can be achieved through the creation of new activities, the demand for new jobs, an expanded product offer for the market and, simultaneously, a reduced impact on natural resources that the traditional fishery for shellfish species of commercial importance notoriously produces.
Narration in medical care. Selected aspects of narrative medicine in psychiatry Narrative in medical practice is a way to explore the nature of the disease in the face of the patient's uniqueness and individuality, regardless of the diagnosed disorder or disease. The narrative approach provides patients with a representation of their own story, a demonstration of interest on the part of their treatment team, and a positive relationship with the highest level of health care provision, always in relation to the patient's current health and life situation. The use of narrative medicine is important in forming relationships with patients or when professional work becomes a source of burnout and lack of perceived satisfaction. The aim of the study was to present selected issues related to the practice of narrative in medical care, including the care of patients with mental disorders. Thirty-three selected original papers were thoroughly analysed. All works were written in the period 2011-2020 and could be characterized as demonstrative or research papers and case studies. Medline, PubMed, SAGE, and other databases were used. Literature analysis confirms the presence of narrative medicine, including in the field of psychiatry. Researchers emphasize the importance of narrative in creating relationships with patients, shaping both their activity and involvement, as well as in relationships with members of the therapeutic team. In practice, narrative involves professionals expressing empathy, using narrative techniques, and being accountable when interacting with patients who are discovering themselves. The importance of narrative in the care of patients with various disorders, including mental disorders, is significant in providing care that is consistent with patients' expectations and needs. Introduction Narrative medicine has been defined by Rita Charon as the ability to recognize, assimilate, interpret, and respond to the stories of others [1][2][3]. In the practice of medicine, it helps us to understand the nature of a disease or disorder by taking into account the needs, values, personality, and uniqueness of the patient in the face of diverse relationships and dependency systems [4][5][6][7]. Narrative also helps us to understand how the patient experiences the disease and all its consequences [2,5,8,9]. Incorporating narrative into daily practice contributes to the provision of quality care that is consistent with the patient's expectations [4,10,11]. This is possible by ensuring relevant health information and medical interventions, taking into account the patient's individuality [3,4,12]. Narrative in medicine means, above all, the opportunity for the patients to present (tell) their own history and to show them attention and care [13][14][15]. Authentic interest in the other person, perceiving him/her in a broader perspective than just through the biomedical dimension of his/her functioning, emphasizes the context and complexity of the relations and influences occurring in the patient-therapeutic team [4,6] and patient-social relations [8]. The interactions and relationships formed as a result of conversation, as well as its very context and course, significantly influence the creation of the story being told, including the one that arises while providing everyday care (clinical encounter narrative) [13][14][15]. 
Narrative in medical care may have a healing and therapeutic role, so to speak - it enables the patients to tell their own history of illness, gives value and meaning to their statements, and emphasizes the validity of active listening [16]. Moreover, narrative promotes authentic understanding and helps to establish an empathic therapeutic relationship [4,10,11], strengthens the patient-staff relationship, enhances the capacity for reflection [6,13,16], and influences both the personal and professional development of therapeutic team members [16]. There are 3 basic components in narrative medicine: attentiveness, representation, and affiliation. Attentiveness refers to an increased focus of attention on the content, form, or circumstances of a message. It requires openness to perceptual impressions of the storyteller as well as verbal and nonverbal messages. Representation usually takes a descriptive form that summarizes the story that has been told as well as heard (it applies to both teller and listener). Affiliation is associated with deep and attentive listening as well as knowledge derived from representation [17]. An important aspect of practicing narrative medicine is reflexivity and reflection. Reflection includes consideration of individualized needs, self-reflection and self-awareness, reflection on an action and its consequences, and reflection on taking action in the future [4]. Reflexivity, on the other hand, is understood as the ability to observe ourselves with the same methods that are used to study phenomena. Reflexivity requires analysis of our own actions, critical insight, and evaluation of our own role in co-creating reality [17]. Some of the patients' stories are simple and easy to understand. However, others are more complex or perhaps told in a particular way that requires more information, further research, and specialized interpretation. Encountering a patient requires insight and reflection on the part of health care providers about their role and their own impact on the resulting interaction [13]. Each person has his/her own story, which is a sequence of interrelated events and facts that can be presented in the form of a subjectively shaped vision of life that is both an expression of identity and a factor in its formation [7]. Self-talk allows one to get to know oneself better and to give meaning to the things that have a non-obvious impact on functioning. Being able to look at our own lives and the factors that shape them illustrates and reminds us of often overlooked or forgotten aspects of our own story [7]. This paper is based on selected scientific literature available in the Medline, PubMed, and SAGE databases and the ACADEMIA.EDU, The Lancet, Ejournals.eu, Longdom Publishing SL, RUJ, Biblioteka Nauki, and CEEOL websites. Thirty-three selected original papers were thoroughly analysed. All works were written between 2011 and 2020, and they could be characterized as demonstrative or research papers and case studies. The aim of this paper is to present selected issues related to the practice of narratives in medical care, including the care of patients with mental disorders. Narrative in patient care Individual and unique narratives enable the expression of individual experiences, beliefs, values, and preferences resulting from both the patient's personality and the socio-cultural conditions of functioning. 
Narrative-based care encourages patients not only to describe events that are important to them and not necessarily directly related to their current medical condition [4], but also to explore the dynamics of the patient-treatment team relationship in the context of relational and diverse threads [7]. Narrative care involves a genuine interest in the patient's person and sensitivity to their needs, not judging their attitudes, values, or choices, and demonstrating flexibility and willingness to make changes in the therapeutic relationship. Medical staff should actively listen to patients, provide opportunities to express needs and expectations, and motivate and support them to participate in decisions about the care and therapeutic process [4,6]. Narrative care focuses not only on biomedical issues such as symptom specificity, diagnostic management, side effects, and complications of treatment, but also on issues related to psychological and emotional needs (e.g., anxiety, worry), social circumstances, and expectations of the care provided. Considering only the somatic dimension of a patient's functioning limits the ability to fully understand how they experience their illness [9]. Disease symptoms or diagnostic test results deprived of interpretation are often insufficient to make a diagnosis, which is an interpretive process aimed at trying to understand the patient's narrative [4,8]. It is also important to assess the patient's behaviours related to adaptation to the disease and treatment, and to assess the family, social, occupational, and psychosocial problems that occur relating to the impact of disease on the patient and their loved ones, the need to change their social roles, reduced economic status, available and received support, family relationships, and spiritual issues [4]. Three main areas can be distinguished in patient narratives. The first is the restitution narrative, in which patients assume that their health will return to normal (pre-disease state). Another area is the chaos narrative related to the patient's experience of loss of control over their lives and the sense of burden resulting from the disease, especially chronic disease, including, e.g., pain sensations. This narrative relates to the process of taming the situation as a result of disease and treats the difficulty in expressing experiences or experiences related to it [18]. The third area is the search narrative, in which patients view their illness as a challenge and a spiritual journey. Additionally, they express hope that the experience of illness will become valuable to them in some way [14]. Professionals in the health care system can be tasked with helping the patient integrate the story with the ing questions to encourage the patient to reflect and make a possible change in their thinking or actions, which may sound like the following: How else can you explain…?; Are there any other possibilities?; Let's sup-pose…; What would happen if…?; If you had a magic wand, what would you do?; What must happen for the situation to change?; If the situation did change, what would happen then?; and What would happen if nothing changed? [13]. Moreover, it is very important to assure the patient about one's readiness to help, to explain the importance of knowing the patient's needs, to show active listening, to notice changes in the patient's statements, and to create hypotheses about the possible meanings of his/her statements. 
It is also important to assess the patient's perception and presentation of the problem that led him/her to seek help. For health care professionals, this provides a basis for asking detailed questions in the medical interview [17]. If it is difficult to determine why a patient is seeking help, health care professionals should ask the patient about it and ask themselves why this patient is seeking help at this time [19]. The implementation of narrative medicine principles also includes respecting silence during the conversation, allowing the patient to take an initiative, observing nonverbal communication, and ensuring that the conversation continues if the conversation must be interrupted due to circumstances. In practicing narration, it is important to beware of judging, making fixed assumptions, and rushing to solve the problem [19]. The basic as well as postgraduate education of health care professionals should include passing on knowledge and skills, as well as presenting attitudes that aim to perform care with a holistic approach. This sometimes requires breaking out of the traditional clinical paradigm that focuses solely on the biological realm of patient functioning and recognizing how a patient's illness affects their functioning, expression of emotions, performance of social roles, or maintenance of relationships [11]. The narrative approach cannot be learned quickly and exclusively by acquiring a set of narrative techniques. Moving from a conventional form of practice to a narrativeoriented practice requires persistence, attentiveness to habits and routines, and ongoing self-reflection. Learning narrative skills begins with listening to and exploring the patient's history. It requires a willingness to hear the story and a desire to understand it, and refinement of these skills occurs over the course of listening to and analysing multiple narratives [19]. Selected aSpectS of narratIve In pSychIatry Psychiatry seems to be an area where the narrative aspect is particularly important [4,20,21], because all search narrative and facilitate the development of new solutions and plans and alternative hopes for the future [14,17]. narratIve competence Narration, as an important source of knowledge, provides information not only about the symptoms of disease and ailments experienced by the patient, but also information that goes beyond the biomedical understanding of disease. In addition to the use of medical knowledge necessary in the treatment process, professionals should know and understand the patient's narratives; narrative competence should be used at every stage of contact with the patient [3,4,12]. Narrative competence means the ability to thoroughly "read" the patients' history, and it determines the acquisition of knowledge about the problem [2]. They are defined as professional skills of health care professionals, which should be based on empathy, reflection, professionalism, and credibility. These competencies are also described as the ability to analyse the story presented -the story structure, taking multiple perspectives, understanding metaphors and hidden meanings of narratives [4]. In the narrative approach one "follows" the patients, paying attention to their fears, expectations, feelings, emotions and reactions, ideas related to the course of disease, and other aspects of human functioning. It is also important to observe and interpret the patient's body language and communication skills [5]. 
The demonstration of narrative competence by health care providers means emphasizing the role of the patients' stories in their daily functioning and providing patient-centred medical interventions [4]. In turn, a lack of narrative competence can result in decreased effectiveness of clinical work due to an inability to perceive the patients' narratives while providing clinical care [3]. Practical tips for incorporating narrative into everyday health care practice include showing interest in the patient, not interrupting them (allowing them to finish their thoughts), asking open-ended questions [19], and exploratory questions, such as: What does this mean to you? [4]; How has the illness affected your life and relationships?; What is it like to find out that you have the disease?; In what ways has your illness changed your life the most?; What are your hopes and fears?; and What is important in your life? [14]; Tell me more about it; Is there anything else?; Is there anything you worry about?; What worries you the most?; Has this ever happened before?; What else was going on at the time?; What do you think about…?; What do others think about…?; How do you feel [or react] when…?; What does it mean to you?; What do you think might cause…?; How would you describe…?; and How would you explain…? [13]. It is also possible to have a conversation based on ask-NarratioN iN medical care. Selected aSpectS of Narrative mediciNe iN pSychiatry tute a fundamental element in medical practice [26], and in the opinion of others they can be helpful in forensic opinions [27]. The specificity of mental illness forces a change in the current way of the patient's functioning and experiencing, it introduces a sense of chaos, uncertainty, and confusion. Therefore, the proposed narrative in the form of expressive writing is very often considered as a source of knowledge about the disease and allows for ordering of thoughts and experiences that accompany the disease. The way it is presented, especially at the beginning of treatment, can be difficult, incomprehensible both for the patient him/ herself and his/her environment, and even for the psychiatrist, and it requires proper interpretation [25]. Narrative in practice treats the patient as a human being and allows the patient to express his/her own point of view, which can be important in providing guidance for further management [28], and this seems to be extremely valuable in intervening with people with mental disorders. Emphasizing the importance of the patient's statement in the professional exchange of information provides evidence of changes in medicine [29], treating patients as expert of knowledge on mental illness [25], and furthermore, the psychiatrist adopts an attitude of collaboration with those seeking help [30]. In addition, it is evidence of presenting a way of understanding and interpreting the current situation and reality not only by the patient but also by the medical personnel [25]. In the approach of narrative medicine, it is important to emphasize the fact that psychiatrists can hear the most personal stories of their patients, and therefore they are even more responsible for the relationship with their clients [31]. In shaping the complex patient-physician relationship, patient acceptance, the importance of patient involvement, and activity is emphasized, because conscious participation in the treatment process affects the broader therapy and quality of care [32]. 
Narrative has its importance in the contact between the patient and the psychiatrist, the patient and the therapist, and the patient and the nurse. This emphasizes the nurse's role in the therapeutic team. Through narrative, nurses working with individuals with mental disorders can explore the whole experience and understand the person's history in relation to the environment in which they function. The nurses' narrative techniques (to some extent intuitively) that enable discussions revealing patients' motivations and problems confirm their professionalism [22]. The patient's telling of their own story is a key to building their sense of dignity, worth, and subjectivity and often proves that life satisfaction can be achieved despite illness [25]. The use of narrative medicine is valuable in cases of difficulties in forming relationships with patients, emergence of professional burn-types of mental disorders can be better understood through narrative inquiry. Opinions are present in the literature that refer to the lack of presence of narrative. This situation results in clinicians referring only to symptoms indicative of the clinical unit, treating the mental disorder and not the person [22]. Narrative in psychiatry confirms that there are many possibilities for telling the story of mental health and there is no single specific way. Clinical dialogue, on the other hand, shows a new point of view of the patient's problems. It should be emphasized that narrative in this field of medicine does not reduce the validity of the biological approach [23], and it shows a positive relationship with the outcome of clinical examination and diagnosis. Narrative psychiatry does not aim to negate or disqualify other knowledge and research [24]. In its realm, narrative also captures the process of communicating with the patient and the fact that the patient's narrative is not only telling, but also experiencing and responding [25]. The narrative helps to formulate the problem at hand; thus, the construction of questions starting with "how" is more important than formulating questions with "why" [22]. Narration in psychiatry draws attention not only to the existing psychopathology, but also to a broader aspect of the patient's functioning and undertaken activities, to the whole of his/her life, because mental health disorders are only a fragment of the patient's full knowledge [25]. During contact with the patient, it is particularly important to pay attention to the content of the stories told by them, because thanks to this it is possible to correctly read their entire context and assess their needs and emotions, and this has a direct impact on making effective interventions [21]. The patient with a mental disorder can communicate a lot about their childhood, disease symptoms, reaction to the disease, work experiences, and social support in his story. The patient's narrative may also include an area related to physical health. If the patient has comorbid somatic illnesses, attention must be paid to them as well, because only then is it possible to undertake of an integrated nature, referring to the holism of care [22]. Patient histories also allow for a broader context for assessing available and received social support and its sources, rather than simply confirming or denying the possession of such resources. This is of particular importance in defining the role of the patient's support persons and determining what their support consists of [22]. 
Therapeutic narrative is also of great importance in the conducted therapies because the heard story allows the therapist to direct a new point of view of the person asking for help and to indicate new possibilities of coping with the current situation. In the opinion of some specialists, patient narratives consti-out, or lack of perceived professional satisfaction. In practice, it also means developing interpersonal skills and professionalism [33], which are essential when it comes to dealing with people with mental disorders. Following the principles of narrative medicine can be helpful in meeting current health challenges and can bring tangible benefits to the medical field of psychiatry [21]. Summary The benefits of practicing narrative medicine include improvement in communication and patienttherapeutic team relationships, refinement of medical information obtained through a standardized interview or analysis of test results, understanding of certain beliefs, attitudes, or behaviours expressed by the patient, as well as increased mutual trust, empathy, and commitment to shared decision-making. The narrative approach also improves relationships within the therapeutic team and positively influences the quality of care provided, allows for identification and understanding of mistakes made (and avoiding them in the future), and increases job satisfaction by reducing the risk of burnout. In addition, it contributes to the development of professionals and their awareness of possible prejudices, stereotypes, and fears. Patients rate the clinical competence of professionals highly, but high scores are also attributed to interpersonal skills, active listening, showing empathy and engagement, which are significantly related to narrative care. Therefore, it is possible and advisable to use an approach that combines both clinical (EBM) and narrative (NBM) medicine in medical practice; this combination has a positive impact on the effectiveness, quality, and satisfaction of medical care. concluSIonS Narrative medicine determines the quality of patient care based on an active communication process using narrative techniques. The narrative approach breaks the focus on the traditional clinical paradigm, requires empathy, responsibility, and openness towards the patient, and it emphasizes the patient's subjectivity and active involvement in the treatment process, in which the interventions taken are based on knowledge from the patients' accounts.
Source-Free Domain Adaptation via Distribution Estimation Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution is different. However, the training data in the source domain required by most of the existing methods is usually unavailable in real-world applications due to privacy-preserving policies. Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data. In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation. Firstly, we produce robust pseudo-labels for target data with spherical k-means clustering, whose initial class centers are the weight vectors (anchors) learned by the classifier of the pretrained model. Furthermore, we propose to estimate the class-conditioned feature distribution of the source domain by exploiting target data and the corresponding anchors. Finally, we sample surrogate features from the estimated distribution, which are then utilized to align the two domains by minimizing a contrastive adaptation loss function. Extensive experiments show that the proposed method achieves state-of-the-art performance on multiple DA benchmarks, and even outperforms traditional DA methods which require plenty of source data. Introduction In the past few years, deep Convolutional Neural Networks (CNNs) have achieved remarkable performance on many visual tasks such as classification [19], object detection [8], semantic segmentation [28], etc. However, the success of CNNs relies heavily on the hypothesis that the distribution of the training data is identical to that of the test data. Thus, models trained with data from a certain scenario (source domain) can hardly generalize well to other real-world application scenarios (target domains), and may suffer from a severe performance drop. Moreover, the difficulty of collecting enough labeled training data also hinders CNNs from directly learning with target domain data. Unfortunately, CNNs deployed in real-world scenarios always encounter new situations, such as the change of weather and the variation of illumination in autonomous driving. Therefore, a lot of attention has been paid to the domain shift problem [1,43] mentioned above, and Domain Adaptation (DA) theory has been developed to solve it. DA algorithms directly help deep models transfer knowledge learned from a fully annotated source domain to a differently distributed target domain whose annotations are entirely unavailable. Existing advances in deep learning-based DA methods [4,24,30,34] generally achieve model transferability by mapping two different data distributions simultaneously into a mutual feature space shared across the two domains. However, people are becoming more aware of the importance of privacy data protection nowadays. Strict policies regarding data privacy concerns have been published all around the world. More AI companies also choose to open-source their pretrained models only, yet keep the source dataset used for training unreleased [46]. Therefore, most of the traditional DA methods become infeasible for transferring knowledge to the target domain when source data is no longer accessible, since these methods basically assume that data from both the source domain and the target domain is available. 
To overcome this data-absent problem, some recent works [18,20,23,25,57] explored more general approaches to achieve domain adaptation without accessing source data. Only unlabeled target domain data and the model pretrained on source domain are required to accomplish the cross-domain knowledge transfer. Such a new unsupervised learning setting for domain adaptation task is called Source-Free Domain Adaptation (SFDA). SHOT [25] utilizes information maximization and entropy minimization. 3C-GAN [23] uses a generative model to enrich target data to enhances model performance. G-SFDA [58] learns different feature activations by exploiting neighborhood structure of target data. A 2 Net [52] introduces a new classifier and adopt adversarial training to align two domains. Despite the fact that these SFDA methods utilize the source domain knowledge contained by the pretrained model, none of them explicitly align the distributions between source domain and target domain to achieve adaptation. In this paper, we focus on image classification task under SFDA setting. We manage to estimate the source distribution without accessing source data. Specifically, we utilize the domain information captured by the model pretrained on source data and treat the weights learned by source classifier as class anchors. Then, these anchors are used as the initialization of feature center for each class and spherical k-means is performed to cluster target features in order to produce robust pseudo-labels for target data. Furthermore, we dynamically estimate the feature distributions of source domain class-wisely by utilizing the semantic statistics of target data along with their corresponding anchors, which is called Source Distributions Estimation (SDE). Finally, we sample surrogate features from distributions derived from SDE to simulate the real but unknown source features, and then align them with target features by minimizing a contrastive adaptation loss function to facilitate source-free domain adaptation. In short, if the feature distribution of target domain is well-aligned with source domain, the source classifier will naturally adapt to the target domain data. We validate our proposed SFDA-DE method on three public DA benchmarks: Office-31 [38], Office-Home [50] and VisDA-2017 [37]. Experiment results show that the proposed SFDA-DE method achieves state-of-the-art performance on Office-Home (72.9%) and VisDA-2017 (86.5%) among all SFDA methods, and is even superior to some recently proposed traditional DA methods that require accessing source domain data. Related Work Traditional domain adaptation. Domain Adaptation (DA) as a research topic has been studied for a long time [1]. With the emergence of deep learning [19,41], CNNs with superior capacity to capture high level features become the first choice to perform adaptation. As a result, many related tasks have been developed in the field of visual DA, such as multi-source DA [36,53], semi-supervised DA [11,39], partial DA [2], open set DA [27,35], universal DA [40], etc. DA aims to improve the generalizability of a model which is learned on a labeled source domain. When fed with data drawn from a different target distribution, model performance declines drastically. This is referred to as covariate shift [42][43][44] or domain shift problem. 
To tackle this problem, lots of methods try to align the feature distributions of different domains via minimizing the Maximum Mean Discrepancy (MMD) [29,31,32,43], which is a non-parametric kernel function embedded into a reproducing kernel Hilbert space (RKHS) to measure the difference between two probability distributions [9,15]. Moreover, Kang et al. [17] incorporate the contrastive learning technique [3] into MMD-based methods to further boost model transferability. Meanwhile, Zellinger et al. [60] and Sun et al. [45] propose to align high-order statistics captured by networks, such as central moments, to achieve domain adaptation. Apart from directly aligning two distributions, some recent works [7,30,49] employ adversarial training by adding an extra feature discriminator. In this way, the networks are forced to learn domain-invariant features to confuse the discriminator. Source-free domain adaptation. All the methods mentioned above expect both labeled source data and unlabeled target data to achieve the domain adaptation process, which is often impractical in real-world scenarios. In most cases, one can only access the unlabeled target data and the model pretrained on source data. To this end, some recent works [20,23,25,26,48,52,56,58] regarding source-free domain adaptation have emerged. These methods provide solutions to adapt the model to unseen domains without using the original training data. SHOT [25] utilizes information maximization and entropy minimization via a pseudo-labeling strategy to adapt the trained classifier to target features. [23,26] both use generative models to model the distribution of target data by generating target-style images to enhance the model performance on the target domain. G-SFDA [58] forces the network to activate different channels for different domains while paying attention to the neighborhood structure of the data. A2Net [52] introduces a new target classifier to align the two domains in an adversarial training manner. SoFA [59] uses a Variational Auto-Encoder to encode the target distribution in a latent space while reconstructing the target data in image space to constrain the latent features. Many of the above methods freeze the source classifier during adaptation to preserve class information, and assign pseudo-labels based on the classifier's output. Here we follow the idea of freezing the source classifier [25] but use a more robust pseudo-labeling strategy via spherical k-means clustering. Moreover, we propose Source Distribution Estimation (SDE), aiming to approximate the source feature distribution without accessing the source data. After that, the target distribution can be directly aligned with the estimated distribution to adapt to the source classifier. Method In this section, we first describe the problem setting for source-free domain adaptation and the notations to be used afterward. Then we elaborate our proposed SFDA-DE method in three steps to address the SFDA problem. First of all, we obtain robust pseudo-labels for target data by utilizing source anchors and spherical k-means clustering. Secondly, we estimate the class-conditioned feature distribution of the source domain. Finally, surrogate features are sampled from the estimated distribution to align the two domains by minimizing a contrastive adaptation loss function. Preliminaries and notations In this paper, we use $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ to denote the source domain dataset with $n_s$ labeled samples, where $y_i^s \in \mathcal{Y} \subseteq \mathbb{R}^K$ is the one-hot ground-truth label and $K$ is the total number of classes of the label set $\mathcal{C} = \{1, 2, \cdots, K\}$. 
denotes the target domain dataset with n t unlabeled samples which has the same underlying label set C as that of D s . In SFDA scenario, we have access to the model G(F(·)) which is already pretrained on D s in a supervised manner by cross-entropy loss, where F denotes the CNN feature extractor followed by a linear classifier G. During training, only data in D t is available and no data in D s can be used. Besides, we use f = F(x) ∈ R m to denote m-dimensional feature representations and use w G ∈ R m×K to denote the weights learned by G, where w G k ∈ R m is the k-th weight vector of w G . Pseudo-labeling by exploiting anchors In many works, pseudo-labeling is an important technique to obtain category information for those unlabeled samples and is usually realized by exploiting the highlyconfident outputs derived by the classifier. However in SFDA task, the classifier G is pretrained on source domain data and will encounter the distribution shift problem when classifying target domain data. Therefore, it's crucial to find a robust way to solve the distribution shift problem and assign correct labels to unlabeled target data. Thus, we consider obtaining pseudo-labeling via spherical k-means. Specifically, given a class label predicted by the linear classifier G aŝ whereŷ i ∈ R K is the logits vector before softmax. Note that each element in the class probability vector is derived by the dot product between the feature and each weight vector of the classifier. Thus, data of the k-th class tends to yield feature representation that activates the k-th weight vector in G. Features of data from the k-th class should gather around w G k . Therefore, w G k can be treated as an anchor of the k-th class which contains overall characteristics that represent the whole k-th class. In SFDA task, target features would drift away from source anchors which makes it hard to directly predict labels for target data with pretrained classifier G. Thus, we propose to assign pseudo-labels for target data via spherical k-means. We first cluster the target data by setting anchors as the initial cluster centers: Then we perform spherical k-means iteratively between (1) assigning pseudo-labels via minimum-distance classifier: |a|·|b| ) is the cosine distance, m denotes the number of current iterations and 1(·) is the indicator function. Iteration will stop when all class centers converge. After clustering is done, a confidence threshold τ ∈ (0, 1) is set to filter out ambiguous samples so that a confidently pseudo-labeled target dataset D t is constructed: Given the robust pseudo-labels derived above, we use x t i,k to denote the target data x t i with pseudo-labelŷ t i = k, and use f t i,k = F(x t i,k ) to denote its corresponding feature representation in the following of this paper. Similar to the idea proposed in [25], we freeze G to fix the source anchors in order to stabilize the adaptation to target domain. Source Distribution Estimation In traditional DA setting, feature distributions of data from both source and target domains can be estimated by mini-batch sampling from D s and D t , respectively. Then the target distribution can be explicitly aligned with the source one and classified by the pretrained source classifier G [17,31,32]. However, source data is unavailable in SFDA setting, which makes it impossible to know the source distribution. To tackle this problem, Yang et al. [58] focuses on neighborhood structure and channel activation. Liang et al. 
[25] exploits information maximization and self-supervision to implicitly align feature representations. Nevertheless, none of the existing methods address SFDA problem by explicitly aligning the source distribution with the target distribution, and thus achieve sub-optimal results. We manage to explicitly estimate the source feature distribution without accessing source data by presenting Source Distribution Estimation (SDE) method. Concretely, we assume feature representations of source domain follow a class-conditioned multivariate Gaussian and k∈C={1, 2, · · · , K}. Essentially, µ s k can be viewed as the center of feature representations of the k-th class data in source domain and Σ s k is the covariance matrix which captures the variation in features of the k-th class and contains rich semantic information [51]. Then we can use a surrogate distribution N sur k (μ s k ,Σ s k ) to approximate the actual but unknown source distribution N s k for each class k ∈ C. A good estimator for µ s k should be discriminative enough and reflect the intrinsic characteristics of the k-th class data in source domain. If we directly use the feature mean of as an estimator for µ s k , obviously the above conditions cannot be satisfied due to the existence of domain shift problem. Recall the observation in Sec. 3.2 that anchors contain overall characteristics of the corresponding class. Thus, we propose to utilize anchors to calibrate the estimator for mean of the surrogate source distribution: which implies that the direction of estimated source feature mean is the same as the corresponding anchor but the scale of it is derived from target features. Another reason for the calibration is that there is usually a difference in norm between anchors and features, which is w G k 2 < f t i,k 2 ≈ f s i,k 2 , empirically. Therefore, it's not appropriate to directly use anchors as the estimator of mean either. As for covariance matrices, many works [5,22,24,51] study the statistics of deep features and reveal that classconditioned covariance implies the activated semantic directions and correlations between different feature channels. We assume that the intra-class semantic information of target features is roughly consistent with that of the source. Hence we derive the estimator for source covariance Σ s k from statistics of target features: is a matrix whose columns are centralized target features of the k-th class in D t . We use a controlling coefficient γ to adjust the sampling range and semantic diversity of sampled surrogate features. Details of selecting γ will be studied in Sec. 4.3. By exploiting anchors and target features, we derive K class-conditioned surrogate source distributions from which we can sample surrogate features f sur k ∼ N sur k (μ s k ,Σ s k ) to simulate the real source features. Source-free domain adaptation In the previous section, we are able to estimate the source distribution without accessing source data by exploiting domain knowledge preserved in the pretrained model with the proposed SDE method. Thus, we can sample data from the estimated distribution as surrogate source data, and the SFDA problem becomes the traditional DA problem. We adopt Contrastive Domain Discrepancy (CDD) introduced by Kang et al. [17] to explicitly align the target distribution with the estimated source distribution. Specifically, we choose a random subset C ⊂ C from the label set C={1, 2, · · · , K} before each forward pass. 
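As a concrete illustration of the two steps described above — anchor-initialized spherical k-means pseudo-labeling (Sec. 3.2) and Source Distribution Estimation (Sec. 3.3) — the following minimal NumPy sketch follows the verbal description in the text. Since the displayed equations are not reproduced in this excerpt, the exact form of the confidence filter (here: cosine similarity to the assigned center above τ) and the way γ rescales the covariance are assumptions, and all function names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def _unit(v, axis=-1, eps=1e-12):
    return v / (np.linalg.norm(v, axis=axis, keepdims=True) + eps)

def pseudo_label(feats, anchors, tau=0.6, n_iter=20):
    """Anchor-initialized spherical k-means on target features.
    feats: (n_t, m) target features f = F(x); anchors: (K, m) classifier weights w_k.
    Returns hard pseudo-labels and a mask of confident samples (cosine similarity
    to the assigned center above tau -- one plausible reading of the filter)."""
    f, centers = _unit(feats), _unit(anchors)
    for _ in range(n_iter):
        sim = f @ centers.T                       # cosine similarity, (n_t, K)
        labels = sim.argmax(axis=1)               # nearest center in cosine distance
        for k in range(len(centers)):             # recompute centers on the sphere
            members = f[labels == k]
            if len(members):
                centers[k] = _unit(members.mean(axis=0))
    sim = f @ centers.T
    labels = sim.argmax(axis=1)
    return labels, sim.max(axis=1) > tau

def estimate_source_distributions(feats, labels, confident, anchors, gamma=1.0):
    """Source Distribution Estimation (SDE): per class, the surrogate mean points
    along the anchor direction with its norm taken from confident target features,
    and the surrogate covariance is the (gamma-scaled) covariance of those features."""
    dists = {}
    for k in range(len(anchors)):
        fk = feats[confident & (labels == k)]     # confident target features of class k
        if len(fk) < 2:
            continue
        mu = np.linalg.norm(fk, axis=1).mean() * anchors[k] / np.linalg.norm(anchors[k])
        centered = fk - fk.mean(axis=0)
        cov = gamma * centered.T @ centered / (len(fk) - 1)
        dists[k] = (mu, cov)
    return dists

def sample_surrogate(dists, k, n_b, rng=np.random.default_rng(0)):
    """Draw n_b surrogate source features for class k."""
    mu, cov = dists[k]
    return rng.multivariate_normal(mu, cov, size=n_b)
```

With the K surrogate distributions in hand, surrogate source features can be drawn on the fly; the per-class mini-batches over the randomly sampled subset C′ are then constructed as described next.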
Then for each k ∈ C , we sample n b target images from D t to construct a set of data |k ∈ C }. Correspondingly, we sample n b features from surrogate source distributions for each k ∈ C to construct the source mini-batch {{f sur j,k ∼ N sur k } n b j=1 |k ∈ C }. Therefore, for any two class k 1 , k 2 ∈ C , a class-conditioned version of MMD that measures discrepancy between surrogate source distribution and target distribution is defined as Algorithm 1: SFDA training process within one epoch Input: unlabeled target images {x t i } nt i=1 , label set C, pretrained feature extractor F, frozen classifier G, confidence threshold τ , number of iterations t. 1 Initialize the cluster center with source anchors w G k learned by G for each k ∈ C = {1, 2, · · · , K}; 2 Apply spherical k-means on target features and construct confident pseudo-labeled set D t with τ ; 3 Perform SDE to derive K surrogate source distributions N sur k (μ s k ,Σ s k ) according to Eq. (5); Compute CDD loss according to Eq. (7); 8 Do backward and update weights of F. 9 end where k(·, ·) is kernel functions that embeds feature representations in Reproducing Kernel Hilbert Space (RKHS). Utilizing the data in both source batch and target batch, the CDD loss is calculated as in which the first term represents intra-class domain discrepancy to be diminished and the second represents interclass domain discrepancy to be enlarged. By explicitly treating data from different classes as negative sample pairs, CDD loss facilitates intra-class compactness and inter-class separability, which is beneficial to learning discriminative target features. Algorithm 1 shows the overall training process of our proposed SFDA method within one epoch. As the adaptation proceeds, target features are driven closer and closer to approach anchors and the statistics of target features will change constantly. Therefore, we perform both pseudolabeling method in Sec. 3.2 and SDE method in Sec. 3.3 at the beginning of every epoch to dynamically re-estimate the surrogate source distribution N sur k . Experiments In this section, we first validate the effectiveness of the proposed SFDA-DE method based on three benchmarks. Then we conduct extensive experiments on hyper-parameter selection, ablation study, visualization, etc. Pretraining on source domain. We use momentum SGD optimizer with exponential decay learning rate schedule η = η 0 (1 + α · i) −β , where η 0 is the initial learning rate and i is the training steps. Weight decay is set to 5e-4 and momentum is set to 0.9. For Office-31 and Office-Home dataset, we employ ResNet-50 [12] as our feature extractor F and a single fully-connected layer as classifier G. We set η 0 = 0.001, α = 0.001 and β = 0.75. The model is trained for 50 epochs on all source domains. For VisDA-2017 dataset, we employ ResNet-101 as the feature extractor F and train it for 500 steps on source domain. We set η 0 = 0.001, α = 0.0005 and β = 2.25. For all 3 datasets, the learning rate of G is set to be 10 times bigger than F and the batch size is set to 64 for all domains. The source dataset is randomly split into a training set accounting for 90% and a validation set accounting for 10% in order to guarantee the model converges. SFDA implementation detail. We follow the standard SFDA setups adopted by [25,52]. We use all weights of the pretrained model as initialization and freeze all anchors w G k in classifier G during SFDA stage. 
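The contrastive adaptation loss referred to in Algorithm 1 (Eq. (7)) can be sketched as follows. The multi-bandwidth RBF kernel is a common choice for MMD-based losses but is an assumption here; the exact kernel and normalization follow Kang et al. [17] in the actual method, and this snippet is only meant to make the intra-/inter-class structure explicit.

```python
import torch

def rbf_kernel(x, y, sigmas=(1.0, 2.0, 4.0)):
    d2 = torch.cdist(x, y) ** 2                   # pairwise squared Euclidean distances
    return sum(torch.exp(-d2 / (2.0 * s * s)) for s in sigmas)

def mmd2(x, y):
    """Biased estimator of the squared MMD between two feature batches."""
    return rbf_kernel(x, x).mean() + rbf_kernel(y, y).mean() - 2 * rbf_kernel(x, y).mean()

def cdd_loss(src_by_class, tgt_by_class):
    """Contrastive domain discrepancy over the sampled class subset C':
    intra-class discrepancy (k1 == k2) is minimized, while inter-class
    discrepancy (k1 != k2) enters with a minus sign and is therefore enlarged."""
    classes = list(src_by_class)
    intra = sum(mmd2(src_by_class[k], tgt_by_class[k]) for k in classes) / len(classes)
    pairs = [(k1, k2) for k1 in classes for k2 in classes if k1 != k2]
    inter = sum(mmd2(src_by_class[k1], tgt_by_class[k2]) for k1, k2 in pairs) / max(len(pairs), 1)
    return intra - inter
```

During the SFDA stage this loss is minimized with respect to the feature extractor F only, since the classifier G and hence the anchors are kept frozen; the per-dataset optimization settings are listed next.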
For Office31 and Office-Home dataset, we use the same optimization setting and learning rate schedule as the aforementioned pretraining stage. We empirically set τ = 0.6, γ = 1, |C | = 12 and n b = 3. For VisDA-2017 dataset, we use the same optimization setting and learning rate schedule as the pretraining stage but set the initial learning rate η 0 to be 1e-4 for all convolutional layers and 1e-3 for all BatchNorm layers. We empirically set τ = 0.078, γ = 2, |C | = 6 and n b = 10. Selection of hyper-parameters will be studied in Sec. 4.3. All results reported below are the average of 3 independent runs and we manually set the random seed to guarantee reproducibility. All experiments are conducted with PyTorch and MindSpore [14] on NVIDIA 1080Ti GPUs. Experimental results Tabs. 1 to 3 demonstrate the experimental results of several recent SFDA methods and traditional DA methods. Best results among SFDA methods are shown in bold font. We achieve state-of-the-art performance on Office-Home (72.9%) and VisDA-2017 (86.5%). As the scale of dataset gets larger, our method performs increasingly better. Tab. 1 shows the adaptation results on Office-31 dataset. Our method has the same best result (90.1%) as A 2 Net [52] and is comparable to some traditional domain adaptation algorithms which require source data. Unlike Office-Home and VisDA-2017, Office-31 is a small-scale dataset whose image number of each class is around 40 on average. Therefore, it is hard for our method to accurately estimate the source distributions from statistics of target data. Yet we still achieve the best results on average and on 2 of 6 tasks. Tab. 2 shows the results on Office-Home benchmark, in which our method achieves state-of-the-art average performance (72.9%) and performs the best on 7 of 12 transfer tasks among all the SFDA methods. Our method is even superior to some of the traditional domain adaptation methods which require source data. This dataset is larger in scale than Office-31 and thus is able to provide adequate target data to estimate source distributions more accurately. Tab. 3 shows the per-class and average accuracy on VisDA-2017 benchmark. Our method achieves state-of-theart performance among all SFDA methods and is higher than the second best A 2 Net [52] by a margin of 1.1%. Despite the huge domain gap between the source domain (Synthetic) and the target (Real), our method still achieves 86.5% average accuracy due to the vast number of target images (∼55K) for estimating the source distributions, which is the key to our success. Statistics derived from sufficient of data can better reflect the real distribution. Ablation studies Confidence threshold τ . The precision of the estimation of class-conditioned source distributions relies on the correctness of target pseudo-labels included by D t in Eq. (2). Figs. 3a and 3b shows the sensitivity analysis on model per- formance, pseudo-label accuracy of D t and the number of data included by D t w.r.t. confidence threshold τ . Specifically, a small threshold τ would reject more incorrectly labeled data but the total number of data in D t would be reduced. Conversely, a large threshold will enlarge the scale of D t but introduce more false labels. Therefore, τ needs to be selected carefully. As shown in Fig. 3a, for Ar→Pr task in Office-Home dataset, despite the drop in pseudo-label accuracy caused by increasing τ , the performance of our method keeps improving in synchronization with the number of target data included by D t . 
We conjecture that having sufficient pseudo-labeled data is more important than the accuracy of pseudo-labels to the estimation of source distributions for small-scale dataset. So we set τ = 0.6 for both Office-31 and Office-Home to allow more pseudo-labels. However for VisDA-2017 dataset, as shown in Fig. 3b, the best performance is obtained when τ = 0.078. Since VisDA is a large-scale dataset, a small τ can guarantee both the accuracy of pseudo-labels and the number of selected confident data simultaneously. Covariance coefficient γ. Figs. 3c and 3d shows the experimental results with different γ ∈ {0.5, 1, 1.5, 2, 2.5, 3} on Ar→Pr task of Office-Home dataset and on VisDA-2017 dataset, respectively. Larger covariance matrix leads to more flexible feature activations. Thus the value of γ in Eq. (4) controls the semantic diversity of features sampled from the surrogate source distribution. By expanding the sampling range, features far from anchors can be sampled. Fig. 3d shows that the performance on VisDA-2017 is improved by a margin of 0.2% when γ = 2. However, an inappropriate value of γ may lead to a sub-optimal solution. Estimation of the source mean. To verify the effectiveness of the estimatorμ s k in Eq. (3), we use several variants to replace our estimation. If we directly treat the intra-class feature mean derived from confident target data in D t as the mean of surrogate source distribution the performance becomes even worse. This suggests that information of source anchors and information of target features complement each other. In addition, update once in Tab. 4 means we only update bothμ s k andΣ s k for once at the very beginning of the whole SFDA training process, which leads to a sub-optimal result. Robustness of pseudo-labeling strategy. Obtaining robust pseudo-labels is important to the following SDE process, since high-quality pseudo-labels can provide accurate estimation for the mean and covariance of each distribution. If pseudo-labels are corrupted, the estimated distribution would be diverged from the real distribution, which makes the sampled surrogate features unable to represent the real source features of a certain class. To validate the robustness of our anchor-based spherical k-means clustering pseudo-labeling method, we conduct experiments and show the results in Tab. 5. Instead, we use a maximum probability-based strategy to assign pseudo-labels: y t i = arg max k σ k (G(F(x t i ))), where σ is the K-way softmax function that generate probabilities for each class. We also set a threshold τ to select confident samples whose maximum probabilities are greater than τ to construct the confident target dataset D t . Multiple values of τ are tested to guarantee a fair comparison. Tab. 5 shows that our anchor-based clustering pseudo-labeling method outperforms maximum probability-based method on both Office-Home dataset and VisDA-2017 dataset. Visualization and empirical analysis We visualize the experimental results on VisDA-2017 dataset and analyse the proposed SFDA-DE method. Training curves. Fig. 4c shows the training curves of CDD loss and model accuracy on target domain during source-free adaptation process. Our method converges stably and shows superior performance from an early stage. Domain shift. We utilize t-SNE visualization to demonstrate the distributions of feature representations in both source and target domains. As shown in Fig. 
4a, a large amount of target data (represented by orange dots) disperses in the feature space before adaptation due to severe domain shift problem while source data (represented by blue dots) gathers around the anchors and forms intra-class clusters. Visualization of surrogate features. Red dots in Fig. 4b represent the surrogate features derived from SDE with covariance multiplier γ = 2, which enlarges the sampling range. These surrogates are distributed around corresponding anchors to simulate source features of each class. Effectiveness of our method. After SFDA training, as shown in Fig. 4b, target features are pulled towards corresponding anchors and merged into the surrogate feature clusters. Besides, low density area can be clearly observed in the feature space after adaptation. This suggests our SFDA-DE method can learn discriminative features for unlabeled target domain without using source data. Calibration of distribution mean. In SDE, anchors are utilized to calibrate the mean of estimated source distribution according to Eq. (4), since target features drift away from source features at the early stage of training. Therefore, target class centersf t k = i f t i,k x t i ∈D t 1(ŷ t i =k) cannot serve as a good estimator of µ s k . As shown in Fig. 5a, the distance between target feature centers and source anchors is diminished as training proceeds. Target features gradually approach the corresponding anchors of the same class, which means the calibration ofμ s k is effective. Estimation bias of covariance. Fig. 5b visualizes the classwise estimation bias of distribution covarianceΣ s k =Σ t k over the ground truth source covariance Σ s k w.r.t. training steps. The gap in between is mitigated in the early stage of training and is kept at a low level, which verifies our assumption made in Sec. 3.3. Thus, class-conditioned source covariance can be approximated via high-quality pseudolabeled target data. Conclusions In this paper, we propose a novel framework named SFDA-DE to address source-free domain adaptation problem via estimating feature distributions of source domain in the absence of source data. We utilize domain knowledge preserved by source anchors to obtain high-quality pseudolabels for target data to achieve our goal. Sufficient experiments validate the effectiveness and superiority of our method against other strong SFDA baselines.
Analysis of Long Lived Particle Decays with the MATHUSLA Detector The MATHUSLA detector is a simple large-volume tracking detector to be located on the surface above one of the general-purpose experiments at the Large Hadron Collider. This detector was proposed in [1] to detect exotic, neutral, long-lived particles that might be produced in high-energy proton-proton collisions. In this paper, we consider the use of the limited information that MATHUSLA would provide on the decay products of the long-lived particle. For the case in which the long-lived particle is pair-produced in Higgs boson decays, we show that it is possible to measure the mass of this particle and determine the dominant decay mode with less than 100 observed events. We discuss the ability of MATHUSLA to distinguish the production mode of the long-lived particle and to determine its mass and spin in more general cases. Introduction Despite the successes of the Standard Model of particle physics, there are strong motivations to believe in new fundamental interactions that lie outside this model. The Standard Model does not contain a particle that could explain the dark matter of the universe. Its theory of the Higgs boson and its symmetry-breaking potential is completely ad hoc. Many models have been proposed to generalize the Standard Model, but there is no compelling experimental evidence supporting any of these models. It is therefore important to propose additional windows through which to search for these new interactions. Searches for LLPs have typically involved the study of low-energy reactions, for example, using fixed target experiments with electron, proton, or neutrino beams. One strategy has been to position a detector behind a beam dump, where it can observe decays of neutral particles with weak interaction cross sections on matter. However, this approach to the search of LLPs is limited in mass scale. It is also limited because it requires the LLP to have large enough coupling to quarks and leptons. The Large Hadron Collider (LHC) offers new mechanisms for the production of LLPs that are available only in high-energy collisions. These include production through W boson fusion, through the decay of heavy SM particles like the Higgs or Z, through the decay of new heavy parent particles such as squarks or gluinos, and through new, heavy scalar and vector bosons produced in the s-channel in quarkantiquark or gluon-gluon collisions. The most interesting and most highly motivated of these mechanisms is the exotic decay of the 125 GeV Higgs boson to a pair of LLPs [39][40][41][42]. However, though the LHC might have large production rates for LLPs, the ability of the LHC detectors to observe these particles is limited. As large as ATLAS and CMS are, the size of these detectors is a constraint. Furthermore, LLP events suffer from significant backgrounds, especially if the LLPs decay to hadrons [43]. The MATHUSLA detector was proposed in [1] to address this problem [44]. MATHUSLA is a large-volume detector on the surface above an LHC experiment. Essentially, it is an empty barn that provides a decay volume for LLPs, and, near its roof, is equipped with charged particle tracking to detect an LLP decay. It is shown in [1] that the limited instrumentation proposed allows one to reject cosmic-ray and other backgrounds with very high confidence. This dedicated detector would increase the sensitivity to LLPs over the capabilities of the current central detectors by several orders of magnitude. 
The comparison to ATLAS is shown in Fig. 1. Because of its large size and because -as yet -there is no evidence for LLPs, the MATHUSLA detector must be built from relatively inexpensive components. The original concept for MATHUSLA in [1] imagined an empty building offering 20 m of decay space and, above this, ∼ 5 layers of Resistive Plate Chambers (RPCs), along with some plastic scintillator for additional timing and veto information. This paper offered an explicit physics case for MATHUSLA, with estimates of its performance in the search for LLPs produced in exotic Higgs decays as a well-motivated benchmark model [45] From this description, it is not obvious that MATHUSLA has any capability beyond the discovery of LLP events via the detection of decay vertices originating in its decay volume. However, we find that, by applying some simple arguments, it is possible to use the limited information provided by MATHUSLA to learn a surprising amount. In this paper, we analyze the performance of MATHUSLA for the most interesting and also most constrained situation-the decay of the Higgs boson to a pair of LLPs, such that the LLP has a dominant 2-body decay mode. In Section 2, we briefly review the design of MATHUSLA. In Section 3, we show that, under the assumption of this production mode, it is possible to measure the mass of the LLP and to identify its most important decay modes, using only the information provided by MATHUSLA, with as few as 30 − 100 observed decays. In Section 4, we show how the production by Higgs decay may be distinguished from other hypotheses, and we discuss the generalization of this analysis to other LLP production modes. Design of the MATHUSLA detector For concreteness, we define a simple design for the MATHUSLA detector that we will use in our study. This closely follows the concept originally presented in [1], with one suggested modification for additional diagnostic capability. The detector geometry relative to the LHC interaction point is shown in Fig. 2. MATHUSLA is an empty building of area 40,000 m 2 and height ∼ 25 m. The floor, ceiling and walls contain a layer of scintillator to provide a veto for charged particles emerging from below. The veto is important in dealing with backgrounds, which, as is shown in [1], can be reduced to negligible levels. However, it plays no role in the analysis we present here. At a height of 20 m, we place the first of 5 RPC tracking layers. These layers are spaced about 1 m apart, with the last layer just below the roof. The RPCs are arranged to record charged particle hits with a pixel size of 1 cm 2 and a time resolution of 1 ns. This allows the angle of charged tracks to the tracking planes to be determined with a precision of about 2 mrad. On the right side of Fig. 2, we show schematically the pattern of charged tracks Without additional material, electrons and muons may not be distinguishable, while photons are invisible. We therefore suggest a possible modification to the original design of [1] by inserting an un-instrumented steel sheet of several cm thickness between the 1st and 2nd tracking layer. This provides 1-2 X 0 to convert photons and electrons, producing visible electromagnetic showers. The thickness of the sheet for hadronic interactions is about 0.1 λ i . This minimal detector does not allow measurement of the energy or momentum for any particles, but it does allow the various particle types to be distinguished qualitatively. 
The details of this possible modification, including the exact thickness, type, and location of material, as well as its viability in terms of cost and effect on tracking performance, are left for future investigation. In our simulations, we assume the following minimum detection thresholds on particle three-momenta to ensure the particles leave hits in all tracking layers: pions: 200 MeV, charged kaons: 600 MeV, muons: 200 MeV, electrons: 1 GeV, protons: 600 MeV, photons: 200 MeV. We ignore charged pion and kaon decays within the detector volume. If the Higgs boson decays to a pair of LLPs with a branching ratio of 1%, the collisions recorded by ATLAS or CMS will produce 1,500,000 pairs of LLPs with the 3 ab⁻¹ of luminosity projected for the High-Luminosity LHC. Assuming the best case of a ∼100 m lifetime, we expect a sample of about 10,000 LLP decays within the detector acceptance. We will see below that it is possible to draw interesting conclusions from samples with as few as 100 LLP events.
Diagnosing LLPs produced in exotic Higgs decays
In this section, we assume that LLPs are produced in pairs as the decay products of the Higgs boson. Our objective is to measure the LLP mass and determine the dominant decay modes, using only the geometrical charged-particle trajectories that can be measured by MATHUSLA. We postpone to the next section the problem of distinguishing this production mechanism from other possibilities. To open up the largest possible number of decay final states, we study LLP masses above the bb threshold, m_X ∈ (15, 55) GeV. The method is easily generalized to lower masses, though spatial track resolution may become more important for very light LLPs. We assume possible decays X → ee, µµ, ττ, γγ or jj. We write τ_h or τ_ℓ to refer to an explicitly hadronic or leptonic τ. In the simulations described below, we model Higgs production via gluon fusion at the HL-LHC, with subsequent decay to two LLPs XX and decay of one X in the MATHUSLA detector, by using the Hidden Abelian Higgs model [40] in MadGraph 5 [47], matched up to one extra jet, and showered in Pythia 8.162 [48,49]. In this model, the LLP X is modeled as a spin-0 particle, except in the case of 'gauge-ordered' 2-jet decay, described below, where it is spin-1. Only LLPs with an angle to the beam axis in the range [0.3, 0.8], corresponding approximately to the angular coverage of MATHUSLA, are analyzed [50].
Qualitative analysis
We have already illustrated in the previous section and in Fig. 2 that the various possibilities for the 2-body decay of the X can be distinguished qualitatively from the pattern of tracks in the MATHUSLA detector. After requiring at least two detected charged tracks per displaced vertex, we can impose the following criteria to sort the events:
• two tracks: µµ
• two tracks, which shower after the first layer: ee
• two showers but no hits in the first layer: γγ
• between 3 and 6 tracks: at least partially hadronic ττ (τ_h τ_h or τ_h τ_ℓ)
• more than 6 tracks: jj
Without the material layer, photons are undetectable and electrons look like muons, but all of our other conclusions are unaffected. Decays to τ⁺τ⁻ are recognized by the characteristic 1-prong against 3-prong topology that appears in 26% of τ⁺τ⁻ decays. The identification of subdominant decay modes and the measurement of their branching ratios depends strongly on what the dominant decay mode might be. In some cases, there is an obvious analysis using the criteria above.
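A literal translation of these sorting criteria into a decision function might look as follows; the inputs (number of charged tracks, number of electromagnetic showers, and whether the first tracking layer recorded hits) are the qualitative observables discussed above, and the snippet is only a sketch of the logic with the material layer installed, not the actual analysis code.

```python
def classify_decay(n_tracks, n_showers, hits_in_first_layer):
    """Sort a displaced vertex into a candidate LLP decay mode using the
    qualitative criteria listed in the text (material layer installed)."""
    if n_showers == 2 and not hits_in_first_layer:
        return "gamma gamma"                      # two showers, nothing in first layer
    if n_tracks == 2 and n_showers == 2:
        return "e e"                              # two tracks that shower after layer 1
    if n_tracks == 2:
        return "mu mu"
    if 3 <= n_tracks <= 6:
        return "tau tau (at least partially hadronic)"
    if n_tracks > 6:
        return "j j"
    return "unclassified"
```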
If the dominant decay mode is bb, this generates backgrounds to other decay models that must be studied with care. The full analysis of that problem is beyond the scope of this paper. The category of X decays to jj contains a number of more specific possibilities. Three simple benchmark scenarios are: (1) decay to gluon jets, (2) "gauge-ordered" decay to qq jets with democratic flavor content, as would be generated by the decay of a dark photon [40], and (3) "Yukawa-ordered" decay to jets which are dominantly bb, as would be generated by the decay of a dark singlet scalar that mixes with the SM Higgs. These possibilities cannot be distinguished on an event-by-event basis, but we will show below that they can be distinguished in samples as small as 100 events. The left-hand figure shows the angles θ 1 , θ 2 ; the right-hand figure illustrates the boost back to the LLP rest frame in which the 2 products are back-to-back. Note thatp i orp i (β X ) denote momentum vectors normalized to unit length in each frame. For convenience we work in the coordinate system wherep X is along the z-axis and the decay products are in the (x, z) plane (far right). Measurement of the LLP mass Now we discuss the determination of the LLP mass. It is crucial that the decay vertex can be precisely located within the MATHUSLA decay volume. Since the LLP X originates from the nearby LHC collision region, the vector from the point of origin to the decay vertex is very well known. This allows the velocity β X of the LLP to be found from the geometry of the decay. Consider first a decay to 2 final-state charged particles, such as ee or µµ. Let θ 1 and θ 2 be the angles of the two decay products with respect to the X direction, as shown in Fig. 3. The 4-vectors of the two products then have the form with θ 1 and θ 2 both positive quantities and E 1 β 1 sin θ 1 = E 2 β 2 sin θ 2 by momentum balance. Since all components are known up to a an overall prefactor, we can boost both p i back along the direction of p X until they are back-to-back, recovering the LLP rest frame. This yields Since the distance of the LLP decay to the LHC interaction point is much greater than the distance to the tracking planes, the precision of the measured angles θ 1 , θ 2 is simply the precision of the measured angles between the tracks and the trackers, about 0.2% for θ i ∼ O(1) and approximately independent of the uncertainty on the displaced vertex location [51]. For the two-body decays we consider, the products will be relativistic, with β i close to 1. This makes the error induced by assuming that β i = 1 negligible. In any case, the timing of the MATHUSLA detector tracking elements allows each β i to be measured to 5% or better. For h → XX, the transverse energy of the X with respect to the LHC beam direction should be roughly m h /2, so the expected mean velocity of the produced X particles decreases as the mass increases. In Fig. 4, we show the expected distribution of b = p X /m X from our simulation for three values of the X mass, illustrating this effect. From the figure, we estimate that a sample of 100 reconstructed events will give the X mass with a statistical error of about 1 GeV. The systematic error on this measurement, coming from the uncertainties in the measurement of lepton directions, is at the part-per-mil level. The precise knowledge of the LLP speed in each event gives an error on the production time of a few ns, making it possible to identify the LHC bunch crossing in which the LLP was produced. 
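For reference, the boost reconstruction summarized by "This yields" above can be written out explicitly. Under the stated approximation that the two decay products are relativistic (β_i → 1), requiring that the back-boosted momenta be anti-parallel gives a closed-form expression for the LLP velocity in terms of the two opening angles of Fig. 3; this is a short derivation under that assumption, not a quotation of the original equation:

```latex
% Lab-frame momenta of the two (approximately massless) decay products,
% with \theta_{1,2} > 0 measured from the reconstructed LLP direction:
p_1 = E_1\,(\sin\theta_1,\,0,\,\cos\theta_1), \qquad
p_2 = E_2\,(-\sin\theta_2,\,0,\,\cos\theta_2), \qquad
E_1\sin\theta_1 = E_2\sin\theta_2 .
% Boosting back along \hat{p}_X with velocity \beta_X and demanding that the
% two momenta be back-to-back,
\frac{\cos\theta_1-\beta_X}{\sin\theta_1}
  = -\,\frac{\cos\theta_2-\beta_X}{\sin\theta_2}
\quad\Longrightarrow\quad
\beta_X = \frac{\sin(\theta_1+\theta_2)}{\sin\theta_1+\sin\theta_2}\,,
\qquad b \equiv \beta_X\gamma_X = \frac{p_X}{m_X}.
```

Both angles enter only through track directions, which is why the quoted 2 mrad tracking precision translates into a part-per-mil systematic; and it is this event-by-event determination of β_X that fixes the production time to a few ns and hence the originating bunch crossing.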
This means that the event properties measured in the central detector can potentially be used to constrain hypotheses on the LLP production process. This simple method of determining β X fails for hadronic LLP decays, since jet axes cannot be reliably reconstructed from charged particle directions alone. Fortunately, we can achieve almost identical results using an only slightly more sophisticated method that we outline in Section 3.4. We now discuss this mass measurement more quantitatively for the three cases of X → µµ, X → jj and X → τ τ . LLP decay to µµ For µµ decays, Eq. (2) gives the reconstructed LLP boost as long as both muons hit the roof of the detector. This is the case in about 95% (50%) of decays in MATH-USLA for m X = 15 (55) GeV. This geometrical effect is the dominant factor in the efficiency for an event to be reconstructed by the MATHUSLA detector. To determine the expected precision of a mass measurement with N reconstructed LLP decays, we conducted 1000 pseudoexperiments and made a maximum likelihood fit of the measured boost distribution to template-functions obtained from the same boost distributions in the maximum-statistics limit. For a given pseudoexperiment, we define N obs to be the number of decays in the MATHUSLA detector volume and N reconstructed to be the number of decays in which the tracks are oriented such that mass can be computed from the available information. The reconstruction efficiency = N reconstructed /N obs varies from 0.95 for m X = 15 GeV to 0.55 for m X = 55 GeV. The distributions of the reconstructed boosts are very close to the truth-level distributions shown in Fig. 4. We define the expected mass precision, ∆m/m , to be the average spread of best-fit mass values amongst the 1000 pseudoexperiments. In Fig. 5, we show the dependence of this quantity on the total number of LLPs decaying in the MATHUSLA detector volume. A 10% mass measurement requires only 20-30 observed decays. For few reconstructed events, the precision of the mass measurement is better for heavier LLPs, while for many reconstructed events, it is better for lighter LLPs. This is not an artifact of the mass-dependent in Fig. 5, but is likely due to the fact that under the assumptions of our LLP production mode in Higgs decays, the LLP mass is bounded from above. Therefore, for very few observed events, measuring a handful of very low boosts has to be due to LLPs near threshold. As the number of reconstructed events becomes large, these parameter space "edge effects" become less important, and the slightly narrower boost distribution of light LLPs makes their mass measurement more precise. LLP decay to jets In our mass range of interest, LLP decays to jets produce events with 10 − 20 charged tracks in the detector. This is illustrated in Fig. 6 for m X = 15 and 55 GeV and our three benchmark jet flavor compositions. This high multiplicity is a boon for several reasons. In terms of background rejection, a displaced vertex with full timing information and this many tracks is supremely difficult to fake by cosmic rays. In terms of signal analysis, the multiplicity distribution contains information about the jet flavor composition. Furthermore, the detection efficiency for LLPs decaying to jets (defined as the fraction of decays with at least 6 charged tracks hitting the roof) is very close to 100%. On the other hand, the high multiplicity means it is not so simple to determine the jet directions that must be input into Eq. (2). 
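The pseudoexperiment procedure described above for the µµ channel amounts to a binned maximum-likelihood template fit of the measured boost distribution. A minimal sketch is given below; the Gaussian "templates" are purely illustrative placeholders (in the real analysis they are the high-statistics boost distributions from simulation), and none of the numbers are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_mass(sample, templates, bin_edges):
    """Binned maximum-likelihood template fit: return the mass hypothesis whose
    template maximizes the multinomial log-likelihood of the observed sample."""
    counts, _ = np.histogram(sample, bins=bin_edges)
    best_mass, best_ll = None, -np.inf
    for mass, probs in templates.items():
        ll = np.sum(counts * np.log(probs + 1e-12))
        if ll > best_ll:
            best_mass, best_ll = mass, ll
    return best_mass

# Illustrative templates for log10(boost), one per mass hypothesis.
bin_edges = np.linspace(-0.5, 1.5, 41)
centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
def template(mu, sigma=0.2):
    p = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    return p / p.sum()
templates = {15: template(0.85), 35: template(0.55), 55: template(0.35)}

# Pseudoexperiments: draw N reconstructed boosts from the m_X = 35 GeV template
# and record how often the fit returns the correct mass.
N = 100
fits = [fit_mass(rng.choice(centers, size=N, p=templates[35]), templates, bin_edges)
        for _ in range(1000)]
print(np.mean(np.array(fits) == 35))
```

For hadronic decays the same fit can be used, but the inputs to Eq. (2) — the jet directions — are not directly available from charged-track angles alone, as discussed next.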
One might, for example, try to extract two jet axesp a ,p b by maximizing the quantity summed over charged track momentum unit vectorsp i in the event. The sum tends to be dominated by tracks carrying low fractions of the total jet momentum. In the absence of energy and momentum measurements, the jet axes cannot be reliably determined by this or similar methods. Fortunately, we can exploit the high multiplicity for a different kind of approximate boost reconstruction. Naively, if the LLP decays to high multiplicity, the distribution of tracks in its rest frame should be spherically symmetric. Applying this assumption to individual events, we can estimate the LLP boost event-by-event by solving for β X in the constraintp The resulting sphericity-based boost distribution, using only upward going tracks and assuming all final states to be ultra-relativistic in the lab frame, is shown as the dotted distributions in Fig. 4. In our simulation of LLP decays to jj, this method is surprisingly powerful. It gives a boost distribution very close to the original boost distribution from Monte Carlo truth. Even more importantly, its discriminating power for LLP mass is almost unaffected by the deviation between these distributions. There are several reasons why the sphericity-based method might give an accurate result. If the parent of the LLP has spin 0 and CP violation can be ignored in its decays, the distribution of tracks in the LLP rest frame will be front-to-back symmetric on average. The same result applies if the parent has spin but is produced with zero longitudinal polarization along the LLP direction. For the decay to jj, though, a more important reason for the accuracy of the sphericity-based method is that the sum in (4) is dominated by hadrons that are soft in the rest frame of the LLP. The momentum distribution of these soft particles depends only on the color flow and is independent of any LLP polarization. Their high multiplicity ensures that the shift of any one sphericity-based boost measurement is much smaller than the width of the overall boost-distribution, allowing the LLP mass to be accurately extracted. It should be noted that these same soft hadrons are also partially responsible for the noticeable positive bias of the sphericity-based boost distribution compared to the Monte Carlo truth. That effect deserves a dedicated discussion, which we present in the Appendix. We can use the sphericity-based method to measure the mass of a LLP decaying as X → jj without having to determine the jet flavor content first. The event-by-event precision of the sphericity-based β X measurement is sufficient to determine the LHC bunch crossing in which the LLP was created to within about 2 (6) bunch crossings for m X = 15 (55) GeV. We can estimate the required number of observed events N obs for a given mass measurement precision in a fashion identical to that used for X → µµ above. The only difference is the use of sphericity-based boost distributions as the templates. The efficiency for reconstructing these events is close to 1. The required number of observed events is shown in Fig. 5 (right). The result is very similar to that for X → µµ, with only about 20 -30 events required for a statistical error on the mass measurement of 10%. The mass measurement also has a systematic error from hadronization uncertainties in modeling the final state of the LLP decay. 
Varying Pythia tunes, we find this to be less than 1%, but a full analysis should also investigate the effect of using other generators such as Herwig [52,53] or Sherpa [54]. The different multiplicity of charged final states can be exploited to determine the flavor content of the X → jj final state. We make use here of the fact that a gluon jet has a higher multiplicity than a b quark jet, which in turn has higher multiplicity than a light quark jet. Although the differences in the multiplicity distributions are not large enough to identify the jet flavor on an event-by-event basis, this becomes an effective discriminator when applied to large enough samples. The effect is robust even taking in account the discrepancies in the predictions of different hadronization schemes [55]. A straightforward generalization of the mass measurement method to a 2D likelihood fit in boost and multiplicity reveals that the different decay modes can be reliably distinguished with about 100 observed LLP decays. The charged track distribution contains even more information, but it is likely more dependent on the hadronization model and assumed detector capabilities. For example, the minimum charged particle velocity in each event is significantly higher for gauge-ordered jets than for Yukawa-ordered or gluon jets for m X = 15 GeV, while for m X = 55 GeV the gluon jets have higher fraction of slower particles than both types of quark jets. The angular correlations also contain information about the LLP spin. We have not exploited this property in our analysis, but with further study it would likely improve the diagnosis of hadronically decaying LLPs. Finally, we point out that even though jet axes are not useful for measuring LLP boost, a rough determination of the LLP decay plane is possible by minimizing for choice of plane normal vector a. The resulting decay plane corresponds to the truth-level expectation up to a deviation angle ∆θ ∼ 0.2 − 0.5. This is crude, but it could be useful for diagnosing significant invisible components in LLP decays. LLP decay to τ τ For X → τ τ events, each τ decay gives 1 track or 3 well-collimated tracks. For the events with two 1-prong decays, we use the direction of the observed track as a proxy for the τ direction. For events with 3-prong decays, maximization of the quantity V 2 in (3) provides good approximations to the two τ directions. Using the two τ vectors estimated in this way, we apply the method of Section 3.2. The fraction of events for which at least two charged particles hit the roof of the detector is about 90% (60%) for m X = 15 (55) GeV. We note that the sphericity-based method described in Section 3.3 gives slightly better results for the case of a spin 0 LLP; however, it is less robust with respect to the effects of possible LLP polarization. The event-by-event precision of the β X measurement is sufficient to determine the LHC bunch crossing in which the LLP was created to within about 2 (4) bunch crossings for m X = 15 (55) GeV. The required number of observed events for a given precision of mass measurement is very similar to the cases already presented in Fig. 5. Determining the LLP Production Mode The analysis of the previous section made explicit use of the assumption that the LLP is produced in pairs in Higgs decay. However, with enough events and a library of possible production mode hypotheses as templates, it may be possible that the LLP production mode, decay mode and mass can all be independently determined in a global fit. In Fig. 
7, we compare the boost distributions for a 35 GeV LLP produced through the following mechanisms at the 14 TeV LHC: Z → XX, h → XX, vector boson fusion through a 200 GeV mediator in the t-channel, gluino decay, qq → XX through a vector contact interaction, and vector boson fusion through through a W W → XX contact interaction. For clarity of presentation, we have generated an equal number of events for each sample. These six cases give six shapes with different, distinguishable, features. The event-by-event boost measurements also allow the LHC bunch crossing in which the LLP was produced to be narrowed down either uniquely or to one of a few choices. Given the low rate per bunch crossing of high momentum transfer events, it is likely that, if the production event is triggered on and recorded, it can be identified and studied. Even if only a small fraction of events were recorded during these bunch crossings this still might put interesting limits on the energy spectrum of associated objects, distinguishing, for example, the hypotheses of Higgs or Z boson origin from hypotheses involving W fusion or contact interactions. Conclusions It is a real possibility that the Higgs boson decays to long-lived particles that couple very weakly to all other particles of the Standard Model, and that would be invisible to LHC detectors. In [1], a relatively simple large-volume detector was proposed to search for such particles. In this paper, we have explained that this simple detector nevertheless has the power to provide qualitative and even quantitative information about the nature of these long-lived particles. This could well be our first source of information on a new sector of particles that coexist with those of the Standard Model and open a new dimension into the fundamental interactions. A Bias and spread of the sphericity-based boost measurement The sphericity-based boost measurement discussed in Section 3.4 and shown in Fig. 4 has three noticeable features: 1. The width and shape of the sphericity-based distributions are about the same as the truth-level boost distributions. 2. For lower masses, giving a high-velocity LLP, log 10 b 0.5, there is a positive bias of log 10 b measured − log 10 b truth ∼ 0.1 which is approximately massindependent. 3. For higher masses, giving a low-velocity LLP, the bias is again positive and significantly larger. These points are important in preserving the sensitivity of the sphericity-based boost measurement to the LLP mass. To investigate these effects, we found it useful to consider a toy model of LLP decay in which the charged final states are distributed isotropically in the LLP rest frame (without respecting momentum conservation, due to the undetected neutral hadrons). The charged particle multiplicity is sampled from a Poisson distribution centered on N ch = 10. It is instructive to consider two possibilities for the charged final state momenta: either all light-like, or with mass m = m π and energy distributed according to a thermal spectrum, P (E) ∝ exp[−E/T ] with T = 140 MeV as a crude model of soft pion emission [56]. We used this simple model to generate LLP decay "events", boosted them to the lab frame by assuming a fixed LLP boost b LLP , and reconstructed the sphericity-based boosts. For simplicity, we neglected the horizontal off-set of MATHUSLA from the LLP production point in this toy analysis. 
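The toy study just described can be reproduced with a short Monte Carlo. The sketch below generates isotropic charged tracks in the LLP rest frame (Poisson multiplicity with mean 10; either light-like momenta or pion-mass momenta with a thermal spectrum, T = 140 MeV), boosts them to the lab frame, and recovers a sphericity-based boost. The precise form of the sphericity constraint is not reproduced in this excerpt; here it is implemented as the requirement that the back-boosted track unit vectors sum to zero along the flight axis, which is one natural reading of the text, and the boost is taken along the z-axis with the horizontal offset of the detector neglected, as stated.

```python
import numpy as np

rng = np.random.default_rng(0)
M_PI, T = 0.140, 0.140                   # GeV: pion mass and thermal temperature

def generate_rest_frame_tracks(n_mean=10, lightlike=True):
    """Isotropic charged 'tracks' in the LLP rest frame; momentum conservation is
    deliberately not enforced, mimicking the undetected neutral hadrons."""
    n = max(rng.poisson(n_mean), 2)
    cos_t = rng.uniform(-1, 1, n)
    phi = rng.uniform(0, 2 * np.pi, n)
    sin_t = np.sqrt(1 - cos_t ** 2)
    if lightlike:
        E = rng.exponential(T, n)        # spectrum is irrelevant for massless tracks
        p = E
    else:
        E = M_PI + rng.exponential(T, n) # thermal-like spectrum above threshold
        p = np.sqrt(E ** 2 - M_PI ** 2)
    px, py, pz = p * sin_t * np.cos(phi), p * sin_t * np.sin(phi), p * cos_t
    return E, np.stack([px, py, pz], axis=1)

def boost_z(E, p, b):
    """Boost four-momenta along +z with boost parameter b = beta*gamma."""
    gamma = np.sqrt(1 + b ** 2)
    beta = b / gamma
    E_lab = gamma * (E + beta * p[:, 2])
    pz_lab = gamma * (p[:, 2] + beta * E)
    return E_lab, np.column_stack([p[:, 0], p[:, 1], pz_lab])

def sphericity_boost(p_lab, upward_only=False):
    """Find b such that the back-boosted unit momenta sum to zero along z
    (tracks treated as light-like, as in the analysis described in the text)."""
    if upward_only:
        p_lab = p_lab[p_lab[:, 2] > 0]
    E_lab = np.linalg.norm(p_lab, axis=1)          # light-like assumption
    def net_z(b):
        _, p_rf = boost_z(E_lab, p_lab, -b)
        return np.sum(p_rf[:, 2] / np.linalg.norm(p_rf, axis=1))
    lo, hi = 1e-3, 1e3                              # bisection in log(b)
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if net_z(mid) > 0 else (lo, mid)
    return np.sqrt(lo * hi)

# One toy event: true boost b = 3, thermal pion-like tracks.
E, p = generate_rest_frame_tracks(lightlike=False)
E_lab, p_lab = boost_z(E, p, 3.0)
print(sphericity_boost(p_lab), sphericity_boost(p_lab, upward_only=True))
```

Scanning a grid of fixed b^LLP values with many such toy events, for both momentum choices and for both track selections, yields the bias and spread discussed below.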
For each fixed central value b^LLP, we defined the spread as the standard deviation of the resulting sphericity-based boost distribution, ∆(log_10 b^LLP_meas), and we defined the bias to be the deviation of the average boost, ⟨log_10 b^LLP_meas⟩ − log_10 b^LLP. The results from the toy model are shown in Fig. 8. We compare four scenarios. For light-like momenta with all tracks included, essentially no bias is observed. This makes sense, since the analysis boosts the light-like particles correctly. The simulations with a thermal energy distribution show a significant upward bias of about 0.2, independent of velocity. Light-like momenta are stiffer under boosts, and the analysis method treats these massive momenta as light-like, so a larger boost is needed to balance the longitudinal momenta. The value of the bias is of the same order as, but somewhat larger than, that in Fig. 4, because this simulation omits the leading particles in jets, which are very relativistic. For low LLP velocities, considering upward-going particles only removes a significant part of the track distribution. The removal of the downward tracks increases the upward bias to about 0.4 in the region b^LLP < 1. The spread, shown in the right-hand plot of Fig. 8, has a value of about 0.15, independent of velocity, for both types of simulation. This is consistent with Fig. 4. For analyses that consider all particles, the spread increases at low LLP velocities: the true LLP velocity is close to zero, so the reconstructed LLP boost is dominated by random deviations of the charged-track distribution from spherical symmetry in the LLP rest frame. For the analyses with upward-going tracks only, the reconstructed velocity is determined by the bias, mitigating this effect. The MATHUSLA detector has the capability of measuring the velocities of charged particles. A velocity measurement at the 5% level will be straightforward. This requires recording hits to about 1 ns precision over the typical 10 m flight path through the RPCs. A more aggressive design might allow a velocity measurement with 1% error. Thus, it is interesting to apply velocity information to the measured tracks to see if the bias can be reduced. The effect on the spread turns out to be quite small. The results on the bias are shown in Fig. 9. The solid lines correspond to treating all tracks as light-like, as in the left-hand plot in Fig. 8. The dotted lines show the effect of using the velocity information for each track, assuming a velocity measurement with 5% or 1% error, spanning the range of capabilities estimated for MATHUSLA. Even with 5% errors, there is a significant effect for low LLP velocities. The bias returns at very low velocities when we restrict to upward-going tracks only. This effect of the track velocity measurement should be considered in more detailed design studies for MATHUSLA.
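As a rough cross-check of the timing-based velocity resolution quoted above (an estimate assuming a ∼10 m flight path through the tracking layers and 1 ns hit timing, not a number from the original design study):

```latex
t = \frac{L}{\beta c} \approx \frac{10\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 33\ \text{ns},
\qquad
\frac{\delta\beta}{\beta} \simeq \frac{\delta t}{t} \approx \frac{1\ \text{ns}}{33\ \text{ns}} \approx 3\%\,,
```

which is consistent with the statement that a 5% velocity measurement is straightforward, while percent-level precision would require correspondingly better timing or a longer lever arm.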
Discrete Family Symmetry from F-Theory GUTs We consider realistic F-theory GUT models based on discrete family symmetries $A_4$ and $S_3$, combined with $SU(5)$ GUT, comparing our results to existing field theory models based on these groups. We provide an explicit calculation to support the emergence of the family symmetry from the discrete monodromies arising in F-theory. We work within the spectral cover picture where in the present context the discrete symmetries are associated to monodromies among the roots of a five degree polynomial and hence constitute a subgroup of the $S_5$ permutation symmetry. We focus on the cases of $A_4$ and $S_3$ subgroups, motivated by successful phenomenological models interpreting the fermion mass hierarchy and in particular the neutrino data. More precisely, we study the implications on the effective field theories by analysing the relevant discriminants and the topological properties of the polynomial coefficients, while we propose a discrete version of the doublet-triplet splitting mechanism. Introduction F-theory is defined on an elliptically fibered Calabi-Yau four-fold over a threefold base [1]. In the elliptic fibration the singularities of the internal manifold are associated to the gauge symmetry. The basic objects in these constructions are the D7-branes which are located at the "points" where the fibre degenerates, while matter fields appear at their intersections. The interesting fact in this picture is that the topological properties of the internal space are converted to constraints on the effective field theory model in a direct manner. Moreover, in these constructions it is possible to implement a flux mechanism which breaks the symmetry and generates chirality in the spectrum. F-theory Grand Unified Theories (F-GUTs) [2,3,4,5,6,7,8] represent a promising framework for addressing the flavour problem of quarks and leptons (for reviews see [9,10,11,12,13,14]). F-GUTs are associated with D7-branes wrapping a complex surface S in an elliptically fibered eight dimensional internal space. The precise gauge group is determined by the specific structure of the singular fibres over the compact surface S, which is strongly constrained by the Kodaira conditions. The so-called "semi-local" approach imposes constraints from requiring that S is embedded into a local Calabi-Yau four-fold, which in practice leads to the presence of a local E 8 singularity [15], which is the highest non-Abelian symmetry allowed by the elliptic fibration. In the convenient Higgs bundle picture and in particular the spectral cover approach, one may work locally by picking up a subgroup of E 8 as the gauge group of the four-dimensional effective model while the commutant of it with respect to E 8 is associated to the geometrical properities in the vicinity. Monodromy actions, which are always present in F-theory constructions, may reduce the rank of the latter, leaving intact only a subgroup of it. The remaining symmetries could be U (1) factors in the Cartan subalgebra or some discrete symmetry. Therefore, in these constructions GUTs are always accompanied by additional symmetries which play important role in low energy pheomenology through the restrictions they impose on superpotential couplings. In the above approach, all Yukawa couplings originate from this single point of E 8 enhancement. 
As such, we can learn about the matter and couplings of the semi-local theory by decomposing the adjoint of E 8 in terms of representations of the GUT group and the perpendicular gauge group. In terms of the local picture considered so far, matter is localised on curves where the GUT brane intersects other 7-branes with extra U (1) symmetries associated to them, with this matter transforming in bi-fundamental representations of the GUT group and the U (1). Yukawa couplings are then induced at points where three matter curves intersect, corresponding to a further enhancement of the gauge group. In this paper we extend the analysis in [35] in order to construct realistic models based on the cases A 4 and S 3 , combined with SU (5) GUT, comparing our results to existing field theory models based on these groups. We provide an explicit calculation to support the emergence of the family symmetry as from the discrete monodromies. In section 2 we start with a short description of the basic ingredients of F-theory model building and present the splitting of the spectral cover in the components associated to the S 4 and S 3 discrete group factors. In section 3 we discuss the conditions for the transition of S 4 to A 4 discrete family symmetry "escorting" the SU (5) GUT and propose a discrete version of the doublet-triplet splitting mechanism for A 4 , before constructing a realistic model which is analysed in detail. In section 4 we then analyse in detail an S 3 model which was not considered at all in [35] and in section 5 we present our conclusions. Additional computational details are left for the Appendices. General Principles F-theory is a non-perturbative formulation of type IIB superstring theory, emerging from compactifications on a Calabi-Yau fourfold which is an elliptically fibered space over a base B 3 of three complex dimensions. Our GUT symmetry in the present work is SU (5) which is associated to a holomorphic divisor residing inside the threefold base, B 3 . If we designate with z the 'normal' direction to this GUT surface, the divisor can be thought of as the zero limit of the holomorphic section z in B 3 , i.e. at z → 0. The fibration is described by the Weierstrass equation where f (z), g(z) are eighth and twelveth degree polynomials respectively. The singularities of the fiber are determined by the zeroes of the discriminant ∆ = 4f 3 + 27g 2 and are associated to non-Abelian gauge groups. For a smooth Weierstrass model they have been classified by Kodaira and in the case of F-theory these have been used to describe the non-Abelian gauge group. 5 Under these conditions, the highest symmetry in the elliptic fibration is E 8 and since the GUT symmetry in the present work is chosen to be SU (5), its commutant is SU (5) ⊥ . The physics of the latter is nicely captured by the spectral cover, described by a five-degree polynomial where b k are holomorphic sections and s is an affine parameter. Under the action of certain fluxes and possible monodromies, the polynomial could in principle be factorised to a number of irreducible components C 5 → C a 1 × · · · × C an , 1 + · · · + n < 5 provided that new coefficients preserve the holomorphicity. Given the rank of the associated group (SU (5) ⊥ ), the simplest possibility is the decomposition into four U (1) factors, but this is one among many possibilities. As a matter of fact, in an F-theory context, the roots of the spectral cover equation are related by non-trivial monodromies. 
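For completeness, the two equations referred to above — which do not survive the extraction of this text — take the standard forms used throughout the F-theory literature, and writing them out also fixes the notation for the coefficients b_k used below:

```latex
% Weierstrass model and its discriminant:
y^{2} = x^{3} + f(z)\,x + g(z), \qquad \Delta = 4f^{3} + 27g^{2},
% SU(5)_\perp spectral cover:
\mathcal{C}_5:\qquad b_0\,s^{5} + b_1\,s^{4} + b_2\,s^{3} + b_3\,s^{2} + b_4\,s + b_5 = 0 .
```

The roots s = t_i of C_5 are identified with the weights of SU(5)_⊥, and it is precisely these roots that the non-trivial monodromies just mentioned permute.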
For the SU (5) ⊥ case at hand, under specific circumstances (related mainly to the properties of the internal manifold and flux data) these monodromies can be described by any possible subgroup of the Weyl group S 5 . This has tremendous implications for the effective field theory model, particularly for the superpotential couplings. The spectral cover equation (1) has roots t_i , which correspond to the weights of SU (5) ⊥ , i.e. b_0 \prod_{i=1}^{5}(s − t_i) = 0. The equation describes the matter curves of a particular theory, with roots being related by monodromies depending on the factorisation of this equation. Thus, we may choose to assume that the spectral cover can be factorised, with new coefficients a_j that lie within the same field F as the b_i . Depending on how we factorise, we will see different monodromy groups. Motivated by the peculiar properties of the neutrino sector, here we will attempt to explore the low-energy implications of the following factorisations of the spectral cover equation: i) C_4 × C_1 (4+1), ii) C_3 × C_2 (3+2) and iii) C_3 × C_1 × C_1 (3+1+1). Case i) involves the transitive group S 4 and its subgroups A 4 and D 4 , while cases ii) and iii) incorporate S 3 , which is isomorphic to D 3 . For later convenience these cases are depicted in figure 1. In case i), for example, the polynomial in equation (1) should be separable into the following two factors,

C_4 × C_1 : (a_1 + a_2 s + a_3 s^2 + a_4 s^3 + a_5 s^4)(a_6 + a_7 s) = 0,     (3)

which implies the 'breaking' of SU (5) ⊥ to the monodromy group S 4 (or one of its subgroups such as A 4 ), described by the fourth-degree polynomial, and a U (1) associated with the linear part. New and old polynomial coefficients satisfy simple relations b_k = b_k (a_i ), which can easily be extracted by comparing equal powers of the parameter s in (1) and (3). Table 1 summarizes the relations between the coefficients of the unfactorised spectral cover and the a_j coefficients for the cases under consideration in the present work. The homologies of the coefficients b_i are given in terms of the first Chern class of the tangent bundle (c_1) and of the normal bundle (−t) as [b_i] = η − i c_1 (with η = 6c_1 − t), allowing us to rearrange for the required homologies.

Table 1: the coefficients b_i of the unfactorised spectral cover in terms of the a_j coefficients for each splitting.

4+1 splitting:
b_0 = a_5 a_7, b_1 = a_5 a_6 + a_4 a_7, b_2 = a_4 a_6 + a_3 a_7, b_3 = a_3 a_6 + a_2 a_7, b_4 = a_2 a_6 + a_1 a_7, b_5 = a_1 a_6.

3+2 splitting:
b_0 = a_4 a_7, b_1 = a_4 a_6 + a_3 a_7, b_2 = a_4 a_5 + a_3 a_6 + a_2 a_7, b_3 = a_3 a_5 + a_2 a_6 + a_1 a_7, b_4 = a_2 a_5 + a_1 a_6, b_5 = a_1 a_5.

3+1+1 splitting:
b_0 = a_4 a_6 a_8, b_1 = a_4 a_6 a_7 + a_4 a_5 a_8 + a_3 a_6 a_8, b_2 = a_4 a_5 a_7 + a_3 a_5 a_8 + a_3 a_6 a_7 + a_2 a_6 a_8, b_3 = a_3 a_5 a_7 + a_2 a_5 a_8 + a_2 a_6 a_7 + a_1 a_6 a_8, b_4 = a_2 a_5 a_7 + a_1 a_6 a_7 + a_1 a_5 a_8, b_5 = a_1 a_5 a_7.

Note that since we have in general more a_j coefficients than our fully determined b_i coefficients, the homologies of the new coefficients cannot be fully determined. For example, if we factorise in a 3 + 1 + 1 arrangement, we must have 3 unknown parameters, which we call χ_{k=1,2,3} . In the following sections we will examine in detail the predictions of the A 4 and S 3 models.

A 4 models in F-theory

We assume that the spectral cover equation factorises into a quartic polynomial and a linear part, as shown in (3). The homologies of the new coefficients may be derived from the original b_i coefficients. Referring to Table 1, we can see that the homologies for this factorisation are easily calculable, up to some arbitrariness in one of the coefficients (we have seven a_j and only six b_i ). We choose [a_6] = χ in order to make this tractable.
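The b_k(a_j) dictionary of Table 1 is purely algebraic and can be cross-checked mechanically. A minimal SymPy sketch, using only the variable names of the text, is:

```python
# Cross-check of Table 1: expand each factorised spectral cover and read off
# the coefficients of s^(5-k).  Only the labels of the text are assumed.
import sympy as sp

s = sp.symbols('s')
a = sp.symbols('a1:9')  # a1 ... a8

C4x1   = (a[0] + a[1]*s + a[2]*s**2 + a[3]*s**3 + a[4]*s**4) * (a[5] + a[6]*s)
C3x2   = (a[0] + a[1]*s + a[2]*s**2 + a[3]*s**3) * (a[4] + a[5]*s + a[6]*s**2)
C3x1x1 = (a[0] + a[1]*s + a[2]*s**2 + a[3]*s**3) * (a[4] + a[5]*s) * (a[6] + a[7]*s)

for name, cover in [("4+1", C4x1), ("3+2", C3x2), ("3+1+1", C3x1x1)]:
    poly = sp.Poly(sp.expand(cover), s)
    # b_k multiplies s^(5-k), so b_0 is the leading coefficient and b_5 the constant term
    for k, bk in enumerate(poly.all_coeffs()):
        print(f"{name}:  b_{k} = {sp.factor(bk)}")
```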
It can then be shown that the homologies obey: This amounts to asserting that the five of SU (5) ⊥ 'breaks' to a discrete symmetry between four of its weights (S 4 or one of its subgroups) and a U (1) ⊥ . The roots of the spectral cover equation must obey: where t i are the weights of the five representation of SU (5) ⊥ . When s = 0, this defines the tenplet matter curves of the SU (5) GUT [36], with the number of curves being determined by how the result factorises. In the case under consideration, when s = 0, b 5 = 0. After referring Curve Equation Homology Hyperflux -N Multiplicity Table 2: table of matter curves, their homologies, charges and multiplicities. to Table 1, we see that this implies that P 10 = a 1 a 6 = 0. Therefore there are two tenplet matter curves, whose homologies are given by those of a 1 and a 6 . We shall assume at this point that these are the only two distinct curves, though a 1 appears to be associated with S 4 (or a subgroup) and hence should be reducible to a triplet and singlet. Similarly, for the fiveplets, we have which can be shown 6 to give the defining condition for the fiveplets: Table 1, we can write this in terms of the a j coefficients: Using the condition that SU (5) must be traceless, and hence b 1 = 0, we have that a 4 a 7 +a 5 a 6 = 0. An Ansatz solution of this condition is a 4 = ±a 0 a 6 and a 5 = ∓a 0 a 7 , where a 0 is some appropriate scaling with homology [a 0 ] = η − 2(c 1 + χ), which is trivially derived from the homologies of a 4 and a 6 (or indeed a 5 and a 7 ) [35]. If we introduce this, then P 5 splits into two matter curves: P 5 = a 2 2 a 7 + a 2 a 3 a 6 ∓ a 0 a 1 a 2 6 a 3 a 2 6 + (a 2 a 6 + a 1 a 7 )a 7 = 0 . The homologies of these curves are calculated from those of the b i coefficients and are presented in Table 2. We may also impose flux restrictions if we define: where N ∈ Z and F Y is the hypercharge flux. Considering equation (7), we see that b 5 /b 0 = t 1 t 2 t 3 t 4 t 5 , so there are at most five ten-curves, one for each of the weights. Under S 4 and it's subgroups, four of these are identified, which corroborates with the two matter curves seen in Table 1. As such we identify t i=1,2,3,4 with this monodromy group and the coefficient a 1 and leave t 5 to be associated to a 6 . Similarly, equation (8) shows that we have at most ten five-curves when s = 0, given in the form t i + t j with i = j. Examining the equations for the two five curves that are manifest in this model after application of our monodromy, the quadruplet involving t i + t 5 forms the curve labeled 5 d , while the remaining sextet -t i + t j with i, j = 5 -sits on the 5 c curve. The discriminant The above considerations apply equally to both the S 4 as well as A 4 discrete groups. From the effective model point of view, all the useful information is encoded in the properties of the polynomial coefficients a k and if we wish to distinguish these two models further assumptions for the latter coefficients have to be made. Indeed, if we assume that in the above polynomial, the coefficients belong to a certain field a k ∈ F, without imposing any additional specific restrictions on a k , the roots exhibit an S 4 symmetry. If, as desired, the symmetry acting on roots is the subgroup A 4 the coefficients a k must respect certain conditions. Such constraints emerge from the study of partially symmetric functions of roots. In the present case in particular, we recall that the A 4 discrete symmetry is associated only to even permutations of the four roots t i . 
Further, we note now that the partially symmetric function δ = \prod_{i<j}(t_i − t_j) is invariant only under even permutations of the roots. The quantity δ is the square root of the discriminant,

∆ = δ^2,     (12)

and as such δ should be expressible as a function of the polynomial coefficients a_k ∈ F so that δ ∈ F too. The discriminant is computed by standard formulae and is found to be

∆(a_k) = 256 a_5^3 a_1^3 − 192 a_5^2 a_4 a_2 a_1^2 − 128 a_5^2 a_3^2 a_1^2 + 144 a_5^2 a_3 a_2^2 a_1 − 27 a_5^2 a_2^4
  + 144 a_5 a_4^2 a_3 a_1^2 − 6 a_5 a_4^2 a_2^2 a_1 − 80 a_5 a_4 a_3^2 a_2 a_1 + 18 a_5 a_4 a_3 a_2^3 + 16 a_5 a_3^4 a_1
  − 4 a_5 a_3^3 a_2^2 − 27 a_4^4 a_1^2 + 18 a_4^3 a_3 a_2 a_1 − 4 a_4^3 a_2^3 − 4 a_4^2 a_3^3 a_1 + a_4^2 a_3^2 a_2^2.     (13)

In order to examine the implications of (12) we write the discriminant as a polynomial in the coefficient a_3 [35],

∆ ≡ g(a_3) = \sum_{n=0}^{4} c_n a_3^n,     (14)

where the c_n are functions of the remaining coefficients a_k, k ≠ 3, and can easily be computed by comparison with (13). We may equivalently demand that g(a_3) is the square of a second-degree polynomial, g(a_3) = (κ a_3^2 + λ a_3 + µ)^2. A necessary condition for the polynomial g(a_3) to be a square is that its own discriminant ∆_g vanishes. One finds that ∆_g factorises into two factors, which we denote D_1 and D_2. There are therefore two ways to eliminate the discriminant of the polynomial, either by putting D_1 = 0 or by demanding D_2 = 0 [35]. In the first case, we can achieve ∆ = δ^2 by solving the constraint D_1 = 0; substituting the resulting solutions (16) back into the discriminant, ∆ indeed becomes a perfect square. These are the necessary conditions to obtain the reduction of the symmetry [35] down to the Klein group V ∼ Z_2 × Z_2 . On the other hand, the second condition, D_2 = 0, implies a non-trivial relation among the coefficients. Plugging in the b_1 = 0 solution, the constraint (44) takes the form

(a_2^2 a_7 + a_0 a_1 a_6^2)^2 = (a_0 a_2 a_6 + 16 a_1 a_7)^3 / 3^3,

which is just the condition on the polynomial coefficients to obtain the transition S_4 → A_4 .

Towards an SU (5) × A 4 model

Using the previous analysis, in this section we will present a specific example based on the SU (5) × A 4 × U (1) symmetry. We will make specific choices of the flux parameters and derive the spectrum and its superpotential, focusing in particular on the neutrino sector. It can be shown that if we assume an A 4 monodromy any quadruplet is reducible to a triplet plus singlet representation, while the sextet of the fives reduces to two triplets (details can be found in the appendix).

Singlet-Triplet Splitting Mechanism

It is known from group theory and a physical understanding of the group that the four roots forming the basis under A 4 may be reduced to a singlet and a triplet. As such we might suppose intuitively that the quartic curve of A 4 decomposes into two curves, a singlet and a triplet of A 4 . As a mechanism for this we consider an analogy to the breaking of the SU (5) GUT group by U (1) Y . We then postulate a mechanism to facilitate singlet-triplet splitting in a similar vein. Switching on a flux in some direction of the perpendicular group, we propose that the singlet and triplet of A 4 will split to form two curves. This flux should be proportional to one of the generators of A 4 , so that the broken group commutes with it. If we choose to switch on U (1)_s flux in the direction of the singlet of A 4 , then the discrete symmetry will remain unbroken by this choice. Continuing our previous analogy, this would split the quartic curve into a singlet curve and a triplet curve. The homologies of the new curves are not immediately known. However, they can be constrained by the previously known homologies given in Table 2.
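As a cross-check of the quartic discriminant quoted in equation (13) above, and of the rewriting (14), the same expressions can be reproduced with a computer algebra system; a minimal SymPy sketch is:

```python
import sympy as sp

s, a1, a2, a3, a4, a5 = sp.symbols('s a1 a2 a3 a4 a5')

# Quartic factor of the spectral cover, C4: a1 + a2 s + a3 s^2 + a4 s^3 + a5 s^4.
C4 = a1 + a2*s + a3*s**2 + a4*s**3 + a5*s**4

# Its discriminant (eq. (13) of the text).  By standard Galois theory the
# monodromy group of a separable quartic lies inside A4 precisely when this
# discriminant is a square in the coefficient field.
Delta = sp.discriminant(C4, s)
print(sp.expand(Delta))

# Rewrite Delta as a polynomial in a3 (eq. (14)): Delta = sum_n c_n a3^n.
g = sp.Poly(Delta, a3)
for n, cn in enumerate(reversed(g.all_coeffs())):
    print(f"c_{n} =", sp.factor(cn))
```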
The coefficient describing the curve should be expressed as the product of two coefficients, one describing each of the new curves -a i = c 1 c 2 . As such, the homologies of the new curves will be determined by If we assign the U (1) flux parameters by hand, we can set the constraints on the homologies of our new curves. For example, for the curve given in Table 2 as 10 a would decompose into two curves -10 1 and 10 2 , say. Assigning the flux parameter, N , to the 10 2 curve, we constrain the homologies of the two new curves as follows: Similar constraints may also be placed on the five-curves after decomposition. Using our procedure, we can postulate that the charge N will be associated to the singlet curve by the mechanism of a flux in the singlet direction. This protects the overall charge of N in the theory. With the fiveplet curves it is not immediately clear how to apply this since the sextet of A 4 can be shown to factorise into two triplets. Closer examination points to the necessity to cancel anomalies. As such the curves carrying H u and H d must both have the same charge under N . This will insure that they cancel anomalies correctly. These motivating ideas have been applied in Table 3. GUT-group doublet-triplet splitting Initially massless states residing on the matter curves comprise complete vector multiplets. Chirality is generated by switching on appropriate fluxes. At the SU (5) level, we assume the existence of M 5 fiveplets and M 10 tenplets. The multiplicities are not entirely independent, since where it is assumed the reducible representation of the monodromy group may split the matter curves. The curves are also assumed to have an R-symmetry we require anomaly cancellation, 7 which amounts to the requirement that i M 5 i + j M 10 j = 0. Next, turning on the hypercharge flux, under the SU (5) symmetry breaking the 10 and 5,5 representations split into different numbers of Standard Model multiplets [55]. Assuming N units of hyperflux piercing a given matter curve, the fiveplets split according to: Similarly, the M 10 tenplets decompose under the influence of N hyperflux units to the following SM-representations: n(3, 2) +1/6 − n(3, 2) −1/6 = M 10 , Using the relations for the multiplicities of our matter states, we can construct a model with the spectrum parametrised in terms of a few integers in a manner presented in Table 3. In order to curtail the number of possible couplings and suppress operators surplus to requirement, we also call on the services of an R-symmetry. This is commonly found in supersymmetric models, and requires that all couplings have a total R-symmetry of 2. Curves carrying SM-like fermions are taken to have R = 1, with all other curves R = 0. A simple model: N = 0 Any realistic model based on this table must contain at least 3 generations of quark matter (10 M i ), 3 generations of leptonic matter (5 M i ), and one each of 5 Hu and 5 H d . We shall attempt to construct a model with these properties using simple choices for our free variables. In order to build a simple model, let us first choose the simple case where N=0, then we make the following assignments: Note that it does not immediately appear possible to select a matter arrangement that provides a renormalisable top-coupling, since we will be required to use our GUT-singlets to cancel residual t 5 charges in our couplings, at the cost of renormalisability. 
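For reference, the hypercharge-flux splitting rules invoked above are the standard F-theory dictionary. With M_5 (M_10) multiplets on a five (ten) matter curve pierced by N units of hyperflux, and up to the overall sign convention for N, one has

\[ 5:\quad n_{(3,1)_{-1/3}} - n_{(\bar 3,1)_{+1/3}} = M_5 ,\qquad n_{(1,2)_{+1/2}} - n_{(1,2)_{-1/2}} = M_5 + N , \]

\[ 10:\quad n_{(3,2)_{+1/6}} - n_{(\bar 3,2)_{-1/6}} = M_{10} ,\qquad n_{(\bar 3,1)_{-2/3}} - n_{(3,1)_{+2/3}} = M_{10} - N ,\qquad n_{(1,1)_{+1}} - n_{(1,1)_{-1}} = M_{10} + N , \]

which reproduces the single relation quoted explicitly in the text.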
Basis The bases of the triplets are such that triplet products, 3 a × 3 b = 1 + 1 + 1 + 3 1 + 3 2 , behave as: This has been demonstrated in the Appendix A, where we show that the quadruplet of weights decomposes to a singlet and triplet in this basis. Generations Full coupling Top-type Third generation Note that all couplings must of course produce singlets of A 4 by use of these triplet products where appropriate. Top-type quarks The Top-type quarks admit a total of six mass terms, as shown in Table 5. The third generation has only one valid Yukawa coupling -T 3 · T 3 · H u · θ a . Using the above algebra, we find that this coupling is: With the choice of vacuum expectation values (VEVs): this will give the Top quark it's mass, m t = yva. The choice is partly motivated by A 4 algebra, as the VEV will preserve the S-generators. This choice of VEVs will also kill off the the operators T · T 3 · H u · (θ a ) 2 and T · T · H u · (θ a ) 2 · θ b , which can be seen by applying the algebra above. The full algebra of the contributions from the remaining operators is included in Appendix B. Under the already assigned VEVs, the remaining operators contribute to give the overall mass matrix for the Top-type quarks: This matrix is clearly hierarchical with the third generation dominating the hierarchy, since the couplings should be suppressed by the higher order nature of the operators involved. Due to the rank theorem [37], the two lighter generations can only have one massive eigenvalue. However, corrections due to instantons and non-commutative fluxes are known as mechanisms to recover a light mass for the first generation [37][38]. Charged Leptons The Charged Lepton and Bottom-type quark masses come from the same GUT operators. Unlike the Top-type quarks, these masses will involve SM-fermionic matter that lives on curves that are triplets under A 4 . It will be possible to avoid unwanted relations between these generations using the ten-curves, which are strictly singlets of the monodromy group. The operators, as per Table 5, are computed in full in Appendix B. Since we wish to have a reasonably hierarchical structure, we shall require that the dominating terms be in the third generation. This is best served by selecting the VEV H d = (0, 0, v) T . Taking the lowest order of operator to dominate each element, since we have non-renormalisable operators, we see that we have then: We should again be able to use the Rank Theorem to argue that while the first generation should not get a mass by this mechanism, the mass may be generated by other effects [37] [38]. We also expect there might be small corrections due to the higher order contributions, though we shall not consider these here. The bottom-type quarks in SU (5) have the same masses as the charged leptons, with the exact relation between the Yukawa matrices being due to a transpose. However this fact is known to be inconsistent with experiment. In general, when renormalization group running effects are taken into account, the problem can be evaded only for the third generation. Indeed, the mass relation m b = m τ at M GU T can be made consistent with the low energy measured ratio m b /m τ for suitable values of tan β. In field theory SU (5) GUTs the successful Georgi-Jarlskog GUT relation m s /m µ = 1/3 can be obtained from a term involving the representations5 · 10 · 45 but in the F-theory context this is not possible due to the absence of the 45 representation. 
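As a numerical aside to the rank-theorem argument used above for the top-type quarks: a 2×2 block generated by a single effective coupling is rank one, so only one of the two light generations picks up a mass at this level. A toy illustration (the texture and numbers below are purely illustrative, not the model's matrix):

```python
import numpy as np

# Toy top-type Yukawa texture: the light 2x2 block comes from a single
# coupling and is therefore rank one; the (3,3) entry mimics the tree-level
# third-generation coupling.  Numbers are illustrative only.
eps, y = 0.05, 1.0
M = np.array([[y*eps**2, y*eps**2, 0.0],
              [y*eps,    y*eps,    0.0],
              [0.0,      0.0,      y  ]])

print(np.linalg.svd(M, compute_uv=False))
# -> one O(1) value, one O(eps) value and one numerically vanishing value:
#    the remaining light generation must get its mass from other effects
#    (instantons, non-commutative fluxes), as noted in the text.
```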
Nevertheless, the order-one Yukawa coefficients may be different because the intersection points need not be at the same enhanced symmetry point. The final structure of the mass matrices is revealed when flux and other threshold effects are taken into account. These issues will not be discussed further here; a more detailed exposition may be found in [49], with other useful discussion to be found in [58].

Neutrino sector

Neutrinos are unique in the realms of currently known matter in that they may have both Dirac and Majorana mass terms. The couplings for these must involve an SU (5) singlet to account for the required right-handed neutrinos, which we might suppose is θ_c = (1, 3)_0 . It is evident from Table 5 that the Dirac mass is formed from a handful of couplings involving operators of different orders. We also have a Majorana operator for the right-handed neutrinos, which will be subject to corrections due to the θ_d singlet, to which we assign the most general VEV, θ_d = (d_1, d_2, d_3)^T . If we now analyze the operators for the neutrino sector in brief, the two leading-order contributions are from the θ_c · F · H_u · θ_a and θ_c · F · H_u · θ_b operators. With the VEV alignments θ_a = (a, 0, 0)^T and H_u = (v, 0, 0)^T , we obtain a total matrix for these contributions that displays strong mixing between the second and third generations, where y_0 = y_1 + y_2 + y_3 . The higher-order operators, θ_c · F · H_u · θ_a · θ_d and θ_c · F · H_u · θ_b · θ_d , will serve to add corrections to this matrix, which may be necessary to generate mixing beyond the already evident large 2-3 mixing from the lowest-order operators. We use z_i coefficients to denote the suppression expected to affect these couplings due to renormalisability requirements. We need only concern ourselves with the combinations that add contributions to the off-diagonal elements where the lower-order operators have not given a contribution, as these lower orders should dominate the corrections. Hence, the remaining allowed combinations will not be considered, for the sake of simplicity. If we do this we are left with a correspondingly corrected Dirac matrix. The right-handed neutrinos admit Majorana operators of the type θ_c · θ_c · (θ_d)^n , with n ∈ {0, 1, . . . }. The n = 0 operator will fill out the diagonal of the mass matrix, while the n = 1 operator fills the off-diagonal. Higher-order operators can again be taken as dominated by these first two, lower-order operators. The Majorana mass matrix can then be used along with the Dirac mass matrix in order to generate light effective neutrino masses via a see-saw mechanism. The Dirac mass matrix can be summarised as in equation (29). This matrix is rank 3, with a clear large mixing between two generations that we expect to generate a large θ_23 . In order to reduce the parameters involved in the effective mass matrix, we will simplify the problem by searching only for solutions where z_1 = z_3 and z_2 = z_4 , which significantly narrows the parameter space. We then define some dimensionless parameters that simplify the matrix; implementing these definitions gives the corresponding form of the Dirac mass matrix. The right-handed neutrino Majorana mass matrix can be approximated by taking only the θ_c · θ_c operator, since this should give a large mass scale to the right-handed neutrinos and dominate the matrix. This will leave the Weinberg operator for the effective neutrino mass.

Table 6: Summary of neutrino parameters, using best fit values as found at nu-fit.org, the work of which relies upon [45].
where we have also defined an overall mass parameter, m_0 . We then proceed to diagonalise this matrix computationally in terms of three mixing angles, as is the standard procedure [42], before attempting to fit the result to experimental inputs.

Analysis

We shall focus on the ratio of the mass-squared differences, R = ∆m^2_32 / ∆m^2_21 , which is known thanks to the well-measured mass differences ∆m^2_32 and ∆m^2_21 [45]. These give a value of R ≈ 32, which we may solve for numerically in our model using Mathematica or another suitable maths package. If we then fit the optimised values to the mass scales measured by experiment, we may predict absolute neutrino masses and further compare them with cosmological constraints. The fit depends on a total of six coefficients, as can be seen from examining the undiagonalised effective mass matrix. Optimising R, we should also attempt to find mixing angles in line with those known to parameterize the neutrino sector, i.e. large θ_23 and θ_12 , with a comparatively small (but non-zero) θ_13 . This is necessary to obtain results compatible with neutrino oscillation experiments. Table 6 summarises the neutrino parameters the model must be in keeping with in order to be acceptable. We should note that the parameter m_0 will be trivially matched up with the mass differences shown in Table 6. If we take some choice values of three of our five free parameters, we can construct a contour plot for curves with constant R using the other two. Figure 2 shows this for a series of fixed parameters. Each of the lines is for R = 32, so we can see that there is a good deal of flexibility in the parameter space for finding allowed values of the ratio. In order to further determine which parts of the broad parameter space are most suitable for returning phenomenologically acceptable neutrino parameters, we can plot the value of sin^2(θ_12) or sin^2(θ_23) in the same parameter space as Figure 2, namely (Y_1, Y_2). The first plot in Figure 3 shows that the θ_12 constraints are best satisfied at lower values of Y_1 , while each line spans a large part of the Y_2 space. The second plot of Figure 3 suggests a preference for comparatively small values of Y_2 based on the constraints on θ_23 . As such, we might expect that for this corner of the parameter space there will be some solutions that satisfy all the constraints. Figure 4 also shows a plot of contours of best-fitting values of R, with the free variables chosen as Y_3 and Z_1 . As before, this shows that for a range of the other parameters, we can usually find suitable values of (Y_3, Z_1) that satisfy the constraints on R. This being the case, we expect that it should be possible to find benchmark points that allow the other constraints to be satisfied as well. This flexibility in the parameter space translates to the other experimental parameters, such that the points giving experimentally allowed solutions are abundant enough that we can fit all the parameters quite well. Table 7 shows a collection of so-called benchmark points, which are points in the parameter space where all constraints are satisfied within current experimental errors. Planck data [57] put the sum of the neutrino masses at Σm_ν ≤ 0.23 eV, with which the benchmark points are also consistent.
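The numerical procedure described in this section (building the effective see-saw matrix, diagonalising it and extracting the mixing angles and the ratio R) can be sketched in a few lines. The Dirac texture and the numbers below are illustrative placeholders, not the model's actual benchmark values:

```python
import numpy as np

# Illustrative inputs only (placeholders, not the paper's benchmark point)
m0 = 0.05                              # overall mass scale, eV
mD = m0 * np.array([[0.10, 0.05, 0.00],
                    [0.05, 1.00, 0.90],
                    [0.00, 0.90, 1.10]])   # Dirac texture with strong 2-3 mixing
MR = np.eye(3)                         # Majorana matrix ~ identity (scale absorbed in m0)

# Effective light-neutrino mass matrix from the see-saw formula
meff = mD @ np.linalg.inv(MR) @ mD.T

# Diagonalise (meff is real and symmetric here) and order the eigenvalues
masses, U = np.linalg.eigh(meff)
order = np.argsort(np.abs(masses))
masses, U = np.abs(masses[order]), U[:, order]

# Standard extraction of the three mixing angles from the PMNS-like matrix U
th13 = np.arcsin(abs(U[0, 2]))
th12 = np.arctan2(abs(U[0, 1]), abs(U[0, 0]))
th23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))

R = (masses[2]**2 - masses[1]**2) / (masses[1]**2 - masses[0]**2)
print("masses [eV]:", masses)
print("sin^2(th12), sin^2(th23), sin^2(th13):",
      np.sin(th12)**2, np.sin(th23)**2, np.sin(th13)**2)
print("R =", R)
```

In the scan described above, one would vary the free parameters of the actual Dirac texture until R ≈ 32 and the angles fall inside the ranges of Table 6.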
Proton decay

Proton decay is a recurring problem in many SU (5) GUT models, due to the "dangerous" dimension-six effective operators. Since there are strong bounds on the proton lifetime (τ_p ≥ 10^33 yr), these operators should be highly suppressed or forbidden in any GUT model. Within the context of the SU (5) × A 4 × U (1) in F-theory, they arise from effective operators of the type 10 · 10 · 10 · 5̄, where the 5̄ contains the SU (2) lepton doublet and the d^c, while the quark doublet, u^c and e^c arise from the 10 of SU (5). The interaction will be mediated by the H_u and H_d doublets. In the model under consideration, two matter curves are in the 10 representation of the GUT group: T_3 , containing the third generation, and T, containing the lighter two generations; in general the dangerous operators can be expressed in terms of these curves. Here, the role of R-symmetry in the model becomes important, since due to the assignment of this symmetry these operators are all disallowed. Furthermore, the operators which have i = 0 will carry net charge under the U (1)_⊥ , requiring them to include flavons to balance the charge. This would offer further suppression in the event that R-symmetry were not enforced. There are also proton decay operators mediated by the coloured Higgs triplets and their anti-particles, which arise from the same operators; in a similar way, these will be disallowed by R-symmetry, thus preventing proton decay via dimension-six operators. The dimension-four operators, which are mediated by superpartners of the Standard Model, will also be prevented by R-symmetry. However, even in the absence of this symmetry, the need to balance the charge of the U (1)_⊥ would lead to the presence of additional GUT-group singlets in the operators, leading to further, strong suppression.

Unification

The spectrum in Table 4 is equivalent to three families of quarks and leptons plus three families of 5 + 5̄ representations which include the two Higgs doublets that get VEVs. Such a spectrum does not by itself lead to gauge coupling unification at the field theory level, and the splittings which may be present in F-theory cannot be sufficiently large to allow for unification, as discussed in [25]. However, as discussed in [25], where the low-energy spectrum is identical to that of this model (although achieved in a different way), there may be additional bulk exotics which are capable of restoring gauge coupling unification, and so unification is certainly possible in this model. We refer the reader to the literature for a full discussion.

S 3 models

Motivated by phenomenological explorations of the neutrino properties under S 3 , in this section we are interested in SU (5) with an S 3 discrete symmetry and its subgroup Z 3 . More specifically, we analyse monodromies which induce the breaking of SU (5) ⊥ to group factors containing the aforementioned non-abelian discrete group. Indeed, in this section we encounter two such symmetry-breaking chains, namely cases ii) and iii) of (2). From the present point of view, novel features are found for case iii). In the following we present case ii) in brief and then analyse case iii) in detail.

4.1 The C 3 × C 2 spectral cover split

As in the A 4 case, because these discrete groups originate from the SU (5) ⊥ we need to work out the conditions on the associated coefficients a_i .
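Before turning to the explicit split, it is useful to record the Galois-theoretic criterion that underlies the S_3 → Z_3 reduction used below: for a separable cubic f(s) = α_3 s^3 + α_2 s^2 + α_1 s + α_0 with coefficients in a field F, the Galois (monodromy) group is contained in A_3 ≅ Z_3 precisely when the discriminant

\[ \Delta_3 \;=\; 18\,\alpha_3\alpha_2\alpha_1\alpha_0 \;-\; 4\,\alpha_2^{3}\alpha_0 \;+\; \alpha_2^{2}\alpha_1^{2} \;-\; 4\,\alpha_3\alpha_1^{3} \;-\; 27\,\alpha_3^{2}\alpha_0^{2} \]

is a square in F. (Generic symbols α_i are used here to avoid clashing with the a_j labels of the text.)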
For C 3 × C 2 split the spectral cover equation is The equations connecting b k 's with a i 's are of the form b k ∼ n a n a 9−n−k , the sum referring to appropriate values of n which can be read off from (42) or from Table 1. We recall that the b k coefficients are characterised by homologies [b k ] = η − k c 1 . Using this fact as well as the corresponding equations b k (a i ) given in the last column of Table 1, we can determine the corresponding homologies of the a i 's in terms of only one arbitrary parameter which we may take to be the homology [a 6 ] = χ. Furthermore the constraint b 1 = a 2 a 6 + a 3 a 5 = 0 is solved by introducing a suitable section λ such that a 3 = −λ a 6 and a 2 = λ a 5 . Apart from the constraint b 1 = 0, there are no other restrictions on the coefficients a i in the case of the S 3 symmetry. If, however, we wish to reduce the S 3 symmetry to A 3 (which from the point of view of low energy phenomenology is essentially Z 3 ), additional conditions should be imposed. In this case the model has an SU (5) × Z 3 × U (1) symmetry. As in the case of A 4 discussed previously, in order to derive the constraints on a k 's for the symmetry reduction S 3 → Z 3 we compute the discriminant, which turns out to be and demand ∆ = δ 2 . In analogy with the method followed in A 4 we re-organise the terms in powers of the x ≡ a 1 : First, we observe that in order to write the above expression as a square, the product a 1 a 3 must be positive definite sign(a 1 a 3 ) = +. Provided this condition is fulfilled, then we require the vanishing of the discriminant ∆ f of the cubic polynomial f (x), namely: This can occur if the non-trivial relation a 3 2 = 27a 0 a 2 3 holds. Substituting back to (43) we find that the condition is fulfilled for a 2 2 ∝ a 1 a 3 . The two constraints can be combined to give the simpler ones a 0 a 3 + a 1 a 2 = 0, a 2 2 + 27a 1 a 3 = 0 The details concerning the spectrum, homologies and flux restrictions of this model can be found in [19,22]. Identifying t 1,2,3 = t a and t 4,5 = t b ( due to monodromies) we distribute the matter and Higgs fields over the curves as follows We have already pointed out that the monodromies organise the SU (5) GU T singlets θ ij obtained from the 24 ∈ SU (5) ⊥ into two categories. One class carries U (1) i -charges and they denoted with θ ab , θ ba while the second class θ aa , θ bb has no t i -'charges'. The KK excitations of the latter could be identified with the right-handed neutrinos. Notice that in the present model the left handed states of the three families reside on the same matter curve. To generate flavour and in particular neutrino mixing in this model, one may appeal for example to the mechanism discussed in [44]. Detailed phenomenological implications for Z 3 models have been discussed elsewhere and will not be presented here. Within the present point of view, novel interesting features are found in 3 + 1 + 1 splitting which will be discussed in the next sections. SU (5) spectrum for the (3, 1, 1) factorisation In this case the relevant spectral cover polynomial splits into three factors according to 5 k=0 b k s 5−k = a 4 s 3 + a 3 s 2 + a 2 s + a 1 (a 5 + sa 6 ) (a 7 + sa 8 ) We can easily extract the equations determining the coefficients b k (a i ), while the corresponding one for the homologies reads [b k ] = η − kc 1 = [a l ] + [a m ] + [a n ], k = 0, 1, . . . 
, 5, k + l + m + n = 18, l, m, n ≤ 8 (46) As in the previous case, in order to embed the symmetry in SU (5) ⊥ , the condition b 1 = 0 has to be implemented. The non-trivial representations are found as follows: The tenplets are determined by b 5 = a 1 a 5 a 7 = 0 As before, the equation for fiveplets is given by Table 1 homology Table 8: Matter curves with their defining equations, homologies, and multiplicities in the case of (3,1,1) factorisation. These, together with the tenplets, are given in Table 8. S 3 and Z 3 models for (3, 1, 1) factorisation In the following we present one characteristic example of F-theory derived effective models when we quotient the theory with a S 3 monodromy. As already stated, if no other conditions are imposed on a k this model is considered as an S 3 variant of the 3 + 1 + 1 example given in [19,22]. In this case the 10 t i , i = 1, 2, 3 residing on a curve -characterised by a common defining equation a 1 = 0 -are organised in two irreducible S 3 representations 2 + 1. The same reasoning applies to the remaining representations. In Table 9 we present the spectrum of a model with N χ = −1 and N ψ = 0. Because singlets play a vital role, here, in addition we include the singlet field spectrum. Notice that the multiplicities of θ i4 , θ 4i are not determined by the U (1) fluxes assumed here, hence they are treated as free parameters. (1) Table 9: Matter content for an SU (5) GU T × S 3 × U (1). S 3 monodromy organises 10 a ,5 a ,5 b and 5 c representations in doublets and singlets. The Yukawa matrices in S 3 Models To construct the mass matrices in the case of S 3 models we first recall a few useful properties. There are six elements of the group in three classes, and their irreducible representations are 1, 1 and 2. The tensor product of two doublets, in the real representation, contains two singlets and a doublet: Thus, if (x 1 , x 2 ) and (y 1 , y 2 ) represent the components of the doublets, the above product gives 1 : (x 1 y 1 + x 2 y 2 ), 1 : (x 1 y 2 − x 2 y 1 ), 2 : The singlets are muliplied according to the rules: 1 ⊗ 1 = 1 and 1 ⊗ 1 = 1. Note that 1 is not an S 3 invariant. With these simple rules in mind, we proceed with the construction of the fermion mass matrices, starting from the quark sector. Quark sector We start our analysis of the Top-type quarks. We see from table 9 that we have two types of operators contribute to the Top-type quark matrix. 1) A tree level coupling: g10 (2) a · 10 (2) 2) Dimension 4 operators: λ 1 10 (1) a and λ 2 10 (2) a · 10 In order to generate a hierarchical mass spectrum we accommodate the charm and top quarks in the 10 (2) a curve and the first generation on the 10 (1) a curve. In this case, only the first (tree level) coupling contributes to the Top quark terms. Using the S 3 algebra above while choosing 5 1 a = H u = υ u and θ 1 a = θ 0 , θ 2 a = (θ 1 , 0) T we obtain the following mass matrix for the Top-quarks Because two generations live on the same matter curve (10 a curve) we implement the Rank theorem. For this reason we have suppressed the element-22 in the matrix above with a small scale parameter . The quark eigenmasses are obtained from For reasonable values of the parameters this matrix leads to mass eigenvalues with the required mass hierarchy and a Cabbibo mixing angle. The smaller mixing angles are expected to be generated from the down quark mass matrix. 
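The claim that such a texture yields the required hierarchy together with a Cabibbo-like angle follows from the generic diagonalisation relations for a 2×2 Yukawa block, quoted here for reference (M below is a generic block, not the text's specific matrix): the two masses are the singular values of M and satisfy

\[ m_1^2 + m_2^2 = \sum_{i,j}|M_{ij}|^2 , \qquad m_1^2\, m_2^2 = |\det M|^2 , \]

so a determinant that is suppressed relative to the largest entries automatically gives m_1 ≪ m_2, while the left-handed mixing angle is of the order of the ratio of the relevant off-diagonal entry to the dominant one.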
Indeed, the following Yukawa couplings emerge for the Bottom-type quarks: 1) First generation: g 1 10 2) Second and third generation: g 2 10 3) First-second, third generation: g 3 10 a and g 4 10 4) Second-third generation: g 5 10 We assume that the doublet H d ∈5 (1) b and the singlet θ 2 a (being a doublet under S 3 ) develop VEVs designated as H d = υ d and θ 2 a = (θ 1 , θ 2 ) T . Then, applying the S 3 algebra, the Yukawa couplings above induce the following mass matrix for the Bottom-type quarks: For appropriate Singlet VEVs the structure of the Bottom quark mass matrix is capable to reproduce the hierarchical mass spectrum and the required CKM mixing. Leptons The charged leptons will have the same couplings as the Bottom-type quarks. To simplify the analysis, let us start with a simple case where the Singlet VEVs exhibit the hierarchy θ 2 < θ 1 < θ 0 . Furthermore, taking the limit θ 2 → 0 and switching-off the Yukawas coefficients g 3 , g 4 in (52) we achieve a block diagonal form of the charged lepton matrix with eigenvalues m e = g 1 θ 0 , m µ = g 2 θ 0 − g 5 θ 1 , m τ = g 2 θ 0 + g 5 θ 1 (54) and maximal mixing between the second and third generations. We turn now our attention to the couplings of the neutrinos. We identify the right-handed neutrinos with the SU(5)-singlet θ c = 1 ij . Under the S 3 symmetry, θ c splits into a singlet, named θ (1) c and a doublet, θ (2) c . As in the case of the quarks and the charged leptons we distribute the right handed neutrino species as follows The Dirac neutrino mass matrix arises from the following couplings and has the following form (for θ 2 → 0) Although the Dirac mass matrix has the same form with the charged lepton matrix (52) in general they have different Yukawas coefficients. Thus, substantial mixing effects may also occur even in the case of a diagonal heavy Majorana mass matrix. In the following we construct effective neutrino mass matrices compatible with the well known neutrino data in two different ways. In the first approach we take the simplest scenario for a diagonal heavy Majorana mass matrix and generate the TB-mixing combining charged lepton and neutrino block-diagonal textures. In the second case we consider the most general form of the Majorana matrix and we try to generate TB-mixing only from the Neutrino sector. Block diagonal case We start with the attempt to generate the TB-mixing combining charged lepton and neutrino block-diagonal textures. The Majorana matrix will simply be the identity matrix scaled by a RH-neutrino mass M . The effective neutrino mass matrix M ef f = M D M −1 M M T D now reads: (y 2 y 3 + y 1 y 4 )θ 0 θ 1 y 3 y 5 θ 2 1 (y 2 y 3 + y 1 y 4 )θ 0 θ 1 y 2 2 θ 2 0 + (y 2 4 + y 2 5 )θ 2 1 2y 2 y 5 θ 0 θ 1 y 3 y 5 θ 2 1 2y 2 y 5 θ 0 θ 1 where we used the Dirac mass matrix as given in (55). First of all we observe that we can reduce the number of the parameters by defining Then M ef f ν is written In the limit of a small y 5 Yukawa (or c → 0) we achieve a block diagonal form given by This can be diagonalised by a unitary matrix Now, we may appeal to the block diagonal form of the charged lepton matrix (53) which introduces a maximal θ 23 angle so that the final mixing is Moreover, diagonalisation of the neutrino mass matrix yields tan(2θ 12 ) = 2(xα + yb) The TB-mixing matrix now arises for tan (2θ 12 ) ≈ 2.828. 
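The quoted value tan(2θ_12) ≈ 2.828 is just 2√2, the tri-bimaximal (TB) value. Indeed, the TB mixing matrix

\[ U_{TB}=\begin{pmatrix}\sqrt{2/3} & 1/\sqrt{3} & 0\\ -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2}\\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}\end{pmatrix} \]

has sin^2θ_12 = 1/3 (together with sin^2θ_23 = 1/2 and θ_13 = 0), i.e. tanθ_12 = 1/\sqrt{2}, so that

\[ \tan 2\theta_{12} \;=\; \frac{2\tan\theta_{12}}{1-\tan^2\theta_{12}} \;=\; \frac{\sqrt{2}}{1/2} \;=\; 2\sqrt{2} \;\approx\; 2.828 . \]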
In figure (5) we plot contours for the above relation in the plane (α, x) for various values of the pairs (b, y).As can be observed, tan (2θ 12 ) takes the desired value for reasonable range of the parameters α, b, x, y. For example We conclude that the simplified (block-diagonal) forms of the charged lepton and neutrino mass matrices are compatible with the TB-mixing. It is easy now to obtain the known deviations of the TB-mixing allowing small values for the parameters c, θ 2 in (53) and (58) respectively. However, we also need to reconcile the ratio of the mass square differences R = ∆m 2 32 /∆m 2 21 with the experimental data R ≈ 32. To this end, we first compute the mass eigenvalues of the effective neutrino mass matrix Notice that ∆ is a positive quantity and as a result m 3 > m 2 . We can find easily solutions for a wide range of the parameters consistent with the experimental data. Note that for the same values as in (61) we achieve a reasonable value of R ≈ 28.16. In figure(6) we plot contours of the ratio in the plane (α, b) for various values of the pair (x, y). We have stressed above that we could generate the θ 13 angle by assuming small values of the Yukawas y 5 . However, this case turns out to be too restrictive since the structure of (58) results to maximal (1 − 2) mixing in contradiction with the experiment. The issue could be remedied by a fine-tuning of the charged lepton mixing, however we would like to look up for a natural solution. Therefore, we proceed with other options. TB mixing from neutrino sector. In the previous analysis we considered the simplest scenario for the Majorana matrix. The general form of the Majorana mass matrix arises by taking into account all the possible flavon terms contributions and has the following form To reduce the number of parameters we consider that f i = f for i = 1, 2, 3 and y 3 , y 4 → 0 in the Dirac matrix. In this case the elements of the effective neutrino mass matrix are with an overall factor ∼ υ 2 u θ 2 1 M 2 (2f 3 −2f 2 m−f 2 M +m 2 M ) and the parameters are defined as a = m/M , b = f /M , c = y 5 , x = y 2 , y = y 1 and θ 0 = θ 1 . The matrix assumes the general structure: Maximal atmospheric neutrino mixing and θ 13 = 0 immediately follow from this structure. The solar mixing angle θ 12 is not predicted, but it is expected to be large. Next we try to generate TB -mixing only from the neutrino sector (assuming that the charged lepton mixing is negligible so that it can be used to lift θ 13 = 0). Then, it is enough to compare the entries of the effective mass matrix with the most general mass matrix form which complies with TB-mixing A quick comparison results to the following simple relations while the (23) element is subject to the constraint: which results to a quadratic equation of b with solutions being functions of the remaining parameters b = B ± (a, c, x, y). We choose one of the roots, b = B − , and substitute it back to the equations (66) to express the parameters u, v and w as functions of (a, c, x, y). The requirement that all the large mixing effects emerge from the neutrino sector imposes severe restrictions on the parameter space. Hence we need to check their compatibility with the mass square differences ratio R. We can express the latter as a function of the parameters R = R(a, c, x, y) by noting that the mass eigenvalues are given by Direct substitution gives the desired expression R(a, c, x, y) which is plotted in figure 7. 
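For reference, one standard parametrisation of the "most general mass matrix compatible with TB mixing" used in the comparison above is the μ–τ symmetric form

\[ m_\nu=\begin{pmatrix} x & y & y\\ y & z & w\\ y & w & z\end{pmatrix}, \qquad x+y=z+w , \]

which is diagonalised exactly by U_{TB} with eigenvalues m_1 = x−y, m_2 = x+2y and m_3 = z−w. (The labels x, y, z, w here are generic placeholders and should not be confused with the text's own parameters.)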
It is straightforward to notice that there is a wide range of parameters consistent with the experimental data. In the first graph of the figure we plot contours for the ratio in the plane (x, y) for various values of a and constant value c = 0.5. In the second graph we plot the ratio in the (a, c) plane with constant x = 0.33. Note that in both cases, the a, c, x, y parameters take values < 1. Having checked that the parameters a, c, x, y are in the perturbative range, while consistent with the TB-mixing and the mass data, we also should require that b = f /M remains in the perturbative regime, i.e. b < 1. In figure 8 we plot the bounds put by this constraint. In particular we plot the mass square ratio in the (x, y) plane for R = 30 and R = 34 and we notice that there exists an overlapping region for values of b between 0.5 and 0.6. In this region Conclusions In this work we considered the phenomenological implications of F-theory SU (5) models with non-abelian discrete family symmetries. We discussed the physics of these constructions in the context of the spectral cover, which, in the elliptical fibration and under the specific choice of SU (5) GUT, implies that the discrete family symmetry must be a subgroup of the permutation symmetry S 5 . Furthermore, we exploited the topological properties of the associated 5-degree polynomial coefficients (inherited from the internal manifold) to derive constraints on the effective field theory models. Since we dealt with discrete gauge groups, we also proposed a discrete version of the flux mechanism for the splitting of representations. We started our analysis splitting appropriately with the spectral cover in order to implement the A 4 discrete symmetry as a subgroup of S 4 . Hence, using Galois Theory techniques, we studied the necessary conditions on the discriminant in order to reduce the symmetry from S 4 to A 4 . Moreover, we derived the properties of the matter curves accommodating the massless spectrum and the constraints on the Yukawa sector of the effective models. Then, we first made a choice of our flux parameters and picked up a suitable combination of trivial and non-trivial A 4 representations to accommodate the three generations so that a hierarchical mass spectrum for the charged fermion sector is guaranteed. Next, we focused on the implications of the neutrino sector. Because of the rich structure of the effective theory emerging from the covering E 8 group, we found a considerable number of Yukawa operators contributing to the neutrino mass matrices. Despite their complexity, it is remarkable that the F-theory constraints and the induced discrete symmetry organise them in a systematic manner so that they accommodate naturally the observed large mixing effects and the smaller θ 13 angle of the neutrino mixing matrix. In the second part of the present article, using the appropriate factorisation of the spectral cover we derive the S 3 group as a family symmetry which accompanies the SU (5) GUT. Because now the family symmetry is smaller than before, the resulting fermion mass structures turn out to be less constrained. In this respect, the A 4 symmetry appears to be more predictive. Nevertheless, to start with, we choose to focus on a particular region of the parameter space assuming some of the Yukawa matrix elements are zero and imposing a diagonal heavy Majorana mass matrix. 
In such cases, we can easily derive block diagonal lepton mass matrices which incorporate large neutrino mixing effects as required by the experimental data. Next, in a more involved example, we allow for a general Majorana mass matrix and initially determine stable regions of the parameter space which are consistent with TB-mixing. The tiny θ 13 angle can easily arise from small deviations of these values or by charged lepton mixing effects. Both models derived here satisfy the neutrino mass squared difference ratio predicted by neutrino oscillation experiments. In conclusion, F-theory SU (5) models with non-abelian discrete family symmetries provide a promising theoretical framework within which the flavour problem may be addressed. The present paper presents the first such realistic examples based on A 4 and S 3 , which are amongst the most popular discrete symmetries used in the field theory literature in order to account for neutrino masses and mixing angles. By formulating such models in the framework of F-theory SU (5), a deeper understanding of the origin of these discrete symmetries is obtained, and theoretical issues such as doublet-triplet splitting may be elegantly addressed. A.1 Four dimensional case From considering the symmetry properties of a regular tetrahedron, we can see quite easily that it can be parameterised by four coordinates and its transformations can be decomposed into a mere two generators. If we write these coordinates as a basis for A 4 , which is the symmetry group of the tetrahedron, it would be of the form (t 1 , t 2 , t 3 , t 4 ) T . The two generators can then be written in matrix form explicitly as: However, it is well known that A 4 has an irreducible representation in the form of a singlet and triplet under these generators. If we consider the tetrahedron again, this can be physically interpreted by observing that under any rotation through one of the vertices of the tetrahedron the vertex chosen remains unmoved under the transformation. 8 In order to find the irreducible representation, we must note some conditions that this decomposition will satisfy. In order to obtain the correct basis, we must find a unitary transformation V that block diagonalises the generators of the group. As such, we have the following conditions: as well as the usual conditions that must be satisfied by the generators: S 2 = T 3 = (ST ) 3 = I. It will also be useful to observe three extra conditions, which will expedite finding the solution. Namely that the block diagonal of one of the two generators must have zeros on the diagonal to insure the triplet changes within itself. If we write an explicit form for V, we can extract a set of quadratic equations and attempt to solve for the elements of the matrix. Note that we have assumed as a starting point that v ij ∈ R∀i, j. The complete list is included in the appendix. The problem is quite simple, but at the same time would be awkward to solve numerically, so we shall attempt to simplify the problem analytically first. If we start be using: we can trivially see two quadratics, Since we assume that all our elements or V are real numbers, it must be true then that: We may now substitute this result into a number of equations. However, we chose to focus on the following two: Taking the difference of these two equations, we can easily see there is a solution where v 11 = v 12 , and as such by the previous result: We are free to choose whichever sign for these four elements we please, provided they all have the same sign. 
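A quick numerical cross-check of the 1 ⊕ 3 block structure being constructed in this appendix can be sketched with an assumed, standard choice of permutation generators, S = (12)(34) and T = (123); the appendix's explicit matrices are not reproduced in the extracted text, so this is only a consistency sketch:

```python
import numpy as np

def perm_matrix(p):
    # builds the matrix sending basis vector e_i to e_{p[i]}
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

# Assumed generators of A4 acting on the four weights (t1, t2, t3, t4):
S = perm_matrix([1, 0, 3, 2])   # the double transposition (12)(34)
T = perm_matrix([1, 2, 0, 3])   # the 3-cycle (123)

I = np.eye(4)
assert np.allclose(S @ S, I)
assert np.allclose(np.linalg.matrix_power(T, 3), I)
assert np.allclose(np.linalg.matrix_power(S @ T, 3), I)   # S^2 = T^3 = (ST)^3 = 1

# Unitary V whose first row is the invariant direction (1,1,1,1)/2; the
# remaining rows are any orthonormal basis of the complement (via QR).
seed = np.eye(4)
seed[:, 0] = 0.5
Q, _ = np.linalg.qr(seed)
V = Q.T
V[0] = np.abs(V[0])             # fix the sign so the first row is +(1,1,1,1)/2

for name, g in [("S", S), ("T", T)]:
    gb = V @ g @ V.T
    # the (1,1) entry equals 1 (the singlet) and the 1x3 mixing blocks vanish
    assert np.allclose(gb[0, 1:], 0) and np.allclose(gb[1:, 0], 0)
    print(name, "in the new basis:\n", np.round(gb, 3))
```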
This outcome reduces the number of useful equations to twelve, as nine of them can be summarised as Let us consider the first of these three derived conditions, along with the conditions: Squaring the condition i v 2i = 0 and using these relations, we can derive easily that v 21 = ± 1 2 . Likewise we can derive the same for v 31 and v 41 . As before, we might chose either sign for each of these elements, with each possibility yielding a different outcome for the basis, though our choices will constrain the signs of the remaining elements in V. Let us make a choice for the signs of our known coefficients in the matrix and choose them all to be positive for simplicity. We are now left with a much smaller set of conditions: After a few choice rearrangements, these coefficients can be calculated numerically in Mathematica. This yields a unitary matrix, up to exchanges of the bottom three rows, which arises due to the fact the triplet arising in this representation may be ordered arbitrarily. There is also a degree of choice involved regarding the sign of the rows. However, this is again largely unimportant as the result would be equivalent. If we apply this transformation to our original basis t i , we find that we have a singlet and a triplet in the new basis, and that our generators become block-diagonal: B Yukawa coupling algebra Table 5 specifies all the allowed operators for the N = 0 SU (5) × A 4 × U (1) model discussed in the main text. Here we include the full algebra for calculation of the Yukawa matrices given in the text. All couplings must have zero t 5 charge, respect R-symmetry and be A 4 singlets. In the basis derived in Appendix A, we have the triplet product: B.1 Top-type quarks The top-type quarks have four non-vanishing couplings, while the T · T 3 · H u · θ a · θ a and T ·T ·H u ·θ a ·θ a ·θ b couplings vanishings due to the chosen vacuum expectations: H u = (v, 0, 0) T and θ a = (a, 0, 0) T . The contribution to the heaviest generation self-interaction is due to the T 3 · T 3 · H u · θ a operator: We note that this is the lowest order operator in the top-type quarks, so should dominate the hierarchy. The interaction between the third generation and the lighter two generations is determined by the T · T 3 · H u · θ a · θ b operator: The remaining, first-second generation operators give contributions, in brief: These will be subject to Rank Theorem arguments, so that only one of the generations directly gets a mass from the Yukawa interaction. However the remaining generation will gain a mass due to instantons and non-commutative fluxes, as in [37] [38]. B.2 Charged Leptons The charged Leptons and Bottom-type quarks come from the same operators in the GUT group, though in this exposition we shall work in terms of the Charged Leptons. The complication for Charged leptons is that the Left-handed doublet is an A 4 triplet, while the right-handed singlets of the weak interaction are singlets of the monodromy group. There are a total of six contributions to the Yukawa matrix, with the third generation right-handed types being generated by two operators. 
The operators giving mass to the interactions of the right-handed third generation are dominated by the tree-level operator F · H_d · T_3 . Clearly this should dominate the next-order operator; however, once we choose a vacuum expectation for the H_d field, we will also have contributions from F · H_d · T_3 · θ_d . The generation of Yukawas for the lighter two generations comes, at leading order, from the operators F · H_d · T · θ_b and F · H_d · T · θ_a , where the vacuum expectations for θ_a and θ_b are as before. The next order of operators takes the same form, but with corrections due to the flavon triplet, θ_d .

B.3 Neutrinos

The neutrino sector admits masses of both Dirac and Majorana types. In the A 4 model, the right-handed neutrino is assigned to a matter curve constituting a singlet of the GUT group. However, it is a triplet of the A 4 family symmetry, which along with the SU (2) doublet will generate complicated structures under the group algebra.

B.3.1 Dirac Mass Terms

The Dirac mass terms coupling left- and right-handed neutrinos come from a maximum of four operators. The leading-order operators are θ_c · F · H_u · θ_b and θ_c · F · H_u · θ_a , where, as we have already seen, the GUT-singlet flavons θ_a and θ_b are used to cancel t_5 charges. The right-handed neutrino is presumed to live on the GUT singlet θ_c . The first of the operators, θ_c · F · H_u · θ_b , contributes via two channels; with the VEV alignments θ_a = (a, 0, 0)^T and H_u = (v, 0, 0)^T , we obtain a total matrix for the operator. The second leading-order operator, θ_c · F · H_u · θ_a , is more complicated due to the presence of four A 4 triplet fields. The simplest contribution to the operator only feeds the diagonal; it is accompanied by two similar contributions. The remaining contributions are the complicated four-triplet products. However, upon retaining our previous vacuum expectation values, these all vanish, leaving an overall matrix in which y_0 = y_1 + y_2 + y_3 as before. These contributions produce a large mixing between the second and third generations; however, they do not allow for mixing with the first generation. Corrections from the next-order operators will give a weaker mixing with the first generation. These correcting terms are θ_c · F · H_u · θ_d · θ_b and θ_c · F · H_u · θ_d · θ_a , though we choose to consider only the first of these two operators, since the flavon θ_a would generate a very complicated structure, hindering computations with little obvious benefit in terms of model building. The θ_c · F · H_u · θ_d · θ_b operator has off-diagonal contributions, mirrored by similar combinations from the other three triplet-triplet combinations allowed by the algebra. Due to the choice of Higgs vacuum expectation, the diagonal contributions only correct the first-generation mass, giving a contribution to it ∼ v d_1 b.

B.3.2 Majorana operators

The right-handed neutrinos are also given a mass by Majorana terms. These are, as it transpires, relatively simple. The leading-order term, θ_c · θ_c , gives a diagonal contribution. There may also be corrections to the off-diagonal entries, due to operators such as θ_c · θ_c · θ_d . Higher orders of the flavon θ_d are also permitted, but should be suppressed by the couplings.

C Flux mechanism

For completeness, we describe here in a simple manner the flux mechanism introduced to break symmetries and generate chirality.
• We start with the U (1) Y -flux inside of SU (5) GU T . The 5's and 10's reside on matter curves Σ 5 i , Σ 10 j while are characterised by their defining equations. From the latter, we can deduce the corresponding homologies χ i following the standard procedure. If we turn on a U (1) Y -flux F Y , we can determine the flux restrictions on them which are expressed in terms of integers through the "dot product" The flux is responsible for the SU (5) breaking down to the Standard Model and this can happen in such a way that the U (1) Y gauge boson remains massless [3,2]. On the other hand, flux affects the multiplicities of the SM-representations carrying non-zero U (1) Y -charge. Thus, on a certain Σ 5 i matter curve for example, we have where N Y i = F Y · χ i as above. We can arrange for example M 5 + N Y i = 0 to eliminate the doublets or M 5 = 0 to eliminate the triplet. • Let's turn now to the SU (5) × S 3 . The S 3 factor is associated to the three roots t 1,2,3 which can split to a singlet and a doublet 1 S 3 = t s = t 1 + t 2 + t 3 , 2 S 3 = {t 1 − t 2 , t 1 + t 2 − 2t 3 } T It is convenient to introduce the two new linear combinations t a = t 1 − t 3 , t b = t 2 − t 3 and rewrite the doublet as follows Under the whole symmetry the SU (5) GU T 10 t i , i = 1, 2, 3 representations transform (10, 1 S 3 ) + (10, 2 S 3 ) Our intention is to turn on fluxes along certain directions. We can think of the following two different choices: 1) We can turn on a flux N a along t a 9 . The singlet (10, 1 S 3 ) does not transform under t a , hence this flux will split the multiplicities as follows This choice will also break the S 3 symmetry to Z 3 . 2) Turning on a flux along the singlet direction t s will preserve S 3 symmetry. The multiplicities now read To get rid of the doublets we choose M = 0 while because flux restricts non-trivially on the matter curve, the number of singlets can differ by just choosing N s = 0. 9 In the old basis we would require Nt 1 = 2 3 Na and Nt 2 = Nt 3 = − 1 3 Na. D The b 1 = 0 constraint To solve the b 1 = 0 constraint we have repeatidly introduced a new section a 0 and assumed factorisation of the involved a i coefficients. To check the validity of this assumption, we take as an example the S 3 × Z 2 case, where b 1 = a 2 a 6 + a 3 a 5 = 0. We note first that the coefficients b k are holomorphic functions of z, and as such they can be expressed as power series of the form b k = b k,0 + b k,1 z + · · · where b k,m do not depend on z. Hence, the coefficients a k have a z-independent part a k = m=0 a k,m z m while the product of two of them can be cast to the form a l a k = p=0 β p z p , with β p = p n=0 a ln a k,p−n Clearly the condition b 1 = a 2 a 6 + a 3 a 5 = 0 has to be satisfied term-by-term. To this end, at the next to zeroth order we define λ = a 3,1 a 5,0 + a 2,1 a 6,0 a 5,1 a 6,0 − a 5,0 a 6,1 The requirement a 5,1 a 6,0 = a 5,0 a 6,1 ensures finiteness of λ, while at the same time excludes a relation of the form a 5 ∝ κa 6 where κ would be a new section. We can write the expansions for a 2 , a 3 as follows i.e., satified up to second order in z. Hence, locally we can set z = 0 and simply write a 2 = λ a 5 , a 3 = −λ a 6
Stepwise and Microemulsions Epoxidation of Limonene by Dimethyldioxirane: A Comparative Study Limonene dioxide is recognized as a green monomer for the synthesis of a wide variety of polymers such as polycarbonates, epoxy resins, and nonisocyanate polyurethanes (NIPU). The developed green technologies for its synthesis over heterogeneous catalysts present a challenge in that the selectivity of limonene dioxide is rather low. Homogeneous epoxidation in the presence of dimethyldioxirane for limonene dioxide synthesis is a promising technology. This study reports the epoxidation of limonene by dimethyldioxirane (DMDO) using two approaches. The isolated synthesis of DMDO solution in acetone was followed by epoxidation of limonene in another reactor in 100% organic phase (stepwise epoxidation). Following this procedure, limonene dioxide could be produced with almost 100% conversion and yield. A second approach allowed using in situ generated in aqueous-phase DMDO to epoxidize the limonene forming a microemulsion with a solubilized surfactant in the absence of any organic solvent. The surfactants tested were hydrosulfate (CTAHS), bromide (CTAB), and chloride (CTAC) cetyltrimethylammonium. All these surfactants showed good stability of microemulsions at aqueous surfactant concentrations above their critical micellar concentrations (CMC). Stability is obtained at the lowest concentration when using CTAHS because of its very low CMC compared to CTAB and CTAC. The major advantages of epoxidation in microemulsions compared to DMDO stepwise epoxidation are the absence of an organic solvent (favoring a low reaction volume) and the very high oxygen yield of 60 to 70% versus 5% in a stepwise approach. The epoxides formed are easily separated from the aqueous medium and the surfactant by liquid–liquid extraction. Therefore, the developed in situ epoxidation process is a green technology conducted under mild conditions and convenient for large-scale applications. INTRODUCTION The environmental and health restrictions on polymers formed from bisphenol A, phosgene, and epichlorohydrin that are usual components of fossil carbon-derived materials make the development of alternative polymers from biomass a great necessity. 1−3 Limonene, a most common terpene, is produced by more than 300 plants and available as a waste product from citrus juice industries. It is very competitive as an alternative green monomer because of its availability and low cost. 4−6 It is estimated that only orange juice industries could have produced more than 520 kilotons of R-(+)-limonene in 2014. 5 The quantity extracted and commercialized on the market does not exceed 90 kilotons per year; it is used in great part in the cosmetic, pharmaceutical, and food fields. This brings a major interest to valorize this abundant waste. Several researchers have been interested in limonene as a monomer for the synthesis of renewable polymers, but most research on the polymerization of this natural diolefin in its crude form has resulted in polymers that do not have the desired properties (low glass transition temperature, low molecular weight, and/or low conversion) to compete with currently commercially available polyolefins. 7−9 A more attractive approach is to functionalize limonene into an oxygenated intermediate that can produce a wide variety of more competitive polymers. 
7,10−12 Specifically, 1,2-limonene oxide or limonene dioxide have all been identified as raw materials for the synthesis of green polymers such as nonisocyanate polyurethanes (NIPU) and green polycarbonates. 12−14 Studies have shown that by cycloaddition, limonene biscarbonate can be synthesized by the reaction of carbon dioxide with limonene dioxide, which in turn can react with polyfunctional amines to produce NIPUs with thermal and mechanical properties required from thermosetting or thermoplastic materials. 12,13,15 Also, poly(limonene carbonate) or poly(limonene biscarbonate) can be synthesized over a variety of catalysts forming green polycarbonates with properties that can compete with those of their petrochemical equivalents. 10,11,14,16 Given the importance of limonene dioxide for the synthesis of biobased polymers, the development of an epoxidation process addressing health and environmental issues is crucial. Several green processes for limonene epoxidation are reported in the literature. 17−19 A tungsten-based polyoxometalate catalyst used with aqueous H 2 O 2 as an oxidant was proposed for the solvent-free catalytic epoxidation of trisubstituted alkene bonds from a wide range of biorenewable terpene substrates. 19 1,2-Limonene oxide was obtained in 95% yield after 1 h of reaction using 1 equiv of aqueous H 2 O 2 (30%) at pH = 7, and no trace of limonene dioxide was detected under these conditions. Limonene dioxide was only obtained using 50% H 2 O 2 , 30 mol % Na 2 SO 4 , and toluene as the solvent in 69% yield after 7 h of reaction. Madadi et al. synthesized cobalt-substituted mesoporous SBA-16 catalysts for use in epoxidation with molecular oxygen. 20 Limonene epoxides were obtained using isobutyraldehyde as a co-reactant under very mild conditions in the presence of ethyl acetate as a green solvent. A conversion of 100% is achieved but the yield of limonene dioxide does not exceed 35%. In most cases, a high yield of limonene dioxide is a challenge for the heterogeneous epoxidation of limonene. The same challenge was observed when Charbonneau et al. epoxidized limonene over a Ti-SBA 16 catalyst. 21 The reaction was conducted at 75°C in acetonitrile with a tert-butyl hydroperoxide (TBHP)/ limonene molar ratio of 11/6. Limonene conversion reached 80% with a 1,2-limonene oxide selectivity of 79%, but no limonene dioxide was obtained. Liquid-phase reactions in the absence of any heterogenous catalyst yielded almost 100% of limonene dioxide. 17,22−24 Very recently, the two-step stereoselective epoxidation of limonene without a catalyst was performed in the presence of Nbromosuccinimide. 22 The technique consists in a first step to perform di-bromohydration of the endocyclic and exocyclic bonds of limonene to form a dibromohydrin intermediate. The latter is added to a NaOH solution to form limonene dioxide. Thus, after 5 min of reaction, a 97% yield of trans-limonene dioxide was obtained using a molar ratio limonene/NBS of 1:2 at 60°C by optimizing various parameters such as reaction temperature, molar ratio (limonene/N-bromosuccinimide (NBS)), and reaction time. 22 Another relevant technique for the liquid-phase epoxidation of limonene is the one-step epoxidation by in situ generation of dimethyldioxirane using oxone as the oxidant with excess acetone. 
23 In this technique, the double epoxidation of limonene is carried out in semicontinuous fed mode at room temperature with a flow rate of 4 mL min −1 of aqueous oxone for a period of 45 min with a stoichiometric excess of 30% of oxone. A 97% yield of limonene dioxide is obtained. In a similar recent study, the effect of ultrasonic agitation on epoxidation was shown. 24 Limonene dioxide, α-pinene oxide, β-pinene oxide, farnesol trioxide, 7,8-carvone oxide, and carveol oxide were each obtained with 100% conversion and at least 97% yield during a reaction time of at least 1 h with magnetic stirring versus at most 8 min in the cases of ultrasonic stirring. The challenge of this process is that it involves a large amount of solvent and therefore a very high reaction volume and requires an excess of oxone. This makes it difficult to apply on a large scale. Very recently, in a new study, the use of CTAHS as a surfactant to solubilize the hydrophobic terpene to be epoxidized by in situ dimethyldioxirane (DMDO) generation has been reported, and it allowed to exclude the presence of any additional organic solvent and this allowed to reduce considerably the amount of oxone necessary for the epoxidation of limonene and pinenes. 17 The present work consists in epoxidizing terpenes by DMDO using two different processes. The DMDO stepwise synthesis process, in which, as a first step, DMDO is synthesized and isolated in acetone in order to, in a second step, epoxidize terpenes in a 100% organic medium. A second process is the microemulsion allowing to epoxidize terpenes by in situ-generated aqueousphase DMDO while testing the effect of other surfactants (CTAB and CTAC) in order to better disperse the hydrophobic terpenes to be epoxidized. This part of the work is based mainly on the optimization of the amount of acetone, surfactants, and sodium bicarbonate as the buffer. Finally, a comparison of the two processes will be discussed. EXPERIMENTAL SECTION 2.1. Materials. The suppliers, characteristics, and structures of the main products used in this work are summarized in Table 1. The products were used directly without any preliminary purification. The substrates to be epoxidized are terpene compounds including limonene, alpha-pinene, and beta-pinene, with special attention to limonene. The oxygenating agent is the triple salt designated as oxone. It is important to remember that not all components of the latter are active. The active compound (oxygen generator) is potassium peroxymonosulfate, KHSO 5 (CAS number: 10058-23-8) (Scheme 1). According to the manufacturer's information, the activity of the compound could decrease by 0.5% per month if stored under the recommended conditions. It is therefore recommended to use freshly purchased oxone for synthesis. Oxone in aqueous solution is strongly acidic. This acidity is regulated by using sodium bicarbonate in order to keep the reaction medium close to neutrality, favorable to epoxidation. The effects of the surfactants CTAHS, CTAB, and CTAC on the epoxidation rate will be examined. All three surfactants have the same hydrocarbon chain (cetyltrimethyl ammonium), their difference being in the counterion. Their physicochemical properties are different, and this could have an effect on the organic reaction. 25 Methods. 2.2.1. Isolated Synthesis of DMDO and Epoxidation (Stepwise). Several results on the production of DMDO to epoxidize olefins have been reported. 
26−29 The method and conditions for the isolated synthesis of DMDO are almost identical in all these references. The method considered as a reference for the synthesis of DMDO in this study is that of Mikula et al. 26 The synthesis is carried out in a glass jacketed reactor of the AI R 20 L series with a cooler and pump, all provided by "Across International". The method was reproduced in the present work. To a vigorously stirred solution of sodium bicarbonate (2.5 kg) in water (4.125 L) and acetone (3 L) at 20°C, potassium monoperoxy sulfate (oxone) was added in portions (5 × 0.81 kg) every 15 min. Simultaneously, a reduced pressure of 400 mbar was applied to avoid overpressure within the reactor caused by outgassing from the addition of oxone. The reactor is connected to a nitrogen gas stream as carrier gas. A condenser connected to a cryostatic temperature controller operating at −20°C allowed collecting a yellow solution of dimethyldioxyran in acetone and recovering as a distillate, also cooled to −20°C. The prepared solution is dried with magnesium sulfate. The product was immediately quantified by iodometric titration and stored in a freezer below −20°C. The freshly prepared DMDO solution was used without delay to epoxidize each of limonene, alpha-pinene, and betapinene. For this purpose, 1 mmol of limonene was reacted with 2.3 mmol of DMDO, 1 mmol of alpha-pinene with 1.2 mmol of DMDO, and 1 mmol of beta-pinene with 1.2 mmol of DMDO. All reactions were conducted at 0°C to avoid evaporation of acetone from the DMDO solution. After 20 min of reaction, the acetone is evaporated under vacuum at room temperature and the resulting products are sent for quantification. Determination of the DMDO Concentration in Acetone. Several methods are described in the literature to determine the concentration of dioxirane groups in DMDO solutions, including iodometric titration, methylphenyl sulfide or dimethyl sulfide oxidation, UV−visible technique, and NMR. 26,30−32 For our study, the concentration of dioxirane groups was determined by iodometric titration following the one proposed by Adam. 30 For this purpose, a 1 mL solution of dimethyldioxirane in acetone was added to a 2 mL solution of acetic acid and acetone with a molar ratio of 3:2. A 2 mL saturated aqueous solution of potassium iodide (KI) was then added with some dry ice to deaerate, and the resulting mixture was kept in the dark at room temperature for 10 min. The sample was diluted with 5 mL of distilled water. A 1 mL sample was titrated with 0.001 N aqueous sodium thiosulfate (Na 2 S 2 O 3 ) solution. The titration was repeated three times, resulting in dioxyrane concentrations of 0.06 to 0.07 mol/L. Epoxidation in Microemulsions: In Situ-Synthesized DMDO for Expoxidation. All reactions were conducted under ambient conditions. The aqueous concentrations of surfactants were set at values close to their respective critical micellar concentrations (CMCs) (0.27, 0.90, and 1.40 mM, respectively, for CTAHS, CTAB, and CTAC). The aqueous concentration of oxone was 0.42 mol/L. The amount of sodium bicarbonate was so that the final pH of the reaction medium was above 7. For this, a sodium bicarbonate/oxone molar ratio higher than or equal to 3 is required. Vigorous stirring and slow addition of the oxone are required for effective epoxidation. The procedure of a typical test is as follows: in a 200 mL flask, 25 mL of distilled water, 33 mmol of sodium bicarbonate, and 5.4 mM CTAHS are mixed. 
In 4 mL of acetone, 6.2 mmol of limonene is dissolved and added to the flask. The whole mixture is stirred at 600 rpm. Oxone (10.5 mmol, or 6.5 g) is added in 5 fractions in order to avoid the release of gas (i.e., 1.30 g every 5 min, very slowly). At the end of the addition of oxone, the reaction is continued for another 20 min, which makes a total reaction time of 40 min. At the end of the reaction, the organic phase is separated with a separating funnel using diethyl ether as the extractant and dried with MgSO 4 . The limonene epoxide is separated from the extractant using a rotary evaporator. Extraction of Terpene Epoxide from the Aqueous Phase and Surfactant. In the case of terpene epoxidation in a 100% organic medium, that is, when DMDO is synthesized and isolated in acetone in order to epoxidize the terpene, the terpene epoxides are easily isolated using a rotary evaporator to remove the acetone. On the other hand, in the case of epoxidation in an aqueous medium by dispersing the terpene with a surfactant, the separation of the epoxide from the aqueous medium and the surfactant depends on the choice of the extractant. The extractant to be used must meet two criteria: solubilize the epoxides and not dissolve the surfactants (CTAHS, CTAB, and CTAC). The appropriate extractants are hexadecane, benzene, ethers, and so forth. Diethyl ether was chosen because of its low boiling point (34.6°C). Indeed, at the end of the reaction, about 20 mL of diethyl ether is added to the reaction mixture, and the whole is put into a separating funnel. The two phases, organic and aqueous, are quite distinct: the epoxides dissolve spontaneously in the diethyl ether phase (organic phase) while the surfactant as well as the waste products from the oxone and the sodium bicarbonate are trapped in the aqueous phase. The organic phase was dried with MgSO 4 and filtered. Terpene epoxide was isolated after evaporation of diethyl ether and acetone using a rotary evaporator. The yields for limonene dioxide isomers were determined by 1 H NMR. The spectra were recorded on a Varian Inova instrument at 400 MHz, with 32 scans and a relaxation time of 2 s. Approximately 10 mg of sample was diluted in approximately 1 g of CDCl 3 . The isolated yield of each isomer is determined according to eq 3 after zooming in on and integrating the area (A) of each isomer peak. 26 For optimal yield, this synthesis requires vigorous stirring, periodic slow addition of oxone, application of reduced pressure, and a condensation temperature of the DMDO solution below −20°C. The Mikula et al. procedure was reproduced using the double-jacketed reactor of the AI R 20 L series of "Across International". Initially, the whole quantities of water, acetone, and sodium bicarbonate necessary for the synthesis were admitted into the reactor and stirred vigorously (600 rpm). The oxone was added in small fractions. To avoid the overpressure related to the addition of the oxone, an initial reduced pressure of 400 mbar was applied in the reactor. The condensate was collected in the reflux condenser operating at −20°C under cryostatic control. After 1 h 30 min of reaction, a 1.7 mL solution of DMDO is collected and quantified. Characterization and Quantification of the Produced DMDO. To avoid decomposition of the produced DMDO, it is necessary to store it in a freezer at temperatures around −20°C. At room temperature, DMDO would spontaneously convert to acetone within a few days. Mikula et al.
studied the decomposition of DMDO over a storage period of 23 months at −20°C. 26 The freshly prepared DMDO was 72 mM. During the first 12 months, the product experienced a decrease in concentration to 40 mM, followed by a very small decrease over the next 11 months (38 mM). To avoid the loss, it is therefore advantageous to use freshly synthesized DMDO for the epoxidation reaction. The terpenes (limonene and pinenes) were then epoxidized (Scheme 2). Limonene dioxide was synthesized in almost 100% yield over a reaction time of 20 min. The reaction is conducted at 0°C to avoid undesirable thermal decomposition of DMDO to acetone. Under the same conditions, pinenes are also easily epoxidized with almost half the amount of DMDO compared to limonene because of their single double bond. The use of separately generated DMDO acetone solutions for the oxidation of various target molecules was reported by several authors. 35 The main advantage is the stoichiometric epoxidation of substrates in a 100% organic reaction medium. Nearneutral pH conditions must however be ensured. 36−38 On the other hand, the DMDO isolated in acetone thus produced is in very low quantity compared to the quantities of reagents involved (especially the quantity of oxone). Table 2 summarizes some literature results compared to our result concerning the employed reagents and the produced DMDO. In this regard, optimization studies estimated that when prepared beforehand as an acetone solution, the yield of DMDO with respect to potassium monoperoxysulfate (oxone) did not exceed 5%. 34 Moreover, it has been reported that condensation temperatures below −40°C are necessary to improve the concentration of the DMDO in these solutions. 26 Some studies have mentioned that the application of trifluoroacetone (CF 3 COCH 3 ), as a replacement for acetone (CH 3 COCH 3 ), allowed the collection of dioxirane solutions in the parent ketone with concentrations 6 to 8 times higher than that in solutions usually obtained using the above described procedure. 39 These studies revealed, however, that the stability of the corresponding dioxirane is very low compared to that of DMDO; upon storage at −20°C, the decomposition of this dioxirane is 6% after 48 h, and a loss of 30% for 120 h at 0°C is observed. This decomposition led mainly to methyl trifluoroacetate (CF 3 COOCH 3 ) and trifluoroacetic acid. 39 In contrast, the in situ application of DMDO precludes prior isolation and allows for a significantly improved yield relative to oxone. The following section will discuss the in situ DMDO epoxidation while discussing the effect of homogenization by different surfactants of the biphasic aqueous and organic reaction medium. 17,23,24,35 In this technique, DMDO is generated at neutral pH as an intermediate after in situ reaction of acetone with potassium monoperoxysulfate KHSO 5 , commercially available as oxone or caroat (Scheme 3). This DMDO route appears to allow reaching very high oxygen yields for epoxidation compared to the stepwise application of DMDO. Scheme 4 illustrates the in situ epoxidation of terpene by DMDO. In an aqueous solution of oxone at near-neutral pH (pH = 7−8) and in the presence of acetone, the hydrogen sulfate anion (HSO 5 − ) reacts with acetone to produce DMDO. A broad development of dioxirane chemistry on kinetic, stochiometric, and 18 O labeling data allowed to establish a mechanism of DMDO formation (Scheme 3). 
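As a side note on the iodometric assay described in the Experimental Section, the concentration of the isolated DMDO solution can be back-calculated from the titrant volume. The sketch below assumes the usual 2:1 thiosulfate:DMDO stoichiometry and the dilution scheme given above (1 mL of DMDO solution made up to roughly 10 mL before a 1 mL aliquot is titrated); it is an illustration, not the authors' calculation.

```python
# Illustrative back-calculation of the DMDO concentration from the iodometric
# titration. Assumed stoichiometry: DMDO + 2 I- + 2 H+ -> acetone + I2 + H2O;
# I2 + 2 S2O3^2- -> 2 I- + S4O6^2-, i.e. 2 mol thiosulfate per mol DMDO.
# The 10 mL total diluted volume is an assumption based on the volumes listed above.

def dmdo_concentration(v_titrant_ml: float,
                       n_titrant: float = 0.001,      # normality of Na2S2O3 (eq/L)
                       v_aliquot_ml: float = 1.0,      # volume of the diluted mixture titrated
                       v_diluted_ml: float = 10.0,     # 1 mL DMDO + 2 mL AcOH/acetone + 2 mL KI + 5 mL H2O
                       v_dmdo_sample_ml: float = 1.0   # DMDO solution taken for the assay
                       ) -> float:
    """Return the DMDO concentration (mol/L) in the acetone solution."""
    mol_thio = n_titrant * v_titrant_ml / 1000.0       # mol thiosulfate consumed
    mol_dmdo_aliquot = mol_thio / 2.0                   # 2 mol thiosulfate per mol DMDO
    mol_dmdo_total = mol_dmdo_aliquot * v_diluted_ml / v_aliquot_ml
    return mol_dmdo_total / (v_dmdo_sample_ml / 1000.0)

# Example: ~13 mL of 0.001 N thiosulfate corresponds to ~0.065 mol/L DMDO,
# within the 0.06-0.07 mol/L range reported above.
print(round(dmdo_concentration(13.0), 3))
```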
40−42 According to some authors, there is additional formation of a hemiacetal intermediate, which evolves at the end to give DMDO. 38 Indeed, DMDO is known as an electrophilic oxygen transfer reagent, it can then be attacked by a variety of electron-rich substrates (terpene), yielding epoxidation products (epoxidized terpene) (Scheme 4); DMDO can also be simultaneously attacked by the hydrogen sulfate anion (HSO 5 − ), giving the sulfate anion (HSO 4 − ) and molecular oxygen (Scheme 5). Molecular oxygen may also result from autodecomposition of oxone (Scheme 6). In some cases, as a result of phase incompatibility within the substrate, acetone is employed in large quantities serving as a phase transfer intermediate and also as an easily regenerated catalyst. These effects considerably deplete the oxygen yield; for example, for the complete epoxidation of limonene, an oxone/limonene ratio of 2.6 was required. 23 An oxone/limonene molar ratio equal to 1 could suffice to completely epoxidize limonene stoichiometrically (the number of moles of active oxygen is equal to twice the number of moles of the oxone). Indeed, in order to decrease the reaction volume, that is, to exclude the organic solvent and to minimize the loss of active oxygen, a surfactant can be used in order to use micellar catalysis; micellar catalysis in organic synthesis in general is widely defined in the literature. 43 Specifically for the case of the epoxidation of a hydrophobic terpene by DMDO in an aqueous medium, the oxone dissolves in the majority aqueous phase in which the hydrophobic terpene is epoxidized via DMDO. At an aqueous surfactant concentration above the CMC, micelles are formed in which the terpene is finely dispersed spontaneously; this forms a nearly homogeneous and indefinitely stable mixture. Acetone, in turn, is a compound that is both miscible in organic and aqueous media and is converted to DMDO by reaction with the oxone in the aqueous medium in order to migrate to the interface or within the micelle to epoxidize the terpene. With this in mind, the surfactant effect will be investigated in the case of aqueous phase epoxidation for concentrations before and after micelle formation. The application of a surfactant to disperse the hydrophobic terpene in aqueous medium could considerably reduce the acetone and oxone content as well as the reaction time to fully epoxidize a terpene. The distribution of terpene, oxone, surfactant, DMDO, and acetone within the micelle is shown in Scheme 7. 17 Optimization of the Reaction Conditions. Before testing the effect of CTAHS, CTAB, and CTAC for microemulsion stabilization of the biphasic reaction medium, the parameters such as reaction time, oxone/terpene molar ratio, and NaHCO 3 /oxone molar ratio required for maintaining the pH close to neutrality and amount of acetone required were optimized. In a previous study, the reaction time and the oxone/terpene molar ratio were already optimized. 17 This part of the work will mainly consist of optimizing acetone and sodium bicarbonate as the buffer and testing and comparing the stability of three surfactant homologs (CTAHS, CTAB, and CTAC). Optimization of the Amount of Sodium Bicarbonate as the Buffer. For an epoxidation reaction in using oxone, the presence of a buffer to maintain the reaction medium at a pH close to neutral is fundamental. A saturated aqueous solution of oxone has a pH below 2. The epoxidation reaction could possibly be carried out over the range of initial pH = 1.5 to 12. 
For an initial value of 1.5 to 8.6, the pH is adjusted with NaHCO 3 ; for pH values above 8.6, the pH is adjusted with NaOH. Figure 1 summarizes the results of these pH tests. At very acidic or basic pH values, no conversion of limonene is observed. At pH = 7.2 to 8.6, the oxone has completely reacted. The amount of NaHCO 3 introduced in a saturated aqueous solution was optimized by measuring the pH at various NaHCO 3 /oxone ratios (Table 3). A minimal NaHCO 3 /oxone ratio of 3 was required. The major advantage of the epoxidation of limonene by DMDO lies in its high selectivity toward epoxides (Scheme 8). The application of CTAHS as a surfactant for this reaction is no exception. The only reaction intermediate is 1,2-limonene oxide. There was preferential attack of DMDO on the endocyclic double bond of limonene, forming 1,2-limonene oxide first, due to the electron-rich nature of the trisubstituted bond compared to the disubstituted exocyclic bond. No allylic oxidation favoring the formation of carveol or carvone, as in the case of epoxidation using hydrogen peroxide or tert-butyl hydroperoxide as the oxidant, was observed. Also, no secondary reaction by epoxide ring opening favoring diol formation was observed. Similarly, the NMR spectrum of the completely epoxidized limonene shows no proton adjacent to a diol or ketone function. Details of the proton NMR analysis will be reported below. The NMR spectra of limonene before and after complete epoxidation are presented in the information sheet (appendix). 3.2.2.2. Optimization of the Amount of Acetone as the Catalyst in the Absence of an Organic Solvent. Early work dealing with epoxidation reactions in the presence of a peracid had shown that the presence of a ketone accelerates the reaction. 38,44 This led to the discovery that in the presence of a ketone, the oxygen in the peracid passes through an intermediate, which is a dioxirane. Ketones have therefore been considered as catalysts for this type of reaction. Indeed, in addition to the use of a ketone and the aqueous oxone, the olefin is solubilized in an organic solvent, and therefore, the reaction is conducted in a biphasic medium. 45−47 On the other hand, the epoxidation reaction is possible even in the absence of an inert organic solvent; this has been discussed recently by Charbonneau et al. 23,24 The epoxidation reaction was carried out using exclusively acetone both to generate DMDO and as an organic solvent. Because of the high solubility of acetone in water, however, acetone is used in large quantities so that its concentration reaches its maximum equilibrium value. This leads to a large reaction volume and a low yield of active oxygen. Very recently, a new epoxidation technique consisting of dispersing the hydrophobic terpene in the aqueous phase using a surfactant in order to exclude the organic solvent has been developed. 17 At very low surfactant concentrations above the CMC, micelles spontaneously form in which limonene is entrapped, forming microemulsions. For this purpose, the use of an organic solvent or acetone in large quantities is no longer necessary. Acetone should be used in small quantities to act as a catalyst (in situ DMDO generator). To optimize the amount of acetone needed, the epoxidation reaction is carried out with different amounts of acetone while keeping the same reaction conditions. At this stage, the amount of surfactant is not optimized, and only CTAHS is tested as the surfactant. Figure 2 shows the results of the epoxidation of limonene with different amounts of acetone.
The reaction sequence is as described in the experimental part. Figure 2 shows the conversion and yield of limonene dioxide as a function of the amount of acetone. When the reaction is carried out without acetone (at the 0 mL acetone point), there is a low conversion of limonene without any formation of limonene dioxide. After the use of acetone, the conversion and yield increase exponentially with the amount of acetone. Only 4 mL of acetone was sufficient to completely convert limonene to limonene dioxide; this is equivalent to an acetone/water volume ratio of less than 1:5. The required oxone/alkene molar ratio is 1.6 for limonene (double unsaturation) and 0.9 for alpha-pinene (single unsaturation) for a reaction time of only 40 min. In order to compare the efficiency of the microemulsion technique to other techniques used for epoxidation using oxone and acetone such as epoxidation in biphasic medium in the presence and without phase transfer catalyst (18-crown-6 or n-Bu4NHSO4) and epoxidation in the presence of excess acetone as a dioxirane generator and an alkene solubilizing solvent, the monosaturated terpene (alpha-pinene) is also epoxidized and the results are summarized in Table 4. Table 3 for reaction conditions. The reaction conditions are as described in the experimental part. At ambient temperature, 25 mL of distilled water, 0.05 g of CTAHS, and 6.2 mmol of limonene were diluted in 5 mL of acetone; 10.5 mmol of oxone or 6.5 g is added in five fractions (i.e., 1.30 g every 5 min very slowly). By varying the amount of sodium bicarbonate, the sodium bicarbonate/oxone ratio was established. b The pH is measured at the end of the reaction. For Table 4, as the aim of this work is a comparative study, a comparison of the present work (reaction in the microemulsion system) with previous studies is reported. From Table 4, the comparison variables such as the reaction time, amount of oxone, and the need for an organic solvent are compared side by side. For this purpose, more details of epoxidation chemistry in microemulsions are already reported in our former publications. 17,48 The present work is devoted much more on the comparison of processes using oxone as the oxidant through dimethyldioxirane. ACS Omega Epoxidation technologies that solubilize the alkene to be epoxidized in an organic solvent for a two-phase reaction medium (benzene−H 2 O, CH 2 Cl 2 −H 2 O, or acetone−H 2 O) require a volume of solvent that is generally equal to the volume of water needed to dissolve the oxone. This leads to a loss of active oxygen (excess oxone), as observed in most cases (Table 4). Indeed, an oxone/alkene molar ratio higher than 2 At ambient temperature, 25 mL of distilled water, 0.05 g of CTAHS, 33 mmol of sodium bicarbonate, 6.2 mmol of limonene diluted in 4 mL of acetone; 10.5 mmol of oxone or 6.5 g is added in five fractions (i.e., 1.30 g every 5 min very slowly). b 25 mL of distilled water, 0.05 g of CTAHS, 16 mmol of sodium bicarbonate, 6.2 mmol of α-pinene diluted in 4 mL acetone, and 5.3 mmol of oxone added in five fractions. The reaction was conducted as described in the Experimental Section. c The molar number of active oxygen is equal to twice the molar number of the oxone according to the empirical formula of the oxone described in the Experimental Section. was required to completely epoxidize the monosaturated alkene over a high reaction time compared to the microemulsion (H 2 O-surfactant) epoxidation technique. 
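The oxygen economy behind this comparison can be made concrete with the quantities of the typical microemulsion test given in the Experimental Section (6.2 mmol limonene, 10.5 mmol oxone) and the statement that each mole of oxone carries two moles of active oxygen. The sketch below is only a bookkeeping illustration of that comparison, not a calculation reported in the study.

```python
# Worked example of the active-oxygen bookkeeping discussed above.
# Active oxygen = 2 x mol oxone (two KHSO5 per oxone formula unit, as stated in the text).

def oxygen_yield(mmol_substrate: float, epoxides_per_molecule: int,
                 mmol_oxone: float, conversion: float = 1.0) -> float:
    """Fraction of the oxone's active oxygen that ends up in epoxide groups."""
    mmol_o_used = mmol_substrate * conversion * epoxides_per_molecule
    mmol_o_available = 2.0 * mmol_oxone
    return mmol_o_used / mmol_o_available

# Limonene -> limonene dioxide at full conversion: ~59% oxygen yield,
# consistent with the 60-70% range quoted for the microemulsion route.
print(f"{oxygen_yield(6.2, 2, 10.5):.0%}")
# For comparison, the stepwise route isolates at most ~5% of the oxone's
# oxidizing capacity as DMDO before the epoxidation step even begins.
```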
In contrast, in the case of microemulsion epoxidation, because the reaction is conducted without an organic solvent, replaced by minor amounts of surfactants, very high conversions and yields are achieved with only oxone/alkene molar ratios lower than 1 for alkene with one unsaturation (alpha-pinene) and 2 for alkene with double unsaturation (limonene). The reaction time is only 40 min. In the extensive literature available on the epoxidation of alkenes with oxone, the required oxone/alkene molar ratio was never less than 2. With the epoxidation reaction conditions at microemulsions, the use of the necessary oxone is reduced to at least 50%. Effect of CTAHS, CTAB, and CTAC for DMDO In Situ Epoxidation. The importance of surfactant interactions at the aqueous phase-organic phase (terpene) interface for the in situ DMDO epoxidation reaction has been discussed previously. 17 Oxone is an aqueous oxidizing agent, and limonene is a hydrophobic substrate; thus, the oxygen must be transferred from the aqueous phase to the limonene. To this end, the application of a surfactant can effectively improve the dispersion of the hydrophobic terpene in the aqueous phase by promoting the formation of a microemulsion. According to the literature, besides the presence of the oxidant and substrate, microemulsions are formed at specific compositions of water, oil, a surfactant, and a co-surfactant (usually an alcohol with an alkyl group between C 4 and C 8 ). 49−53 In the case of terpene epoxidation in the presence of in situ-generated DMDO, the formation of microemulsions involves only water, terpene, surfactant, and acetone at specific concentrations. 17 To optimize the amount of surfactant required to microemulsify limonene, epoxidation reactions were conducted at different surfactant concentrations at values near the CMC at room temperature. Reaction tests were carried out under the same conditions while comparing the three homologs of cetyltrimethylammonium hydrogen sulfate (CTAHS), bromide (CTAB), and chloride (CTAC). Table 5 reports the CMCs of the different surfactants employed. Figure 3 displays the limonene conversion and LDO yield obtained in the series of epoxidation tests. The reaction conditions were as described in the Experimental Section. The double epoxidation of limonene occurs first after epoxidation of the endocyclic unsaturation as an intermediate, and then, the epoxidation of the exocyclic unsaturation is carried out to obtain the limonene dioxide. According to Figure 3, the most important phenomenon to consider for homogenization of the reaction medium is the use of the surfactant at a concentration above the CMC. At concentrations below the CMC (i.e., before micelle formation), no significant effect of surfactant concentration on conversion and yield was observed for all surfactants employed. The conversion of limonene is about 60% with a yield of limonene dioxide not exceeding 35% and the presence of limonene monoxide as the intermediate. The CTAHS micelles form at a very low concentration (CMC of 0.27 mM) compared to the CTAB and CTAC micelles (0.93 and 1.40 mM, respectively). 25,54,55 The stabilities of these three surfactants are different. At only an aqueous phase concentration of CTAHS of 0.67 mM, conversion and yield of almost 100% are achieved; for CTAB, an aqueous phase concentration of 1.35 mM was sufficient to obtain the same results. This allows us to discover that CTAHS and CTAB homogenize the biphasic emulsion at concentrations very close to their respective CMC. 
On the other hand, for CTAC, conversions and yields of almost 100% could be reached only from the aqueous concentration of 5.4 mM. For this epoxidation process, several advantages can be listed by comparing with stepwise DMDO epoxidation as listed in Table 6. For separate preparation of DMDO, the technique is very tedious and requires additional additives and steps, which are not necessary in the application of microemulsions. In the case of microemulsions, acetone acts as a catalyst and no organic solvent is needed. In addition, the amount of acetone required, the oxygen yield with respect to oxone, and the low reaction volume make the in situ technique under microemulsions certainly eligible for large-scale applications. Stereoisomers of LDO. Generally, the epoxidation of chiral terpenes gives rise to several isomers regardless of the nature of the technology used. 19,56 Nonstereoselective epoxidation of the endocyclic unsaturation of limonene gives rise to a mixture of trans-limonene oxide and cis-limonene oxide. 19,20,56 After epoxidation of the exocyclic unsaturation to form limonene dioxide, there is the formation of an additional center of chirality at carbon C8. The number of diastereomers is then at a total of four (Scheme 9), among which 2 trans and 2 cis cannot be physically separated. 57,58 The importance of the reactivity of these cis and trans diastereomers is different when they are used as precursors in the formation of biosourced polymers. 14,56,59 These isomers are often quantified by chromatography using a chiral column or by proton NMR. 22,56,60−62 As in our case, the limonene dioxide formed by in situ DMDO epoxidation reaction is a mixture of four isomers: two stereoisomers trans and cis, each of which has two enantiomers R and S (Scheme 9). The sample of this mixture of isomers was analyzed by proton NMR (Figure 4). The different proton NMR peaks of these four isomers were identified by comparison to the relative position of the peaks of the commercial LDO isomers and the peaks of the isolated isomers reported in literature. 57,59 From 1.17 to 1.22 ppm, the methyl proton bound to the carbon at position 7, due to its direct linkage to the ring, is identified by four peaks of varying intensities characterizing the quantities of the four isomers. The integration of each of these four peaks is used to estimate the quantities of different isomers of the LDO. According to the literature, the methyl proton bound to the carbon at position 10 is not affected by the different conformations of the isomers because of its exocyclic position. 57,59 This allowed its resonance to be identified by a single peak around 1.28 ppm. As in the literature, the appearance of peaks between 3.4 and 3.5 ppm indicates the presence of oxirane (epoxide). 63 This same quantification technique was used when LDO isomers were obtained by aerobic epoxidation using cobalt-substituted mesoporous SBA. 20 A 60:40 mixture of cis and trans LDO is obtained at proportions close to those in our study for the four isomers. 56 Epoxidation of limonene with a tungsten-based polyoxometalate catalyst using aqueous H 2 O 2 as an oxidant also led to a mixture of the cis and trans isomers of 1,2limonene oxide. 19 Quantification of these isomers using chromatography also gave a 57:43 ratio of cis and trans 1,2 limonene oxide, which is also a ratio close to the results of the present study. CONCLUSIONS This study investigated epoxidation processes using DMDO under two approaches. 
One approach is to synthesize DMDO separately and isolate it in acetone to epoxidize the terpene in a 100% organic medium. A second approach is the in situ epoxidation in the aqueous phase under the presence of different surfactants to microemulsify the terpene. The objective was to compare the two processes. The epoxidation in stepwise synthesis of DMDO proved to be very tedious. Moreover, the conditions of DMDO synthesis and the quantities of reagents involved are not sufficient to warrant large-scale applications. Indeed, this isolated DMDO synthesis process is a promising route for minor scale epoxidations. On the other hand, according to the results listed in Table 2, the low yield of DMDO relative to the reagents involved and the very low condensation temperature are the main drawbacks of the process for large-scale application. Nevertheless, this approach could be useful in the case of epoxidation of sensitive in aqueous media substrates. 28,35,38 On the other hand, epoxidation in the presence of in situ-generated DMDO by dispersing the terpene in the aqueous medium using a surfactant proved to be the most promising. The application of a very small amount of surfactant allowed to avoid using any organic solvent, and indeed, the hydrophobic terpene is perfectly dispersed in aqueous medium. Moreover, the reaction volume is very low compared to the conventional method involving the presence of an organic solvent. This resulted in terpene epoxides in almost 100% yield; the oxygen yield on the epoxides with respect to oxone reached up to 70%. The surfactants (CTAHS, CTAB, and CTAC) should be used above their CMCs. CTAHS showed stability at the lowest concentration due to its very low CMC. It should also be mentioned that this epoxidation technique is 100% green and conducted under mild conditions and can easily be scaled up to industrial scale. The waste generated at the end of the reaction is biodegradable. ■ ACKNOWLEDGMENTS The authors would like to acknowledge Soprema Inc. (based in Drummonville, Quebec, Canada) and the Natural Sciences and ■ REFERENCES
Prevalence, Molecular Diversity, and Antimicrobial Resistance Patterns of Pathogenic Bacteria Isolated From Medical Foods, Food Staff, Cooking Instruments, and Clinical Samples in a Teaching Hospital in Tehran, Iran Background: Medical foods could be vehicles of pathogenic microbes for vulnerable people in hospitals. The hospital kitchen is considered the main source of this cross-contamination. Objectives: The current study aimed at investigating the frequency of bacterial species and their antimicrobial resistance patterns in foods, food handlers, and utensils compared with those of the clinical isolates in a hospital kitchen in Tehran, Iran. Methods: This cross-sectional study was performed in a hospital in Tehran, Iran from April 2011 to January 2013. Accordingly, simple random sampling of raw and cooked food materials, swab samples of cooking utensils, and hands and noses of food staff was performed. Clinical samples were collected from blood, urine, wound, and respiratory aspirates of patients with hospital-acquired infections. Bacterial isolates were identified according to standard biochemical identification schema. Antimicrobial susceptibility of the strains was determined by the disk diffusion method according to the CLSI (Clinical and Laboratory Standards Institute) guidelines. Molecular diversity of the indicator bacterial isolates of Staphylococcus aureus and Escherichia coli in the kitchen and of those isolated from the intensive care unit was also investigated by a molecular typing method. The occurrence of cross-contamination was hypothesized based on the results of phylogenetic investigation and resistance biotyping. Results: Out of the 200 kitchen samples, S. aureus, E. coli, Acinetobacter spp., Pseudomonas spp., and Enterococcus spp. were isolated at frequencies of 15.5%, 8%, 2.5%, 0.5%, and 0.5%, respectively. Prevalence of multidrug-resistant, methicillin-resistant strains of S. aureus (MDR-MRSA) in the samples of the hospital kitchen vs the intensive care unit (ICU) was 18.7% (6/32) vs 91.6% (22/24), respectively. Among the kitchen E. coli isolates, an MDR pattern was detected at a frequency of 52.9%; the highest frequency was detected among the isolates of utensils. Although the results of the phylogenetic and resistance biotyping analyses did not confirm a significant relationship between the isolates of the ICU and hospital kitchen, this similarity was confirmed among the strains isolated from the foods, food handlers, and utensils. In this regard, food staff and utensils were considered as the main sources of cross-contamination for S. aureus and E. coli, respectively. Conclusions: Food workers and inadequate cooking to eliminate the contaminants during food processing were postulated as the main risk factors for transmission of these bacteria through medical foods into the hospital. Background Hospital-acquired foodborne diseases are considered life-threatening complications among hospitalized patients (1). The role of medical foods in transmission of pathogenic bacteria and occurrence of gastrointestinal diseases has been established by several reports (2,3). While foodborne transmission of microbes responsible for nosocomial outbreaks encompasses a small portion of these events, the importance of some bacterial pathogens including Klebsiella pneumoniae, K. oxytoca, Salmonella spp., Clostridium perfringens, coliforms, Bacillus cereus, and Staphylococcus aureus was illustrated in foodborne outbreaks in different hospitals (4)(5)(6)(7).
Medical foods provide essential energetic metabolites for the growth of pathogenic microorganisms. Also, they could provide necessary conditions to produce toxins with serious impact on human health. Preparation of some of these foods, such as the foods for enteral tube feeding, requires a great deal of processing that could increase the risk of microbial infection among them. Some risk factors such as the existence of weak immune system among the patients, changes of natural microbial community in their gastrointestinal tract (GIT), or their natural performance of the body defense system could boost their susceptibility to foodborne diseases in the hospital setting. Poor hand hygiene of hospital kitchen staff could serve as the main risk factor in contamination of foods. Resident microbial flora of cooking staffs and usage of contaminated utensils, together with impaired cooking process are the main putative risk factors in the food microbial contamination in hospitals (8)(9)(10)(11). Food staff can also indirectly involve in nosocomial infections through introducing new resistant bacterial strains into hospital kitchens or the environment. Despite the high number of studies conducted on hospital-acquired infections (HAIs) in Iranian hospitals, only a few studies investigated the possible role of foods in this regard. A surveillance study provides reliable data about these pathogens and their transmission patterns. According to the priority of preventing foodborne outbreaks in the hospitals that provide homemade food products, assessing the role of medical foods in the occurrence of HAIs and transmission of pathogenic bacteria into the hospital environments are of great importance. Characterization of pathogenic bacteria, their antibiotic resistance profiles, and similarity of their molecular patterns among the isolates of food, food staff, cooking instruments, and patients' clinical samples were aimed in the current study to analyze their possible transmission patterns in a teaching hospital in Tehran, Iran. Setting and Sampling The current cross sectional study was performed in a hospital in Tehran, Iran from April 2011 to January 2013. Accordingly, simple random sampling of raw and cooked food materials, swab samples of cooking utensils, and hands and noses of food staff were used in the current study. The sample size was calculated based on a power of 80%. The clinical samples (urine and blood samples of patients with urinary tract and blood stream infections), environmental swab samples of intensive care unit (ICU), and hands of health care workers were collected from the same hospital by the standard sampling method as described by Mackowiak et al. at the same time scale (12). The samples were collected from all food preparation steps while the involved staff was unaware of the sampling. All prepacked samples and food materials were excluded from the study. All the samples were immediately transferred to laboratory for further processing and microbiological analyses. In the case of swab samples, all of the specimens were cultured on selective (mannitol salt agar and Mac-Conkey agar) and non-selective media (blood agar). In the case of food samples, defined amounts of the specimens were suspended in normal-saline solution and homogenized in a STOMACHER system (Seward stomacher) (13). Defined amounts of each food suspension were cultured on the culture media. Colony count of each bacterial isolate was determined after 24 hours incubation at 37°C under ambient atmosphere. 
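The colony-count step described above implies the usual plate-count arithmetic. The dilution factor, plated volume, and homogenisation ratio in the sketch below are assumptions for illustration; the study does not report them.

```python
# Hedged sketch of the colony-count arithmetic behind the food-sample work-up
# described above (suspension in saline, homogenisation, plating, 24 h at 37 C).
# All numeric inputs in the example are placeholders, not study values.

def cfu_per_gram(colonies: int, dilution_factor: float,
                 plated_volume_ml: float, sample_mass_g: float,
                 diluent_volume_ml: float) -> float:
    """Estimate CFU per gram of food from a single plate count."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * diluent_volume_ml / sample_mass_g

# e.g. 120 colonies on a 0.1 mL plate of a 10^-2 dilution, from 10 g of food
# homogenised in 90 mL of saline:
print(f"{cfu_per_gram(120, 1e2, 0.1, 10.0, 90.0):.2e} CFU/g")
```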
The current study protocol was approved by the ethical committee of the Research Institute for Gastroenterology and Liver Diseases (RIGLD-688). Bacterial Identification Characterization of the grown bacterial isolates was done based on conventional biochemical tests (14). Briefly, staphylococcal isolates were characterized based on their colony morphology on mannitol salt agar and blood agar, microscopic examination, and catalase, DNase, and coagulase activities. To confirm the identity of S. aureus isolates, polymerase chain reaction (PCR) was performed for 2 species-specific gene loci, nuc and femA (Section 3.1). In the case of the Gram-negative bacteria grown on MacConkey agar, the patterns of their fermentation in triple sugar iron (TSI) agar and the results of their IMViC and lysine decarboxylase reactions and utilization of amino acids and sugars were used. DNA Extraction and PCR In order to do molecular fingerprinting experiments, total DNA of E. coli (as a marker of fecal contamination) and S. aureus (a common skin pathogen) isolates was extracted from freshly prepared overnight cultures on blood agar medium. In the case of S. aureus isolates, the bacterial colonies were resuspended in 45 µL of Tris-EDTA buffer (pH 8) and 5 µL of lysostaphin solution (Sigma-Aldrich, USA) (100 µg/mL) was added. The samples were incubated at 37°C for 10 minutes. After the incubation, 5 µL of proteinase K solution (100 µg/mL) and 150 µL of 0.1 M Tris-HCl (pH 7.5) were added to the suspension. The samples were incubated for 10 minutes and heated for 5 minutes at 100°C. The DNA samples were collected after 10 minutes of centrifugation at 13,000 g. A DNA purification kit was used to extract DNA from the E. coli isolates (DNP™ kit, Cinagen, Iran). PCR for nucA and femA PCR was utilized to detect S. aureus isolates with species-specific primers that detect the nuc gene, nuc-F: 5′-CTGGCATATGTATGGCAATTGTT-3′ and nuc-R: 5′-TATTGACCTGAATCAGCGTTGTCT-3′, and the femA gene, femA-F: 5′-AAAAAAGCACATAACAGCG-3′ and femA-R: 5′-GATAAAGAAGAAACCAGCAG-3′. In brief, a total of 1 µL DNA template was added to 24 µL of PCR mixture containing 2.5 µL Taq DNA polymerase buffer, 0.75 µL MgCl 2 , 0.5 µL dNTPs (Gene Fanavaran, IR.IRAN), 1.25 µL of each primer, and 0.15 µL of Taq DNA polymerase (Gene Fanavaran, IR.IRAN). The amplification was carried out on a thermal cycler under the following conditions: first, denaturation at 95°C for 3 minutes, followed by 30 cycles of denaturation at 95°C for 40 seconds, annealing at 53°C for 40 seconds (for the nuc gene) and 47°C for 40 seconds (for the femA gene), and extension at 72°C for 40 seconds. The final extension was performed at 72°C for 5 minutes. The amplified products were separated by electrophoresis in 1.2% agarose gel at 90 V for 90 minutes. Antimicrobial Susceptibility Testing All the isolates were tested for susceptibility to antibiotics commonly used to treat infections caused by the different bacteria. The assay was performed by the Kirby-Bauer disk diffusion method on Mueller-Hinton agar medium according to the CLSI (Clinical and Laboratory Standards Institute) guidelines (15). RAPD-PCR Typing The cycling program when using the 10-nt primer 1283 included initial denaturation at 94°C for 4 minutes, first annealing at 36°C for 4 minutes, and first extension at 72°C for 4 minutes, followed by 4 cycles of the second denaturation at 94°C for 30 seconds, annealing at 36°C for 1 minute, and final elongation at 72°C for 2 minutes.
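For clarity, the nuc/femA amplification programme described earlier in this section can be written out as a simple data structure; the representation and the runtime estimate below are illustrative only and are not software used in the study.

```python
# The nuc/femA thermal-cycling conditions described above, encoded as a plain
# dictionary. Temperatures and times are taken from the text; the structure
# itself is just an illustration.

NUC_FEMA_PROGRAM = {
    "initial_denaturation": {"temp_c": 95, "seconds": 180},
    "cycles": 30,
    "per_cycle": [
        {"step": "denaturation", "temp_c": 95, "seconds": 40},
        {"step": "annealing (nuc 53 C / femA 47 C)", "temp_c": (53, 47), "seconds": 40},
        {"step": "extension", "temp_c": 72, "seconds": 40},
    ],
    "final_extension": {"temp_c": 72, "seconds": 300},
}

def total_runtime_minutes(program: dict) -> float:
    """Rough hold-time total, ignoring ramping between steps."""
    per_cycle = sum(step["seconds"] for step in program["per_cycle"])
    total = (program["initial_denaturation"]["seconds"]
             + program["cycles"] * per_cycle
             + program["final_extension"]["seconds"])
    return total / 60.0

print(f"~{total_runtime_minutes(NUC_FEMA_PROGRAM):.0f} min of hold time per run")
```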
Electrophoresis in 1.8% agarose gel was applied to separate the amplicon fragments according to their sizes. The agarose gels were stained with 1% ethidium bromide and analyzed under UV transilluminator (Gel Documentation System, Syngene). Similarity of all RAPD banding profiles was analyzed by GelCompar II Software. Polymorphisms of ≤ 2 and > 2 RAPD bands were considered as definitive criteria to detect related and different strains, respectively. Statistical Analysis The data were analyzed using SPSS software version 19; the chi-square test was applied wherever applicable. In addition, chi-square and the Fisher exact tests were used to find the statistical correlation between the frequency of bacteria from hospital kitchen and ICU isolates. Statistical analyses were done to detect the association among drug resistance, types of pathogens isolated from the samples, similarity of phenetic data, and similarity of the strains in biotyping compared with those of the molecular typing methods. Significance was defined as a P value < 0.05. Molecular Detection of S. aureus Strains Identification of the biochemically characterized S. aureus isolates was followed by molecular methods. Accordingly, their identity was confirmed in all cases by PCR for nuc and femA genes. Resistance profile and multidrug resistance phenotype of the bacterial isolates in hospital kitchen and ICU The frequency of antibiotic resistance among the strains isolated from the hospital kitchen and ICU was summarized in Table 2. In general, bacterial isolates from the ICU showed higher rates of resistance, compared with those isolated from the kitchen. However, resistance to azithromycin in Acinetobacter spp. strains obtained from the hospital kitchen was significantly higher than those isolated from the ICU (P = 0.0001). Also, resistance to cefepime among the P. aeruginosa strains from the utensils was higher (100%) than those isolated from the clinical samples (0%). Multiple drug resistant-methicillin resistant strains of S. aureus (MDR-MRSA) comprised over 18.75% (6/32) of all the isolates among the samples of the Comparison of the frequency of resistance to the studied antibiotics in either ICU or kitchen samples did not propose homology of the bacterial isolates for most of the genera between the 2 environments ( Table 2). Similarity of the resistance patterns among the S. aureus strains from the kitchen or those isolated from the ICU was high; however, this similarity was not the case between the 2 groups. The current study results also showed dissimilarity of the E. coli strains between the 2 places, since all the isolates from the hospital kitchen were put in separate groups. S. aureus and E. coli Phylogenetic Dendrogram: The phylogenetic analysis of S. aureus RAPD patterns did not show significant relationship between the isolates of the ICU and hospital kitchen. However, as shown in Figure 1, different strains of this bacterium from the food staff (Strains 1A-1C) or food staff and utensils (strains 27D-4D and 12A-21A) showed identical patterns. In the ICU samples, similarity of the S. aureus RAPD patterns was detected among the strains isolated from the patients (strains 215-2, 64-2, 65-2) and those isolated from the ICU environment (strains 192-2 and 373), individually. These analyses also presented the relationship between the E. coli strains isolated from the studied foods and utensils (strains 9A-17A-44D-8F). 
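The relatedness rule applied to the RAPD profiles (≤ 2 band polymorphisms, related; > 2, different) amounts to counting mismatches between binary band-presence vectors. The study used GelCompar II for this analysis; the strain names and profiles in the sketch below are hypothetical.

```python
# Illustration of the RAPD relatedness rule stated above, applied to binary
# band-presence profiles (1 = band present, 0 = absent). Profiles are made up.

def band_differences(profile_a: list[int], profile_b: list[int]) -> int:
    """Count positions where one profile has a band and the other does not."""
    return sum(1 for a, b in zip(profile_a, profile_b) if a != b)

def classify(profile_a: list[int], profile_b: list[int], cutoff: int = 2) -> str:
    """Apply the <= 2 polymorphism criterion used in the study."""
    return "related" if band_differences(profile_a, profile_b) <= cutoff else "different"

strain_x = [1, 1, 0, 1, 0, 1, 1, 0]
strain_y = [1, 1, 0, 1, 0, 1, 0, 0]   # one band difference -> related
strain_z = [0, 1, 1, 0, 1, 1, 0, 1]   # many differences -> different

print(classify(strain_x, strain_y))   # related
print(classify(strain_x, strain_z))   # different
```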
Two strains of the patients showed similar patterns with one strain isolated from the food handlers (strains 55 and 16F, respectively) and 7 strains isolated from the kitchen samples; however, their diversity in resistance patterns did not support this correlation (Figure 2). Discussion The hospital kitchen appears to be a source of food contamination and of foodborne outbreaks in hospitalized patients in different countries (16)(17)(18)(19). Utensils, such as industrial blenders and meat grinders, regardless of the contamination of raw food materials, could be sources of pathogenic bacteria, because their cleaning and disinfection cannot be accomplished completely due to their design (20,21). Food handlers, through their weak health practices, are also suspected to be common sources of bacterial contamination in such outbreaks. Incomplete cooking of contaminated foods can introduce important bacterial pathogens involved in gastrointestinal and hospital-acquired infections (22,23). Results of the current study showed utensils as the most contaminated samples in the studied kitchen. S. aureus and E. coli, as known members of the skin and faecal microbiota, were equally detected in these samples with the highest frequency (16.9%). This finding was in agreement with a recent study in Italy that established the presence of skin-associated bacteria, including Staphylococcus, Streptococcus, Corynebacterium, and Propionibacterium spp., in cooking and processing tools in a hospital cooking center (24). The high rate of contamination with coliform bacteria in the food utensils could be explained by weak hygiene of the food handlers or the use of contaminated foodstuffs (2,25). Given the lower level of E. coli contamination in the studied food samples (6.8%), food handlers seem to be the main sources of this contamination. In an earlier study, contamination of enteral feeds was reported (26). They found no contamination in foodstuffs collected from systems assembled wearing sterile gloves, while contamination was detected when non-sterile disposable gloves were used by the food handlers. In a study by Borges et al. in Brazil, 36% of hospital food handlers harbored S. aureus on their nails and/or hands, which was higher than the rate in the current study (20.87%) (27). Aycicek et al. in Turkey showed a frequency of 70% (S. aureus) and 7.8% (E. coli) contamination on the hands of food handlers that was due to poor hand hygiene and improper glove use (28). These differences could be explained by factors that influence the safety of food materials, including socioeconomic conditions, geographic region, and the performance of surveillance programs in each country. Transmission of pathogenic bacteria from contaminated utensils and/or food handlers to medical foods and the hospital environment is problematic in clinical settings. In the current experiment, this type of cross-contamination was limited to the food handlers and utensils, patient to patient, and samples of the hospital environment. However, involvement of food handlers in contamination of medical foods and the occurrence of hospital foodborne outbreaks was previously reported in some countries (29)(30)(31)(32)(33) (27,34).
While the current study results did not support involvement of the bacterial isolates in HAIs, transmission of these bacteria to patients' foods, and their survival after the cooking procedure, suggest them as possible sources of intestinal and extra-intestinal infections in hospitalized patients consuming such foods. Resistance of these bacteria to multiple drug families was also considered a risk factor in this hospital. Spread of MDR bacteria between utensils/food handlers and foods is a disturbing threat, because these bacteria are involved in most hospital-acquired infections. Although results of the susceptibility testing showed the presence of MDR patterns among different strains of E. coli and S. aureus in the hospital foods and utensils, spread of other MDR bacteria, such as Acinetobacter, Pseudomonas, and Enterococcus spp., through the hospital kitchen was not confirmed during the current study.

Conclusions
Results of the current study indicated E. coli and S. aureus as the most common bacteria isolated from foods, utensils, and food staff in the studied hospital kitchen. However, detection of other bacteria, including Acinetobacter, Pseudomonas, and Enterococcus spp., was also established in the current study. Although results of the antimicrobial susceptibility testing showed a low frequency of MDR phenotypes among these isolates in this hospital kitchen, characterization of the MDR E. coli and MDR-MRSA strains highlighted their risk of transmission into the hospital environment or to patients consuming the foods. Results of the molecular typing and resistance biotyping experiments suggested the occurrence of cross-contamination between the food staff/utensils and the food samples. In this regard, food staff and utensils were considered the main sources of S. aureus and E. coli transmission, respectively. Poor sanitary practices for food processing, weak health conditions of food handlers, and inappropriate cooking practices were postulated as the main risk factors for bacterial contamination of the medical foods, which strongly suggests that efforts should be made to improve health status in this hospital. While the simultaneous analysis of bacterial contamination of cooking instruments, foodstuffs, and food handlers and their transmission throughout the food chain in a hospital was the main strength of the current study, the limited number of samples was its main weakness. Further studies in different hospitals are needed to determine the main risk factors associated with foodborne diseases in hospital settings.

Supplementary Material
Supplementary material is available on the journal website.
High-Plex and High-Throughput Digital Spatial Profiling of Non-Small-Cell Lung Cancer (NSCLC)

Simple Summary
Characterizing the tumour microenvironment (TME) has become increasingly important for understanding the cellular interactions that may be at play in effective therapies. In this study, we used a novel spatial profiling tool, the Nanostring GeoMx Digital Spatial Profiler (DSP) technology, to profile non-small-cell lung cancer (NSCLC) for protein markers across immune cell typing, immune activation, drug targets, and tumour modules. Comparative analysis was performed between the tumour, adjacent tissue, and microenvironment to identify markers enriched in these areas with spatial resolution. Our study reveals that this methodology can be a powerful tool for determining the expression of a large number of protein markers from a single tissue slide.

Abstract
Profiling the tumour microenvironment (TME) has been informative in understanding the underlying tumour–immune interactions. Multiplex immunohistochemistry (mIHC) coupled with molecular barcoding technologies has revealed greater insights into the TME. In this study, we utilised the Nanostring GeoMx Digital Spatial Profiler (DSP) platform to profile a non-small-cell lung cancer (NSCLC) tissue microarray for protein markers across immune cell profiling, immuno-oncology (IO) drug targets, immune activation status, immune cell typing, and pan-tumour protein modules. Regions of interest (ROIs) were selected that described tumour, TME, and normal adjacent tissue (NAT) compartments. Our data revealed that paired analysis (n = 18) of matched patient compartments indicated that the TME was significantly enriched in CD27, CD3, CD4, CD44, CD45, CD45RO, CD68, CD163, and VISTA relative to the tumour. Unmatched analysis indicated that the NAT (n = 19) was significantly enriched in CD34, fibronectin, IDO1, LAG3, ARG1, and PTEN when compared to the TME (n = 32). Univariate Cox proportional hazards analysis indicated that the presence of cells expressing CD3 (hazard ratio (HR): 0.5, p = 0.018), CD34 (HR: 0.53, p = 0.004), and ICOS (HR: 0.6, p = 0.047) in tumour compartments was significantly associated with improved overall survival (OS). We applied both high-plex and high-throughput methodologies to the discovery of protein biomarkers and molecular phenotypes within biopsy samples, and demonstrate the power of such tools for a new generation of pathology research.

Introduction
Non-small-cell lung cancer (NSCLC) accounts for 85% of lung cancers and is the leading cause of cancer-related deaths [1]. Patients are often diagnosed at an advanced stage, where the immediate prognosis is poor, resulting in a five-year survival rate of less than 20% [2,3]. With the emerging success of immune checkpoint blockade leading to durable responses and prolonged survival in 15-40% of cases, there is now a need for predictive biomarkers to guide patient selection for targeted therapies [4]. The use of comprehensive tumoural information to inform clinical decision-making is becoming increasingly important [5-9]. Studies of the tumour microenvironment (TME) have revealed that a high degree of T-cell infiltration into the tumour provides fertile ground for effective immunotherapies [10].
As such, the immune contexture (the type, density, and location, as well as the phenotypic and functional profile, of immune cells) has been used to understand tumour-immune cell interactions in greater depth, which may provide cues toward predictive biomarkers of the response to immune checkpoint therapy (anti-PD-1/PD-L1) [11,12]. While traditional immunohistochemistry (IHC) techniques allow for the spatial profiling of cells in the tumour, this is often lost when tumours are analysed using bulk tissue genomic approaches. Moreover, the actual cellular proportions, cellular heterogeneity, and deeper spatial distribution are lacking in characterisation. Spatial and immunological composition together with cellular status can aid in identifying micro-niches within the TME [13]. The classification of the immune context within the TME lays the foundation for addressing how the immunological composition and status (activated/suppressed) may dictate response to therapy. Therefore, to address this need, imaging and tissue sampling are required simultaneously to analyse tumour tissue and immune proteins with spatial resolution.

Tissue Microarray
This study has QUT Human Research Ethics Committee (UHREC) approval (#2000000494). The NSCLC tissue microarray (TMA) (HLugA180Su03), containing 92 cases with concordant histologically normal adjacent tissue, was obtained from US Biomax, Inc. (Rockville, MD, USA), including associated clinical information. H&E images were demarcated by a pathologist for tumour and non-tumour regions in each core. The tissue microarray was purchased from US Biomax (commercial source); this company retains the informed consent for the patient samples used to create the microarrays.

Nanostring GeoMx Digital Spatial Profiler: Tissue Microarray
The slides were profiled through the Technology Access Program (TAP) by Nanostring Technologies (Seattle, WA, United States). In brief, immunofluorescent staining was performed on the TMA with tissue morphology markers (PanCK, CD3, CD45, and DAPI) in parallel with DNA-barcoded antibodies within the immune cell profiling, IO drug targets, immune activation status, immune cell typing, and pan-tumour protein panels, as shown in Table 1. Geometric (circular) and custom regions of interest (ROIs) were selected based on the visualisation markers to generate tumour (PanCK+) and TME (PanCK−) areas, from which barcodes were liberated by UV light using the GeoMx DSP instrument, then hybridised and counted on the nCounter system.

Nanostring GeoMx Digital Spatial Profiler: Data Analysis
Patient data presented in Table 2 were generated in RStudio [14] using the package "gtsummary" [15]. Remote access to the GeoMx DSP analysis suite (GEOMX-0069) allowed inspection, quality control (QC), normalisation, and differential expression to be performed. Briefly, each ROI was tagged with metadata for its compartment and patient pairing, in order to allow pairwise comparisons. Raw data were exported and plotted in R using "ggplot2" [16] for raw counts, signal relative to IgG controls, and an evaluation of the Pearson correlation coefficient (R) between normalisation parameters using the "ggpubr" package [17]. Normalisation using the histone H3 and S6 proteins was performed in the GeoMx DSP analysis suite.
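As a minimal sketch of the housekeeping normalisation just described, the R code below scales a probes-by-ROIs count matrix by the geometric mean of the histone H3 and S6 probes, after checking that the two housekeepers track each other. The object name `counts` and the exact probe row names are assumptions, since the layout of the exported data can differ.

```r
# Assumed input: `counts`, a numeric matrix of raw probe counts
# (rows = probes, columns = ROIs) exported from the GeoMx DSP analysis suite.
hk <- c("Histone H3", "S6")   # housekeeping probes used for normalisation

# Check that the two housekeepers correlate across ROIs (Pearson R)
cor(counts["Histone H3", ], counts["S6", ], method = "pearson")

# Per-ROI scaling factor: geometric mean of the housekeeping counts
hk_geomean <- exp(colMeans(log(counts[hk, , drop = FALSE])))

# Divide each ROI by its scaling factor (rescaled to the overall mean)
norm_counts <- sweep(counts, 2, hk_geomean / mean(hk_geomean), FUN = "/")
```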
Differential expression between paired compartments was evaluated by paired t-tests with a Benjamini-Hochberg correction, while differential expression between unpaired compartments was evaluated by a Mann-Whitney test with a Benjamini-Hochberg correction, and results were plotted in RStudio using "ggplot2". Relative expression data were exported from the GeoMx DSP analysis suite, hierarchical clustering was performed using the R package "complexHeatmap" [18], and univariate Cox proportional hazards regression was performed using the "survivalAnalysis" package [19].

Region of Interest (ROI) Selection
Ninety-six ROIs in total were selected that were representative of 45 tumours, 32 TMEs, and 19 histologically normal adjacent tissues from the cohort of patients described in Table 2. Images of H&E-stained cores were demarcated by a pathologist and were utilised alongside Nanostring immunofluorescent staining for the morphology markers PanCK, CD45, CD3, and DAPI to draw ROIs indicative of a tumour (CK+) or TME (CK−/CD3+). Figure 1 provides an example of this strategy, where tumour and TME ROIs were able to be identified within the same tumour core. Of all the samples collected, comparisons from the same patient could be made between eight TME-NAT pairs, 14 NAT-tumour pairs, and 18 tumour-TME pairs. Figure 2 provides an overview of the tumour and immune ROI selection, as well as representative expression profiles for a number of associated markers.

Data Quality Control
Quality control was performed within the GeoMx DSP analysis suite to ensure the nCounter quantification of probes was within specification. Raw probe counts per ROI were inspected to ensure comparable ranges of signal and to evaluate systematic variability in sample groups. ROIs generated median counts within the range of 10^2 to 10^3, with observably lower median counts for ROIs 13, 67, and 96 (Figure S1). Raw probe counts were then inspected within the TME, tumours, and NAT, as targets were expected to vary by tissue compartment (e.g., immune markers in TME vs. tumours). Robust counts were observed for abundant targets, including histone H3, SMA, S6, GAPDH, fibronectin, cytokeratin, CD44, CD68, β-2-microglobulin (B2M), HLA-DR, CD45, and B7-H3 (beyond the axis range in Figure S2); however, the remaining probes shown exhibited raw counts below 200. Overall, raw counts from NAT ROIs shown in Figure S2 appeared to be lower than those of the other compartments, while tumour ROIs generated higher signals for most lowly abundant probes. Of note, the background isotype control IgG probes possessed counts between 50 and 150 (Figure S2), and while rabbit (Rb) IgG exhibited similar counts between tumour and TME compartments, mouse (Ms) IgG showed higher counts for tumours than TME. This suggests that background correction may not be the best strategy for the normalisation of lowly expressed targets, as these targets are expressed at background or just above background levels, making their quantification challenging. Target signals relative to the Ms and Rb IgG control probes were therefore evaluated to identify probes for which data should be considered with caution. Probes shown in Figure 3 whose median signal relative to IgG across all compartments was less than 1 (pink region) were thus treated with caution, and 31 of the 55 probes above CD25 in Figure 3 were considered robust for further analysis.
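The signal-to-background screen described above can be sketched as follows: flag probes whose median counts fall below the per-ROI mean of the IgG isotype controls. The row names for the IgG probes are assumptions and should be checked against the exported annotation.

```r
# Assumed input: `counts`, raw probe counts (rows = probes, columns = ROIs),
# with rows "Ms IgG" and "Rb IgG" holding the isotype control counts.
igg_mean <- colMeans(counts[c("Ms IgG", "Rb IgG"), , drop = FALSE])

# Each probe's signal expressed relative to the per-ROI IgG background
ratio_to_igg <- sweep(counts, 2, igg_mean, FUN = "/")
probe_median <- apply(ratio_to_igg, 1, median)

robust  <- names(probe_median)[probe_median >= 1]  # retained for analysis
caution <- names(probe_median)[probe_median < 1]   # interpreted with caution
```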
Figure 3. Probe counts relative to rabbit (Rb) and mouse (Ms) IgG controls. Counts of each probe were normalised to the mean counts of Rb and Ms IgG within each ROI. The mean of these normalised values per probe was plotted to evaluate the robustness of the target protein signal relative to the isotype background controls.

Data Normalisation
The method of normalisation between ROIs was assessed by examining correlations between histone H3, S6, GAPDH, and IgG background control probes, under the assumption that normalisers should correlate between ROIs and be unrelated to underlying biology. Housekeeping proteins included in the assay (GAPDH, histone H3, and S6) were plotted to determine which pairs best correlated across ROIs (Figure 4A-C). Histone H3 and S6 exhibited the strongest Pearson correlation coefficient (R = 0.7) (Figure 4C), and were thus examined further for correlation to the IgG background to confirm independence from tissue biology. Ms and Rb IgG strongly correlated with each other across ROIs (R = 0.92) (Figure 4D), and the means of these IgG counts showed strong correlation with the means of the histone H3 and S6 housekeeping controls (R = 0.91), indicating that the IgG controls, histone H3, and S6 were unrelated to underlying biology and could act as appropriate normalisers across ROIs (Figure 4E). In addition to the normalisation by IgG and traditional housekeeping members, ROI area and nuclei count were inspected for their utility to normalise DSP data. Figure 5A-C illustrates the relationship that the ROI area possessed with histone H3/S6, IgG, and nuclei.
A number of ROIs varied by area; however, some possessed the maximum-sized geometry, and these ROIs varied significantly in their relationship with other normalisation parameters, indicating that ROI area was not a useful normalisation method in this experiment. Similarly, nuclei counts were evaluated relative to the IgG and histone H3/S6 means (Figure 5D,E), where some trend was evident but lacked the robustness of either IgG or histone H3/S6 normalisation. Histone H3/S6 means were therefore utilised for normalisation and, henceforth, for the comparative quantification of probes.
Figure 5. Some ROIs contained the maximum area (horizontal dots in A-C) and exhibited significant variance in the secondary parameter, indicating that area was not a suitable normalisation method. Nuclei counts (D-E) demonstrated a trend with the secondary parameter; however, the correlation was not as strong as that observed for IgG or histone H3/S6. NAT: normal adjacent tissue; TME: tumour microenvironment; Tumour: tumour region.

Data Analysis
Hierarchical clustering by the Ward D2 method [20] was first used to explore the normalised data; however, expression appeared to vary significantly within classes of compartments, such that a clear distinction between the NAT, tumour, and TME was not evident (Figure 6). K-means clustering to further group ROIs into classes showed most NATs grouping together (Figure 6, left), characterised by higher expression of most proteins except for PanCK, EpCAM, and Ki-67. Another class consisting of both the TME and tumour (Figure 6, middle) was characterised by lower expression of most proteins, with some ROIs expressing high levels of Ki-67 and EpCAM, whereas a third class was characterised by relatively heterogeneous expression of all proteins (Figure 6, right).
Figure 6. Clustered heatmap of relative expression of proteins per ROI. Ward D2 clustering was applied, followed by K-means clustering to delineate differences between expression profiles among compartments. NAT: normal adjacent tissue; TME: tumour microenvironment; tumour: tumour region.
Global correlation matrices for target protein expression within the TME (Figure S3) and tumour (Figure S4) indicated a large number of significant (p ≤ 0.001) positive correlations. Differential protein expression was then evaluated between patient-matched and unmatched compartments (Figure 7). Interestingly, matched TME and NAT (n = 8) did not exhibit significant differences (Figure 7A), while matched TME-tumour pairs (n = 18) indicated an expected enrichment of CD27, CD3, CD4, CD44, CD45, CD45RO, CD68, CD163, and VISTA within the TME, while tumour regions were enriched in Ki-67, EpCAM, and cytokeratin (Figure 7B). When incorporating all samples, irrespective of patient pairing, several proteins appeared to be downregulated in the TME relative to NAT, including CD34, fibronectin, IDO1, LAG3, ARG1, and PTEN (Figure 7C). TME-tumour comparisons remained similar to the paired data, whereas CD3, CD45RO, VISTA, and CD163 were enriched in the TME relative to the tumour (Figure 7D).
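A minimal sketch of the differential tests described in the Methods is given below, assuming `tme`, `tumour`, and `nat` are normalised, log-scaled probes-by-sample matrices with patient-matched columns for the paired comparison; all object names are illustrative assumptions.

```r
# Paired TME vs. tumour comparison (patient-matched columns), one test per probe
paired_p <- sapply(rownames(tme), function(p)
  t.test(tme[p, ], tumour[p, ], paired = TRUE)$p.value)
paired_fdr <- p.adjust(paired_p, method = "BH")   # Benjamini-Hochberg correction

# Unpaired TME vs. NAT comparison: Mann-Whitney (Wilcoxon rank-sum) test
unpaired_p <- sapply(rownames(tme), function(p)
  wilcox.test(tme[p, ], nat[p, ])$p.value)
unpaired_fdr <- p.adjust(unpaired_p, method = "BH")

head(sort(paired_fdr))   # probes with the strongest paired evidence
```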
Assessment of the association between protein expression and survival was also explored through an unadjusted, univariate Cox proportional hazards regression. Interestingly, expression data from immune ROIs indicated that the presence of EpCAM and cytokeratin was associated with better patient OS (Figure 8), while the presence of CD34, CD3, and ICOS in tumour ROIs was associated with better patient OS. When placed in a multivariate model to adjust for age, AJCC, and TNM tumour staging variables, those markers found to be significant in the univariate model no longer reached significance (data not shown). The number of samples did not permit higher-level multivariate analysis and statistical modelling of covariate prognostic signatures.
Figure 8. Cox proportional hazards of compartment-specific protein expression, ranked by association with overall survival. Log2 protein expression was modelled against follow-up time for the tumour and TME. A hazard ratio (HR) < 1 was associated with better patient outcome; HR > 1 was associated with poorer patient outcome.
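A minimal sketch of the unadjusted univariate Cox models behind Figure 8 is shown below, using the survival package rather than the "survivalAnalysis" wrapper used in the study; `surv_df` and its column names (follow-up time, OS event indicator, per-marker log2 expression for tumour ROIs) are assumptions.

```r
library(survival)

# One univariate model per marker: log2 expression vs. overall survival
fit <- coxph(Surv(time_months, os_event) ~ CD3_log2, data = surv_df)
summary(fit)   # exp(coef) is the hazard ratio; HR < 1 suggests better OS

# Loop over several assumed marker columns and collect hazard ratios
markers <- c("CD3_log2", "CD34_log2", "ICOS_log2")
hr <- sapply(markers, function(m) {
  f <- coxph(as.formula(paste("Surv(time_months, os_event) ~", m)), data = surv_df)
  exp(coef(f))
})
hr
```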
Discussion
The Nanostring GeoMx DSP platform [13,21] offers a novel solution for high-plex digital quantification of proteins and mRNA from fixed and fresh frozen tissues with spatial resolution [22]. It has recently been applied to triple-negative breast cancer (TNBC) [23], NSCLC [24], and melanoma [25]. However, the implementation and interpretation of such high-plex discovery is still in its infancy. The application of such technologies to large numbers of patient samples in the TMA format potentially provides unparalleled insight into spatial cell types, biomarkers, and the interactions that may underlie the disease biology. In this study, we quantified proteins across the current DSP immune cell profiling, IO drug targets, immune activation status, immune cell typing, and pan-tumour protein modules to understand the presence of these markers in tumours, tumour microenvironments, and histologically normal adjacent tissue compartments. We present a users' experience where 96 ROIs were collected from a single TMA containing tumour and NAT cores, with data processed and analysed within the GeoMx DSP analysis suite.
In conventional IHC and multiplex IHC, information can be obtained from the entirety of sections or TMA cores, giving a global perspective of marker expression and allowing post-hoc segmentation to inform on distribution. The DSP approach differs in that, while visualisation markers may inform on tumour/non-tumour regions and areas of immune cell infiltrate, ROIs are limited to a maximum of 600 µm geometric shapes.
In this study, circular ROIs and several custom-drawn ROIs were used, meaning that "tumour" ROIs innately contained immune infiltrate, and that "immune" ROIs needed to be completely separate from the tumour to be defined, and may represent tumour-adjacent "stromal" immune infiltrate rather than an activated "tumour microenvironment" immune infiltrate. The DSP platform does allow for "masking" or "compartmentalization" within ROIs, enabling the signal to be obtained directly from tumour cells and from the immediate stromal space into which they have proliferated, at µm resolution [24]. However, this approach was not used in this study, and is a salient point to be considered for future analyses using the platform.
Here, we also demonstrate that there is a need to empirically determine the method of normalisation and identify probes which lack robust signal-to-noise. We demonstrated that both the IgG background control probes and the histone H3 and S6 housekeeping probes correlated across ROIs, while area and nuclei varied significantly and were thus less reliable for normalising data for quantification. Existing studies that have used area as a normaliser [24] have also utilised a signal-to-noise ratio cut-off >3, suggesting that our particular TMA may have exhibited disproportionately high background or low overall signal, as a significant number of probes were within range of the IgG control probes. Without validation through IHC, it is difficult to interpret the meaning of probes that give signal within range of the isotype IgG controls. It is noteworthy that, for example, PD-L1 counts fell below background where an abundance should be expected in a subset of NSCLC tissues, which highlights the importance of orthogonal validation when using a discovery technique. It is perhaps for this reason that many significant correlations were observed within compartments for markers that possessed low signal-to-noise, and these observations require additional validation.
It is important to note that the use of traditional methods of housekeeping normalisation in such datasets requires deeper investigation. Evidence exists within our data for systematically lower expression in NAT samples, for which a single normalisation approach including all samples will arbitrarily overestimate normalised NAT counts. This is critical in differential analysis, where it should be assumed that most targets are not differentially expressed, and is better controlled for by global scaling methods, such as the "Trimmed Mean of M-values" (TMM) in the edgeR package [26] and "Relative Log Expression" (RLE) in the DESeq2 package [27]. Such methods require more advanced informatics processing beyond the DSP analysis suite. With this in mind, it was notable that when differential analysis was applied by paired t-test to a limited number of patient pairs, NAT was indistinguishable from the TME. A clear distinction between matched tumour and TME was evident, though, and was indicated by the increased presence of several key markers within the TME. Such markers included CD44, CD45, the T cell lineage (CD3, CD4), memory T cells (CD45RO), the monocyte/macrophage lineage (CD68, CD163), and costimulatory immune checkpoints (CD27, VISTA). When incorporating all samples, irrespective of patient matching, the immunosuppressive molecules LAG3 [28] and IDO1 [29] were, perhaps counter-intuitively, significantly depleted in the TME relative to NAT, indicating the requirement for patient matching to make meaningful comparisons. Furthermore, mixed-model differential analysis should be performed to control for patient matching, where the t-tests available for single-slide analysis within the DSP analysis suite are not wholly appropriate.
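As noted above, global scaling methods such as TMM (edgeR) and RLE (DESeq2) are alternatives to single-factor housekeeping normalisation; a minimal sketch is given below, with the caveat that these tools expect integer, RNA-seq-style counts, so rounding the DSP counts and the `roi_info` annotation data frame are assumptions of this example.

```r
library(edgeR)
library(DESeq2)

# TMM (trimmed mean of M-values) scaling of a probes x ROIs count matrix
dge <- DGEList(counts = round(counts))
dge <- calcNormFactors(dge, method = "TMM")
logcpm_tmm <- cpm(dge, log = TRUE)            # TMM-normalised log2 values

# RLE ("relative log expression") scaling via DESeq2 size factors
dds <- DESeqDataSetFromMatrix(countData = round(counts),
                              colData = roi_info,   # assumed ROI annotation
                              design = ~ 1)
dds <- estimateSizeFactors(dds)
rle_counts <- counts(dds, normalized = TRUE)
```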
Nevertheless, the sheer scale of high-plex analyses appropriately applied to large numbers of cases through TMAs is an incredibly powerful tool for spatial biology. The DSP protein modules include key markers that describe multiple immune cell types, immune checkpoints, and experimental targets that enable a more comprehensive understanding of the immunological parameters that influence patient outcome. While overall survival was the only clinical endpoint investigated in this study, the emergence of patient cohorts treated with immunotherapies means that such assays may be used to track patient progression and outcomes, and to indicate potential biomarkers for patients most likely to respond to these therapies. Despite some limitations in the absolute definition of tumour and TME compartments in our study, we were able to identify that the presence of CD3-, CD34-, and ICOS-expressing cells in tumour compartments was associated with better patient OS in an unadjusted univariate Cox proportional hazards model. However, these findings require validation. It is interesting to note that the enrichment of CD3 in tumour regions was associated with improved OS in this study, independently of CD4 T helper cells and cytotoxic CD8 T lymphocytes. Several markers significantly correlated with CD3 expression in the tumour compartment, including CD40, CD44, CD14, B2M, Tim-3, CD8, CD45RO, and ICOS, potentially implicating other cell lineages in immune-associated anti-tumour activity. Of note is the correlation between CD3 and ICOS, both of which were independently prognostic within tumour compartments, highlighting the potential power of such multiplex discovery. Further limitations include the retrospective nature of the study, the need for orthogonal validation, and the need for larger comparative groups.

Conclusions
In summary, the application of such novel platforms to provide comprehensive snapshots of clinical material enables an unprecedented insight into molecular phenotypes that may be indicative of response to emerging therapies and, ultimately, patient outcome. We propose the development of appropriate normalisation methods to overcome systematic variation and low signal-to-noise, and indicate the requirement for larger sample numbers to overcome the limitations of multiple testing in discovery approaches. By combining such high-plex approaches with TMAs and orthogonal validation through multispectral IHC, a new field of biomarker discovery is developing that offers to change the way clinical pathology is performed.
Recognition and cleavage of 5-methylcytosine DNA by bacterial SRA-HNH proteins

The SET and RING-finger-associated (SRA) domain is involved in the establishment and maintenance of DNA methylation in eukaryotes. Proteins containing SRA domains exist in mammals, plants, and even microorganisms. It has been established that the mammalian SRA domain recognizes 5-methylcytosine (5mC) through a base-flipping mechanism. Here, we identified and characterized two SRA domain-containing proteins with the common domain architecture of an N-terminal SRA domain and a C-terminal HNH nuclease domain, Sco5333 from Streptomyces coelicolor and Tbis1 from Thermobispora bispora. Neither sco5333 nor tbis1 can be established in methylated Escherichia coli hosts (dcm+), and this in vivo toxicity requires both the SRA and the HNH domain. Purified Sco5333 and Tbis1 displayed weak DNA cleavage activity in the presence of Mg2+, Mn2+ and Co2+, and the cleavage activity was suppressed by Zn2+. Both Sco5333 and Tbis1 bind to 5mC-containing DNA in all sequence contexts and show at least a 100-fold preference in binding affinity for methylated DNA over non-methylated DNA. We suggest that the linkage of a methyl-specific SRA domain to a weakly active HNH domain may represent a universal mechanism for combating alien methylated DNA while minimizing, to the maximum extent, damage to the host's own chromosome.

INTRODUCTION
DNA methylation occurs at the C-5 position of cytosine in most eukaryotic organisms, resulting in 5-methylcytosine (5mC) (1,2). 5mC is found predominantly in the symmetric CpG context in mammals and other vertebrates (3,4), as well as in the CHG (H = A, T or C) and asymmetric CHH contexts in plants (5). These constitute important epigenetic marks that are implicated in the repressed chromatin state, inhibition of transcription and genome stability (1,2). Faithful inheritance of these epigenetic marks is essential to cell functions (6), while aberrant DNA methylation is associated with various diseases and disorders (7,8). Major players in the maintenance of DNA methylation in mammals include the DNA methyltransferase Dnmt1 and UHRF1 (ubiquitin-like, containing PHD and RING finger domains) (9-12). UHRF1 contains a SET and RING-associated (SRA) domain that preferentially binds to hemi-methylated DNA relative to fully methylated or unmodified DNA (10,11). In the structure of the DNA-SRA complex, the 5mC base is flipped out from the DNA duplex and is accommodated in a binding pocket of the SRA domain, potentially preventing the protein from sliding along the DNA strands (13-15). Similarly, the dimeric SRA domain from the Arabidopsis thaliana SUVH5 binds to 5mC-containing DNA at either fully methylated or hemi-methylated CpG, CHG or methylated CHH sites. It flips out both the 5mC and its partner base in the complementary strand from the DNA duplex. Each of the extruded bases is positioned in one binding pocket of an individual SRA domain (16). Eukaryotic SRA-like domains are also found in bacteria, although they are not associated with SET and RING domains in the genome. Oftentimes they are fused to or associated with restriction endonucleases (REases) (17). For example, recently structurally characterized type IV restriction endonucleases in bacteria, such as the MspJI family (16,18) and the PvuRts1I/AbaSI family (19,20), possess DNA binding domains that are structurally similar to the eukaryotic SRA domains and adopt a similar base-flipping mechanism to recognise 5mC. Interestingly, they do not share any sequence similarity with their eukaryotic SRA counterparts.
On the other hand, by using the eukaryotic SRA sequence as a query, one can readily pull out more than 100 genes from bacterial genomes in GenBank, most of which have an annotation of 'SRA-YDG protein'. Beyond the computational prediction of the SRA presence in these genes, little is known about their functional roles. Sequence alignment reveals that these bacterial SRA domains share significant similarity with their eukaryotic counterparts, such as the UHRF1 or SUVH5 SRA domains, suggesting a likely common ancestor. Conserved domain analysis suggests that most of these SRA-YDG genes consist of an N-terminal SRA domain and a C-terminal HNH-type nuclease domain. Given the binding preference of the eukaryotic SRA domains for modified DNA (13-16), it is likely that these genes encode another class of DNA modification-dependent restriction endonucleases. Within this family of SRA-HNH nucleases, we chose to study two close SRA homologs in Streptomyces coelicolor and Thermobispora bispora DSM 43833. In this study, we report the in vitro binding and cleavage of 5mC-containing DNA by the two proteins, as well as their in vivo toxicity in dcm+ Escherichia coli cells. Structural models of the two bacterial SRA domains and their superposition onto eukaryotic SRA domains are also extensively studied and discussed. To our knowledge, this represents the first study of the activities of a bacterially sourced SRA-HNH protein.

MATERIALS AND METHODS
Bacterial strains and plasmid constructs used in this study are shown in Supplementary Table S1. Primers used in this study are listed in Supplementary Table S2. Oligos with different methylation patterns used in this study are summarised in Supplementary Table S3. Growth of E. coli strains and DNA manipulation were carried out according to Sambrook et al. (21).

Construction of vectors expressing Sco5333 and its derivatives
The coding sequence of Sco5333, except for the stop codon, was amplified from total DNA of S. coelicolor A3(2) using KOD DNA polymerase (TOYOBO) and primers 5333EX-F & 5333EX-R. The PCR fragment was digested with NdeI and XhoI and inserted into pET44b, generating pJTU4356, to express C-terminally His6-tagged Sco5333. N-terminally His-tagged Tbis1 was synthesized as overlapping gBlocks and cloned into pTXB1 (22,23) (Supplementary Table S1).

Measuring transformation efficiency
For the transformation efficiency assay, 0.1 µg of pJTU4356, pJTU4381, pJTU4382, pJTU4383 and pJTU4384 was individually introduced into E. coli DH10B and JTU006, respectively, and 0.1 µg of pET44b was also included as a positive control. For measuring the transformation efficiency of pTbis1, 0.1 µg of pTXB1 Tbis1 was introduced into E. coli ER2566 (dcm−) and E. coli ER2984 (dcm+), and 0.1 µg of pTXB1 served as a positive control. Transformation of each plasmid DNA was performed in three replicates.
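A minimal worked example of the transformation-efficiency calculation implied by this assay (colonies obtained per microgram of plasmid DNA, averaged over the three replicates) is given below; the colony counts are hypothetical placeholders, not study data.

```r
colonies <- c(152, 138, 147)     # CFU counted per plate for three replicates
dna_ug   <- 0.1                  # micrograms of plasmid used per transformation

efficiency <- colonies / dna_ug  # CFU per microgram of DNA
mean(efficiency)                 # mean transformation efficiency
sd(efficiency)                   # replicate-to-replicate variability
```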
Over-expression and purification of Sco5333, Tbis1 and Dcm methyltransferase
The Dcm coding sequence was amplified by colony PCR from E. coli DH10B using KOD DNA polymerase (TOYOBO) and primers dcmEX-F & R; the PCR fragment was treated with NdeI and EcoRI and ligated into the expression vector pET15b, generating pJTU4357 for producing N-terminally His6-tagged Dcm. The expression constructs pJTU4356, pJTU4357, pJTU4381, pJTU4382, pJTU4383 and pJTU4384 were introduced into BL21(DE3)/pLysS, respectively. Ten millilitres of overnight culture of each of the above strains was inoculated into 1 l of LB medium supplemented with 100 µg/ml ampicillin and 34 µg/ml chloramphenicol and grown at 37 °C to OD600 0.6, then cooled to room temperature; isopropyl-β-D-thiogalactoside (IPTG) was added to a final concentration of 0.4 mM, followed by another 5 h at 30 °C. The cells were harvested, resuspended in 20 ml of binding buffer (20 mM Tris-Cl and 150 mM NaCl, pH 8.0) and lysed by sonication in an ice bath. After centrifugation (16,000 × g for 30 min at 4 °C), the supernatant was applied to a HisTrap HP column (GE Healthcare) and purified on an ÄKTA FPLC (GE Healthcare) by eluting with a linear imidazole gradient of 20-500 mM. The product was desalted on a HiTrap Desalting column (GE Healthcare) and stored in 20 mM Tris-Cl buffer, pH 8.0, containing 100 mM NaCl and 50% glycerol at −30 °C. Purified Sco5333 and Dcm were visualized by Coomassie-stained 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Protein concentration was determined using a Bradford Protein Assay Kit (Bio-Rad). For Tbis1, the expression construct pTXB1 Tbis1 was introduced into T7 Express (NEB). Ten millilitres of overnight culture was inoculated into 2 × 1 l of Luria-Bertani (LB) medium supplemented with 100 µg/ml ampicillin. The bacterial culture was grown at 37 °C to OD600 0.6 and cooled to room temperature, and IPTG was added to a final concentration of 0.3 mM. The culture was grown for an additional 16 h at 16 °C. Cells were harvested, re-suspended in 50 ml of sonication buffer (50 mM Tris-HCl, 50 mM NaCl, 1 mM ethylenediaminetetraacetic acid (EDTA), 1 mM dithiothreitol (DTT) and 2% glycerol, pH 8.0) and lysed by sonication in an ice bath. After centrifugation (23,000 × g for 45 min at 4 °C), the supernatant was applied to a HisTrap HP column (GE Healthcare) and purified with an ÄKTA FPLC (GE Healthcare) by eluting with a linear imidazole gradient of 20-500 mM. Eluted fractions were analysed by SDS-PAGE and for nuclease activity on pBR322 (dcm+) (25), and active fractions were pooled and further purified on a HiTrap Heparin HP column (GE Healthcare) by eluting with a linear NaCl gradient of 20 mM to 1 M. Fractions containing purified protein were pooled, dialysed in storage buffer (20 mM Tris-HCl, 300 mM NaCl, 1 mM EDTA, 1 mM DTT, pH 7.4), concentrated with a VivaSpin concentrator (MWCO 30 kDa; VIVASCIENCE) and stored in 50% glycerol at −20 °C. Purified Tbis1 was visualized on 10-20% SDS-PAGE stained with SimplyBlue Safe Stain (Life Technologies). Protein concentration was determined using a Bradford Protein Assay Kit (Bio-Rad). In the assay comparing the DNA cleavage activity of Sco5333 and its mutant protein Sco5333M (Supplementary Table S1), the conditions used were the same as above except that the concentrations of Sco5333 and Sco5333M were varied (0-8 µM). For cleavage of plasmid DNA pBR322 by Tbis1, the buffer used was 20 mM Tris-acetate, pH 7.9, 50 mM potassium acetate and 1 mM DTT supplemented with 2 mM Zn2+, Mg2+, Mn2+, Co2+ or Ni2+. 0.2 µg of pBR322 plasmid DNA isolated from ER2984 was incubated with 0.01-3 pmol of Tbis1 in a 20 µl reaction at 37 °C for 1 h, followed by digestion with proteinase K at 55 °C to remove the bound Tbis1, and examined by electrophoresis in a 1.2% agarose gel. Pairs of 55nt-1&3, 1&4, 2&3 and 2&4 were mixed at a 1:1 molar ratio and annealed by ramping down from 100 °C to room temperature in a water bath.
The same strategy was used to generate four 55ntDcm2 oligos with different methylation patterns (55nt-5 to 8), as well as 55ntDcm1 oligos harbouring systematic mutations of the base flanking the modified cytosine. All single-stranded oligos were synthesized and are listed in Supplementary Table S3. In the EMSA of the 5-FAM-labeled 55nt duplexes, 0.25 µM DNA oligos were incubated with 1 µM Sco5333 in 20 mM Tris-Cl, pH 8.0, 100 mM KCl in a volume of 20 µl at 37 °C for 5 min. The reaction was then mixed with 4 µl of 6× loading buffer (30 mM EDTA, 36% glycerol, 0.035% xylene cyanol, 0.05% bromophenol blue; TAKARA) and examined by 6% native PAGE (80:1, acrylamide/bis-acrylamide) in 0.5× TBE at 10 mA at room temperature. The gel was visualized with a FUJIFILM FLA-3000 fluorescent image analyser (excitation wavelength 473 nm).

Measurement of the equilibrium dissociation constant (KD) via isothermal titration calorimetry (ITC)
A MicroCal iTC200 system (GE Healthcare) was used to measure the KD of Sco5333 for fully- and hemi-methylated 54nt fragments. 80 µl of 300 µM Sco5333 was injected, and 300 µl of 30 µM 54nt fragment was in the sample cell. The titration followed the default ITC procedure, and the data were analysed using Origin 7.0.

Sequence features of the SRA and HNH domains of Sco5333 and Tbis1
Two SRA-HNH genes, sco5333 and tbis1, were identified in the genomes of S. coelicolor and T. bispora DSM 43833 with ∼40% identity to the well-studied eukaryotic SRA UHRF1 and SRA SUVH5 (Figure 1B). Structural modelling of SRA Sco5333 and SRA Tbis1 was performed based on the crystal structures of SRA UHRF1 (PDB: 3CLZ) and SRA SUVH5 (PDB: 3Q0C). The overall secondary structure of the four SRAs is very similar except for a short β-sheet (motif I), an α-helix (motif II) and an additional α-helix (motif III) encoded by SRA UHRF1 (Figure 1B). Superposition of SRA Sco5333 onto SRA Tbis1 revealed that their overall structures are almost identical (Supplementary Figure S1), so we focus the following analysis on the SRA Sco5333 domain. SRA Sco5333 aligns well with SRA SUVH5 in the key folds (α-helices and β-sheets) as well as in the functional motifs, including the thumb and the 5mC binding domain (Supplementary Figure S2A).
Figure 1 legend (in part): α-helices are shown as cylinders and β-strands as arrows. The thumb and NKR finger corresponding to UHRF1 are underlined in red. The inverted black triangle indicates the residue that inserts into the duplex and displaces the 5mC in the SUVH5 SRA. Filled green circles represent residues that interact with 5mC in the binding pockets of the SUVH5 SRA and UHRF1 SRA, and the disordered region of the SUVH5 SRA domain is represented by ##. Black and green upright triangles designate residues that replace the looped-out 5mC and mask the unmodified C in the UHRF1 SRA, respectively. (C) Multiple sequence alignment of the HNH motifs of Sco5333, Tbis1 and Cflav and the well-studied HNH motif structures of AnaCas9, Gme, T4eVII, McrA, ColE9 and ColE7. Residues in gold designate the CCCC- or CCCH-type zinc finger motifs. Filled black circles represent the conserved HNH motifs. The downward black arrow indicates that the Tbis1 HNH displays a degenerate CCCD zinc finger. The brown D and N are active-site residues of the HNH motifs in AnaCas9, Gme and T4eVII that bind divalent ions. The H in cyan designates the divalent ion-binding residues of the HNH motifs in McrA, ColE9 and ColE7.
In addition to motif I and motif II (Supplementary Figure S2B), structural superposition revealed a much longer NKR finger loop in SRA UHRF1, which forms hydrogen bonds with the unpaired guanine and its adjacent cytosine via the side chains of N489 and R491 (Supplementary Figure S3). Although the thumb finger structures of the four domains align well, SRA UHRF1 uses V446 to replace the 5-methylcytosine and to form hydrogen bonds with R491; however, it does not directly pair with the orphaned guanine (>3.8 Å), which is anchored by the charged R491. By contrast, SRA Sco5333, SRA Tbis1 and SRA SUVH5 all use a glutamine (Q), which has a longer side chain than valine. It replaces the 5mC and directly forms stable hydrogen bonds with the orphaned guanine (Figure 1B, Supplementary Figures S3 and S4). The four SRA domains possess conserved 5mC binding elements either in the amino acids (marked with green dots above SUVH5, Figure 1B) or in the tertiary structures (Supplementary Figures S3 and S4). Sco5333 and Tbis1 both contain an HNH nuclease motif, which is found in more than 2000 proteins related to nucleic acid processing in GenBank. Within the HNH domain, Sco5333 harbours a CX2CX36CX2C motif (Figure 1C) that is characteristic of a zinc finger (ZF) structure and is also present in other proteins containing HNH nuclease domains, such as AnaCas9, Gme, T4eVII and McrA (Figure 1C and Supplementary Figure S5). The four zinc finger proteins have a common ββα-fold in which an additional divalent metal ion is coordinated either by the residues D and N (red) in the first three proteins or by H...H...H (cyan) in McrA (Figure 1C, Supplementary Figure S5). The zinc finger in AnaCas9 has been proposed to stabilize the adjacent ββα-metal fold (27). In some cases, the zinc finger can be CCHH, as in Zif268 (28), or CCCH, as in KpnI (29). As another example, the SRA-HNH protein Cflav (accession no. ADG74130) contains a CCCH zinc finger (Figure 1C).

In vivo toxicity of Sco5333 and Tbis1 to E. coli with methylation
To purify Sco5333, its coding sequence was fused to a 6×His-tag at the C-terminus in pET44b and introduced into E. coli JTU006 (dam+, dcm−) and DH10B (dam+, dcm+), respectively. While JTU006 gave normal transformants with high efficiency, few transformants (<5) were obtained for DH10B, among which a spontaneous mutant, Sco5333 E42G, was isolated. In order to evaluate the functional roles of the two domains in the restriction of Dcm-methylated DNA, the Sco5333 mutants G32A and Y50A in the SRA domain and H228A and H253A in the HNH motif were individually introduced into E. coli DH10B or JTU006. Transformation efficiencies for all mutants into DH10B were restored to a level comparable to that of wild-type sco5333 into JTU006 (Figure 2A), suggesting that the in vivo restriction of Dcm-methylated DNA requires the cooperative function of the SRA domain and the HNH motif. Consistent with this result, the full-length tbis1 was restricted by ER2984 (dam+, dcm+) but not by ER2566 (dam+, dcm−). By comparison, transformation efficiencies of the SRA Tbis1 domain alone into ER2984 or ER2566 were not impaired and were close to each other (Figure 2B). We then measured the restriction of the Dcm-methylated plasmid by BL21(DE3) expressing Sco5333. As shown in Figure 2C, the plasmid pRSFDuet1+dcm was readily taken up by BL21(DE3), but its transformation efficiency into BL21(DE3) expressing Sco5333 decreased by >90%.
In contrast, transformation of strains carrying the blank vector pRSFDuet1 showed little difference in transformation efficiency. We then compared the growth curves of BL21(DE3)/pLysS (dam+, dcm−) expressing Sco5333, Sco5333 E42G and Sco5333M (carrying the triple amino acid changes H228A, N244A and H253A) at 30 °C with and without IPTG induction. Our results suggest that there are no differences in growth rate between the wild type and the mutants (Supplementary Figure S6), implying that in vivo Sco5333 discriminates Dcm-methylated DNA from non-methylated DNA with high specificity. In sharp contrast to this observation, over-expression of Tbis1 induced by IPTG led to in vivo toxicity in the Dcm-deficient strain ER2566, suggesting promiscuous cleavage of the non-methylated genomic DNA at high levels of Tbis1 (Figure 2B, third panel).

Effect of divalent metal ions on the cleavage activity of both proteins
His-tagged Tbis1 and Sco5333 were over-expressed and purified (Figure 3A and B). In buffer without divalent metal ions, neither protein cleaved Dcm-methylated plasmid DNA, even at high protein:DNA ratios (120:1 for Sco5333 and 40:1 for Tbis1; Figure 3C and D). However, DNA cleavage activity was greatly stimulated in the presence of Ni2+, Co2+, Mn2+ and Mg2+ for Tbis1 (Figure 3D), and of Mn2+, Mg2+, Co2+ and Ni2+ for Sco5333 (Figure 3C). The remaining ions, Ca2+, Cu2+ (data not shown) and Zn2+, did not stimulate DNA cleavage activity. On the contrary, the stimulated cleavage activity was suppressed by equimolar or excess Zn2+ (Figure 4A and B). We then mutated the fourth cysteine of the zinc finger CX2CX36CX2C of Sco5333. Compared with the wild type, Sco5333 C252D displayed higher DNA cleavage activity and required a higher concentration of Zn2+ to suppress cleavage (Figure 4C), suggesting that a weaker zinc finger reduces the suppression by Zn2+. In addition, we compared the DNA binding affinity of Sco5333 in the presence of different metal ions. In the titration assay, when the molar ratio of [protein]:[DNA binding site] was ∼7:1, excess Zn2+ showed the most prominent effect on the binding affinity for 5mC-DNA (Figure 5A). We then set the ratio of [Sco5333]:[DNA binding site] to ∼0.7:1 and increased the Zn2+ concentration from 0.001 to 10 mM, and found increased shifting of 5mC-methylated DNA with increasing concentrations of Zn2+ (Figure 5B). Run-off sequencing of the open circular and linear pUC18 revealed that the nicking and double-strand breakage sites on pUC18 are randomly distributed, not sequence-specific and not associated with 5mC sites. Cleavage of 5mC-DNA and non-methylated DNA by Sco5333 in the presence of Mg2+ was compared. To our surprise, Mg2+ activated the DNA cleavage activity of Sco5333 to a similar extent on the two types of DNA (Supplementary Figure S7). To determine whether this weak, non-sequence-specific and non-methylation-specific DNA cleavage was due to possible contamination by another DNA nuclease, Sco5333M, a Sco5333 mutant with all three catalytic residues (H228, N244 and H253) changed to alanine, was expressed and purified side-by-side with the wild type. They were then assayed for cleavage activity on pUC18 DNA with and without Dcm methylation. Sco5333M completely lost DNA cleavage activity even at a high concentration (8 µM), while Sco5333 still displayed DNA cleavage activity (Supplementary Figure S8).
Therefore, we ruled out possible nuclease contamination and concluded that the weak and promiscuous DNA cleavage activity was indeed due to the excessive amount of Sco5333.

Sco5333 and Tbis1 specifically bind to 5mC in all sequence contexts
When testing the DNA cleavage activity of the SRA-HNH proteins, a strong mobility shift of pUC18 (dcm+) by Sco5333 was observed; in contrast, there was no shift of non-methylated plasmid DNAs (Supplementary Figure S9). We then chose Sco5333 to test the binding specificity for 5mC-containing DNA. A pUC18-derived 219 bp DNA was PCR-amplified and methylated by seven cytosine-5 DNA methylases with different specificities (Figure 6A). Binding of Sco5333 gave rise to discrete shifted bands for all methylated DNA duplexes but not for the non-methylated PCR product. Shifting of methylated DNA by Sco5333 became stronger as the number of methylated sites increased (Figure 6A and C). Moreover, densely methylated substrates, such as CpG-methylated and GpC-methylated DNA fragments, resulted in multiple shifted bands (Figure 6C). These results demonstrated the broad 5mC recognition specificity of Sco5333. No obvious double-strand breakage of this linear fragment by Sco5333 was observed (Supplementary Figure S10). In addition, DNA hemi-methylated at the Dcm site, on either the top or the bottom strand, was recognized by Sco5333 and shifted in the assay (Figures 6A and 7A). No shift was observed for the non-methylated 55nt duplexes (Figure 7A). Similarly, Tbis1 specifically binds to fully- and hemi-methylated DNA, but not to non-methylated DNA (Supplementary Figure S11). To further examine the sequence binding specificity of Sco5333, we systematically changed the base flanking the inner C of the Dcm site (CmCWGG) into the other three bases; 24 hybrid duplexes were generated and tested (Figure 7B). In all sequence contexts, the fully- or hemi-methylated duplexes clearly gave shifted bands. These results support that Sco5333 binds specifically to 5-methylcytosine in all sequence contexts.
Figure 2. Transformations of plasmids encoding Tbis1 or Sco5333 and its mutants into dcm+, dcm− and sco5333+ E. coli hosts. (A) The Sco5333 expression vector pJTU4356 and its mutants, pJTU4381 encoding Sco5333 Gly32Ala and pJTU4382 encoding Sco5333 Tyr50Ala in the SRA domain, and pJTU4383 encoding Sco5333 His228Ala and pJTU4384 encoding Sco5333 His253Ala in the HNH motif, were introduced into E. coli DH10B (dam+, dcm+) and its dcm knock-out mutant JTU006 (dam+, dcm−), respectively. All of the amino-acid-changed proteins lost the lethal phenotype displayed by pJTU4356 in dcm+ E. coli hosts, indicating that the in vivo restriction requires the SRA domain and the HNH motif together. (B) pTbis1, encoding Tbis1, was introduced into E. coli ER2566 (dam+, dcm−) and ER2984 (dam+, dcm+), respectively. Consistent with the transformation of pJTU4356, pTbis1 led to cell death in ER2984 but not in ER2566. However, overexpression of Tbis1 by IPTG induction in ER2566 led to cell lysis and death at high efficiency, implying that promiscuous cleavage of non-Dcm-methylated genomic DNA was induced by a high concentration of Tbis1. (C) pJTU4356 was previously introduced into E. coli BL21, generating BL21/pJTU4356 (dam+, dcm−, sco5333+). The dcm gene was cloned in pRSFDuet1, whose RSF origin is compatible with the pBR322 origin of pJTU4356. pRSFDuet1 carrying the dcm gene was efficiently introduced into BL21 but was severely restricted by BL21 harbouring sco5333.
As some eukaryotic SRA domains can bind to both 5mC and 5hmC (30,31), the binding affinity of Tbis1 for 5hmC substrates was measured by EMSA. The results showed comparable affinities of Tbis1 for 5mC and 5hmC (Supplementary Figure S13). To determine the roles of the SRA domain and the HNH motif in the in vitro binding of 5mC sites, EMSAs of wild-type Sco5333 and its mutants G32A and Y50A in the SRA domain and H228A and H253A in the HNH motif were performed and compared (Supplementary Figure S14). The results showed that amino acid changes in the SRA domain (G32A and Y50A) abolished the in vitro binding to 5mC, whereas amino acid changes in the HNH motif (H228A and H253A) did not, demonstrating that in vitro binding to 5mC is governed by the SRA domain alone (Supplementary Figure S14).

Binding properties of SRA-HNH proteins to methylated DNA
We then compared the binding properties of the bacterial SRA domain with those of the MBD (77-165) domain of mouse MeCP2 (32). EMSA of the 54nt DNA with different methylation patterns showed that MBD preferentially binds to fully methylated DNA but not to hemi-methylated DNA (Supplementary Figure S14). Titration assays of Tbis1 and the MBD of human MeCP2 (77-166, Cayman) demonstrated that Tbis1 had a >100-fold preference for fully methylated DNA over non-methylated DNA (Supplementary Figure S15), whereas the MBD of human MeCP2 had a 64-fold preference for fully methylated DNA (Supplementary Figure S16), suggesting that the bacterial SRA domain is more specific in binding 5mC DNA. To determine the affinity of Sco5333 binding to 5mC sites, the ITC approach was used to study the affinity parameters. 54nt duplexes that were fully methylated or hemi-methylated on the top or bottom strand were used as the DNA substrates. Sco5333 binds to the fully methylated 54nt duplex with an equilibrium dissociation constant (KD) of 4.1 µM (Figure 8A), slightly higher than the 3.1 µM for the top-strand hemi-methylated DNA and also >1.5 µM for the bottom-strand hemi-methylated DNA (Figure 8B and C). These dissociation constants are very close to that of SRA SUVH5 (16). According to tertiary structure predictions of Sco5333 and Tbis1 by the i-TASSER server (33,34), these two proteins adopt approximately globular shapes. Gel-filtration analysis of Sco5333 and Tbis1 showed that Sco5333 forms a homodimer in solution (Supplementary Figure S17), while Tbis1 is predominantly a monomer (Supplementary Figure S18). The ITC assay suggested that the ratio of Sco5333 to DNA was ∼1 (Figure 8A-C), which suggests that one DNA duplex, with a symmetrically methylated site, might be bound by two Sco5333 molecules, namely one Sco5333 dimer per DNA duplex.
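To relate the ITC-derived dissociation constants reported above to expected occupancy, a simple 1:1 binding approximation (fraction of DNA bound ≈ [P] / (KD + [P]), valid when protein is in excess over DNA) can be evaluated as below; the protein concentrations are illustrative, not experimental values.

```r
kd      <- 4.1                     # uM, fully methylated 54 nt duplex (from ITC)
protein <- c(1, 4.1, 10, 40)       # uM free Sco5333 (illustrative values)

fraction_bound <- protein / (kd + protein)
round(fraction_bound, 2)           # ~0.20, 0.50, 0.71, 0.91
```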
In all cases, there is no apparent sequence similarity among them, possibly suggesting convergent evolution in action. Examination of the domain association in these bacterial SRA-containing genes reveals that bacterial SRA domains are often associated with different types of DNA cleavage domains. For example, in the MspJI family, the N-terminal SRA DNA binding domain is associated with a C-terminal type IIP-like endonuclease domain with the characteristic D..(E/Q)XK motif (37). In the PvuRts1I/AbaSI family, the C-terminal SRA DNA binding domain is associated with an N-terminal Vsr-like endonuclease domain (19,20). The genes under study in this paper have an N-terminal SRA domain and a C-terminal HNH-type endonuclease domain. In addition, in at least one case, an MspJI-like SRA domain is also fused with an HNH-type endonuclease domain (SghWI in REBASE). The assortment of these domain associations suggests a combinatorial nature between the bacterial SRA domain and different types of cleavage domains. The domain association may also shed light on its possible biological roles. In all cases, it seems that the role of the SRA domain is to bring the DNA cleavage activity close to the modified DNA sites. Such activity may become useful when the bacterial cells are under attack from bacteriophages with modified DNA.

[Figure 7 legend, fragment: Sco5333 shows high affinity to fully-, top- and bottom-strand-methylated, but not non-methylated, 55 nt duplexes. (B) The same strategy was used to generate six groups of single-base substitutions of 55ntDcm1 in which the base flanking the 5mC is replaced; Sco5333 shows the same activity for 55ntDcm1, 55ntDcm2 and the 55ntDcm1 single-base mutants, indicating that Sco5333 recognizes and binds all 5mC contexts, whether fully- or hemi-methylated.]

SRA domain of Sco5333 and Tbis1 is much more like SRA SUVH5 than SRA UHRF1

Sco5333 and SRA SUVH5 bind to 5mC in either fully- or hemi-methylated DNA, in sharp contrast to SRA UHRF1, which preferentially binds hemi-methylated DNA over fully-methylated DNA. ITC studies revealed that one molecule of 5mC DNA duplex was bound by two SRA domains in dimer form, whereas a hemi-methylated CpG is bound by one SRA UHRF1. This stoichiometry is quite like that of SRA SUVH5 and might be related to the length of the NKR finger. In SRA UHRF1, a steric clash may arise if two long NKR fingers intercalated from opposite directions into a DNA groove containing 5-methylcytosine (13-15). SRA SUVH5 and SRA Sco5333 have a much shortened and disordered loop corresponding to the long NKR finger of SRA UHRF1 (Supplementary Figure S2); this loop does not insert into the DNA groove, thereby leaving enough space for another domain to bind and flip out the base on the complementary strand. Moreover, the key residue on the thumb that takes the place of the flipped base and pairs with the orphaned guanine on the complementary strand is a glutamine (Q392, Q30 and Q28 in SRA SUVH5, Sco5333 and Tbis1, respectively), whereas it is a valine in SRA UHRF1. Compared with valine, the glutamine side chain has two additional carbons and an amide group, which brings it much closer to the unpaired guanine (<3.1 Å) and allows it to form stable hydrogen bonds with it (16). As a result, the SRA domain of Sco5333 functions more like SRA SUVH5 than SRA UHRF1.
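For intuition about what the micromolar dissociation constants reported above mean in a titration experiment, the sketch below estimates the fraction of a methylated duplex that is bound at a given Sco5333 concentration. It assumes a simple 1:1 binding model with protein in large excess over DNA, which deliberately ignores the dimer-per-duplex stoichiometry suggested by ITC; the concentrations are hypothetical.

```python
def fraction_bound(protein_uM: float, kd_uM: float = 4.1) -> float:
    """Equilibrium fraction of DNA bound for a simple 1:1 model,
    assuming free protein ~ total protein (protein in excess over DNA)."""
    return protein_uM / (kd_uM + protein_uM)

# hypothetical titration around the reported K_D of 4.1 uM for the
# fully-methylated 54 nt duplex
for p in (1.0, 4.1, 10.0, 40.0):
    print(f"{p:5.1f} uM Sco5333 -> {fraction_bound(p):.2f} of duplex bound")
```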
The contribution of dual metal ion binding motifs to its cleavage activity

Two metal ion binding motifs were identified in the HNH domain of Sco5333 and its orthologs: one is a zinc finger motif composed of CX2CX36CX2C, and the other is the HNH motif. They overlap in the primary amino acid sequence. Sco5333 showed non-specific DNA cleavage in the presence of Mn2+ or Mg2+, but the DNA cleavage activity in Mg2+ was suppressed by an equimolar or higher concentration of Zn2+. We hypothesize that the coordination of Zn2+ by the zinc finger is much stronger than that of Mg2+ or Mn2+ (Figure 4B), similar to R.KpnI, in which the binding affinity of the ZF for Zn2+ is much higher than that of the HNH motif (38). Binding of Zn2+ to the zinc finger may induce a conformational change in the overall structure of Sco5333. Our results demonstrated that increasing Zn2+ can enhance the binding affinity of Sco5333 for 5mC DNA (Figure 5A and B), implying that this conformational change may extend to the SRA domain of Sco5333. Binding of Zn2+ to the ZF might exclude metal ions such as Mn2+ and Mg2+ from coordination by the HNH motif, and therefore eliminate the cleavage activity that requires Mg2+ or Mn2+. Consistent with this, Sco5333 C252D, with a disrupted ZF, showed weak DNA cleavage activity. It is postulated that the defective zinc finger may have a decreased binding affinity for Zn2+, as supported by comparative Zn2+ suppression experiments (Figure 4B and C), therefore allowing the HNH motif to compete for coordination of Zn2+ and activating the DNA cleavage activity in the presence of Zn2+ alone. Of note, the predicted structures of the HNH motifs of Sco5333 and Tbis1 both possess the ββα-metal fold, but compared with three other active ZF-HNH dual-motif domains, such as Anacas9, Gme HNH and T4eVII, their first β-sheet lacks an asparagine residue (Figure 1C and Supplementary Figure S5) that is crucial for coordination of metal ions other than Zn2+. This defect might in part explain the weak in vitro cleavage activity of Tbis1 and Sco5333, which requires at least a 40-fold excess of protein over DNA.

The difference between in vivo toxicity and in vitro cleavage activity of SRA-HNH proteins

Our results have shown that plasmids expressing Sco5333 and Tbis1 could not be established in E. coli hosts with Dcm methylation, whereas they can be maintained in hosts without 5mC methylation. However, when induced by IPTG, the methylation-deficient E. coli ER2566 expressing Tbis1 lysed (Figure 2B, third panel), while E. coli BL21 (DE3) expressing Sco5333 showed similar growth curves between the wild type and mutants (Supplementary Figure S6). This difference might imply that the HNH motif of Tbis1 is more promiscuously active toward non-methylated DNA than that of Sco5333. This speculation is supported by the lower transformation efficiency of pTbis1 into ER2566 than of pJTU4356 (encoding Sco5333) into BL21 (DE3) (Figure 2A and B). Consistent with this observation, a much lower concentration of Tbis1 can generate linear plasmid in the in vitro cleavage assay. The in vivo toxicity of the SRA-HNH proteins to E. coli strains with Dcm methylation requires both the SRA domain and the HNH motif (Figure 2A). However, under in vitro conditions, the purified enzymes cannot discriminate methylated from non-methylated DNA with respect to DNA cleavage. We demonstrated that the in vitro non-specific cleavage indeed stems from the enzyme rather than from a contaminating nuclease.
It also correlates with the observation that the methylation-deficient cells expressing Tbis1 lysed upon induction by IPTG. Taken together, the DNA cleavage activity of SRA-HNH proteins, whether in vivo or in vitro, is very weak compared with that of typical restriction endonucleases. This is not in conflict with the toxicity toward hosts carrying 5mC modification. As the SRA domain can specifically bind 5mC with high affinity, its cognate HNH domain may continuously exert its cleavage activity in the vicinity of the target 5mC sites; moreover, the tight binding of the SRA protein to its target sequence might prevent the DNA repair systems from accessing the damaged sites. In a non-methylated host, the HNH domain can only transiently nick the chromosomal DNA without SRA-directed binding, so the DNA damage might be repaired in a timely manner.

Possible roles for SRA-HNH proteins in bacteria

Restriction endonucleases are often accompanied by DNA methylases (39,40). In the genome of S. coelicolor, sco5333 has an adjacent Type IIG restriction enzyme/N6-adenine DNA methyltransferase gene, sco5331, located downstream. The neighbouring configuration of sco5331 and sco5333 is not conserved in other bacterial genomes, indicating that their association may be coincidental. For Tbis1, one of the immediately adjacent genes is an rRNA adenine methyltransferase. Sco5333 was isolated from S. coelicolor, a model strain with stringent restriction of alien DNA bearing 5mC and 6mA methylation (41). We previously identified a type IV HNH endonuclease, ScoMcrA, which can restrict phosphorothioated DNA and Dcm-methylated DNA (42). However, the scoMcrA knockout mutant still displayed strong restriction activity toward 5mC DNA. Here we show that Sco5333 may contribute to the observed restriction. Interestingly, sco5333 is located in a typical genomic island that is flanked by two almost identical copies of Arg-tRNA genes. Genomic islands are often associated with horizontal gene transfer (43). This finding fits well with the notion that restriction and methylation systems are located on mobile elements to respond to environmental threats such as phage attack (44). The combination of a typical SRA domain and a characteristic HNH endonuclease domain may represent a highly efficient mechanism to counteract the threat of alien DNA bearing 5mC. The advantage of employing SRA-HNH is the easy discrimination of 5mC DNA from the host's own DNA. More importantly, the extremely low DNA cleavage activity of the cognate zinc finger-HNH reduces, to the maximum extent, the damage to the unmodified part of the chromosomal DNA. SRA-HNH might be a universal mechanism for restricting methylated DNA, as most of the type IV restriction enzymes identified have an SRA structure and an HNH motif.
Characterization of Rice Homeobox Genes, OsHOX22 and OsHOX24, and Over-expression of OsHOX24 in Transgenic Arabidopsis Suggest Their Role in Abiotic Stress Response Homeobox transcription factors are well known regulators of plant growth and development. In this study, we carried out functional analysis of two candidate stress-responsive HD-ZIP I class homeobox genes from rice, OsHOX22, and OsHOX24. These genes were highly up-regulated under various abiotic stress conditions at different stages of rice development, including seedling, mature and reproductive stages. The transcript levels of these genes were enhanced significantly in the presence of plant hormones, including abscisic acid (ABA), auxin, salicylic acid, and gibberellic acid. The recombinant full-length and truncated homeobox proteins were found to be localized in the nucleus. Electrophoretic mobility shift assay established the binding of these homeobox proteins with specific DNA sequences, AH1 (CAAT(A/T)ATTG) and AH2 (CAAT(C/G)ATTG). Transactivation assays in yeast revealed the transcriptional activation potential of full-length OsHOX22 and OsHOX24 proteins. Homo- and hetero-dimerization capabilities of these proteins have also been demonstrated. Further, we identified putative novel interacting proteins of OsHOX22 and OsHOX24 via yeast-two hybrid analysis. Over-expression of OsHOX24 imparted higher sensitivity to stress hormone, ABA, and abiotic stresses in the transgenic Arabidopsis plants as revealed by various physiological and phenotypic assays. Microarray analysis revealed differential expression of several stress-responsive genes in transgenic lines as compared to wild-type. Many of these genes were found to be involved in transcriptional regulation and various metabolic pathways. Altogether, our results suggest the possible role of OsHOX22/OsHOX24 homeobox proteins as negative regulators in abiotic stress responses. INTRODUCTION Abiotic stress conditions, including drought and salinity, are detrimental for growth and survival of plants. These environmental factors, either singularly or compositely, cause several adverse effects on the productivity of crop plants like rice. However, by adopting biotechnological tools, it is now possible to generate high-yielding stress-tolerant plants. Several TFs have been used as potent tools to engineer abiotic stress tolerance in plants (Hussain et al., 2011). The over-expression of wellcharacterized abiotic stress-responsive TFs, like dehydrationresponsive element binding proteins (DREBs), ABA-responsive element binding proteins (AREBs), no apical meristem (NAM), Arabidopsis thaliana activation factor 1/2 (ATAF1/2), cup-shaped cotyledon 2 (CUC2) proteins (NACs), has led to the generation of stress-tolerant transgenic plants without loss in crop yield (Nakashima et al., 2009;Todaka et al., 2015). However, the function of various other TFs in abiotic stress tolerance still remains to be explored. Homeobox TFs belong to a large gene family and are known to play crucial roles in various aspects of plant development (Gehring et al., 1994;Nam and Nei, 2005). Rice and Arabidopsis genomes contain at least 110 homeobox genes each (Jain et al., 2008;Mukherjee et al., 2009). The members of homeobox TF family have been categorized into 14 classes, including HD-ZIP and TALE superclasses (Jain et al., 2008;Mukherjee et al., 2009). 
The plant-specific HD-ZIP superclass contains highest number of homeobox proteins (48) and is grouped into four subfamilies, HD-ZIP I-IV (Jain et al., 2008;Mukherjee et al., 2009). All HD-ZIP superclass proteins possess HD and leucine-zipper (LZ) domains. Besides this, HD-ZIP II proteins contain ZIBEL and CE motifs, whereas HD-ZIP III proteins contain MEKHLA domain also. In addition, both HD-ZIP III and HD-ZIP IV subfamily proteins harbor START and HD-SAD domains (Mukherjee et al., 2009). Various HD-ZIP superclass members are known to regulate a variety of developmental processes and abiotic stress responses in plants (Ariel et al., 2007;Harris et al., 2011). The ectopic expression of a HD-ZIP I subfamily member indicated its involvement in leaf development and blue light signaling (Wang et al., 2003), and few members were reported to mediate giberrellin signaling (Dai et al., 2008;Son et al., 2010). HD-ZIP II subfamily members have been implicated in shade avoidance responses in plants (Sessa et al., 2005). HD-ZIP III subfamily members have emerged as vital regulators of apical meristem formation, maintenance of abaxial or adaxial polarity of leaves and embryo, besides vascular development and leaf initiation process in shoot apical meristem region in Arabidopsis and rice, respectively (Prigge et al., 2005;Itoh et al., 2008). Further, HD-ZIP IV subfamily members were reported to be crucial determinants of outer cell layer formation of plant organs, leaf rolling process, trichome development and anther cell wall differentiation (Nakamura et al., 2006;Vernoud et al., 2009;Zou et al., 2011). It has been speculated that evolutionary pressure resulted in well orchestrated participation of numerous HD-ZIP subfamily members in developmental regulation of plants (Ariel et al., 2007). A number of HD-ZIP superclass members have been found to be differentially expressed during abiotic stress conditions in different plant species (Gago et al., 2002;Olsson et al., 2004;Jain et al., 2008;Ni et al., 2008;Bhattacharjee et al., 2015). The involvement of some HD-ZIP I subfamily members have already been reported in modulating abiotic stress responses (Olsson et al., 2004;Song et al., 2012). In recent times, AtHB12 and AtHB7 were found to mediate both growth related processes and water stress responses in Arabidopsis (Ré et al., 2014). Some studies have demonstrated the role of a few HD-ZIP genes of Arabidopsis and rice in abiotic stress tolerance as well (Olsson et al., 2004;Zhang et al., 2012) and they may act as promising candidates for crop improvement (Bhattacharjee and Jain, 2013). Previously, at least nine of the 14 members of HD-ZIP I family in rice were found to be differentially expressed under abiotic stress conditions (Jain et al., 2008). In the present study, we performed comprehensive expression profiling of two candidate HD-ZIP I family homeobox genes, OsHOX22 (LOC_Os04g45810) and OsHOX24 (LOC_Os02g43330), under various abiotic stress conditions at different stages of development in rice. We established the nuclear localization of OsHOX22 and OsHOX24 proteins, analyzed their transactivation and dimerization properties, and identified their novel interacting proteins. Further, we studied the binding property of purified proteins with specific DNA sequences and identified their putative downstream targets at whole genome-level. In addition, we over-expressed OsHOX24 in Arabidopsis and showed its role in abiotic stress responses. 
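Because part of this study rests on scanning upstream regions for the AH1 (CAAT(A/T)ATTG) and AH2 (CAAT(C/G)ATTG) pseudopalindromic motifs to nominate downstream targets, a minimal Python sketch of such a promoter scan is given below. The motif patterns follow the definitions used in this study; the gene identifier and toy promoter sequence are placeholders, and a genome-wide run would iterate over the 1 kb upstream regions of all annotated rice genes.

```python
import re

AH1 = re.compile(r"CAAT[AT]ATTG")   # CAAT(A/T)ATTG
AH2 = re.compile(r"CAAT[CG]ATTG")   # CAAT(C/G)ATTG

def scan_promoter(name: str, upstream_1kb: str) -> dict:
    """Report AH1/AH2 hits in a 1-kb upstream sequence.
    Both motifs are pseudopalindromic (equal to their own reverse
    complement), so a single-strand scan is sufficient."""
    seq = upstream_1kb.upper()
    return {"gene": name,
            "AH1": len(AH1.findall(seq)),
            "AH2": len(AH2.findall(seq))}

# toy promoter with a placeholder locus identifier
print(scan_promoter("LOC_OsXXgXXXXX", "ATGCCAATAATTGGGT" * 3 + "CAATCATTG"))
# -> {'gene': 'LOC_OsXXgXXXXX', 'AH1': 3, 'AH2': 1}
```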
For imparting abiotic stresses, 7-day-old rice seedlings were removed from trays and subjected to desiccation (whole seedlings were kept between folds of tissue paper and allowed to dry), salinity (seedling roots were submerged in 200 mM NaCl solution), cold [seedling roots were submerged in reverse-osmosis (RO) water and kept at 4 ± 1°C in a cold room] and osmotic stress (seedling roots were submerged in 200 mM mannitol solution) treatments as described earlier (Jain et al., 2008). Whole seedlings under control and stress conditions were harvested at the 1, 3, 6, and 12 h time points and snap frozen in liquid nitrogen. Likewise, greenhouse-grown 5-week-old mature rice plants were subjected to desiccation and salinity stress treatments for 1, 3, 6, and 12 h followed by tissue harvesting. Seedlings kept in RO water and plants grown in pots (filled with soil) supplied with RO water served as experimental controls. Four-month-old reproductive-stage rice plants were subjected to desiccation (by withholding water) and salinity (200 mM NaCl solution) stresses, and flag-leaf and panicle tissues were harvested after 6 and 12 h.

Real-Time Polymerase Chain Reaction (PCR) Analysis

To study gene expression, quantitative real-time PCR analysis was carried out as described earlier (Sharma et al., 2014). At least two biological replicates for each sample and three technical replicates for each biological replicate were analyzed. The relative expression level of each gene was determined using the ΔΔCT method as described previously (Jain et al., 2006b); a worked sketch of this calculation is given below. To normalize the relative mRNA levels of individual genes in different RNA samples, PP2A and UBQ5 were used as the most suitable internal control genes for Arabidopsis and rice, respectively (Sharma et al., 2014). The list of primers used for real-time PCR is given in Supplementary Table S1.

Sequence Analysis and Homology Modeling

The alignment of genomic and coding sequences of the homeobox genes was done using the Sim4 software (Florea et al., 1998) to determine exon-intron organization. For promoter analysis, 2 kb of sequence upstream of the start codon of each homeobox gene was retrieved using the corresponding BAC/PAC clone sequences from the National Centre for Biotechnology Information (NCBI). The sequences were used as queries in the PLACE database to identify stress-responsive cis-regulatory elements. Homology modeling of the HD of the OsHOX22 and OsHOX24 proteins was performed using 9ANT (Antennapedia homeodomain-DNA complex) from Drosophila (Fraenkel and Pabo, 1998) as the template.

Subcellular Localization of Recombinant Homeobox Proteins

The full-length and truncated versions [C-terminal deletion (ΔC) of 165-261 amino acids (aa) for OsHOX24ΔC and 174-276 aa for OsHOX22ΔC] of the homeobox genes were PCR amplified using gene-specific primers (Supplementary Table S2) and cloned into the psGFPcs vector (Kapoor et al., 2002) using ApaI and XmaI restriction sites. The N-terminal GFP fusion constructs and the empty vector (psGFPcs; experimental control) were transiently transformed into onion epidermal cells via the particle bombardment method using a PDS-1000/He particle delivery system (Bio-Rad Laboratories, Hercules, CA, USA) as described earlier (Sharma et al., 2014). The transformed cells were incubated in the dark at 23°C for 24 h, and the onion peels were visualized under a confocal microscope (AOBS TCS-SP2, Leica Microsystems, Mannheim) for detection of GFP and DAPI signals.
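As a concrete illustration of the relative-expression calculation referred to in the real-time PCR subsection above, the sketch below implements the standard comparative CT (2^-ΔΔCT) computation, using UBQ5 as the rice reference gene. The CT values are hypothetical and the exact normalization details of the cited method may differ.

```python
def relative_expression(ct_gene_stress: float, ct_ref_stress: float,
                        ct_gene_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change by the comparative CT (2^-ddCT) method:
    dCT = CT(gene) - CT(reference); ddCT = dCT(stress) - dCT(control)."""
    d_ct_stress = ct_gene_stress - ct_ref_stress
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_stress - d_ct_ctrl)

# hypothetical CT values: target gene vs. UBQ5 in stressed and control seedlings
print(relative_expression(22.0, 18.0, 28.0, 18.5))  # ~45-fold up-regulation
```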
Transactivation and Dimerization Assays

The full-length coding sequences of OsHOX24 (786 bp) and OsHOX22 (831 bp), and their C-terminal deleted regions, OsHOX24ΔC (495 bp) and OsHOX22ΔC (521 bp), were PCR amplified using gene-specific primers (Supplementary Table S2) and cloned into the pGBKT7 vector containing the GAL4 DNA-binding domain. The confirmed constructs were transformed into the yeast strain AH109 (harboring the HIS3, ADE2, MEL1, and lacZ reporter genes) according to the small-scale yeast transformation procedure (Clontech). The empty vector (pGBKT7) and pGBKT7-p53 + pGADT7-T antigen transformed into yeast served as negative and positive experimental controls, respectively. The transformants were serially diluted, dropped on various SD selection media, namely SD-Trp, SD-Trp-His, and SD-Trp-His-Ade, and incubated at 30°C for 3-5 days. To check the dimerization properties of the homeobox proteins, the full-length coding sequences of OsHOX24 and OsHOX22, and their C-terminal deleted regions, OsHOX24ΔC and OsHOX22ΔC, were cloned into the pGADT7 vector containing the GAL4 activation domain. The bait vectors containing the truncated versions of the ORFs were co-transformed with prey vectors containing either the full-length or the truncated ORFs in yeast. The empty vectors, pGBKT7 + pGADT7, and pGBKT7-p53 + pGADT7-T antigen, co-transformed into yeast, were used as negative and positive controls, respectively. The transformants were grown on SD-Trp-Leu and SD-Trp-Leu-His-Ade selection media and incubated at 30°C for 3-5 days.

Yeast-two Hybrid Analysis

The C-terminal deletion constructs, OsHOX24ΔC (1-164 aa) and OsHOX22ΔC (1-173 aa), were created in the bait vector for yeast-two hybrid analysis. A cDNA library was generated from 3 h drought stress-treated 7-day-old rice seedlings in the library vector, pGADT7-Rec, by recombination-based cloning using SmaI and transformed into yeast (AH109), according to the manufacturer's instructions (Clontech). Using large-scale transformation by the PEG/LiAc method, the bait constructs (OsHOX24ΔC and OsHOX22ΔC) were transformed into competent cells prepared from a single aliquot of the cDNA library glycerol stock, according to the instructions provided by the manufacturer (Clontech). The transformation mixture was spread on the SD selection medium SD-Trp-Leu-His and incubated at 30°C for 5 days until colonies appeared. Selected transformed yeast colonies were streaked on SD-Trp-Leu medium and simultaneously screened by colony PCR using the AD5 and AD3 library vector-specific primers to check for the presence of inserts. The colonies possessing inserts >300 bp were streaked on SD-Trp-Leu-His (triple dropout medium; TDO medium) and SD-Trp-Leu-His-Ade (quadruple dropout medium; QDO medium) for reconfirmation. The putative clones were also streaked on TDO and QDO selection media supplemented with either 40 µg/ml X-α-Gal or X-β-Gal and allowed to grow at 30°C for 3-5 days until blue color developed. On the basis of the color development in colonies due to activation of the Mel1/lacZ reporter genes and the size of the insert, selected clones were confirmed by Sanger sequencing. Further, the interacting partners of the homeobox TFs were confirmed by drop tests and a quantitative lacZ reporter gene assay using o-nitrophenyl-β-D-galactopyranoside (ONPG) as substrate, according to the manufacturer's instructions (Clontech). pGBKT7-Lam + pGADT7-T antigen and pGBKT7-p53 + pGADT7-T antigen were used as negative and positive experimental controls, respectively. The β-galactosidase units for each sample were calculated according to Miller (1972).
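The β-galactosidase quantification cited above follows Miller (1972); the sketch below shows the classic Miller-unit calculation for one ONPG reaction. The absorbance readings, reaction time and assay volume are hypothetical, and the exact correction terms applied in the study may differ.

```python
def miller_units(a420: float, a550: float, a600: float,
                 time_min: float, volume_ml: float) -> float:
    """Classic Miller (1972) beta-galactosidase units:
    1000 * (A420 - 1.75 * A550) / (time_min * volume_ml * A600).
    The A550 term corrects for light scattering by cell debris."""
    return 1000 * (a420 - 1.75 * a550) / (time_min * volume_ml * a600)

# hypothetical readings for one yeast transformant
print(round(miller_units(a420=0.45, a550=0.02, a600=0.60,
                         time_min=30, volume_ml=0.1), 1))
```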
The experiments were performed in three biological replicates. To study the expression profiles of the homeobox genes and the genes encoding their interacting proteins under abiotic stress conditions, we analyzed the publicly available microarray data from Genevestigator v.3 (https://genevestigator.com/gv/). Heatmaps depicting the log2 ratio (fold change) values of the respective genes under drought, salinity and cold stress conditions were generated.

Electrophoretic Mobility Shift Assay (EMSA)

The PCR-amplified complete ORFs of OsHOX24 and OsHOX22 [using gene-specific primers (Supplementary Table S2)] were cloned into the pET28a expression vector, in the BamHI/EcoRI and XhoI/HindIII restriction sites, respectively. Recombinant protein induction followed by purification under native conditions was carried out as described earlier (Sharma et al., 2014). For EMSA, single-stranded biotinylated and HEX-labeled AH1 and AH2 oligonucleotide sequences, synthesized commercially (Sigma) as tetrameric repeats [oligos with four consecutive repeats of the cis-regulatory motifs (AH1/AH2)], were annealed in equimolar volumes. For protein-DNA binding reactions, 25-50 nM annealed oligos (HEX- or biotin-labeled AH1/AH2) were added to 5-10 µg of purified protein along with 5X DNA binding buffer (50 mM Tris-Cl pH 8.0, 2.5 mM EDTA, 2.5 mM DTT, 5 mM MgCl2, 5X protease inhibitor, 250 mM KCl, and 12.5% glycerol). Reactions devoid of annealed oligos or purified protein served as experimental controls. For binding experiments performed with HEX-labeled oligos, a 200-fold excess of unlabelled oligos and 1 µg/µl poly(dI-dC) were used as specific and non-specific competitor DNA, respectively. The binding reactions were incubated at room temperature for 30 min followed by 6% native polyacrylamide gel electrophoresis in 0.25X Tris-borate-EDTA (TBE) at 15-20 mA for 30 min. HEX-labeled fluorescent oligos complexed with the purified proteins were visualized directly with a Typhoon scanner. For biotin-labeled oligos complexed with the purified proteins, a positively charged nylon membrane was used for electrophoretic transfer followed by UV crosslinking and incubation with a streptavidin-horseradish peroxidase conjugate/blocking reagent solution (1:300 dilution) for the requisite time. The protein-DNA complexes were detected via chemiluminescence using the Enhanced Chemiluminescence (ECL) Detection system (GE Healthcare, Buckinghamshire, UK) as per the manufacturer's instructions.

Over-expression of OsHOX24 in Arabidopsis

To over-express OsHOX24 in Arabidopsis, the PCR-amplified complete ORF [using gene-specific primers (Supplementary Table S2)] was cloned into the binary expression vector pBI121, in the XbaI/BamHI restriction sites. The confirmed clone was transformed into Agrobacterium strain GV3101 for generating Arabidopsis transgenic lines. The transformation of WT (Col-0) Arabidopsis plants was done using the Agrobacterium strain harboring the confirmed construct via the floral-dip method (Clough and Bent, 1998). Seeds obtained from the transformed Arabidopsis plants were screened on MS medium supplemented with kanamycin. The PCR-positive transgenic lines were grown to the homozygous stage for further analyses as described earlier (Jain et al., 2006b).

Phenotypic and Stress Assays

To study the effect of the OsHOX24 transgene in Arabidopsis, the phenotype of the over-expression transgenic lines was compared with that of WT at different stages of plant development.
To study the response of the Arabidopsis transgenics under various abiotic stress conditions, seed germination assays were carried out as described earlier (Sharma et al., 2014). WT Arabidopsis and transgenic seeds were plated on MS medium without or with ABA (0.5, 1, and 5 µM) or NaCl (100, 200, and 300 mM), subjected to stratification (at 4°C) in the dark for 2 days, and seed germination [radicle emergence after rupture of the seed testa (Jain et al., 2006b; De Giorgi et al., 2015)] was recorded after 3 days of transfer to light. To assess the effect of desiccation stress, WT and 35S::OsHOX24 transgenics (HZIP1-2.3 and HZIP1-8.2) were grown on MS medium supplemented without or with PEG6000 (−0.4 MPa) for 10 days. The root length and fresh weight of seedlings grown under control and desiccation stress conditions were measured. The relative average root length and fresh weight under desiccation stress were calculated as percentages of the root length and fresh weight of seedlings under control conditions. Four-week-old mature plants were subjected to desiccation stress by withholding water for 3 weeks followed by a 1-week recovery. WT and transgenic plants of the same age served as experimental controls. Plant growth was monitored until seed maturation, and the phenotypes under control and desiccation stress followed by the recovery phase were documented. To assess the effect of desiccation stress, the chlorophyll content of leaves of transgenic and WT plants under desiccation and control conditions was estimated as described earlier (Sharma et al., 2014).

Microarray Analysis

Total RNA was isolated from 10-day-old Arabidopsis seedlings (WT and 35S::OsHOX24 transgenics) and quality control was performed as described earlier (Sharma et al., 2014). Microarray analysis for three independent biological replicates was conducted using the Affymetrix GeneChip 3′ IVT kit (Affymetrix, Santa Clara, CA, USA) according to the manufacturer's instructions, as described earlier (Sharma et al., 2014). The microarray data have been submitted to the Gene Expression Omnibus database at NCBI under the series accession number GSE79188. GO enrichment was carried out using the online GOEAST toolkit. The metabolic pathway analysis was carried out in the AraCyc database as described previously (Sharma et al., 2014). Heatmaps were generated using MeV (version 4.9). Validation of the microarray experiment for selected differentially expressed genes was carried out by real-time PCR analysis using gene-specific primers (Supplementary Table S1).

Statistical Analysis

All the experiments were conducted in at least three biological replicates unless otherwise mentioned, and the SE was computed in each case. For the estimation of statistical significance, Student's t-test was performed. The data points representing statistically significant differences between WT and transgenic lines or between control and stress conditions have been indicated.

RESULTS

Sequence Analysis, Domain Organization, and DNA Binding

Two homeobox genes belonging to the HD-ZIP I subfamily, OsHOX22 and OsHOX24, which showed up-regulation under abiotic stresses in our previous study (Jain et al., 2008), were selected for further characterization and functional validation in this study. For OsHOX24, a cDNA clone (AK063685) was obtained from the National Institute of Agrobiological Sciences (NIAS). However, we observed ambiguity between the annotated sequence of OsHOX22 at the Rice Genome Annotation Project (RGAP) and the corresponding cDNA clone (AK109177) sequence.
The ORF length of OsHOX22 at RGAP corresponded to 831 bp, in contrast to the NIAS cDNA clone, which corresponded to an ORF length of 570 bp. Therefore, we amplified the OsHOX22 cDNA via reverse transcriptase-PCR (RT-PCR; from total RNA isolated from 3 h drought stress-treated 7-day-old rice seedlings) and cloned it into the pGEMT-Easy vector. The sequencing results confirmed the annotated sequence reported in RGAP (LOC_Os04g45810). The gene sequences of OsHOX24 and OsHOX22 were found to be 1423 and 1347 bp in length, respectively, each harboring two exons interrupted by a single intron (phase 0) (Supplementary Figure S1A). The ORFs of OsHOX24 and OsHOX22 comprise 786 and 831 bp, encoding 261 and 276 aa residues, respectively. The domain organization of the OsHOX24 and OsHOX22 proteins revealed the presence of highly conserved HD and HALZ domains (Supplementary Figure S1A). A putative monopartite NLS was also detected within the HD region of both homeobox proteins (Supplementary Figure S1A). We identified several cis-regulatory elements in the promoter sequences (2 kb upstream) of OsHOX24 and OsHOX22 (Supplementary Figure S2). Many of these cis-regulatory motifs were found to be stress-responsive in nature, for example, the ABA-responsive element (ABRE), C-repeat binding factor-dehydration responsive element (CBF-DRE), low temperature response element (LTRE), myeloblastosis element (MYB), MYB core element (MYBCORE), and myelocytomatosis element (MYC). These cis-regulatory elements have been reported to be vital for the regulation of stress-responsive genes in plants. The availability of the crystal structure of the Drosophila Antennapedia HD protein-DNA complex (Protein Data Bank code 9ANT; Fraenkel and Pabo, 1998) enabled us to determine the three-dimensional structure of the HD of the homeobox proteins by homology modeling (Supplementary Figure S3). The HD portions of the OsHOX24 and OsHOX22 homeobox proteins exhibited 41-43% identity with, and more than 85% coverage of, the template structure. The modeled HD structures of OsHOX24 and OsHOX22 were found to possess three alpha helices interconnected by loops (Supplementary Figure S3). By comparing the modeled HD structures of OsHOX24 and OsHOX22 with the template, the residues forming the nucleotide-binding site were identified (Supplementary Figures S3A,C). It was observed that the HDs of both homeobox proteins are capable of binding DNA in the major groove by forming hydrogen bonds via three amino acid residues, namely Arg4, Ile46, and Asn49 (Supplementary Figures S3B,D), conserved between the model and the template. Ramachandran plot analysis showed that 98 and 100% of the residues in the modeled HD structures of OsHOX24 and OsHOX22, respectively, lie in the favored regions.

Homeobox Genes Were Highly Induced during Abiotic Stress Conditions at Different Stages of Development

We confirmed the differential expression of OsHOX24 and OsHOX22 via real-time PCR analysis in various developmental stages of rice (Supplementary Figures S1B,C), as reported previously (Jain et al., 2008). Further, we performed comprehensive expression profiling of these genes under abiotic stress conditions at different stages of rice development. OsHOX24 and OsHOX22 were highly up-regulated in rice seedlings subjected to desiccation, salinity, cold, and osmotic stress treatments for various durations (1, 3, 6, and 12 h), as revealed by real-time PCR analysis (Figures 1A,B).
The transcript levels of OsHOX24 and OsHOX22 gradually increased with the duration of stress treatment in all the cases. The up-regulation of OsHOX24 and OsHOX22 was higher in the seedlings subjected to desiccation stress as compared to other stresses (Figures 1A,B). Notably, the transcript level of OsHOX24 was much more elevated than OsHOX22 under different stress conditions except cold stress (Figures 1A,B). For instance, after 12 h of desiccation stress, the accumulation of OsHOX24 transcripts was about 10 times more than OsHOX22 in the rice seedlings (Figures 1A,B). Further, the transcript level of OsHOX24 and OsHOX22 was analyzed in 5-week-old mature plants, subjected to desiccation and salinity stresses for 1, 3, 6, and 12 h. Significant up-regulation of OsHOX24 and OsHOX22 was detected in the mature rice plants on exposure to stress and prolonged exposure led to further increase in their transcript levels ( Figure 1C). The extent of up-regulation of homeobox genes due to desiccation stress was found to be slightly more as compared to salinity stress. Both the homeobox genes showed increase in transcript levels till 6 h under desiccation and salinity stresses. After extended period of desiccation stress (12 h), the transcript level of OsHOX24 was found to be about five times more than OsHOX22 ( Figure 1C). Next, we examined the expression profiles of homeobox genes in panicle and flag-leaf of 4-month-old (reproductive stage) rice plants, subjected to mock, desiccation and salinity stresses for 6 and 12 h. The analysis revealed up-regulation of OsHOX24 and OsHOX22 in both flag-leaf and panicle during desiccation stress ( Figure 1D). It was also observed that the extent of up-regulation was more in flag-leaf than panicle during desiccation stress. The transcript levels of OsHOX24 were induced in flag-leaf within 6 h of desiccation stress ( Figure 1D). However, the enhanced transcript levels of OsHOX22 were detected in flag-leaf only after 12 h of desiccation stress (Figure 1D). In case of salinity stress, the transcript level of OsHOX24 was found to be up-regulated in both flag-leaf and panicle tissues ( Figure 1D). However, even after 12 h of salinity stress, no significant up-regulation of OsHOX22 could be detected in either of the tissues analyzed ( Figure 1D). Differential Expression of Homeobox Genes in Response to Plant Hormones To study the effect of plant hormones, the transcript profiling of homeobox genes was carried out in the rice seedlings subjected to various hormone treatments exogenously, including IAA, EBR, ABA, SA, ACC, BAP, and GA3. The transcript levels of OsHOX24 and OsHOX22 genes were found to be elevated under different hormone treatments (Figure 2). ABA treatment resulted in significant increase (30-80-fold) in the transcript levels of both OsHOX24 and OsHOX22. The transcript level of OsHOX24 was induced in the presence of IAA, EBR, SA, and ACC as well (Figure 2A), whereas, highest up-regulation of OsHOX22 was found in the presence of SA followed by GA3 and IAA ( Figure 2B). These results suggested that homeobox genes are involved in ABA or other hormone-signaling pathways in rice. Recombinant Homeobox Proteins Are Nuclear-Localized The amino acid sequence analysis of OsHOX24 and OsHOX22 proteins revealed the presence of a putative monopartite NLS within their HD. To confirm the subcellular localization, their complete ORFs were cloned in psGFPcs vector with N-terminal GFP fusion. 
The GFP-fused full-length homeobox proteins (GFP::OsHOX24 and GFP:: OsHOX22) were transiently expressed in onion epidermal cells. In case of empty vector (GFP alone), fluorescence was spread throughout the onion cell, whereas for full-length recombinant homeobox proteins, fluorescence was detected only in the nucleus, indicating the nuclear-localization of homeobox proteins ( Figure 3A). Further, we deleted the C-terminal transactivation domain of homeobox proteins (165-261 aa for OsHOX24 C and 174-276 aa for OsHOX22 C) and performed subcellular localization studies in onion epidermal cells. The truncated recombinant proteins were also found to be localized in the nucleus ( Figure 3B). The nuclear-localization of recombinant homeobox proteins was further confirmed by staining with nucleus-specific dye, DAPI. DNA Binding of Homeobox Proteins and Identification of Putative Targets Earlier studies have demonstrated the specific binding of HD-ZIP I class members with 9 bp pseudopalindromic sequences, namely AH1 (CAAT(A/T)ATTG) and AH2 (CAAT(C/G)ATTG; Sessa et al., 1997;Palena et al., 1999;Meijer et al., 2000). We also studied the binding of purified homeobox proteins with tetrameric oligos, AH1 and AH2, via EMSA. OsHOX24 and OsHOX22 proteins were found to bind with biotinylated AH1 and AH2 tetrameric oligos ( Figure 4A). The presence of multiple bands indicated that OsHOX24 could possibly associate with tetrameric oligos in monomeric or oligomeric forms. Similar patterns of protein-DNA binding could be detected using HEX-labeled oligos as well. Incorporation of 200-fold molar excess of unlabelled oligos as competitor abolished the DNA-protein binding for OsHOX22, whereas highly reduced concentration of DNA-protein complex was observed for OsHOX24 ( Figure 4B). These results indicate that OsHOX24 possesses stronger binding affinity for these target motifs as compared to OsHOX22. The genes harboring AH1 and/or AH2 motifs in their promoters may represent the downstream targets of homeobox proteins. Therefore, we scanned 1 kb upstream regions of all rice protein coding genes (39,045) for the presence of AH1 and/or AH2 motifs. At least 809 rice genes possessing one or more of these target motifs in their promoter regions were identified. A larger number (539 genes) of rice genes harbored AH1 motif as compared to the AH2 motif (289 genes; Supplementary Table S3). We investigated the major functional categories represented among these genes via GO enrichment analysis. In biological process category, the genes involved in small molecule metabolic processes, lipid metabolic process, cellular response to stimulus, oxidation-reduction, hormone mediated signaling pathways and reproductive and anatomical structure developmental processes were found to be significantly enriched (Supplementary Figure S4). Homeobox Proteins Display Transactivation and Dimerization Properties OsHOX24 and OsHOX22 proteins were found to be rich in acidic amino acids at the C-terminal region, which could possibly contribute to their transactivation property. Thus, we investigated the transcriptional activation property of these HD-ZIP I TFs in yeast. The complete ORFs and C-terminal deletion constructs ( C) of OsHOX24 and OsHOX22 were cloned in yeast expression vector containing DNA binding domain ( Figure 5A). The colonies of transformed yeast cells grew uniformly on SD-Trp selection medium. 
The growth of yeast transformants on SD-Trp-His and SD-Trp-His-Ade selection media, even with increasing serial dilution, confirmed the transactivating nature of full-length homeobox proteins ( Figure 5B). In contrast, yeast transformants harboring OsHOX24 C and OsHOX22 C, and empty bait vector control, did not grow in either of the selection media. This suggested that C-terminal region of full-length homeobox proteins was responsible for their transcriptional activation property, because these proteins could drive the expression of HIS3 and ADE2 reporter genes even in the absence of any interacting protein in yeast. Various homeobox proteins belonging to HD-ZIP class have been reported to form homodimers or heterodimers with other members (Meijer et al., 2000). This prompted us to investigate about the dimerization property of OsHOX24 and OsHOX22 in yeast. The complete ORFs of OsHOX24 and OsHOX22 and their C-terminal deletion constructs (OsHOX24 C and OsHOX22 C) were cloned in pGADT7 vector. The bait vector containing truncated version of homeobox proteins was co-transformed with prey vector harboring either full-length or truncated homeobox proteins in yeast. Several colonies were obtained on SD-Trp-Leu-His-Ade selection media for all the combinations of cotransformed bait and prey plasmid constructs, except for the negative control, indicating that the full-length and truncated versions of homeobox proteins can homodimerize and heterodimerize with each other ( Figure 5C). These observations suggest that C-terminal region of these homeobox proteins may not be important for dimerization. Identification of Novel Interacting Proteins of Homeobox Proteins and their Gene Expression Profiling The deletion constructs of homeobox genes (OsHOX24 C and OsHOX22 C) were used as baits to identify their interacting proteins. Numerous transformants were obtained after largescale transformation of OsHOX24 C and OsHOX22 C bait plasmid DNAs and screened on SD media lacking leucine, tryptophan and histidine. Selected transformants were screened by colony PCR and further grown on TDO (SD-Trp-Leu-His) and QDO (SD-Trp-Leu-His-Ade) media supplemented with or without X-α-Gal or X-β-Gal for reconfirmation. The growth of putative clones and blue color development in colonies was observed on TDO and QDO selection media, which also indicated activation of reported genes (Mel1 and lacZ). The sequencing of plasmid DNAs of selected confirmed clones resulted in the identification of interacting proteins of candidate homeobox TFs. At least nine and five proteins were identified as interacting proteins of OsHOX24 and OsHOX22, respectively. OsHOX24 was found to interact with protein fragments belonging to GRAM domain TF, expressed protein, high mobility group protein (HMG1/2), eukaryotic translation initiation factor I, DUF domain protein, endoplasmic reticulum (ER) lumen protein retaining receptor and enzymes like sucrose synthase and phenylalanine ammonia lyase ( Figure 6A). OsHOX22 was found to interact with protein fragments belonging to an expressed protein, pentatricopeptide repeat protein, hypoxia-responsive family protein, universal stress protein domain containing protein and UDP-glucuronosyl and UDP-glucosyl transferase domain containing protein ( Figure 6B). Further, the interacting proteins were examined for the activation of lacZ reporter gene via ONPG assay. We observed a considerable difference in the β-galactosidase activity of putative interacting proteins. 
Among the OsHOX24 interactors, phenylalanine ammonia lyase showed highest β-galactosidase activity followed by sucrose synthase, DUF domain protein, expressed protein, HMG1/2 expressed protein, GRAM domain TF and eukaryotic translation initiation factor I, whereas least β-galactosidase activity (almost comparable with negative control) was shown by ER lumen protein retaining receptor ( Figure 6A). In case of OsHOX22 interactors, universal stress protein domain containing protein showed highest β-galactosidase activity followed by UDPglucuronosyl and UDP-glucosyl transferase domain containing protein, pentatricopeptide protein, expressed protein and hypoxia-responsive family protein ( Figure 6B). The expression profiles of the genes encoding for interacting proteins of OsHOX24 and OsHOX22 under various abiotic stress conditions were analyzed using publicly available microarray data from Genevestigator, which comprised of expression profiling data in 7-day-old rice (IR64) seedlings subjected to 3 h of desiccation, salinity and cold stresses, in droughttolerant rice seedlings (N22) under drought stress, and in flagleaf tissues of two rice genotypes; IRAT109 (drought-resistant japonica cultivar) and Zhenshan 97 (ZS97; drought sensitive indica cultivar) under drought stress at reproductive stage of development. The transcript levels of most of these genes were found to be altered under atleast one or more of the abiotic stress conditions analyzed (Figure 6C). The genes encoding for sucrose synthase, hypoxia-responsive family protein and GRAM domain TF showed similar expression profiles as that of OsHOX24 and OsHOX22 under selected abiotic stress conditions analyzed in Genevestigator ( Figure 6C). Generation of OsHOX24 Over-expression Transgenic Arabidopsis Plants The relatedness of OsHOX22 and OsHOX24 has been speculated to be a result of ancient chromosomal duplication (Agalou et al., 2008;Jain et al., 2008). Since these genes are expected to have redundant functions, we carried out functional characterization of OsHOX24 in Arabidopsis. The complete ORF of OsHOX24 was cloned in binary vector pBI121 and over-expressed under the control of CaMV 35S promoter in Arabidopsis (Supplementary Figure S5A). A total of 29 independently transformed kanamycin-resistant T1 transgenic plants for 35S::OsHOX24 were obtained. Among them, a total of 19 T1 transgenic lines of 35S::OsHOX24 were found to be PCR positive (Supplementary Figure S5B). Three transgenic lines (designated as HZIP1-2.3, HZIP1-6.2, and HZIP1-8.2), showing segregation ratio of nearly 3:1 were grown further to obtain homozygous seeds for physiological and molecular analysis. The real-time PCR analysis showed very high transcript levels of OsHOX24 in homozygous transgenic lines, whereas it was not detectable in WT seedlings ( Figure 7A). Among all 35S::OsHOX24 homozygous transgenic lines, maximum expression was observed in HZIP1-6.2 line followed by HZIP1-8.2 ( Figure 7A). There was no detectable difference in the phenotype and various growth parameters of OsHOX24 transgenics as compared to WT at different developmental stages under normal growth conditions (Supplementary Figures S6 and S7). Arabidopsis Transgenics Show Greater Sensitivity to Abiotic Stresses The effect of plant stress hormone, ABA, and salinity stress on 35S::OsHOX24 (HZIP1-2.3 and HZIP1-8.2) transgenic lines and WT was assessed via seed germination assays. 
The percentage germination of the transgenic lines was observed to be much lower than that of WT on MS medium supplemented with various concentrations of ABA (0.5, 1, and 5 µM) and NaCl (100, 200, and 300 mM). The effect of ABA on seed germination and growth was more severe in the transgenics than in WT. For instance, at 0.5 µM ABA, WT showed 87% seed germination, whereas the transgenics showed 51-68% germination. In the presence of 1 µM ABA, WT exhibited 55% seed germination in comparison to 15-28% in the transgenics. Seed germination in the transgenic lines was further reduced to 5% at 5 µM ABA, whereas WT seeds displayed 20% germination (Figure 7B). A greater extent of susceptibility of the transgenics as compared to WT was observed under salinity stress too. The WT seedlings were relatively healthier and showed 93% germination, whereas the transgenic lines exhibited 52-80% germination at 100 mM NaCl. At 200 mM NaCl, 7% of WT seedlings germinated as compared to no germination of the transgenics (Figure 7C). Among the two 35S::OsHOX24 lines, HZIP1-2.3 showed higher sensitivity to ABA and NaCl.

[Figure legend, fragment: The deletion constructs, BD::OsHOX24ΔC and BD::OsHOX22ΔC, were co-transformed with different combinations of full-length (AD::OsHOX24fl, AD::OsHOX22fl) and deletion constructs (AD::OsHOX24ΔC, AD::OsHOX22ΔC) of the homeobox proteins in yeast, as indicated in the left panel. The transformants were grown on SD-Trp-Leu (DDO medium) and SD-Trp-Leu-His-Ade medium (QDO medium) for confirmation of interaction. pGBKT7-p53 + pGADT7-T antigen represents the positive control. The empty pGBKT7 vector (BD) represents the negative control for the transactivation assay, and pGBKT7 + pGADT7 represents the negative control for the dimerization assay.]

To evaluate the effect of desiccation stress on the 35S::OsHOX24 transgenic lines as compared to WT, the relative fresh weight and root length of seedlings were estimated under desiccation stress (−0.4 MPa PEG6000) and control conditions. A significant difference in the phenotype of transgenic and WT seedlings was observed under desiccation stress (Figure 7D). The transgenic lines exhibited 5-6% relative root length under desiccation stress (PEG) in comparison to 28% in WT (Figure 7E). Similarly, the transgenic lines had significantly lower fresh weight as compared to WT in the presence of PEG. Notably, the transgenic lines exhibited only 3-7% relative fresh weight under desiccation stress, in comparison to WT, which showed 26% relative fresh weight (Figure 7F). Four-week-old transgenic lines of 35S::OsHOX24 subjected to water-deficit stress wilted at a faster rate than WT (Figures 8A,B). The extent of chlorosis was more prominent in the rosette leaves of the transgenics as compared to WT (Figure 8C). Overall, these observations indicated that the 35S::OsHOX24 transgenics are more susceptible to water-deficit stress as compared to WT at the mature stage as well. We observed slightly greater susceptibility of the HZIP1-2.3 line as compared to the HZIP1-8.2 line during seed germination under ABA and salinity stress treatments. However, the HZIP1-8.2 line exhibited significantly lower sensitivity toward water-deficit stress as compared to the HZIP1-2.3 line. The variation in the extent of susceptibility between the two transgenic lines may be attributed to the developmental stage and/or stress type.
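To make the seedling-level comparisons above explicit, the sketch below expresses a stress trait as a percentage of the control mean and applies Student's t-test, mirroring the relative root length/fresh weight calculation and the statistics described in the Methods. All measurements are hypothetical and SciPy is assumed to be available.

```python
from statistics import mean
from scipy.stats import ttest_ind  # two-sample Student's t-test

def relative_percent(stress_values, control_values):
    """Trait under stress expressed as a percentage of the control mean."""
    return 100 * mean(stress_values) / mean(control_values)

# hypothetical root lengths (cm) for one 35S::OsHOX24 line and its control
control_cm = [5.1, 4.8, 5.3, 5.0]
peg_cm = [0.30, 0.25, 0.28, 0.33]

print(f"relative root length: {relative_percent(peg_cm, control_cm):.1f}% of control")
t_stat, p_value = ttest_ind(peg_cm, control_cm, equal_var=True)
print(f"Student's t-test: t = {t_stat:.2f}, P = {p_value:.4f}")
```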
FIGURE 6 | Interacting proteins of homeobox TFs identified by yeast-two hybrid analysis and their gene expression profiling. (A,B) The transformants in the yeast strain were grown on SD-Trp-Leu (DDO medium) and SD-Trp-Leu-His-Ade medium (QDO medium) for confirmation of the interaction of proteins with OsHOX24 (A) and OsHOX22 (B). pGBKT7-p53 + pGADT7-T antigen represents the positive control and pGBKT7-Lam + pGADT7-T antigen represents the negative control. The graphical panels (right) represent the quantitative β-galactosidase assay showing lacZ reporter gene expression (β-galactosidase activity in Miller units) for the interacting proteins of OsHOX24 (A) and OsHOX22 (B). Ortho-nitrophenyl-β-D-galactoside (ONPG) was used as the substrate for the β-galactosidase assay. The putative function and locus identifier of the interacting proteins are given on the left side. (C) Heat-map showing the gene expression profiles of OsHOX24, OsHOX22, and the genes encoding their interacting proteins under various abiotic stress conditions. The heat-map was generated with Genevestigator (v.3) using the publicly available abiotic stress-related microarray data. The color scale representing fold change (log2 ratio) is shown below the heat-map.

Global Gene Expression Profiling of OsHOX24 Arabidopsis Transgenics

To examine the effect of OsHOX24 over-expression on global gene expression, the HZIP1-2.3 transgenic line (exhibiting relatively higher susceptibility to various abiotic stresses) was chosen for microarray analysis. A total of 292 genes (112 up-regulated and 180 down-regulated) were found to be significantly (at least two-fold, P ≤ 0.05) differentially regulated in the transgenic line as compared to WT (Supplementary Figure S8; Supplementary Table S4; see the filtering sketch below). About 8% of the differentially expressed genes belonged to the TF category (Figure 9A), and many of them are well known to be stress-responsive. In addition, pathway analysis depicted the involvement of the differentially expressed genes in diverse metabolic pathways and developmental processes, such as hormone biosynthesis, secondary metabolite biosynthesis, electron carrier biosynthesis, and amino acid and fatty acid degradation pathways (Figure 9B). Gene Ontology enrichment analysis revealed the differential regulation of genes involved in several biological processes in the transgenic line. The biological process GO terms, such as regulation of cellular response to stress and positive regulation of cell communication, showed the highest representation among the down-regulated genes (Figure 9C), whereas mitochondrial electron transport, respiratory gaseous exchange, mRNA capping, and glyoxylate cycle were the most represented GO terms among the up-regulated genes (Figure 9D). In the molecular function category, calcium-transporting ATPase activity, TF activity, binding, and several enzymatic activities were most significantly enriched among the down-regulated genes (Supplementary Figure S9A), whereas some vital enzymatic activity terms showed higher representation among the up-regulated genes (Supplementary Figure S9B). Further, the differential expression patterns of selected stress-inducible Arabidopsis genes, such as those encoding a VQ motif protein (AT4G20000), ribosomal binding protein L12 (AT2G03130), an AP2 TF (AT2G20880), thioredoxin (AT1G69880), salt tolerance zinc finger (AT1G27730), and a lipid binding protein (AT5G59310), were validated by real-time PCR analysis.
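The sketch below illustrates the kind of fold-change/P-value filter described above (at least two-fold change, P ≤ 0.05) for calling differentially expressed genes. The fold-change and P values are hypothetical, gene identifiers other than those quoted in the text are placeholders, and the actual Affymetrix analysis pipeline used in the study is not reproduced here.

```python
def classify_degs(records, fc_cutoff=2.0, p_cutoff=0.05):
    """Split genes into up-/down-regulated lists using a signed linear
    fold change (negative = down-regulated) and a P-value threshold."""
    up, down = [], []
    for gene_id, fold_change, p_value in records:
        if p_value > p_cutoff or abs(fold_change) < fc_cutoff:
            continue
        (up if fold_change > 0 else down).append(gene_id)
    return up, down

# hypothetical (gene, signed fold change, P value) triples
demo = [
    ("AT4G20000", -3.2, 0.010),          # reported as down-regulated; values hypothetical
    ("AT1G69880", -2.5, 0.040),          # reported as down-regulated; values hypothetical
    ("hypothetical_up_gene", 2.8, 0.020),
    ("below_threshold_gene", 1.6, 0.030),
]
up, down = classify_degs(demo)
print("up:", up, "down:", down)
```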
The transcript profiling of these genes revealed their down-regulation in the transgenic line as compared to WT (Supplementary Figure S10), which was in good agreement with the microarray results. DISCUSSION Homeobox TFs are among the key regulators of plant development (Ariel et al., 2007). However, their role in abiotic stress responses in plants has been realized only in the past few years (Olsson et al., 2004;Luo et al., 2005;Agalou et al., 2008;Jain et al., 2008;Song et al., 2012). Earlier, we reported the differential expression of at least 37 homeobox genes under various abiotic stress conditions, many of which belong to the plant-specific HD-ZIP superclass (Jain et al., 2008). A few other studies have also reported differential regulation of HD-ZIP class homeobox genes under abiotic stress conditions (Gago et al., 2002;Olsson et al., 2004;Agalou et al., 2008;Bhattacharjee et al., 2015). About 64% of rice HD-ZIP I subfamily members were found to be abiotic stress-responsive (Jain et al., 2008). The present study was focused on the molecular characterization and functional analysis of two candidate abiotic stress-responsive homeobox genes, OsHOX24 and OsHOX22. These genes were found to be highly up-regulated under different abiotic stress conditions at various developmental stages in rice. In earlier studies, OsHOX24 and OsHOX22 were reported to be highly expressed under control and drought stress in panicle at the flowering stage of rice (Agalou et al., 2008;Jain et al., 2008), which is consistent with our observations. These results implicate OsHOX24 and OsHOX22 in abiotic stress responses at various developmental stages of rice. The role of major cis-regulatory elements, like DRE, ABRE, MYB recognition sequence (MYBR), NAC recognition sequence (NACRS), Heat shock element (HSE), and ZF-HD recognition sequence (ZFHDRS), etc. in abiotic stress responses have been investigated comprehensively (Stockinger et al., 1997;Hobo et al., 1999;Dezar et al., 2005;Liu et al., 2014). We found enrichment of numerous stress-responsive cis-regulatory elements, like ABRE, CBF-DRE, LTRE, and MYB elements in the promoters of OsHOX24 and OsHOX22. The presence of these cis-regulatory elements in their promoter regions may contribute to their abiotic stress-responsiveness. Recently, the use of droughtresponsive promoter of a rice HD-ZIP I gene, enriched in various stress-responsive cis-regulatory elements, was found to be beneficial for over-expression of specific stress-responsive genes without any detrimental effect on plant growth (Nakashima et al., 2014). Various plant hormones play critical roles in abiotic stress responses (Wang et al., 2002;Horvath et al., 2007;Grant and Jones, 2009;Jain and Khurana, 2009;Tran et al., 2010;Fujita et al., 2011;Sharma et al., 2015). Several evidences have demonstrated the interrelation between plant hormones and homeobox TFs. Arabidopsis ATHB7 and ATHB12, orthologs of OsHOX24 and OsHOX22, respectively, were found to be highly induced on exogenous application of ABA, indicating their involvement in ABA-dependent pathways (Olsson et al., 2004). Recently, these TFs were found to actively participate in ABA signaling by controlling protein phosphatase 2C and ABA receptor gene activity (Valdés et al., 2012). The members of HD-ZIP superclass have also been reported to be involved in gibberellin and auxin signaling (Dai et al., 2008;Itoh et al., 2008;Sharma et al., 2015). We also observed the up-regulation of OsHOX24 and OsHOX22 by exogenous application of ABA, IAA, and SA. 
These results suggest the involvement of these homeobox genes in ABA-, IAA-, or SA-dependent stress response pathways in rice. However, their exact role in various hormone signaling pathways remains to be elucidated. Both OsHOX24 and OsHOX22 comprise conserved HD and HALZ domains. The high degree of structural conservation between the three-dimensional HD structures of OsHOX24 and OsHOX22 and the antennapedia HD-DNA complex of Drosophila (Fraenkel and Pabo, 1998) suggested that they are likely to possess DNA-binding properties. Some reports have demonstrated the DNA-binding specificities of homeobox proteins for specific pseudopalindromic sequences in vitro (Sessa et al., 1997; Frank et al., 1998). Using specific recognition sites, namely AH1 and AH2, the binding specificities of HD-ZIP I TFs have been examined in rice and Arabidopsis (Meijer et al., 2000; Johannesson et al., 2001; Zhang et al., 2012). In this study, we also demonstrated the binding specificity of OsHOX24 and OsHOX22 proteins for the AH1 and AH2 motifs. These observations imply that the binding specificities of homeobox proteins are conserved in different plants. One of the possible modes of action of homeobox TFs in abiotic stress responses may be via regulation of downstream target genes. However, limited information is available about their downstream target genes. A dehydrin gene, CdeT6-19, has been identified as a potential target of CpHB-7 (Deng et al., 2006). Genes involved in ethylene synthesis and signaling were found to be downstream targets of Hahb-4 (Manavella et al., 2006). A genome-wide scan identified at least 809 rice genes harboring AH1 and/or AH2 motifs in their promoter regions, which might represent their putative downstream target genes. Many of these genes are involved in crucial biological and developmental processes. These results suggest that OsHOX24/OsHOX22 TFs may regulate the expression of downstream target genes involved in diverse biological processes to mediate abiotic stress responses. Several studies have demonstrated the transactivation property of TFs, due to the presence of an intrinsic activation domain (Lu et al., 2009; Tang et al., 2012; Yang et al., 2014). In particular, the carboxy-terminal region of AtHB1 was identified to be responsible for its transcriptional activation property in yeast (Arce et al., 2011). Among rice HD-ZIP TFs, OsHOX1 and OsHOX3 exhibited transcriptional repression activity, whereas OsHOX4 and OsHOX5 were recognized as transcriptional activators (Meijer et al., 2000). We noted the presence of an activation domain in the C-terminal region of the OsHOX24/OsHOX22 proteins, which imparted a transactivating nature to these proteins. We found OsHOX24/OsHOX22 proteins to be localized in the nucleus, consistent with the presence of an NLS in their amino acid sequences and with earlier reports of other homeobox TFs being nuclear-localized (Song et al., 2012; Zhang et al., 2012). Altogether, these observations suggest that OsHOX24/OsHOX22 are nuclear-localized and can function as transcriptional activators. The current knowledge about interacting proteins of homeobox TFs is limited. We identified several proteins, including enzymes, a receptor protein, expressed proteins, and a TF, as putative interacting proteins of OsHOX24 and OsHOX22. Many of these putative interacting proteins were found to be abiotic stress-responsive. There are several instances where the interaction between two TFs has been found to crucially mediate abiotic stress responses in plants (Tran et al., 2007; Lee et al., 2010).
We identified a GRAM domain TF as a putative interacting protein of OsHOX24. Interestingly, lower transcript levels of the GRAM domain TF gene were detected in ABA-sensitive Osabf1 rice mutants (Amir Hossain et al., 2010). It is well established that sucrose metabolism is severely affected by environmental alterations, which has a strong impact on plant development (Koch, 2004). Many genes, including members of the sucrose synthase family and a UDP-glucosyltransferase gene, have been implicated in abiotic stress responses in plants (Gupta and Kaur, 2005; Hirose et al., 2008; Tognetti et al., 2010; Wang et al., 2015). Very recently, OsPAL4 (phenylalanine ammonia lyase) has been implicated in broad-spectrum disease resistance (Tonnessen et al., 2015). In our investigation, sucrose synthase (SUS4) and phenylalanine ammonia lyase were identified as interacting partners of OsHOX24, whereas a UDP-glucuronosyl/UDP-glucosyl transferase domain-containing protein was found to be an interacting partner of OsHOX22 in yeast. A translation initiation factor has been reported to elicit abiotic stress tolerance in yeast and plants (Rausell et al., 2003). Interestingly, we too found eukaryotic translation initiation factor I to be an interactor of OsHOX24. This interaction may be crucial for the regulation of translation initiation under abiotic stress conditions. Recently, a universal stress protein was found to enhance drought tolerance in tomato (Loukehaich et al., 2012). We also identified a universal stress protein domain-containing protein as one of the interacting proteins of OsHOX22. These observations indicate that homeobox TFs interact with other proteins to modulate abiotic stress responses. The analysis of a few mutant and transgenic lines of homeobox TFs has revealed their role in abiotic stress responses in plants (Olsson et al., 2004; Zhu et al., 2004; Luo et al., 2005; Yu et al., 2008; Zhang et al., 2012). Recently, the overlapping and explicit roles of ATHB7 and ATHB12 in modulating various aspects of plant development and responses to water-deficit stress have been delineated (Ré et al., 2014). In a previous study, the rice BELL-type homeobox TF OsBIHD1 was reported to act as a negative regulator by suppressing the abiotic stress signaling cascade in over-expression tobacco transgenic lines (Luo et al., 2005). The role of OsHOX22 has been deciphered in ABA-dependent abiotic stress responses, where it was also found to act as a negative regulator of drought and salt tolerance in rice. We over-expressed OsHOX24 in Arabidopsis to substantiate its role in abiotic stress responses, and analyzed the drought and salinity stress responses of WT and transgenic plants at various developmental stages. These studies revealed higher susceptibility of the transgenics as compared to WT under abiotic stress conditions. Several reports have implicated the plant hormone ABA in abiotic stress responses (Cutler et al., 2010; Fujita et al., 2011). We observed enhanced sensitivity of OsHOX24 Arabidopsis transgenics under exogenous ABA treatment. Previous investigations have also demonstrated the ABA-inducible nature of HD-ZIP I family members in model plants and established their role in ABA signaling (Olsson et al., 2004; Valdés et al., 2012; Zhang et al., 2012). Overall, our results, in conjunction with available reports, suggest that OsHOX24 and OsHOX22 may act in an ABA-dependent abiotic stress response pathway.
Several genes were found to be differentially expressed in the OsHOX24 Arabidopsis transgenics, which were related to secondary metabolite biosynthesis, electron carriers, IAA biosynthesis, and amino acid and fatty acid degradation pathways. The roles of secondary metabolites, amino acids, fatty acids, and electron carriers in plant stress adaptation are well known (Ramakrishna and Ravishankar, 2011; Elkereamy et al., 2012; Anjum et al., 2015; Kapoor, 2015). Besides this, a crucial role of IAA in abiotic stress responses has also been proposed (Jain and Khurana, 2009; Sharma et al., 2015). Down-regulation of genes involved in the biosynthesis of osmoprotectants or secondary metabolites, coupled with elevated transcript levels of genes involved in fatty acid degradation, may be responsible for the greater susceptibility of the Arabidopsis transgenics to abiotic stresses. Several stress-responsive genes are known to be induced under various abiotic stress conditions in plants (Walley et al., 2007; Pitzschke et al., 2009; Kim et al., 2010). The down-regulation of these genes in the Arabidopsis transgenics can explain, to some extent, their higher susceptibility to various abiotic stresses.

CONCLUSION

OsHOX24 and OsHOX22 were found to be differentially expressed under various abiotic stress conditions at different stages of rice development. We demonstrated that these nuclear-localized homeobox TFs possess transactivation and dimerization properties. We also identified novel interacting proteins of these homeobox TFs, many of which are stress-responsive. We showed the binding ability of OsHOX22 and OsHOX24 to specific cis-regulatory elements and identified several putative downstream targets. The over-expression of OsHOX24 imparted higher susceptibility to various abiotic stresses in the transgenic Arabidopsis plants, as revealed by several physiological and molecular assays. Overall, our results highlight the role of OsHOX24 and OsHOX22 TFs in abiotic stress responses. In the future, the generation and analysis of knock-out transgenic lines would provide more insights into the role of these homeobox TFs in abiotic stress tolerance.

AUTHOR CONTRIBUTIONS

MJ conceived and supervised the whole study. AB performed all the experiments, analyzed the data and wrote the manuscript. MJ and JPK participated in data analysis and writing the manuscript.
Automatic Melody Harmonization with Triad Chords: A Comparative Study

Yeh, Hsiao, Liu, Dong and Yang are with Academia Sinica, Taiwan ({ycyeh, wayne391, paul115236, salu133445, yang}@citi.sinica.edu.tw); Fukayama is with National Institute of Advanced Industrial Science and Technology, Japan (satoru s.fukayama@aist.go.jp); Kitahara is with Nihon University, Japan (kitahara@chs.nihon-u.ac.jp); Genchel is with Georgia Institute of Technology, USA (benjiegenchel@gmail.com); Chen and Leong are with KKBOX Inc., Taiwan (annchen@kkbox.com, terenceleong@kkboxgroup.com).

Several prior works have proposed various methods for the task of automatic melody harmonization, in which a model aims to generate a sequence of chords to serve as the harmonic accompaniment of a given multiple-bar melody sequence. In this paper, we present a comparative study evaluating and comparing the performance of a set of canonical approaches to this task, including a template matching based model, a hidden Markov model (HMM) based model, a genetic algorithm based model, and two deep learning based models. The evaluation is conducted on a dataset of 9,226 melody/chord pairs we newly collect for this study, considering up to 48 triad chords, using a standardized training/test split. We report the result of an objective evaluation using six different metrics and a subjective study with 202 participants.

Introduction

Automatic melody harmonization, a sub-task of automatic music generation (Fernández & Vico, 2013), refers to the task of creating computational models that can generate a harmonic accompaniment for a given melody (Chuan & Chew, 2007; Simon, Morris, & Basu, 2008). Here, the term harmony, or harmonization, is used to refer to chordal accompaniment, where an accompaniment is defined relative to the melody as the supporting section of the music. Figure 1 illustrates the inputs and outputs for a melody harmonization model.

Figure 1. Diagram of the slightly modified version of the bidirectional long short-term memory network (BiLSTM) based model (Lim et al., 2017) for melody harmonization. The input to the model is a melody sequence. With two layers of BiLSTM and one fully-connected (FC) layer, the model generates as output a sequence of chord labels (e.g., Cm or B chords), one for each half bar. See Section 2.4 for details.

Melody harmonization is a challenging task, as there are multiple ways to harmonize the same melody; what makes a particular harmonization pleasant is subjective, and often dependent on musical genre and other contextual factors. Tonal music, which encompasses most of Western music, defines specific motivic relations between chords based on scales, such as those defined in functional harmony (Riemann, 1893). While these relations still stand and are taught today, their application towards creating pleasant music often depends on subtleties, long-term dependencies and cultural contexts which may be readily accessible to a human composer, but very difficult to learn and detect for a machine. While a particular harmonization may be deemed technically correct in some cases, it can also be seen as uninteresting in a modern context. There have been several efforts made towards this task in the past (Makris, Kayrdis, & Sioutas, 2016). Before the rise of deep learning, the most actively employed approach was based on hidden Markov models (HMMs).
For example, (Paiement, Eck, & Bengio, 2006) proposed a tree-structured HMM that allows for learning the non-local dependencies of chords, and encoded probabilities for chord substitution taken from psycho-acoustics. They additionally presented a novel representation for chords that encodes relative scale degrees rather than absolute note values, and included a subgraph in their model specifically for processing it. (Tsushima, Nakamura, Itoyama, & Yoshii, 2017) similarly presented a hierarchical tree-structured model combining probabilistic context-free grammars (PCFG) for chord symbols and HMMs for chord rhythms. (Temperley, 2009) presented a statistical model that would generate and analyze music along three sub-structures: metrical structure, harmonic structure, and stream structure. In the generative portion of this model, a metrical structure defining the emphasis of beats and sub-beats is first generated, and then harmonic structure and progression are generated conditioned on that metrical structure. There are several previous works which attempt to formally and probabilistically analyze tonal harmony and harmonic structure. For example, (Rohrmeier & Cross, 2008) applied a number of statistical techniques to harmony in Bach chorales in order to uncover a proposed underlying harmonic syntax that naturally produces common perceptual and music theoretic patterns, including functional harmony. (Jacoby, Tishby, & Tymoczko, 2015) attempted to categorize common harmonic symbols (scale degrees, roman numerals, or sets of simultaneous notes) into higher level functional groups, seeking underlying patterns that produce and generalize functional harmony. (Tsushima, Nakamura, Itoyama, & Yoshii, 2018) used unsupervised learning in training generative HMM and PCFG models for harmonization, showing that the patterns learned by these models match the categorizations presented by functional harmony. More recently, people have begun to explore the use of deep learning for a variety of music generation tasks (Briot, Hadjeres, & Pachet, 2017). For melody harmonization, (Lim et al., 2017) proposed a model that employed two bidirectional long short-term memory (BiLSTM) recurrent layers (Hochreiter & Schmidhuber, 1997) and one fully-connected layer to learn the correspondence between pairs of melody and chord sequences. The model architecture is depicted in Figure 1. According to the experiments reported in (Lim et al., 2017), this model outperforms a simple HMM model and a more complicated DNN-HMM model (Hinton et al., 2012) for melody harmonization with major and minor triad chords. We note that, while many new models are being proposed for melody harmonization, at present there is no comparative study evaluating a wide array of different approaches for this task using the same training set and test set. Comparing models trained on different training sets is problematic, as it is hard to have a standardized definition of improvement and quality. Moreover, as there is to date no standardized test set for this task, it is hard to make consistent comparisons between different models. In this paper, we aim to bridge this gap with the following three contributions: (1) We implement a set of melody harmonization models which span a number of canonical approaches to the task, including template matching, hidden Markov model (HMM) (Simon et al., 2008), genetic algorithm (GA) (Kitahara, Giraldo, & Ramirez, 2018), and two variants of deep recurrent neural network models (Lim et al., 2017).
We then present a comparative study comparing the performance of these models. To our best knowledge, a comparative study that considers such a diverse set of approaches for melody harmonization using a standardized dataset has not been attempted before. (2) We compile a new dataset, called the Hooktheory Pianoroll Triad Dataset (HTPD3), to evaluate the implemented models over well-annotated lead sheet samples of music. A lead sheet is a form of musical notation that specifies the essential elements of a song: the melody, harmony, and, where present, lyrics (Liu & Yang, 2018). HTPD3 provides melody lines and accompanying chords, specifying both chord symbols and harmonic functions, which are useful for our study. We consider 48 triad chords in this study, including major, minor, diminished, and augmented triad chords. We use the same training split of HTPD3 to train the implemented models and evaluate them on the same test split. (3) We employ six objective metrics for evaluating the performance of melody harmonization models. These metrics consider either the distribution of chord labels in a chord sequence, or how the generated chord sequence fits with the given melody. In addition, we conduct an online user study and collect feedback from 202 participants around the world to assess the quality of the generated chordal accompaniment. We discuss the findings of the comparative study, hoping to gain insights into the strengths and weaknesses of the evaluated methods. Moreover, we show that taking the idea of functional harmony (Chen & Su, 2018) into account while harmonizing melodies greatly improves the result of the model presented by (Lim et al., 2017). In what follows, we present in Section 2 the models we consider and evaluate in this comparative study. Section 3 provides the details of the HTPD3 dataset we build for this study, and Section 4 the objective metrics we consider. Section 5 presents the setup and result of the study. We discuss the findings and limitations of this study in Section 6, and then conclude the paper in Section 7.

Automatic Melody Harmonization Models

A melody harmonization model takes a melody sequence of T bars as input and generates a corresponding chord sequence as output. A Chord Sequence is defined here as a series of chord labels Y = y_1, y_2, ..., y_M, where M denotes the length of the sequence. In this work, each model predicts a chord label for every half bar, i.e., M = 2T. Each label y_j is chosen from a finite chord vocabulary C. To reduce the complexity of this task, we consider here only the triad chords, i.e., chords composed of three notes. Specifically, we consider major, minor, diminished, and augmented triad chords, all in root position. We also consider No Chord (N.C.), or rest, so the size of the chord vocabulary is |C| = 49. A Melody Sequence is a time-series of monophonic musical notes in MIDI format. We compute a sequence of features X = x_1, x_2, ..., x_N to represent the melody and use them as the inputs to our models. Unless otherwise specified, we set N = M, computing a feature vector for each half bar. Given a set of melody and corresponding chord sequences, a melody harmonization model f(·) can be trained by minimizing the loss computed between the ground truth Y* and the model output Ŷ* = f(X*), where X* is the input melody. We consider three non-deep learning based and two deep learning based models in this study.
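To make the 49-class label space concrete, the following minimal sketch (in Python, used for all code sketches in this article) enumerates a chord vocabulary of the kind described above: 12 roots, four root-position triad qualities, plus N.C. The label ordering and naming scheme are illustrative assumptions, not the exact encoding used in HTPD3.

```python
# Minimal sketch (assumed label ordering): build the 49-class triad vocabulary
# described above: 12 roots x {maj, min, dim, aug} + "N.C.".
PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
TRIAD_INTERVALS = {          # semitone offsets above the root, in root position
    'maj': (0, 4, 7),
    'min': (0, 3, 7),
    'dim': (0, 3, 6),
    'aug': (0, 4, 8),
}

def build_chord_vocabulary():
    """Return (labels, pitch_class_sets); index 0 is reserved for N.C. (no chord)."""
    labels = ['N.C.']
    pc_sets = [frozenset()]               # N.C. covers no pitch classes
    for root, root_name in enumerate(PITCH_CLASSES):
        for quality, intervals in TRIAD_INTERVALS.items():
            labels.append(f'{root_name}:{quality}')
            pc_sets.append(frozenset((root + i) % 12 for i in intervals))
    return labels, pc_sets

labels, pc_sets = build_chord_vocabulary()
assert len(labels) == 49                  # |C| = 49, as in the text above
```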
While the majority are adaptations of existing methods, one (deep learning based) is a novel method which we introduce in this paper (see Section 2.5). All models are carefully implemented and trained using the training split of HTPD3. We present the technical details of these models below.

Template Matching-based Model

This model is based on an early work on audio-based chord recognition (Fujishima, 1999). The model segments training melodies into half-bars, and constructs a pitch profile for each segment. The chord label for a new segment is then selected based on the label for the training segment whose pitch profile it most closely matches. When there is more than one possible chord template that has the highest matching score, we choose a chord randomly, based on a uniform distribution among the possibilities. We refer to this model as template matching-based, as the underlying method compares the profile of a given melody segment with those of the template chords. We use Fujishima's pitch class profile (PCP) (Fujishima, 1999) as the pitch profile representing respectively the melody and chord for each half-bar. A PCP is a 12-dimensional feature vector x ∈ [0, 1]^12 where each element corresponds to the activity of a pitch class. The PCP for each of the |C| chord labels is constructed by setting the elements corresponding to the pitch classes that are part of the chord to one, and all the others to zero. Because we consider only triad chords in this work, there will be exactly three ones in the PCP of a chord label for each half bar. The PCP for the melody is constructed similarly, but additionally considering the duration of notes. Specifically, the activity of the k-th pitch class, i.e., x_k ∈ [0, 1], is set by the ratio of time the pitch class is active during the corresponding half bar. The results of this model are more conservative by design, featuring intensive use of chord tones. Moreover, this model sets the chord label independently for each half bar, without considering the neighboring chord labels or the chord progression over time. We note that, to remove the effect of the input representations on the harmonization result, we use the PCP as the model input representation for all the other models we implement for melody harmonization.

HMM-based Model

An HMM is a probabilistic framework for modeling sequences with latent or hidden variables. Our HMM-based harmonization model regards chord labels as latent variables and estimates the most likely chord sequence for a given set of melody notes. Unlike the template matching-based model, this model considers the relationship between neighboring chord labels. HMM-based models similar to this one were widely used in chord generation and melody harmonization research before the current era of deep learning (Raczyński, Fukayama, & Vincent, 2013; Simon et al., 2008). We adopt a simple HMM architecture employed in (Lim et al., 2017). This model makes the following assumption: (1) the observed melody sequence X = x_1, ..., x_M is statistically biased due to the hidden chord sequence Y = y_1, ..., y_M, which is to be estimated. The task is to estimate the most likely hidden sequence Ŷ = ŷ_1, ..., ŷ_M given X. This amounts to maximizing the posterior probability

Ŷ = argmax_Y P(Y | X) = argmax_Y ∏_{m=1}^{M} P(x_m | y_m) P(y_m | y_{m−1}),

where P(y_1 | y_0) is equal to P(y_1). The term P(x_m | y_m) is also called the emission probability, and the term P(y_m | y_{m−1}) is called the transition probability. This optimization problem can be solved by the Viterbi algorithm (Forney, 1973).
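As a concrete illustration of the decoding step, the sketch below runs log-domain Viterbi decoding for this formulation; the dense-array representation and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def viterbi_decode(log_emission, log_transition, log_prior):
    """Most likely chord sequence under the HMM described above.

    log_emission : (M, K) array, log P(x_m | y_m = k) for each half bar m
    log_transition : (K, K) array, log P(y_m = j | y_{m-1} = i)
    log_prior : (K,) array, log P(y_1 = k)
    Returns a list of M chord indices.
    """
    M, K = log_emission.shape
    delta = log_prior + log_emission[0]           # best log-score ending in each state
    backptr = np.zeros((M, K), dtype=int)
    for m in range(1, M):
        scores = delta[:, None] + log_transition  # (K, K): previous state -> current state
        backptr[m] = np.argmax(scores, axis=0)
        delta = scores[backptr[m], np.arange(K)] + log_emission[m]
    path = [int(np.argmax(delta))]
    for m in range(M - 1, 0, -1):                 # trace back the best path
        path.append(int(backptr[m][path[-1]]))
    return path[::-1]
```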
Departing from the HMM in (Lim et al., 2017), our implementation uses the PCPs described in Section 2.1 to represent melody notes, i.e., to compute x_m. Accordingly, we use multivariate Gaussian distributions to model the emission probabilities, as demonstrated in earlier work on chord recognition (Sheh & Ellis, 2003). For each chord label, we set the covariance matrix of the corresponding Gaussian distribution to be a diagonal matrix, and calculate the mean and variance for each dimension from the PCP features of melody segments that are associated with that chord label in the training set. To calculate the transition probabilities, we count the number of transitions between successive chord labels (i.e., bi-grams), then normalize those counts to sum to one for each preceding chord label. A uniform distribution is used when there is no bigram count for the preceding chord label. To avoid zero probabilities, we smooth the distribution by interpolating P(y_m | y_{m−1}) with the prior probability P(y_m), yielding the revised transition probability

P̃(y_m | y_{m−1}) = (1 − β) P(y_m | y_{m−1}) + β P(y_m).

The hyperparameter β is empirically set to 0.08 via experiments on a random 10% subset of the training set.

Genetic Algorithm (GA)-based Model

A GA is a flexible algorithm that generally maximizes an objective function or fitness function. GAs have been used for melody generation and harmonization in the past (de León, Iñesta, Calvo-Zaragoza, & Rizo, 2016; Phon-Amnuaisuk & Wiggins, 1999), justifying their inclusion in this study. A GA can be used in both rule-based and probabilistic approaches. In the former case, we need to design a rule set of what conditions must be satisfied for musically acceptable melodies or harmonies; the fitness function is formulated based on this rule set. In the latter, the fitness function is formulated based on statistics of a data set. Here, we design a GA-based melody harmonization model by adapting the GA-based melody generation model proposed by (Kitahara et al., 2018). Unlike the other implemented models, the GA-based model takes as input a computed feature vector for every 16th note (i.e., 1/4 beats). Thus, the melody representation has a temporal resolution 8 times that of the chord progression (i.e., N = 8M). This means that x_{8m} and y_m point to the same temporal position. Our model uses a probabilistic approach, determining a fitness function based on the following elements. First, the (logarithmic) conditional probability of the chord progression given the melody is represented as

L_mel(Y | X) = Σ_{n=1}^{N} log P(y_{⌈n/8⌉} | x_n),

where ⌈·⌉ is the ceiling function. The chord transition probability is computed as

L_tra(Y) = Σ_{m=2}^{M} log P(y_m | y_{m−1}).

The conditional probability of each chord given its temporal position is defined as

L_pos(Y) = Σ_{m=1}^{M} log P(y_m | Pos_m),

where Pos_m is the temporal position of the chord y_m. For simplicity, we defined Pos_m = mod(m, 8), where mod is the modulo function. With this term, the model may learn that the tonic chord tends to appear in the first half of the first bar, while the dominant (V) chord tends to occur in the second half of the second bar. Finally, we use the entropy to evaluate a chord sequence's complexity, which should not be too low, so as to avoid monotonous chord sequences. The entropy is defined as

E(Y) = −Σ_{i=1}^{|C|} p_i log p_i,

where p_i is the relative frequency of the i-th chord label in Y. In the fitness function, we evaluate how likely this entropy E(Y) is in a given data set through the term

L_ent(Y) = log P(E = E(Y)),

where E is the random variable of the entropy of chord progressions and is discretized by 0.25. Its probability distribution is obtained from the training data. The fitness function F(Y) is calculated as

F(Y) = w_1 L_mel(Y | X) + w_2 L_tra(Y) + w_3 L_pos(Y) + w_4 L_ent(Y).

We simply set all the weights w_1, w_2, w_3, w_4 to 1.0 here.
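A sketch of how such a probabilistic fitness function could be assembled from pre-computed log-probability tables is given below. The table layouts, helper names, and the exact form of each term follow the reconstruction above, so this should be read as an assumption-laden illustration rather than the implementation used in the study.

```python
import math
from collections import Counter

def fitness(chords, melody_logp, trans_logp, pos_logp, entropy_logp,
            weights=(1.0, 1.0, 1.0, 1.0)):
    """Score one candidate chord sequence (list of chord indices); higher is better.

    melody_logp[n][c]  : log P(chord c | melody feature at 16th-note n)
    trans_logp[a][b]   : log P(chord b | previous chord a)
    pos_logp[p][c]     : log P(chord c | position p), p = half-bar index mod 8
    entropy_logp(e)    : log-probability of entropy value e (discretized by 0.25)
    """
    M = len(chords)
    term_mel = sum(melody_logp[n][chords[math.ceil((n + 1) / 8) - 1]]
                   for n in range(8 * M))                      # melody/chord fit
    term_tra = sum(trans_logp[chords[m - 1]][chords[m]] for m in range(1, M))
    term_pos = sum(pos_logp[m % 8][chords[m]] for m in range(M))
    counts = Counter(chords)                                   # chord histogram
    probs = [v / M for v in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    term_ent = entropy_logp(round(entropy / 0.25) * 0.25)      # discretize by 0.25
    w1, w2, w3, w4 = weights
    return w1 * term_mel + w2 * term_tra + w3 * term_pos + w4 * term_ent
```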
Deep BiLSTM-based Model

This first deep learning model is adapted from the one proposed by (Lim et al., 2017), which uses BiLSTM layers. This model extracts contextual information from the melody sequentially, from both the positive and negative time directions. The original model makes a chord prediction for every bar, using a vocabulary of only the major and minor triad chords (i.e., |C| = 24). We slightly extend this model such that the harmonic rhythm is a half bar, and the output chord vocabulary includes diminished and augmented chords, and the N.C. symbol (i.e., |C| = 49). As shown in Figure 1, this model has two BiLSTM layers, followed by a fully-connected layer. Dropout (Srivastava, Hinton, Krizhevsky, Sutskever, & Salakhutdinov, 2014) is applied with probability 0.2 at the output layer. This dropout rate, as well as the number of hidden layers and hidden units, are empirically chosen by maximizing the chord prediction accuracy on a random held-out subset of the training set. We train the model using minibatch gradient descent with categorical cross entropy as the cost function. We use Adam as the optimizer and regularize by early stopping at the 10th epoch to prevent over-fitting.

Deep Multitask Model: MTHarmonizer

From our empirical observations of the samples generated by the aforementioned BiLSTM model, we find that the model has two main defects for longer phrases: (1) overuse of common chords: common chords like C, F, and G major are repeated and overused, making the chord progression monotonous; (2) incorrect phrasing: non-congruent phrasing between the melody and chords similarly results from the frequent occurrence of common chords. The resulting frequent occurrence of progressions like F→C or G→C in generated sequences implies a musical cadence in an unsuitable location, potentially bringing an unnecessary sense of ending in the middle of a chord sequence. We propose an extension of the BiLSTM model to address these two defects. The core idea is to train the model to predict not only the chord labels but also the chord functions (Chen & Su, 2018), as illustrated in Figure 2. We call the resulting model a deep multitask model, or MTHarmonizer, since it deals with two tasks at the same time. We note that the use of chord functions for melody harmonization has been found useful by (Tsushima et al., 2018), using an HMM-based model. Functional harmony elaborates the relationship between chords and scales, and describes how harmonic motion guides musical perception and emotion (Chen & Su, 2018). While a chord progression consisting of randomly selected chords generally feels aimless, chord progressions which follow the rules of functional harmony establish or contradict a tonality. Music theorists assign each scale degree a tonal, subdominant, or dominant function based on what chord is associated with that degree in a particular scale. This function explains what role a given scale degree, and its associated chord relative to the scale, plays in musical phrasing and composition. We briefly describe each of these functions below:
• the tonal function serves to stabilize and reinforce the tonal center.
• the subdominant function pulls a progression out of the tonal center.
• the dominant function provides a strong sense of motion back to the tonal center.
For example, a progression that moves from a dominant function scale degree chord to a tonal scale degree chord first creates tension, then resolves it.
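For reference, here is a minimal Keras-style sketch of the baseline two-BiLSTM-layer classifier described above; the hidden size and maximum sequence length are assumed values (the paper tunes such hyperparameters on held-out data), and the multitask extension discussed next builds on the same trunk.

```python
import tensorflow as tf

NUM_CLASSES = 49      # 48 triads + N.C.
FEAT_DIM = 12         # one PCP feature vector per half bar
HIDDEN = 128          # assumed layer size
MAX_LEN = 64          # assumed maximum number of half bars (32 bars)

def build_bilstm_harmonizer():
    """Baseline: melody PCPs per half bar -> one chord label per half bar."""
    inputs = tf.keras.Input(shape=(MAX_LEN, FEAT_DIM))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(HIDDEN, return_sequences=True))(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(HIDDEN, return_sequences=True))(x)
    x = tf.keras.layers.Dropout(0.2)(x)                      # dropout before the output layer
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model
```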
As will be introduced in Section 3, all the pieces in HTPD3 are in either C major or c minor. Therefore, all chords share the same tonal center. We can directly map the chords into 'tonal,' 'dominant,' and 'others' (which includes the subdominant) functional groups, by name, without worrying about their relative functions in other keys, for other tonal centers. Specifically, we consider C, Am, Cm, and A as tonal chords, G and B diminished as dominant chords, and the others as subdominant chords. We identify two potential benefits of adding chord functions to the target output. First, in contrast to the distribution of chord labels, the distribution of chord functions is relatively balanced, making it easier for the model to learn the chord functions. Second, as the chord functions and chord labels are interdependent, adding the chord functions as a target informs the model which chord labels share the same function and may therefore be interchangeable. We hypothesize that this multi-task learning will help our model learn proper functional progression, which in turn will produce better harmonic phrasing relative to the melody. Specifically, the loss function of the MTHarmonizer is defined as

L = L_chord + γ L_function = H(Y_chord, f(X)) + γ H(Y_function, g(X)),

where H(·) denotes the categorical cross entropy function, f(·) the chord label prediction branch, and g(·) the chord function prediction branch. When γ = 0, the model reduces to the uni-task model proposed by (Lim et al., 2017), and we can simply write Y_chord as Y. In our work, we set γ = 1.5 to ensure that the loss values from L_chord and L_function are equally scaled. The two branches f and g share the two BiLSTM layers but not the fully-connected layer. Empirically, we found that if γ is too small, the model tends to harmonize the melody with chords with tonal and dominant functions; the resulting chord sequences would therefore lack diversity. The outputs of f and g are likelihood values for each chord label and chord function given an input melody. As Figure 2 shows, in predicting the final chord sequence, we rely on a weighted combination of the outputs of f and g in the following way:

ŷ_m = argmax_{c ∈ C} ( f(X)_{m,c} + α_c · h(g(X))_{m,c} ),

where h(·) is simply a look-up table that maps the three chord functions to the |C| chord labels, and α_c is a pre-defined hyperparameter that allows us to boost the importance of correctly predicting the chord function over that of correctly predicting the chord label, for each chord. In our implementation, we set α_c = 1.8 for the subdominant chords, and α_c = 1.0 for all the other chords, to encourage the model to select chord labels that have a lower likelihood, increasing the overall diversity without degrading the phrasing. This is because, in the middle of a musical phrase, the likelihood of observing a subdominant chord tends to be close to that of a tonal chord or a dominant chord. Emphasizing the subdominant chords by using a larger α_c therefore gives the model a chance to replace a tonal or dominant chord with a subdominant chord. This is less likely to occur at the beginning or the end of a phrase, as the likelihood of observing subdominant chords there tends to be low. As we mainly "edit" the middle part of a chord sequence with subdominant chords, we do not compromise the overall chord progression and phrasing.
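A minimal sketch of such a two-head multitask setup is shown below; the layer sizes, sequence length, and the α-weighted re-scoring at inference time follow the (reconstructed) description above and are illustrative assumptions rather than the exact MTHarmonizer implementation.

```python
import numpy as np
import tensorflow as tf

NUM_CHORDS, NUM_FUNCTIONS, FEAT_DIM, HIDDEN, MAX_LEN = 49, 3, 12, 128, 64

def build_mtharmonizer():
    """Shared BiLSTM trunk with a chord-label head (f) and a chord-function head (g)."""
    inputs = tf.keras.Input(shape=(MAX_LEN, FEAT_DIM))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(HIDDEN, return_sequences=True))(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(HIDDEN, return_sequences=True))(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    chord_out = tf.keras.layers.Dense(NUM_CHORDS, activation='softmax', name='chord')(x)
    func_out = tf.keras.layers.Dense(NUM_FUNCTIONS, activation='softmax', name='function')(x)
    model = tf.keras.Model(inputs, [chord_out, func_out])
    model.compile(optimizer='adam',
                  loss={'chord': 'categorical_crossentropy',
                        'function': 'categorical_crossentropy'},
                  loss_weights={'chord': 1.0, 'function': 1.5})   # gamma = 1.5
    return model

def combine_predictions(chord_prob, func_prob, chord_to_func, alpha):
    """Inference-time re-scoring: add an alpha-weighted function likelihood to each chord.

    chord_prob : (M, 49), func_prob : (M, 3), chord_to_func : (49,) int array,
    alpha : (49,) per-chord weights (e.g., 1.8 for subdominant chords, 1.0 otherwise).
    """
    score = chord_prob + alpha[None, :] * func_prob[:, chord_to_func]
    return np.argmax(score, axis=1)
```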
Proposed Dataset

For the purpose of this study, we first collect a new dataset called the Hooktheory Lead Sheet Dataset (HLSD), which consists of lead sheet samples scraped from the online music theory forum called TheoryTab, hosted by Hooktheory (https://www.hooktheory.com/theorytab), a company that produces pedagogical music software and books. The majority of lead sheet samples found on TheoryTab are user-contributed. Each piece contains high-quality, human-transcribed melodies alongside their corresponding chord progressions, which are specified by both literal chord symbols (e.g., Gmaj7) and chord functions (e.g., VI7) relative to the provided key. Chord symbols specify inversion if applicable, and the full set of chord extensions (e.g., #9, b11). The metric timing/placement of the chords is also provided. Due to copyright concerns, TheoryTab prohibits uploading full-length songs. Instead, users upload snippets of a song (here referred to as lead sheet samples), which they voluntarily annotate with structural labels (e.g., "Intro," "Verse," and "Chorus") and genre labels. A music piece can be associated with multiple genres. HLSD contains 11,329 lead sheet samples, all in 4/4 time signature. It contains up to 704 different chord classes, which is deemed too many for the current study. We therefore take the following steps to process and simplify HLSD, resulting in the final HTPD3 dataset employed in the performance study (a preprocessing sketch is given after this list).
• We remove lead sheet samples that do not contain a sufficient number of notes. Specifically, we remove samples whose melodies comprise more than 40% rests (relative to their lengths). One can think of this as correcting class imbalance, another common issue for machine learning models: if the model sees too much of a single event, it may overfit and only produce or classify that event.
• We then filter out lead sheets that are shorter than 4 bars or longer than 32 bars, so that 4 ≤ T ≤ 32. This is done because 4 bars is commonly seen as the minimum length for a complete musical phrase in 4/4 time signature. At the other end, 32 bars is a common length for a full lead sheet, one that is relatively long. Hence, as the majority of our dataset consists of mere song sections, we are inclined not to include samples longer than 32 bars.
• HLSD provides the key signature of every sample. We transpose every sample to either C major or c minor based on the provided key signatures.
• In general, a chord label can be specified by the pitch class of its root note (among 12 possible pitch classes, i.e., C, C#, . . . , B, in a chromatic scale), and its chord quality, such as 'triad', 'sixths', 'sevenths', and 'suspended.' HLSD contains 704 possible chord labels, including inversions. However, the distribution of these labels is highly skewed. In order to even out the distribution and simplify our task, we reduce the chord vocabulary by converting each label to its root-position triad form, i.e., the major, minor, diminished, and augmented chords without 7ths or additional extensions. Suspended chords are mapped to the major and minor chords. As a result, only 48 chord labels (i.e., 12 root notes by 4 qualities) and N.C. are considered (i.e., |C| = 49).
• We standardize the dataset so that a chord change can occur only every bar or every half bar.
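The sketch below illustrates the chord-simplification step described in the list above; the input label format and the quality-mapping table are assumptions (for instance, how suspended chords are split between major and minor is not fully specified here).

```python
# Hypothetical chord-label reduction: map an extended chord symbol such as
# "G:maj7/B" or "D:sus4" to one of the 48 root-position triads (or keep N.C.).
QUALITY_TO_TRIAD = {        # assumed mapping table; sevenths/extensions are dropped
    'maj': 'maj', 'maj7': 'maj', '7': 'maj', '6': 'maj',
    'min': 'min', 'min7': 'min', 'min6': 'min',
    'dim': 'dim', 'dim7': 'dim', 'hdim7': 'dim',
    'aug': 'aug',
    'sus2': 'maj', 'sus4': 'maj',   # assumption: suspended chords treated as major here
}

def reduce_to_triad(label):
    """'G:maj7/B' -> 'G:maj'; unknown or empty labels become 'N.C.'."""
    if not label or label in ('N.C.', 'NC'):
        return 'N.C.'
    label = label.split('/')[0]              # discard inversion (slash bass)
    root, _, quality = label.partition(':')
    return f"{root}:{QUALITY_TO_TRIAD.get(quality, 'maj')}"

assert reduce_to_triad('G:maj7/B') == 'G:maj'
assert reduce_to_triad('B:dim7') == 'B:dim'
```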
We do admit that this simplification can decrease the chord color and reduce the intensity of tension/release patterns, and can sometimes convert a vibrant, subtle progression into a monotonous one (e.g., because both CMaj7 and C7 are mapped to the C chord). We plan to make full use of the original chord vocabulary in future work. Having pre-defined train and test splits helps to facilitate the use of HTPD3 for evaluating new models of melody harmonization via the standardization of the training procedure. As HTPD3 includes paired melody and chord sequences, it can also be used to evaluate models for chord-conditioned melody generation as well. With these use cases in mind, we split the dataset so that the training set contains 80% of the pieces, and the test set contains 10% of the pieces. There are in total 923 lead sheet samples in the test set. The remaining 10% is reserved for future use. When splitting, we imposed the additional requirement that lead sheet samples from the same song are in the same subset.

Proposed Objective Metrics

To our knowledge, there are at present no standardized, objective evaluation metrics for the melody harmonization task. The only objective metric adopted by (Lim et al., 2017) in evaluating the models they built is a categorical cross entropy-based chord prediction error, representing the discrepancy between the ground truth chords Y* and the predicted chords Ŷ* = f(X*). The chord prediction error is calculated for each half bar individually and then averaged, without considering the chord sequence as a whole. In addition, it does not directly measure how the generated chord sequence Ŷ* fits with the given melody X*. For the comparative study, we introduce here a set of six objective metrics defined below. These metrics are split into two categories, namely three chord progression metrics and three chord/melody harmonicity metrics. Please note that we do not evaluate the melody itself, as the melody is provided by the ground truth data. Chord progression metrics evaluate each chord sequence as a whole, independent of the melody, and relate to the distribution of chord labels in a sequence.
• Chord histogram entropy (CHE): Given a chord sequence, we create a histogram of chord occurrences with |C| bins. Then, we normalize the counts to sum to 1, and calculate the entropy of the histogram as CHE = −Σ_{i=1}^{|C|} p_i log p_i, where p_i is the relative probability of the i-th bin. The entropy is greatest when the histogram follows a uniform distribution, and lowest when the chord sequence uses only one chord throughout.
• Chord coverage (CC): The number of chord labels with non-zero counts in the chord histogram of a chord sequence.
• Chord tonal distance (CTD): The tonal distance proposed by (Harte, Sandler, & Gasser, 2006) is a canonical way to measure the closeness of two chords. It is calculated by first computing the PCP features of the two chords, projecting the PCP features to a derived 6-D tonal space, and finally calculating the Euclidean distance between the two 6-D feature vectors. CTD is the average value of the tonal distance computed between every pair of adjacent chords in a given chord sequence. The CTD is highest when there are abrupt changes in the chord progression (e.g., from a C chord to a B chord).
Chord/melody harmonicity metrics, on the other hand, aim to evaluate the degree to which a generated chord sequence successfully harmonizes a given melody sequence.
• Chord tone to non-chord tone ratio (CTnCTR): In reference to the chord sequence, we count the number of chord tones and non-chord tones in the melody sequence. Chord tones are defined as melody notes whose pitch class is part of the current chord (i.e., one of the three pitch classes that make up a triad) for the corresponding half bar. All the other melody notes are viewed as non-chord tones. One way to measure the harmonicity is to simply compute the ratio of the number of chord tones (n_c) to the number of non-chord tones (n_n). However, we find it useful to further take into account the number of a subset of non-chord tones (n_p) that are within two semitones of the notes immediately following them, where the subscript p denotes a "proper" non-chord tone. We define CTnCTR as (n_c + n_p) / (n_c + n_n). CTnCTR equals one when there are no non-chord tones at all, or when n_p = n_n.
• Pitch consonance score (PCS): For each melody note, we calculate a consonance score with each of the three notes of its corresponding chord label. The consonance scores are computed based on the musical interval between the pitch of the melody note and the chord notes, assuming that the pitch of the melody note is always higher. This is always the case in our implementation, because we always place the chord notes lower than the melody notes. The consonance score is set to 1 for consonant intervals, including unison, major/minor 3rd, perfect 5th, and major/minor 6th; set to 0 for a perfect 4th; and set to −1 for other intervals, which are considered dissonant. PCS for a pair of melody and chord sequences is computed by averaging these consonance scores across 16th-note windows, excluding rest periods.
• Melody-chord tonal distance (MCTD): Extending the idea of tonal distance, we represent a melody note by a PCP feature vector (which would be a one-hot vector) and compare it against the PCP of a chord label in the 6-D tonal space (Harte et al., 2006) to calculate the closeness between a melody note and a chord label. MCTD is the average of the tonal distance between every melody note and the corresponding chord label calculated across a melody sequence, with each distance weighted by the duration of the corresponding melody note.

Comparative Study

We train all five models described in Section 2 using the training split of HTPD3 and then apply them to the test split of HTPD3 to get the predicted chord sequences for each melody sequence. Examples of the harmonization results of the evaluated models can be found in Figures 3 and 4. The chord accuracy for the template matching-based, HMM-based, GA-based, BiLSTM-based, and MTHarmonizer models is 29%, 31%, 20%, 35%, and 38%, respectively. We note that, since one cannot judge the full potential of each algorithm only from our simplified setting of melody harmonization, we do not intend to determine which method is the best in general. We rather attempt to compare different harmonization methods which have not been directly compared before because of the different contexts that each approach assumes. In what follows, we use the harmonization results for a random subset of the test set comprising 100 pieces in a user study for subjective evaluation. The result of this subjective evaluation is presented in Section 5.1. Then, in Section 5.2, we report the results of an objective evaluation wherein we compute the mean values of the chord/melody harmonicity and chord progression metrics presented in Section 4 for the harmonization results for each test set piece.
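Before turning to the evaluation results, here is a minimal sketch of three of the objective metrics defined in Section 4 (CHE, CC, and CTnCTR); the assumed input formats (chord sequences as label lists, melody notes paired with their half-bar index) and the two-semitone test for "proper" non-chord tones follow our reading of the definitions above.

```python
import math
from collections import Counter

def che_and_cc(chords):
    """Chord histogram entropy (CHE) and chord coverage (CC) for one chord sequence."""
    counts = Counter(chords)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    che = -sum(p * math.log(p) for p in probs)
    return che, len(counts)                       # CC = number of distinct chords used

def ctnctr(melody, chord_pcs):
    """CTnCTR = (n_c + n_p) / (n_c + n_n).

    melody    : list of (midi_pitch, half_bar_index) pairs, one per melody note
    chord_pcs : list of pitch-class sets, one per half bar
    """
    n_c = n_n = n_p = 0
    for i, (pitch, m) in enumerate(melody):
        if pitch % 12 in chord_pcs[m]:
            n_c += 1
        else:
            n_n += 1
            # "proper" non-chord tone: within two semitones of the following note
            if i + 1 < len(melody) and abs(melody[i + 1][0] - pitch) <= 2:
                n_p += 1
    return (n_c + n_p) / (n_c + n_n) if (n_c + n_n) else 1.0
```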
Subjective Evaluation

We conducted an online survey where we invited human subjects to listen to and assess the harmonization results of different models.

[Figure caption (harmonization examples, cf. Figures 3 and 4): the result of the MTHarmonizer appears to be more diverse and functionally correct. We also see that the result of the GA is quite "interesting", e.g., with the non-diatonic chord D-flat major, and it closes the musical phrase with a Picardy third (i.e., a major chord on the tonic at the end of a chord sequence that is in a minor key). We also see that the non-deep learning methods seem to be weaker in handling the tonality of the music.]

The subjects evaluated the harmonizations in terms of the following criteria:
• Harmonicity: The extent to which a chord progression successfully or pleasantly harmonizes a given melody. This is designed to correspond to what the melody/chord harmonicity metrics described in Section 4 aim to measure.
• Interestingness: The extent to which a chord progression sounds exciting, unexpected, and/or generates "positive stimulation." This criterion corresponds to the chord-related metrics described in Section 4. Please note that we use the less technical term "interestingness" here since we intend to solicit feedback from people either with or without musical backgrounds.
• The Overall quality of the given harmonization.
Given a melody sequence, we have in total six candidate chord sequences to accompany it: those generated by the five models presented in Section 2, and the human-composed, ground-truth progression retrieved directly from the test set. We intend to compare the results of the automatically generated progressions with the original human-composed progression. Yet, given the time and cognitive load required, it was not possible to ask each subject to evaluate the results of every model for every piece of music in the test set (there are 6 × 923 = 5,538 sequences in total). We describe below how our user study is designed to make the evaluation feasible.

Design of the User Study

First, we randomly select 100 melodies from the test set of HTPD3. For each human subject, we randomly select three melody sequences from this pool, and present to the subject the harmonization results of two randomly selected models for each melody sequence. For each of the three melodies, the subject listens to the melody without accompaniment first, and then to the sequence with two different harmonizations. Thus, the subject has to listen to nine music pieces in total: three melody sequences and the six harmonized ones. As we have six methods for melody harmonization (including the original human-composed harmonization), we select methods for each set of music such that each method is presented once and only once to each subject. The subjects are not aware of which harmonization is generated by which method, but are informed that at least one of the harmonized sequences is human-composed. In each set, the subject has to listen to the two harmonized sequences and decide which version is better according to the three criteria mentioned earlier. This ranking task is mandatory. In addition, the subject can choose to further grade the harmonized sequences on a five-point Likert scale with respect to the criteria mentioned earlier. Here, we break "harmonicity" into the following two criteria in order to get more feedback from subjects:
• Coherence: the coherence between the melody and the chord progression in terms of harmonicity and phrasing.
• Chord Progression: how coherent, pleasant, or reasonable the chord progression is on its own, independent of the melody.
This optional rating task thus has four criteria in total. The user study opens with an "instructions" page that informs the subjects that we consider only root-positioned triad chords in the survey. Moreover, they are informed that there is no "ground truth" in melody harmonization; the task is by nature subjective. After collecting a small amount of relevant personal information from the subjects, we present them with a random audio sample and encourage them to put on their headsets and adjust the volume to a comfortable level. After that, they are prompted to begin evaluating the three sets (i.e., one set for each melody sequence), one by one on consecutive pages. We spread the online survey over the Internet openly, without restriction, to solicit voluntary, non-paid participation. The webpage of the survey can be found at [URL removed for double-blind review].

User Study Results

In total, 202 participants from 16 countries took part in the survey. We had more male participants than female (a ratio of 1.82:1), and the average age of the participants was 30.8 years. 122 participants indicated that they have a music background, and 69 of them are familiar with or have expertise in harmonic theory. The participants took on average 14.2 minutes to complete the survey. We performed the following two data cleaning steps: First, we discarded both the ranking and rating results from participants who spent less than 3 minutes completing the survey, which is considered too short. Second, we disregarded rating results when the relative ordering of the methods contradicted that from the ranking results. As a result, 9.1% and 21% of the ranking and rating records were removed, respectively. We first discuss the results of the pairwise ranking task, which are shown in Figure 5. The following observations are made:
• The human-composed progressions have the highest "win probabilities" on average in all three ranking criteria. They perform particularly well in Harmonicity.
• In general, the deep learning methods have higher probabilities of winning over the non-deep learning methods in Harmonicity and Overall.
• For Interestingness, the GA performs the best among the five automatic methods, which we suspect stems from the entropy term E(Y) in its fitness function.
• Among the two deep learning methods, the MTHarmonizer consistently outperforms the BiLSTM in all ranking criteria, especially for Interestingness. We (subjectively) observe that the MTHarmonizer indeed generates more diverse chord progressions compared to the vanilla BiLSTM, perhaps due to the consideration of chord functions.
The results of the rating task, shown in Figure 6, on the other hand, lead to the following observations:
• Congruent with the results of the ranking task, the MTHarmonizer model achieves the second best performance here, losing out only to the original human-composed chord progressions. The MTHarmonizer consistently outperforms the other four automatic methods in all four metrics. With a paired t-test, we find that there is a significant performance difference between the MTHarmonizer progressions and the original human-composed progressions in terms of Coherence and Chord Progression (p-value < 0.005), but no significant difference in terms of Interestingness and Overall.
• Among the four metrics, the original human-composed progressions score higher in Coherence (3.81) and Overall (3.78), and the lowest in Interestingness (3.43).
This suggests that the way we simplify the data (e.g., using only root-positioned triad chords) may have limited the perceptual qualities of the music, in particular its diversity.
• Generally speaking, the results in Chord Progression (i.e., the coherence of the chord progression on its own) seem to correlate better with the results in Coherence (i.e., the coherence between the melody and chord sequences) than with the Interestingness of the chord progression. This suggests that a chord progression rated as being interesting may not sound coherent.
• Although the GA performs worse than the MTHarmonizer on all four metrics, it actually performs fairly well in Interestingness (3.23), as we have observed from the ranking results. A paired t-test showed no significant performance difference between the GA-generated progressions and the original human-composed progressions in Interestingness. A hybrid model that combines GA and deep learning may be a promising direction for future research.
From the rating and ranking tasks, we see that, in terms of harmonicity, automatic methods still fall behind human composition. However, the results of the two deep learning based methods are closer to those of the human-composed ones.

Objective Evaluation

The results are displayed in Table 1. We discuss the results of the melody/chord harmonicity metrics first. We can see that the results for the two deep learning methods are in general closer to the results for the original human-composed progressions than those of the three non-deep learning methods for all three harmonicity metrics, most significantly on the latter two. The template matching-based and HMM-based methods score high in PCS and low in MCTD, indicating that the harmonizations these two methods generate may be too conservative. In contrast, the GA scores low in PCS and high in MCTD, indicating overly low harmonicity. These results are consistent with the subjective evaluation, suggesting that these metrics can perhaps reflect human perception of the harmonicity between melody and chords. From the results of the chord progression metrics, we also see from CHE and CC that the progressions generated by the template matching-based and HMM-based methods seem to lack diversity. In contrast, the output of the GA features high diversity. As the GA-based method was rated lower than the template matching and HMM methods in terms of the Overall criterion in our subjective evaluation, it seems that the subjects care more about the harmonicity than the diversity of chord progressions. Comparing the two deep learning methods, we see that the MTHarmonizer uses more non-chord tones (smaller CTnCTR) and a greater number of unique chords (larger CC) than the BiLSTM model. The CHE of the MTHarmonizer is very close to that of the original human-composed progressions. In general, the results of the objective evaluation appear consistent with those of the subjective evaluation. It is difficult to quantify which metrics are better for what purposes, and how useful and accurate these metrics are overall. Therefore, our suggestion is to use them mainly to gain practical insights into the results of automatic melody harmonization models, rather than to judge their quality. As pointed out by (Dong, Hsiao, Yang, & Yang, 2018), objective metrics can be used to track the performance of models during development, before committing to running a user study. Yet, human evaluations are still needed to evaluate the quality of the generated music.
Discussions

We admit that the comparative study presented above has some limitations. First, because of the various preprocessing steps taken for data cleaning and for making the melody harmonization task manageable (cf. Section 3), the "human-composed" harmonizations are actually simplified versions of those found on TheoryTab. We considered triad chords only, and we did not consider performance-level attributes such as the velocity and rhythmic pattern of chords. This limits the perceptual quality of the human-composed chord progressions, and therefore also limits the results that can be achieved by the automatic methods. The reduction from extended chords to triads reduces the "color" of the chords and creates many inaccurate chord repetitions in the dataset (e.g., both the alternated CMaj7 and C7 will be reduced to the C triad chord). We believe it is important to properly inform the human subjects of such limitations, as we did in the instruction phase of our user study. We plan to compile other datasets from HLSD to extend the comparative study in the future. Second, in our user study we asked human subjects to rank and rate the results of two randomly chosen methods in each of the three presented sets. After analyzing the results, we found that the subjects' ratings are in fact relative. For example, the MTHarmonizer's average score in Overall is 3.04 when presented alongside the human-composed progressions, and 3.57 when confronted with the genetic algorithm-based model. We made sure in our user study that all the methods are equally likely to be presented together with every other method, so the average rating scores presented in Figure 6 do not favor a particular method. Still, caution is needed when interpreting the rating scores. Humans may not have a clear idea of how to consistently assign a score to a harmonization. While it is certainly easier to objectively compare multiple methods with the provided rating scores, we still recommend asking human subjects to make pairwise rankings in order to make the results more reliable. Third, we note that the HMM used in this study includes only essential components and does not include extensions that could improve the model, such as tying the probabilities, using tri-grams, or extending the hidden layers, which have been widely discussed in the literature (Paiement et al., 2006; Temperley, 2009; Tsushima et al., 2017). This is to observe how the essential components of the HMM characterize the harmonization results, rather than to explore the full potential of HMM-based models. Reviewing the properties of harmonization algorithms which imitate styles in a dataset, as in our research, still holds its importance, although recent music generation research is shifting towards measuring how systems can generate content that extrapolates meaningfully from what the model has learned (Zacharakis, Kaliakatsos-Papakostas, Tsougras, & Cambouropoulos, 2018). Extrapolation could be based on a model that also achieves interpolation or maintains particular styles among data points. We believe we can further discuss extrapolation based on an understanding of how methods imitate data.

Conclusion

In this paper, we have presented a comparative study implementing and evaluating a number of canonical methods and one new method for melody harmonization, including deep learning and non-deep learning based approaches. The evaluation has been done using a lead sheet dataset we newly collected for training and evaluating melody harmonization.
In addition to conducting a subjective evaluation, we employed a total of six objective metrics with which to evaluate a chord progression given a melody. Our evaluation shows that deep learning models indeed perform better than non-deep learning ones in a variety of aspects, including harmonicity and interestingness. Moreover, a deep learning model that takes the function of chords into account achieves the best result among the evaluated models.
Predicting virus-host association by Kernelized logistic matrix factorization and similarity network fusion Background Viruses are closely associated with bacteria and with human diseases. It is of great significance to predict associations between viruses and hosts for understanding the dynamics and complex functional networks in microbial communities. With the rapid development of metagenomics sequencing, some methods based on sequence similarity and genomic homology have been used to predict associations between viruses and hosts. However, the known virus-host association network was ignored in these methods. Results We propose a kernelized logistic matrix factorization method that integrates different types of information to predict potential virus-host associations on a heterogeneous network (ILMF-VH), which is constructed by connecting a virus network with a host network based on known virus-host associations. The virus network is constructed based on oligonucleotide frequency measurement, and the host network is constructed by integrating oligonucleotide frequency similarity and Gaussian interaction profile kernel similarity through similarity network fusion. The host prediction accuracy of our method is better than that of other methods. In addition, case studies show that the host of crAssphage predicted by ILMF-VH is consistent with the presumed host in previous studies, and another potential host, Escherichia coli, is also predicted. Conclusions The proposed model is an effective computational tool for predicting interactions between viruses and hosts, and it has great potential for discovering novel hosts of viruses. Culture-based identification of virus-host pairs requires the isolation of viruses as well as robust growth of target host strains [8], which is usually difficult to achieve in experiments. The isolation method based on culturing bacteria is inefficient for identifying viruses, and it identifies relatively few viruses. Nowadays, discoveries of unknown viruses have been greatly accelerated by metagenomic shotgun sequencing, but unlike viral isolation, viral sequences assembled from metagenomic data usually fail to directly reveal the hosts they infect. For example, crAssphage is a highly abundant virus in the human gut, which may play an important role in the human intestinal tract, but the cultivation of crAssphage in the laboratory is still not achievable, so its hosts and biological function have not yet been identified [9]. As more and more metagenomic sequencing datasets become available, it is urgent to propose effective culture-free methods to identify new viruses and their hosts. Recognizing hosts infected by viruses is important for understanding the dynamics of viruses and their effects on microbial communities. Recently, some computational methods have been used to infer associations between viruses and hosts. Edwards et al. [10] introduced three types of virus-host association prediction methods, including sequence homology [6,11,12], abundance profile co-occurrence [13] and sequence composition [14][15][16]. As for virus-host association prediction methods based on sequence homology, homologies between new viruses and potential hosts are limited, because they depend on whether hosts of the query virus exist in the host genome database. The abundance profile method is based on co-variation, but significant co-variation does not necessarily represent real interaction. Because there is usually a time delay in dynamic interactions between viruses and hosts, many interactions may not be detected, depending on the sampling timescale.
Sequence composition is based on codon usage or short pairs of nucleotides (k-mers) shared by viruses and hosts to predict which hosts the virus infects. Ahlgren et al. proposed 11 measurements of oligonucleotide frequency (ONF), such as d2*, to calculate k-mer distances between viruses and hosts [17]. This method achieves good results in host prediction accuracy at the genus level, but less than 40% at the species level. In addition, previous human microbial community studies relied on independent bacterial and viral communities, i.e., they were divided into two separate network communities [2,5], which could not capture the complex dynamics of virus-host interactions. In this paper, we propose a logistic matrix factorization algorithm that integrates multiple sources of information on a heterogeneous network to predict potential virus-host associations (ILMF-VH). The main differences from previous studies are that our proposed method combines information from three networks to form a virus-host heterogeneous network and applies similarity network fusion (SNF) to integrate multiple types of host information for constructing the host-host similarity network. We used the benchmark data of viral and bacterial genomes in NCBI, and verified that ILMF-VH obtained the best performance compared with five recent network-based methods under five-fold cross-validation. Moreover, the host prediction accuracy is 63.66%, which is 24.66% and 13.29% higher than two recently proposed virus-host association prediction methods, respectively, and 0.49% higher than our previous approach [18]. In addition, the hosts of crAssphage inferred by our algorithm include the putative host Bacteroides identified in previous studies [9,19], and another potential host, Escherichia coli, is also suggested. Because previous studies have shown that Escherichia coli is associated with human intestinal diseases, such as diarrhea [20], our research indicates that crAssphage may be closely related to these diseases, and this supports the effectiveness of our approach in predicting novel virus-host associations. Data sets We used the data compiled by Ahlgren et al., which collected accession numbers and taxonomies of 1427 viruses and 31,986 hosts. For the initial analysis, we selected a subset including 352 viruses whose hosts were identified at the strain level [17]. In addition, we downloaded the benchmark datasets provided by Edwards et al., including accession numbers and taxonomies of 820 viruses and 2699 hosts [21]. Based on the accession numbers of viruses and hosts, we wrote scripts to obtain their whole genome sequences from NCBI. For each virus, known virus-host associations are obtained from the 'isolate host=' or 'host=' fields in the viral annotation file. The genome of crAssphage assembled from human intestinal metagenomic data was downloaded from NCBI, and its accession number is JQ995537.1 [19]. Methods In our model, the virus set and host set are represented by V = {v_1, v_2, …, v_Nv} and H = {h_1, h_2, …, h_Nh}, where N_v and N_h represent the number of viruses and hosts, respectively. The associations between viruses and hosts are defined as an adjacency matrix Y ∈ R^(N_v × N_h): if a virus v_i is known to be associated with a host h_j, then y_ij is set to 1; otherwise, y_ij is set to 0. In terms of the elements of the adjacency matrix Y, the negative and positive interactions between viruses and hosts are represented by 0 and 1, respectively.
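As a minimal illustration of the data representation just described (not the authors' code), the adjacency matrix Y can be assembled from the parsed (virus accession, host accession) pairs as follows; the identifiers used here are hypothetical.

import numpy as np

def build_adjacency(virus_ids, host_ids, known_pairs):
    # y_ij = 1 if virus i is known to be associated with host j, else 0
    v_index = {v: i for i, v in enumerate(virus_ids)}
    h_index = {h: j for j, h in enumerate(host_ids)}
    Y = np.zeros((len(virus_ids), len(host_ids)), dtype=int)
    for v, h in known_pairs:
        Y[v_index[v], h_index[h]] = 1
    return Y

# Hypothetical toy example with two viruses and three candidate hosts
Y = build_adjacency(["vir1", "vir2"], ["hostA", "hostB", "hostC"],
                    [("vir1", "hostA"), ("vir2", "hostC")])
print(Y)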
In this work, firstly, we define the set of viruses that are positively related to hosts as V+ = {v_i | Σ_j y_ij > 0, 1 ≤ i ≤ N_v}; the set of viruses that are negatively related to hosts is then defined as V− = V\V+. Next, the set of hosts that are positively related to viruses is defined as H+ = {h_j | Σ_i y_ij > 0, 1 ≤ j ≤ N_h}, and the set of hosts that are negatively related to viruses is defined as H− = H\H+. Finally, similarities between viruses are calculated by oligonucleotide frequency (ONF) measures and expressed by S_v ∈ R^(N_v × N_v); similarities between hosts are calculated by integrating ONF measures and Gaussian interaction profile (GIP) kernel similarity based on the SNF model, and expressed by S_h ∈ R^(N_h × N_h). Oligonucleotide frequency measures for viruses and hosts Recently, dissimilarity measurements based on k-mer frequencies have been applied to infer relationships between genomic sequences [17]. Here, based on the hypothesis that similar viruses or hosts share similar k-mer patterns, we calculate k-mer similarities between viral genomic sequences to measure correlations between viruses. Similarly, k-mer similarities between hosts' genomic sequences are calculated to measure correlations between hosts. According to previous research [17], d2* [22] performs well in calculating k-mer similarity, and k is set to 6, so we calculate the distance between the k-mer frequency vectors of each pair of viruses or hosts. Finally, the virus-virus similarity matrix S_v and the host-host similarity matrix S_h(onf) can be obtained. Gaussian interaction profile kernel similarity for hosts Zou et al. [23] calculated the GIP kernel similarity between microbes based on the known disease-microbe association matrix and achieved good results. Apart from sequence similarities of hosts, based on the assumption that similar hosts exhibit similar interaction patterns with viruses, we apply GIP kernel similarity to measure associations between hosts. There are two steps to calculate the GIP kernel similarity. First, the interaction profile IP(h_i) of host h_i is the i-th column of the adjacency matrix Y, which is a binary vector representing the associations between host h_i and each virus. The GIP kernel similarity between hosts h_i and h_j is calculated from their interaction profiles and defined as [24]: GIP(h_i, h_j) = exp(−γ_h ||IP(h_i) − IP(h_j)||²). This is a kernel that represents the similarities between hosts; such kernels are called Gaussian kernels. The parameter γ_h is used to control the kernel bandwidth and is defined as γ_h = γ'_h / ((1/N_h) Σ_i ||IP(h_i)||²), where N_h is the number of hosts. According to a previous study [25], we simply set γ'_h to 1. Integrated similarity for hosts The associations between hosts are measured by calculating ONF measures and GIP kernel similarity between hosts, respectively. Here, we introduce similarity network fusion (SNF) [26] to integrate the two host similarity networks. The SNF includes the following three main steps. First, the edge weights of each host similarity network are represented by an N_h × N_h matrix S_h. Then, for each similarity network, a normalized weight matrix P is obtained as P(i, j) = S(i, j) / (2 Σ_{k≠i} S(i, k)) for j ≠ i and P(i, i) = 1/2, where S(i, j) is the corresponding element of S_h [26]. Then, k nearest neighbors (KNN) are used to measure the local relationship: KNN(i, j) = S(i, j) / Σ_{k∈N_i} S(i, k) if j ∈ N_i, and 0 otherwise, where N_i denotes the set of nearest neighbors of host h_i. This step filters out low-similarity edges. Let P^(v) and KNN^(v) denote the normalized and KNN-sparsified similarity matrices of the above two host networks, respectively.
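A minimal sketch (assuming the standard GIP kernel formulation cited above, not necessarily the authors' exact code) of the host GIP kernel computation follows; it uses the adjacency matrix Y from the previous subsection.

import numpy as np

def gip_kernel_hosts(Y, gamma_prime=1.0):
    # Each host's interaction profile IP(h_j) is a column of Y
    IP = Y.T.astype(float)                       # shape (N_h, N_v)
    sq_norms = np.sum(IP ** 2, axis=1)
    gamma_h = gamma_prime / np.mean(sq_norms)    # bandwidth normalization with gamma'_h = 1
    # squared Euclidean distances between all pairs of interaction profiles
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * IP @ IP.T
    return np.exp(-gamma_h * np.clip(d2, 0.0, None))

If some hosts have empty interaction profiles, the mean in the bandwidth term should be guarded against zero; such details are left out of this sketch.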
The process of SNF is an iterative update of the similarity matrices corresponding to each data type [26]. This step updates the matrix P^(v) as m parallel exchange diffusion processes are run on the m host networks. In this paper, we have two types of host similarity matrices, so m is set to 2. The final similarity matrix that integrates all data types is defined as the average of the updated matrices. Construction of heterogeneous networks Kernelized logistic matrix factorization We developed a kernelized logistic matrix factorization algorithm based on network similarity fusion for predicting virus-host associations, and the flowchart of the ILMF-VH model is shown in Fig. 1. First, the binary matrix Y is decomposed into W ∈ R^(N_v × k) and H ∈ R^(N_h × k), so that viruses and hosts are mapped to a shared latent low-dimensional space. Seq(v_i, h_j) represents the ONF similarity between each pair of virus and host, and we integrate this sequence similarity information into the association probability p_ij, which represents the probability of association of the virus-host pair (v_i, h_j) and is defined through the logistic function. It is hypothesized that known relationships between viruses and hosts provide useful information for virus-host association prediction. Importance weighting methods have been proven to be effective for personalized recommendations and drug-target interaction predictions [27,28]. We apply the weight constant c to control the relative importance of known versus unknown associations. According to previous studies, c is set to 5. The conditional probability of Y is then defined accordingly. In this work, we also use a neighborhood regularization method to regularize the logistic matrix factorization algorithm [28]. The nearest neighbors of virus v_i and host h_j are defined as N(v_i) ⊆ V\{v_i} and N(h_j) ⊆ H\{h_j}, where N(v_i) and N(h_j) represent the K_1 nearest neighbors of the virus v_i and the host h_j, respectively. K_1 is set to 5 according to the experiments. The neighborhood information of viruses and hosts is represented by the adjacency matrices A and B, respectively. In terms of matrix A, if virus v_m ∈ N(v_i), then a_im = s^v_im, otherwise a_im = 0; in terms of matrix B, if host h_n ∈ N(h_j), then b_jn = s^h_jn, otherwise b_jn = 0. The main purpose of the neighborhood regularization is to minimize the distances between v_i/h_j and their nearest neighbors N(v_i)/N(h_j); we therefore minimize the corresponding objective term, where tr(·) is the trace of a matrix. Our goal is to find the minimum of the overall objective function, Eq. (11), where σ_v and σ_h denote the variances of the Gaussian priors on viruses and hosts, respectively, ||·||_F represents the Frobenius norm of a matrix, and W and H are randomly initialized using a Gaussian distribution with a mean of 0 and a standard deviation of 1/√r. We use the AdaGrad algorithm [29] to solve the optimization problem of Eq. (11). When learning the vectors W and H, the vectors of the negative virus group or host group are learned only from negative associations in the training process. However, some unknown virus-host associations may nevertheless correspond to real interactions. Based on previous studies, we replace the vector of a negative virus/host with a linear combination of its neighbors in the positive set [28]. Here, we build K_2 nearest neighbor sets for each virus and host separately, and K_2 is set to 5 according to the experimental study.
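The following sketch summarizes the three SNF steps described above (normalization, KNN sparsification, and cross-network diffusion) for the two host similarity matrices; it follows the published SNF recipe [26] only loosely, and details such as per-iteration renormalization are omitted, so it should be read as an illustration rather than the authors' implementation.

import numpy as np

def snf_normalize(S):
    # P(i, j) = S(i, j) / (2 * sum_{k != i} S(i, k)) for j != i, and 1/2 on the diagonal
    off = S - np.diag(np.diag(S))
    P = off / (2.0 * off.sum(axis=1, keepdims=True) + 1e-12)
    np.fill_diagonal(P, 0.5)
    return P

def snf_knn(S, K=5):
    # keep only the K most similar hosts per row and renormalize (local affinity)
    out = np.zeros_like(S, dtype=float)
    for i in range(S.shape[0]):
        nn = np.argsort(S[i])[::-1][:K]
        out[i, nn] = S[i, nn] / (S[i, nn].sum() + 1e-12)
    return out

def snf_fuse(S_onf, S_gip, K=5, iters=20):
    P1, P2 = snf_normalize(S_onf), snf_normalize(S_gip)
    L1, L2 = snf_knn(S_onf, K), snf_knn(S_gip, K)
    for _ in range(iters):
        # parallel exchange diffusion between the two views (m = 2)
        P1, P2 = L1 @ P2 @ L1.T, L2 @ P1 @ L2.T
    return (P1 + P2) / 2.0   # fused host-host similarity matrix S_h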
We use N+(v_i)/N+(h_j) to denote the K_2 nearest neighbors of v_i ∈ V−/h_j ∈ H− within V+/H+. Therefore, w_i and h_j in Eq. (7) are corrected accordingly. Evaluation metrics Based on the heterogeneous network constructed by the above method, we compare the AUC [30] and AUPR [31] of ILMF-VH and five recent network-based algorithms by five repetitions of five-fold cross-validation to evaluate their performance. Then, based on previous studies [10,17], we evaluate our virus-host association prediction method by host prediction accuracy on a benchmark dataset including 820 viral genomes. The host prediction accuracy refers to the percentage of viruses for which the predicted host has the same taxonomy, at a given level, as a known host of the query virus. Performance evaluation of different network-based methods In order to assess the performance of our model, we trained on a dataset including 352 viruses and 71 hosts to obtain model parameters and tested our model on benchmark datasets including 820 viruses and 2699 hosts. In addition, we compare the ILMF-VH model with five recently proposed network-based methods (LMFH-VH [18], NetLapRLS [29], KBMF2K [32], BLM-NII [33], CMF [34]) through five-fold cross-validation in the dataset containing 352 viruses. In each round of five-fold cross-validation, one-fifth of the virus-host associations are used as test data and the corresponding elements in the adjacency matrix Y are set to 0, while the other four subsets are used as training data. It should be noted that in each round of the five-fold cross-validation experiment, when virus-host relationships are set to 0, the Y matrix changes, so each time we need to recalculate the GIP kernel similarities between hosts; these kernel similarities are then fused with the ONF similarities of hosts by applying the SNF model to obtain updated host-host similarities. In addition, according to previous studies [28,[32][33][34][35], the range of parameter settings for each method is shown in Table 1. Here, we use a random search strategy [36] for each model to select optimal parameters. Table 2 shows the AUC and AUPR values obtained by the six methods on the dataset including 352 viruses. The results show that ILMF-VH achieved the best performance, with AUC and AUPR values of 0.9202 and 0.6243, respectively. This result demonstrates the effectiveness of our model in virus-host association prediction. Sensitivity analysis of parameter values Additional file 1: Figures S1-S4 show the AUPR values obtained by the ILMF-VH model under different parameter settings. We also tested the effect of different values of K (the number of KNN neighbors) in the SNF model on the AUPR values (Additional file 1: Figure S5). Thus, we mainly analyze five parameters of ILMF-VH and the number of neighbors K of the SNF model. More specifically, we analyze how the AUPR values change with the factorization dimension k used for matrix factorization. As shown in Additional file 1: Figure S1, the optimal value of k is 100, and the average AUPR value of ILMF-VH is 0.6305 under five-fold cross-validation. In addition, we also study the impact of the regularization parameters α and β used for neighborhood smoothing in the prediction procedure. Additional file 1: Figure S2 shows how the AUPR values change under different α and β. The optimal values of α and β are 0.0625 and 0.25, respectively. When α > 0.0625 and β > 0.25, the corresponding AUPR values begin to decrease.
These results emphasize that neighbor regularization has a certain impact on the virus-host prediction model. Moreover, we also analyze the effect of λ on the prediction procedure. Here, λ = λ_v = λ_h, and σ_v and σ_h represent the variances of the Gaussian distributions of viruses and hosts, respectively. As shown in Additional file 1: Figure S3, the AUPR value gradually becomes larger with increasing λ, and when λ equals 2, the AUPR reaches its optimal value. Additional file 1: Figure S4 shows the variation of the AUPR when the learning rate parameter γ is set to different values. When γ equals 0.25, the AUPR takes its optimal value; when γ increases further, the AUPR value begins to decrease, so γ is set to 0.25. Furthermore, we also analyzed the influence of the neighbor parameter K of the SNF model on the AUPR values. As shown in Additional file 1: Figure S5, the AUPR value reaches its optimum when K is set to 5; when K increases further, the AUPR value begins to decrease, so the optimal value of K is 5. Comparison of ILMF-VH and previous virus-host prediction studies In this work, we apply the ILMF-VH method to the benchmark dataset including 820 viruses and 2699 complete bacterial genomes. First, we calculate scores between each virus and the candidate hosts. The higher the predicted score, the more likely it is that the virus infects that host. Here, the highest-ranked host is identified as the predicted result for the given virus, and if the predicted host is the same as a known host of the given virus at the species level, the prediction is considered correct. Figure 2 shows the host prediction accuracy of four types of methods, including abundance profile co-occurrence, sequence homology, sequence composition, and network-based methods. The results show that ILMF-VH achieved the highest host prediction accuracy (58.90%) compared with the other three types of methods. In order to further improve the host prediction accuracy, we apply a consensus strategy [17] to our method. We assume that the most frequent host species among the top n predicted hosts of a virus can be classified as the host taxon of the given virus. The prediction accuracy is highest at n = 5; therefore, we selected the most frequent classification among the top 5 hosts as the host taxon of the query virus. As shown in Fig. 2, when the consensus strategy is applied to our model, the host prediction accuracy increases to 63.66%, which is 24.66%, 13.29% and 0.49% higher than three previously proposed virus-host prediction methods [10,17,18], respectively. In the general situation, when a new virus lacks host information, we can use the ILMF-VH method to predict its potential hosts. First, we construct a virus-host network based on known virus-host associations; then the GIP kernel similarities between hosts can be calculated from the known virus-host associations, and these GIP kernel similarities and the ONF similarities of hosts are integrated through the SNF model, so that the host similarity network can be constructed. At the same time, we can calculate the ONF similarities of the whole genome sequences between the new virus and the other viruses in the dataset. Ahlgren et al. [17] predicted potential hosts of crAssphage based on sequence similarities between crAssphage and candidate hosts; Wang et al. [9] used the Markov random field integration network to predict potential hosts of crAssphage. They all suggested that bacteria belonging to Bacteroidetes are the host of crAssphage.
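A minimal sketch of the consensus strategy described above (not the authors' code): for a query virus, the most frequent taxon among its top-n scoring candidate hosts is reported; the score values and taxon labels below are hypothetical.

from collections import Counter
import numpy as np

def consensus_host(scores_for_virus, host_taxa, n=5):
    # indices of the n best-scoring candidate hosts for the query virus
    top = np.argsort(scores_for_virus)[::-1][:n]
    # most frequent taxon label among the top-n hosts
    return Counter(host_taxa[j] for j in top).most_common(1)[0][0]

# Hypothetical example: six candidate hosts with predicted scores and species labels
scores = np.array([0.10, 0.80, 0.75, 0.20, 0.70, 0.65])
taxa = ["A", "B", "B", "C", "B", "D"]
print(consensus_host(scores, taxa))   # -> "B"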
According to a previous study [19], crAssphage is a virus that is widely found in human gut metagenomes, but we know very little about its biological significance and its hosts, owing to the difficulty of culturing crAssphage. Different methods have been proposed to predict the hosts of a given virus; our information integration algorithm validates the host of crAssphage found in previous studies and also predicts another potential host, Escherichia coli. For each virus, the candidate hosts are ordered according to the predicted association scores obtained by the ILMF-VH algorithm. In this paper, we assume that if the known candidate host of a virus v_j is h_i, another new host h_k at the same taxon level as the host h_i may be a potential host of the virus v_j. At the same time, the higher the predicted score of the candidate host h_k, the more likely it is to have a potential correlation with the query virus. In the case study, we added the whole genome sequence of crAssphage to the similarity network containing 820 viruses, that is, similarities between crAssphage and the 820 virus sequences were calculated based on the ONF measurement, and thus a new virus-virus similarity network was constructed. Apart from that, we also added links between crAssphage and the 2699 hosts to the virus-host network to build a new virus-host association network. Based on the ONF measurement and the known associations between viruses and hosts, we used our algorithm to obtain predicted scores between crAssphage and the 2699 candidate hosts. Our approach supports the previous conclusion that candidate hosts belonging to Bacteroides are potential hosts of crAssphage. Among the top 50 predicted hosts of crAssphage, there were three hosts belonging to the phylum Bacteroidetes, ranked 4th, 44th and 50th: Cardinium endosymbiont of Encarsia pergandiella, Weeksella virosa, and Tannerella forsythia. Our prediction model also inferred that Escherichia coli, belonging to the phylum Proteobacteria, is a potential host of crAssphage, and Escherichia coli ranks highest among the 2699 hosts. A possible explanation for its highest predicted score is that the alignment-free similarity score between crAssphage and Escherichia coli is 0.6568, which is higher than the average score (0.6096) between the virus and all candidate hosts. Therefore, sequence similarity is an important part of extracting the virus-host association signal, and it provides an efficient contribution indicator for this prediction result. Our algorithm predicted the host of crAssphage from metagenomic sequencing data, and the prediction agrees at the phylum level with the putative host reported in previous studies. In addition, another potential host, Escherichia coli, is also inferred. Recent studies [20] have shown that most Escherichia coli strains grow harmlessly in the gut and rarely cause disease in healthy individuals. However, many pathogenic strains can cause diarrhea or extraintestinal disease in both healthy and immunocompromised individuals. Our experimental results suggest that crAssphage may play an important role in these diseases. In general, our algorithmic model is effective in predicting potential hosts of new viruses. Conclusion and outlook Viral infection usually results in changes in the ecosystem function of host cells.
Virus-host association studies can reveal complex virus-host network interactions and are important for understanding microbial diversity. Although some methods for virus-host association prediction have been proposed, their host prediction accuracy at the species level is still unsatisfactory, and these methods need to be improved. We present an effective method, ILMF-VH, for predicting virus-host associations. First, our method achieved the best performance among five recent network-based methods under five-fold cross-validation. Second, we compared the host prediction accuracy with several recently proposed virus-host association prediction methods [10,17]; our method obtained the highest host prediction accuracy (63.66%). Finally, we analyzed our method's ability to predict potential hosts for a given virus. For crAssphage, our predicted hosts correspond to those reported in previous studies, and the additionally predicted host Escherichia coli is associated with intestinal diseases. In general, it is important to study virus-host associations. Our approach not only has the potential to predict hosts of individual viruses, but can also be applied to uncover novel virus-host associations more broadly. Although some results have been achieved so far, there are still some problems that can be studied further in the future. First, the biological characteristics of viruses and hosts are abundant and varied. Apart from whole genome sequences, protein, amino acid, abundance profile and other related information might also contribute to the prediction model. Further research is needed to determine which information provides a reliable basis for virus-host association prediction, and extracting appropriate characteristics of viruses and hosts is important for the prediction results. Here, we integrate genome sequence information and known virus-host associations. In future research, we will consider adding different information sources for viruses and hosts to analyze the impact of different characteristics on the prediction results.
Dynamic Distributed Storage of Stormwater in Sponge-Like Porous Bodies: Modelling Water Uptake. An innovative concept of dynamic stormwater storage in sponge-like porous bodies (SPBs) is presented and modelled using first principles, for down-flow and up-flow variants of SPBs. The rate of inflow driven by absorption and/or capillary action into various porous material structures was computed as a function of time and found to be critically dependent on the type of structure and the porous material used. In a case study, the rates of inflow and storage filling were modelled for various conditions and found to match, or exceed, the rates of rainwater inflow and volume accumulation associated with two types of Swedish rainfalls, of 60-min duration and a return period of 10 years. Hence, the mathematical models indicated that the SPB devices studied could capture relevant amounts of water. The theoretical study also showed that the SPB concepts could be further optimized. Such findings confirmed the potential of dynamic SPB storage to control stormwater runoff and serve as one of numerous elements contributing to restoration of pre-urban hydrology in urban catchments. Finally, the issues to be considered in bringing this theoretical concept to a higher technology readiness level were discussed briefly, including operational challenges. However, it should be noted that a proper analysis of such issues requires a separate study building on the current presentation of theoretical concepts. Introduction Urbanization dramatically alters the hydrological cycle of developing areas by reducing hydrological abstractions and accelerating runoff, which leads to increased runoff volumes and flow peaks, and ultimately the risk of water ponding or flooding [1]. Such changes of the urban environment, accompanied by deteriorating runoff quality and geomorphology of urban streams, lead to reduced biodiversity and contribute to unsustainability in urban areas [2]. Therefore, restoration of the pre-development catchment by measures enhancing water abstraction (e.g., infiltration in soakaways and evapotranspiration on green roofs) and slowing the speed of runoff (e.g., cross-berms in runoff swales) has been pursued in modern drainage design [3,4]. In practice, the above measures, or their functional principles, are applied at broadly varying scales, typically described as lot-level [5], neighborhood and catchment scales [6]. The discussion here focuses on lot-scale measures (LSMs), which in common terminology are also called distributed control measures, and when such LSMs serve to dissipate runoff volume they are referred to as runoff source controls. The importance of these measures follows from their key features. The objectives of the paper are to: (a) describe the theoretical sponge-like porous bodies (SPB) water storage concept on the basis of first principles, (b) derive and verify semi-analytical models for entry of stormwater into SPBs, (c) use such models to demonstrate the theoretical capacity of SPBs to fully capture selected Swedish design rainfalls, and (d) discuss practical challenges that may be encountered in field applications in stormwater control, including steps in advancing the technology readiness level (TRL) of SPBs. SPB Storage Concept Description In the first SPB storage variant, Down-flow SPB storage, a relatively large area, such as sections of a roof, parking lot, playground, or football field, would be covered with a material that absorbs the rainwater directly upon contact.
The material then swells in a vertical direction, keeping the intercepted water in place (Figure 1a). Note that the SPB storage sketches presented in the figure are intended just to elucidate the theoretical concept of such storage, without any aesthetic, practical or placement considerations. One possible type of material to consider in this application is hydrogels, recognizing that such materials can satisfy several demands of Down-flow SPB storage. For maximum effectiveness, the influx of water into a hydrogel layer should match the influx of rainwater, but partial interceptions with some water bypassing are fully acceptable, as demonstrated by similar widely used control measures, e.g., green roofs [11]. In fact, there may be opportunities for installing Down-flow SPB storage units in conjunction with green or gray roofs. In the second variant, water ponded on the ground moves upwards and is stored in two types of pre-installed Up-flow SPB storage structures: (i) intermediate-height (1-2 m) supported vertical structures that expand only in the horizontal direction when absorbing water (Figure 1b), and (ii) low-height (<0.2 m) unsupported structures that grow from the ground as they absorb water (Figure 1c). The former structures can be wrapped around trees (Figure 1b) or lamp posts, or attached to walls and concrete bridge abutments, to mention a few examples. The supported storage structures may swell in a horizontal direction when water first moves horizontally in and then upwards through the structures. For the unsupported structures that grow from the ground as they absorb water (Figure 1c), the material not only swells, but also stiffens. This process is analogous to the rising of flowers when irrigated after a dry period. The water impoundments formed by these structures may also store water from adjacent areas. Examples of these storage structures include hydrogel fibers and porous rods made of natural fibers [31]. The design of unsupported storage structures may require the use of a cluster of compliant rods, which stiffen as they become saturated by capillary action, or by applying a pressure gradient that makes them grow vertically as a function of the inflow rate. However, there must be spaces for storing the expanding material during dry weather periods, which can be a challenge with respect to space limitations and safety. Depending on site conditions, individual rods may grow in isolation, one by one, or in clusters forming various patterns serving as barriers around water impoundments holding significant volumes of stormwater (Figure 1c). Such impoundments could be designed to feed soakaway pits, or similar infiltration structures. Governing Equations The governing equations are presented below for the Down-flow and Up-flow SPB storage variants. Down-Flow SPB Storage: Governing Equations The flow into absorbing materials, like hydrogels, may be described by the diffusion equation ∂θ/∂t = ∂/∂x_i (D_ij ∂θ/∂x_j) (1), where θ is the concentration of water within the swelling material at location r and time t and D_ij is the diffusion coefficient. Assuming that D is constant in space and independent of θ, Equation (1) reduces to ∂θ/∂t = D ∂²θ/(∂x_j ∂x_j) (2), where the product D ∂θ/∂x_j is the flux of water per unit area.
From these equations the volumetric uptake of water V per unit time ∆t, Q_ab, was derived by Sweijen et al. [32] as Equation (3) or, in a different form, after expressing the volumetric water uptake as a function of time ∆t, as Equation (4), where A_ab is the area of the interface between the water and the absorbing material, n is the vector normal to the interface, and V_ab is the volume of water that has penetrated through the surface A_ab into the absorbing material. From this equation it is obvious that the flow rate will increase with the size (area) of the interface, the magnitude of D, and the gradient of θ(r, t), which should be as large as possible. The governing equation for the Up-flow SPB storage is assumed to be Darcy's law, together with the condition of water incompressibility, Equations (5) and (6), where u_i is the velocity, φ the porosity, K_ij the permeability and p the pressure. The constant g is the acceleration due to gravity, which is set to act in the negative x_1 direction, and ρ is the density of water. This equation is valid for Newtonian flow through a stationary porous medium, up to particle Reynolds numbers Re_p = Ud/ν ≈ 40 [33], where U is the average velocity in the porous medium, d is a characteristic length of the solids in the porous medium, and ν is the kinematic viscosity of the fluid. The porous medium is not allowed to deform in this first model of Up-flow SPB storage. In our model, the flow is driven by capillary action described by ∆p = 2γ cos Θ / R (7), where ∆p is the pressure jump over the curved water surface (the capillary pressure), γ is the surface tension, Θ is the contact angle, and R is the pore radius of the porous medium. By replacing ∆p on the left-hand side of Equation (7) with ρ g h_max, the maximum height, h_max, within a capillary tube can be derived, and this expression is also known as Jurin's law. Important for Up-flow SPB storage is that the main driving mechanism for the upward flow, capillary action, is promoted by small-scale capillaries, while the water moves much more easily through large-scale channels. The reason for this is that the capillary pressure increases with 1/R, while the permeability K_ij, and thus the flow rate, is proportional to R², where R is the typical length dimension of the problem, e.g., a pore radius in a porous medium or the size of particles of the porous medium. In the following, we have chosen to use the size of the particles (fibers), while it is also possible to include the porosity in the expression [27,28].
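To make the capillary driving mechanism of Equation (7) and Jurin's law concrete, the following is a minimal single-tube sketch (not the authors' two-annulus model): the transient rise is integrated from a Lucas-Washburn-type balance with gravity, and the tube radius, viscosity, time step and initial wetted length are illustrative assumptions only.

import numpy as np

gamma, theta = 72.8e-3, 0.0          # surface tension (N/m) at 20 C, contact angle (rad), perfect wetting
rho, g, mu = 998.0, 9.81, 1.0e-3     # water density (kg/m^3), gravity (m/s^2), dynamic viscosity (Pa s)
R = 50e-6                            # hypothetical tube/pore radius (m)

# Jurin's law: equilibrium rise obtained by setting dp = rho * g * h_max in Equation (7)
h_max = 2.0 * gamma * np.cos(theta) / (rho * g * R)

def rise_height(t_end, dt=1e-4, h0=1e-3):
    # Explicit Euler integration of dh/dt = (R^2 / (8 mu h)) * (2 gamma cos(theta)/R - rho g h);
    # h0 is a small initial wetted length that avoids the 1/h singularity at h = 0.
    h, t = h0, 0.0
    while t < t_end:
        dhdt = (R**2 / (8.0 * mu * h)) * (2.0 * gamma * np.cos(theta) / R - rho * g * h)
        h += max(dhdt, 0.0) * dt
        t += dt
    return h

print(f"h_max = {h_max:.3f} m, h(10 s) = {rise_height(10.0):.3f} m")

In the Up-flow SPB model itself, the same capillary pressure instead drives Darcy flow through the annular fiber beds, which is why a smaller fiber spacing raises the attainable height but slows the filling, as noted above.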
SPB Process and Geometric Parameters As briefly mentioned in the previous section, there are several parameters of SPBs that can be tuned for optimal performance. Relevant values of the most significant among these parameters are discussed here in more detail. Down-Flow SPB Storage: Parameters The magnitude of the diffusion coefficient of water entering the Down-flow SPB storage, D, is determined by the properties of the absorbing material, the composition of the water, and conditions at the site. It is, therefore, desirable to find a combination of such characteristics that maximizes D. One group of efficient absorbers of water is hydrogels. For example, Bajpai [34] derived values of D between 6.9 × 10⁻⁹ and 1.3 × 10⁻⁸ m²/s for a hydrogel based on acrylamide doped with different concentrations of maleic acid. Doll et al. [35] measured values between 1.3 and 1.4 × 10⁻⁹ m²/s for two types of bio-based hydrogels, and El-Hamshary [36] tested a number of variants of poly(acrylamide-co-itaconic acid) hydrogels and obtained values between 3.0 × 10⁻¹⁰ and 1.2 × 10⁻⁸ m²/s. Hence, there is a broad range of D values in the literature, depending on the hydrogel composition. Following Bajpai [34], D is here approximated as 1.0 × 10⁻⁸ m²/s, which falls into the upper end of the range reviewed but is short of the maximum. Regardless of D, the rate of water uptake is enhanced by a large area of contact between the absorbing material and the water, A_ab, as shown in Equation (3). So instead of using a flat surface area of the absorbing material, the interface with the water should be corrugated in some way. As an example, let us study a prismatic block of absorbing material with a plan-view area A_flat, height (thickness) H, and a grid arrangement of relatively small cavities with a square cross-section b × b in plan and depth h, shown in Figure 2. In such a simplified porous medium body, repeating square cells can be described by a plan dimension (side B) and height H in Figure 2. Now also let b' = b/B and h' = h/B and observe that 0 < b' < 1 and 0 < h' < H/B. The total contact area between the water and the absorbing material may now be derived as Equation (8), and for the maximum value of h', Equation (8) becomes Equation (9). The absorbing area increases with b' but, as stated above, the theoretical maximum value of b' equals one. In practice, the actual value should be smaller, around 0.5, to maintain a sufficient volume of the absorbing material. More interesting is the ratio H/B. Realistically, the upper limit H is constrained, while B is fairly arbitrary and sets the scale of the cavities. Finally, Equation (9) may be generalized to Equation (10), where C is a constant equal to 4 for the geometry in Figure 2 and π for cylindrical cavities, for instance. As shown by Equations (3) and (4), the flow rate into the absorbing material is directly related to the gradient of θ(r, t), which will decrease as the absorbing material becomes saturated.
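The scaling of the wetted area with the cavity geometry can be illustrated with the following sketch; the closed-form A_ab = A_flat (1 + C b' h'), evaluated at the maximum h' = H/B, is our own reconstruction of Equations (8)-(10) from the unit-cell geometry in Figure 2 (a B × B cell with one square cavity of side b and depth h has wetted area B² + 4bh), so the expression and the example values below are illustrative assumptions rather than the paper's tabulated cases.

def contact_area(A_flat, B, H, b_prime=0.5, C=4.0):
    # Reconstructed generalization of Equations (8)-(10): A_ab = A_flat * (1 + C * b' * h'),
    # evaluated here at the maximum cavity depth h = H, i.e. h' = H / B.
    h_prime = H / B
    return A_flat * (1.0 + C * b_prime * h_prime)

# Hypothetical cavity scales B (m) for a 1 m^2 block of height H = 0.15 m
for B in (0.10, 0.05, 0.01):
    print(f"B = {B:.2f} m -> A_ab = {contact_area(1.0, B, 0.15):.1f} m^2")

This reproduces qualitatively the trend noted above: decreasing the cell size B multiplies the area exposed to water and hence, via Equation (3), the uptake rate.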
In Section 2.6, the sensitivity of the above parameters will be discussed with the combinations of values listed in Table 1 as a base and with h = H. In Table 1 it can be seen that, as the unit cell length scale B decreases, the number of square cavities increases for the same plan-view area. Hence, the area exposed to water, A_ab, increases substantially. Up-flow SPB storage is schematized as a model cylinder consisting of a solid cylinder with radius δ and height H_s, a porous inner annulus with a larger radius a and height H_i, and a porous outer annulus with a larger radius b and a height H_o (see Figure 3). The solid cylinder mimics a tree or lamp post, and the inner and outer annuli confine the porous media, cf. Figures 1b and 3. The porosities are φ_i and φ_o, respectively, the geometrical scales of the porous media are R_i and R_o, respectively, and the origin of a cylindrical co-ordinate system (x, r, θ) is located at the center of the solid cylinder base (Figure 3). The actual dimensions of the cylinders are arbitrary, and so are the values of R and φ, but to make the cylinder an effective absorber (R_i ≠ R_o), we set R_i < R_o. By applying this model, the water uptake as a function of time, Q_in(t), can be derived for various geometries and conditions, and the Up-flow SPBs can be evaluated. Now, as an additional assumption, let the porous media consist of solid vertical rods with radii R_i and R_o, respectively, let the porosity φ be 0.7 in both annuli, and let the temperature be 20 °C, implying that the surface tension (water-air) is 72.8 mN/m. To simplify the model further, the radii are set to remain constant during the filling, and the porous media do not deform; hence, there is no swelling, and perfect wetting is assumed, Θ = 0. The sensitivity of the parameters of Up-flow SPB storage will be discussed in Section 2.6, based on the combinations of the values listed in Table 2.
Solutions of the Governing Equations The governing equations are solved numerically for the two geometries considered. For the model of the Down-flow SPB, this is rather straightforward, while for the model of the Up-flow SPB some further analysis is required. Most of the derivations for the Up-flow SPB are presented in the Supplementary Materials. Down-Flow SPB Storage: Numerical Set-Up A numerical scheme for Equations (2)-(4) and (10) is set up in COMSOL Multiphysics® for the unit cell defined by dashed lines in Figure 4. Equation (2) is then solved with the boundary conditions specified in Equations (11)-(18). Notice that it is assumed that the height of the absorbing material is constant throughout the absorption process; hence the material will not swell in the model. Instead, it is assumed that only 50% of the cell volume is occupied by the absorbing material, making room for the water. These restrictions may be relaxed in future studies. The problem is set up with periodic boundary conditions, a condition of no flux at the bottom, and Dirichlet boundary conditions for a concentration θ(r, t) = 0.5 at the boundaries specified by Equations (13)-(18). As an initial condition, it is assumed that θ = 0 in the volume V_ab specified by the boundaries given by Equations (13)-(18) and z = 0. Up-Flow SPB Storage: Numerical Set-Up The capillary pressure, Equation (7), created between the rods within the annular clusters induces both radial and axial components of the flow. These components need to be calculated for each time step, depending on the positions of the flow fronts in both the inner and outer annular porous media, as illustrated in Figure 3.
The pressures in the annular porous media are p_i(x, r, t) and p_o(x, r, t), respectively, where the flow and pressure are assumed to be independent of the azimuthal co-ordinate and the fluid is taken to be incompressible. The perpendicular (radial) and parallel (axial) permeabilities in the annular porous media are denoted K_i,⊥, K_i,∥, K_o,⊥ and K_o,∥, respectively. Darcy's law, Equation (5), may now be expressed as Equation (19), where the modified pressures are ϕ_i = p_i + ρgx and ϕ_o = p_o + ρgx, respectively, and g is the gravitational acceleration. The permeabilities in Equation (19) are chosen to be given by expressions in which Π_max = π/(2√3) in the case of hexagonal packing of the fibers [37] and Π_b is the fraction of fibers. This is a simplification, but more advanced expressions, as derived e.g. in [38][39][40][41], can be used in future studies. Equations (19) are set up for the geometry and boundary conditions of interest according to the Supplementary Materials, in which the derivations in [26] were followed, with two main differences: in the present model the finite widths of the channels are taken into account, and gravity is considered. From this derivation the problem was solved in a semi-analytical fashion using MATLAB. Verification of Numerical Solutions By definition, the numerical derivations are approximations and need to be verified, as presented below. Down-Flow SPB Storage: Verification To verify the robustness of the numerical results for the Down-flow SPB storage, an approximate analytical solution was found. This approximate solution is valid as long as the boundary layers of thickness ∆, which initially develop on all the boundaries given by Equations (13)-(18), are much smaller than the characteristic dimensions of the Down-flow SPB storage, i.e., ∆ << b, B, H and h. Considering the different boundaries in Equations (13)-(18) separately, a similarity solution for each boundary is found, exemplified by the region from x = b/2 to x = B/2. Similar solutions can be obtained for the other four boundaries. The total volume of water absorbed into the volume V(t) as a function of time is then found by integrating the water concentration over the total volume V_ab, according to Equation (4), yielding the approximate result of Equation (22). As an additional verification, a simple asymptotic solution valid for short times can also be found by expanding Equation (22) in the limit t → 0, giving Equation (23). In addition to these expressions, the numerical results can be compared to the maximum possible uptake of water. In Figure 5 the absorbed water volume diffusing into the volume V_ab is plotted as a function of time for the three cases in Table 1 and for the three solutions. The solid line is the numerical result, the dotted line is the approximate analytical solution, Equation (22), and the dashed line is an asymptotic solution valid for short times, Equation (23). For cases Down1 (blue lines) and Down2 (red lines), the numerical and approximate analytical solutions overall agree quite well, but for short times there is a slight disagreement, since the numerical solution cannot capture the rapid t^(1/2) initial development of the analytical solution. For case Down3 (black line), the agreement for short times is good, while for long times only the numerical solution reaches the correct asymptotic volume of 75 L.
The analytical solution here becomes invalid, because the boundary layers are no longer small compared to the dimensions of case Down3. It should be noted that Down1 and Down2 also approach the correct asymptotic volume, although this takes a long time. The agreement with the analytical solutions and the final volume of water for all cases indicates that the numerical solution is correct. Up-Flow SPB Storage: Verification The results of the model can be verified and validated for the case of flow in a single channel with no porous medium, but including the effects of gravity. This is the famous experiment of capillary rise in a tube, with the well-known approximate analytical solution by Washburn [23]. More recently, Fries and Dreyer [42] found an exact analytical solution for this case and also tested it experimentally, with good agreement. For the case of flow in two annular regions, the maximum heights reached in the two channels (see Equation (25) in the Supplementary Materials) can be verified by energy arguments. In the initial phase of the development of the fronts in each channel, when the interaction between the channels is zero, the expressions for the fronts, Equation (24) in the Supplementary Materials, also agree with the results of Fries and Dreyer [42]. In Figure 6 a comparison is shown between the numerical results for the inner and outer channels, given by Equations (22) and (23) in the Supplementary Materials, and the result obtained using the analytical solution of Fries and Dreyer without any interaction. It is seen that, during this initial phase, in which there is only a small interaction between the channels, the agreement is good. In the work by Zarandi et al. [27] the analytical solution by Fries and Dreyer [42] was also validated for porous media.
In their case the porous medium consisted of several different kinds of glass-fiber wicks. The agreement between theory and experiments is good for several cases, while for other cases there is a clear difference. The conclusion is that the disagreement in some cases can be attributed to inhomogeneities, caused by 'kinking' fibers in the porous medium. Hence, the theory of Fries and Dreyer [42] provides an upper bound of the experimental data. In this context it is also important to mention that the surface energies (contact angle) may play an important role and that there are measures to reduce this angle, as shown by Caglar et al. [29]. Including additional mechanisms, Caupin et al. [43] derived an upper limit for the capillary rise, which on the nano-scale was very large and exceeded Jurin's law. Experimental evidence is still required. Figure 6. Comparison of the numerical results for the inner and outer channels, Equations (22) and (23) in the Supplementary Materials, with the analytical solution of Fries and Dreyer [42] for short times (t ≤ 4 s). Parameter Sensitivity Before comparing the modelled SPB capacities to rainfall data, a brief parameter sensitivity analysis is performed.
Parameter Sensitivity
Before comparing the modelled SPB capacities to rainfall data, a brief parameter sensitivity analysis is performed.
Down-Flow SPB Storage: Parameter Sensitivity
The value of B, which is the scale of the water/SPB interface corrugations shown in Table 1 and Figure 2, influences the volumetric flow rate to a large extent, as is evident from comparing the solid lines of different colors in Figure 5. Hence, the smaller the scale studied, the faster the overall absorption process. For the smallest scale, Down3, where the sides of the open squares are 10 mm, the 1 m² flat surface can absorb around 75 L in 15 min. The sensitivity of the results to diffusion, D, is given by the dependence of the absorbed volume on the square root of the diffusion coefficient, according to the asymptotic expression (23). It only affects the growth of the volume in time, not the total absorbed volume: see the difference between the solid lines (D = 1.0 × 10⁻⁸ m²/s), the dashed lines (D = 0.5 × 10⁻⁸ m²/s) and the dotted lines (D = 1.0 × 10⁻⁹ m²/s) in Figure 7. In this context, the variation of the parameter β (= b/B, see Figure 2) is also interesting. Although this parameter should be set large for rapid growth, a large value of β yields a smaller total volume of absorption, as best seen by scrutinizing the solid lines of different colors in Figure 7; here the blue lines denote β = 0.4, the red β = 0.5 and the green β = 0.6. The effect of varying H is close to linear and increases both the initial rate of growth and the total absorption.
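Because the early-time absorbed volume scales with the square root of D (per the asymptotic expression (23)), the effect of the three diffusion coefficients used in Figure 7 can be quantified directly. The snippet below uses only that stated proportionality, not the expression itself.

```python
import math

# Early-time absorbed volume scales as sqrt(D), so relative to the reference case
# the two smaller diffusion coefficients slow the initial uptake by these factors:
D_ref = 1.0e-8                      # m^2/s (solid lines in Figure 7)
for D in (0.5e-8, 1.0e-9):          # m^2/s (dashed and dotted lines in Figure 7)
    print(f"D = {D:.1e} m^2/s -> early-time uptake x{math.sqrt(D / D_ref):.2f} of the reference")
```

Halving D therefore slows the initial uptake by about 30%, and reducing it by a factor of ten slows it by roughly a factor of three, without changing the total absorbed volume.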
Up-Flow SPB Storage: Parameter Sensitivity
In all three cases in Table 2, one can anticipate initially fast inflows into both porous media, followed by a slowdown in the inflow (see Figure 8a,b). The graphs, however, reveal that the water uptake differs between the three cases studied. Initially, the water uptake is fastest in case Up1, during the period 0 < t < 2 s, as shown in Figure 8a. Then, during a period up to slightly more than 100 s, the water uptake is greatest in case Up2 (Figure 8b). After that, the water uptake in the system with almost equal radii (Up3) dominates. Hence, the storage set-up with two porosities can be designed for both a fast water uptake and a maximum volume of water uptake. The maximum water uptake is about 8, 20 and 46 L for durations of 10, 100 and 1000 s, respectively.
Figure 8. Water uptake for the three cases (Table 2). Graphs in the two panels correspond to two time intervals: (a) 0-1 min and (b) 0-1 h, respectively. Graph colors: the blue, red and green curves denote cases Up1, Up2 and Up3, respectively.
The results may also be interpreted in terms of the water column height in the respective annuli. The maximum height is about 1.4 m for the cases studied, as exemplified by cases Up1-3 in Figure 9. The figure also shows that there is a large difference in the final height between the inner and outer annuli for Up1-2, which is unsatisfactory. Hence, having several annuli with different radii may make the dynamic storage structure more efficient, as a complement to using porous media with a narrower spread of fiber sizes (case Up3).
Figure 9. Water column heights for the cases in Table 2. The graph panels represent two time intervals, 0-1 min and 0-1 h, respectively. The blue, red and green curves represent cases Up1, Up2 and Up3, respectively. Dashed and dotted lines represent the height of the water column after the merging of fronts.
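The conversion between absorbed volume and column height used in this interpretation is simply the geometry of a saturated annulus, V = φ π (r_outer² − r_inner²) h. The sketch below uses illustrative radii and porosity only; the actual case parameters are those listed in Table 2, which is not reproduced here.

```python
import math

def column_height(volume_m3, r_outer, r_inner, porosity):
    """Height of the saturated water column in a single annular porous channel."""
    return volume_m3 / (porosity * math.pi * (r_outer**2 - r_inner**2))

# Illustrative numbers only (not the Table 2 values): an annulus with porosity 0.7
# holding 46 L of water.
h = column_height(46e-3, r_outer=0.15, r_inner=0.08, porosity=0.7)
print(f"Column height ~ {h:.2f} m for the assumed annulus geometry")
```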
Concerning the other parameters in Table 2, it is obvious that the amount of water uptake will increase with an overall increase in δ, a and b (see also Figure 3). The effect of changing the porosity (set in Table 2 as 0.7) is, however, less obvious and is therefore illustrated in Figure 10, where blue denotes φ = 0.6, red φ = 0.7 and green φ = 0.8, and the different line types represent the cases Up1-3. Hence, using sparser porous media both speeds up the uptake and increases the maximum uptake of water, which makes it evident that the porosity should be as high as possible, while keeping the capillary pressure sufficiently high.
Modelling the SPB Storage Interception of Short-Duration Design Rainfalls
From the hydrological point of view, two essential properties of SPB storage are the rates of inflow and the total storage volume, which should be selected according to the design rainfall characteristics. Ultimately, SPB structures fully intercepting and storing the incoming rainwater would provide the highest attainable runoff control, but such a condition should not be viewed as a prerequisite in assessing the feasibility of SPB storage applications in stormwater control. One can envisage integrated storage designs, in which SPB storage would complement specific features of conventional storage (e.g., when applied as one of the layers of a green roof). Also note that LSMs, and even larger scale measures, are sometimes designed to partly bypass high flows, as noted, e.g., for green roofs by Shafique et al. [11]. With this in mind, the capacities of SPB storage inflow rates and volumes are theoretically assessed in this section for specific Swedish short-duration extreme rainfall data adopted from Olsson et al. [44].
Inflow
Two types of inflow into stormwater storage can be distinguished: direct rainfall over the footprint of the storage facility, and indirect inflow diverted from adjacent drainage contributing areas. The indirect inflow generally exceeds the direct inflow by a significant factor given by the ratio of contributing areas (A_indirect/A_direct).
Direct inflow in wet weather can be determined from the local rainfall regime [44], and for the purpose of this study the following rainfall data were adopted from Swedish precipitation records (see Table 3): (a) 60-min block rainfalls with a return period of 10 years, for the southwestern and northern regions of Sweden (i.e., the regions with the highest and lowest annual precipitation depths, respectively), and (b) 60-min block rainfalls with a 5-min high-intensity rainfall burst (starting at 27.5 min after rain onset), with a return period of 10 years, for the southwestern and northern regions.
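As a quick check of the volumes these design events supply, a uniform intensity applied over a given footprint or contributing area integrates to a cumulative volume of intensity × area × duration. The sketch below uses the uniform-intensity values listed in Table 3; the burst-event hyetographs are not reproduced here.

```python
# Cumulative rainwater volume delivered by the 60-min uniform-intensity design events.
DURATION_S = 60 * 60
EVENTS = {"Southwest (SW)": 0.68e-5, "North (N)": 0.53e-5}   # intensity [m/s], Table 3

for region, intensity in EVENTS.items():
    for area_m2 in (1.0, 5.0):                               # footprint or contributing area
        volume_litres = intensity * area_m2 * DURATION_S * 1000.0
        print(f"{region}, {area_m2:>4.1f} m^2: ~{volume_litres:5.1f} L over 60 min")
```

For a 1 m² footprint this gives roughly 19-25 L per event, well below the approximately 75 L that case Down3 can absorb in 15 min, which is consistent with the full capture reported below.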
Comparison of Rainfall Volumes to Storage Capacities: Down-Flow SPB Storage
The Down-flow SPB storage provides a direct capture of rainwater, and the comparison between the numerical solution (black solid line) in Figure 5 (i.e., Down3) and the rainwater inflows from Table 3 reveals that the rainwater volumes for the two events in the two regions are fully captured in the case of Down3 (see Figure 11). It should be noted that the Down-flow SPB can capture more rainwater than that supplied; thus, in the Down3 case, there is some safety margin available should the storage capacity deteriorate for unknown reasons. This point will be further discussed towards the end of this section. It can be inferred from the data in Figure 11 that the SPB storage concept proposed here has a great potential to capture and store all the rainwater associated with the short-duration extreme rainfall events studied.
Comparison of Cumulative Rainfall Volumes and Theoretical Storage Capacities: Up-Flow SPB Storage
The Up-flow SPB storage, which has a minor footprint, is not intended to capture rainwater directly, but to store stormwater from a certain contributing area draining into individual storage facilities. Hence, various contributing areas A should be considered for individual storage arrangements formed by sets of SPB cylinders. Assuming that stormwater can easily enter the confined storage area without any losses along the flow path, the Up-flow SPB (green curve in Figure 8, case Up3) is compared to eight design rainfall scenarios in Figure 12. The scenarios were defined as follows: two regional rainfalls (SW and N), two events (60-min rainfalls with or without a high-intensity burst), and two contributing areas, 1 and 5 m². The comparison indicates that one Up-flow SPB storage unit per unit area, Up3/m², has a greater storage capacity than needed to capture all runoff, but one storage unit per 5 m² is not enough to capture all runoff, regardless of the design event or the geographical region. Nevertheless, the Up-flow SPB storage can collect a great amount of water, and with a high density of storage units it would be feasible to capture all runoff. It should also be noted that, in terms of capacity, Up3 is merely the "best" case among the three cases studied, rather than an optimized solution.
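The unit-density argument can be checked against the numbers already quoted: the Up3 uptake of about 46 L (reported for roughly 1000 s and used here as a conservative stand-in for the one-hour capacity) versus the cumulative runoff from 1 m² and 5 m² contributing areas under the SW uniform-intensity event.

```python
# Back-of-the-envelope comparison of Up3 capacity with runoff from two contributing areas.
UPTAKE_UP3_L = 46.0          # L, quoted Up3 uptake after ~1000 s (conservative stand-in)
INTENSITY = 0.68e-5          # m/s, SW uniform-intensity event (Table 3)
DURATION_S = 3600

for area in (1.0, 5.0):      # contributing area [m^2]
    runoff_L = INTENSITY * area * DURATION_S * 1000.0
    verdict = "within capacity" if runoff_L <= UPTAKE_UP3_L else "exceeds capacity"
    print(f"A = {area:.0f} m^2: 60-min runoff ~{runoff_L:.0f} L vs Up3 ~{UPTAKE_UP3_L:.0f} L ({verdict})")
```

One unit per square metre therefore has capacity to spare, while one unit per 5 m² falls short, in line with the comparison shown in Figure 12.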
In both comparisons, the SPB storage units modelled (i.e., Down-flow and Up-flow) could capture and store more water than is available early during the rainfall event, which is physically impossible. For a more realistic comparison between the cumulative rainfall volume and the storage filling, the numerical model for the Up-flow case was modified to handle events with a constant rate of inflow. Using this new model and focusing on case Up3, which has the largest uptake of water during the first 60 min, it appears that at the beginning of the rainfall event the inflows into the Up-flow SPB storage follow the SW region runoff inflows for various contributing areas: 1, 2.5, 5 and 10 m² (see Figure 13). After some time, depending on the contributing area, the storage can no longer fully capture runoff from the respective areas. For the area of 2.5 m², this tipping point occurs just before the 60-min rainfall event ends, and hence the SPB unit can almost capture the runoff from a contributing area of 2.5 m² exposed to the SW region design event with a uniform intensity (see the black dashed lines in Figure 13). Therefore, this finding provides evidence that the presented Up-flow SPB concept has great potential for controlling runoff. A similar comparison for the Down-flow SPBs would yield the same conclusion. Notice that the comparison will hold also for multiple closely time-spaced rain showers.
Figure 12. Comparison between the cumulative rainfall volume and the Up-flow SPB (Up3) storage filling (black solid line) for various inflow scenarios: two climatic regions (N and SW), two shapes of rainfall hyetographs (block rainfalls without and with a high-intensity burst; blue and red lines, respectively), and two runoff contributing areas (1 and 5 m², respectively). The figure also demonstrates the storage capacity of the SPB.
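The tipping-point idea can be sketched with a toy calculation, not the paper's numerical model: while the filling is supply-limited, the stored volume grows as q·t, and the storage stops keeping up once the unrestricted uptake rate at that filling level drops below the supply rate q. The stand-in free-uptake curve below is only calibrated to the quoted ~46 L at 1000 s, so the tipping times differ somewhat from those visible in Figure 13.

```python
import math

# Stand-in free-uptake curve V_free(t) = c*sqrt(t) [L]; its uptake rate at a filling
# level V is c**2/(2*V). With supply-limited filling V = q*t, equating that rate to q
# gives the tipping time t_tip = c**2 / (2*q**2).
c = 46.0 / math.sqrt(1000.0)        # [L / s^0.5], calibrated to ~46 L at 1000 s (assumption)
intensity = 0.68e-5                 # [m/s], SW uniform-intensity event (Table 3)

for area in (1.0, 2.5, 5.0, 10.0):  # contributing area [m^2]
    q = intensity * area * 1000.0   # supply rate [L/s]
    t_tip = c**2 / (2.0 * q**2)     # [s]
    note = "after the 60-min event" if t_tip > 3600 else "within the event"
    print(f"A = {area:4.1f} m^2: tipping at ~{t_tip:6.0f} s ({note})")
```

Even with this crude stand-in, the qualitative picture matches the constant-inflow results: small contributing areas never reach the tipping point within the event, the 2.5 m² case tips around the end of the event, and larger areas tip progressively earlier.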
Discussion
The discussion focuses on the following aspects of SPB storage: the potential role of SPB storage in stormwater management, water uptake (speed and volume) and its modelling, filtration of stormwater by SPBs, and other practical considerations. Analysis of the SPB storage concept presented here places this type of storage into the category of 'best practice', serving to manage stormwater at the smallest scale, i.e., the lot or site level (defined earlier as LSMs). While the effect of a single measure on the catchment runoff is hardly measurable, the overall benefit of many such measures throughout the catchment is significant, as argued, e.g., in the case of green roofs [45]. Furthermore, these localized distributed measures are not required to intercept the incoming stormwater fully, because of their multiplicity and hierarchical placement in the upper reaches of the catchment. For example, another similar LSM, green roofs, on average retain 57-76% of incoming rainwater [45] and bypass the rest, to be intercepted by other downstream measures or conveyed to the receiving waters. Functionally, Down-flow SPB storage compares favorably to gray roofs (i.e., roof storage without vegetation), but the literature on gray roofs is rather limited. In terms of storage filling, the Down-flow SPB storage was shown here to be capable of fully intercepting design rainfall from 10-year cool temperate climate events. Furthermore, one can also envisage the SPBs as part of an integrated system containing various control measures, in which SPBs complement other measures. Compared to green roofs, gray roofs or SPB storage measures would reduce the load of the storage structure on the roof, eliminate the weight of the green roof vegetation substrate (typically 200 mm [45]) and avoid water quality issues connected with the elution of green roof media.
However, these benefits have to be weighed against the ecological benefits of green roofs, even though Francis and Jensen's review [17] points out that reports of green roof benefits in the form of ecosystem services are sometimes insufficiently documented. Up-flow SPBs have a limited footprint and could play a different role in local runoff management than down-flow SPBs, by forming vertical storage units, or, in the case of unsupported rods growing from the ground up, by temporarily retaining stormwater over infiltration soakaways or ground depressions, thereby increasing their effectiveness in reducing runoff.
Table 3. Direct inflow of rainwater into storage: direct rainwater inflow velocity (= intensity) [m/s] per unit area [m²].
Uniform intensity (also called the block rainfall), 60-min duration event, return period 1:10 years: Southwest (SW) 0.68 × 10⁻⁵ (no preceding rainfall); North (N) 0.53 × 10⁻⁵ (no preceding rainfall).
60-min duration event with a 5-min high-intensity burst, return period 1:10 years (the hyetograph was patterned after Berggren [46]).
The computational models presented in this study focused on the inflow to the storage facility, but the outflow of water during storage drawdown, at rates much lower than those of inflow, is also important and subject to ongoing investigation. There is a significant difference between the time scales of the inflow and outflow rates associated with design events: the former can be as short as 1 h [46] and the latter can be as long as 48 h, depending on the local rainfall regime [9]. Hence, the average outflow rate would be about 50 times smaller than the inflow rate of design events. Theoretically, water may be released, e.g., by changes in capillary action, by mechanisms applying pressure to stored water, and by evaporation of water into the atmosphere [47][48][49][50][51]. All three measures have different time-scales, and their design would need to meet the time limit on the full drawdown of the storage facility, typically specified as 48 h (such limits depend on the type of storage facility and the local rainfall regime). For Down-flow SPBs, only fine particles will be present in the rainwater entering the SPB storage unit, and filtration may be a minor issue, but airborne materials, dust, leaves, twigs, and bird droppings may interfere with the transport of water into the absorbing material. One practical option would be to apply some pre-treatment of rainwater, in the form of diversion or screening [52]. Even with this simple pre-treatment, it is likely that impurities collected on the SPB surface would form a biological filtration layer (usually called "a cake" or "schmutzdecke"), as described, e.g., by Li and Davis [53] and Tien [54], but the SPB material itself should not be affected by the "particles". However, the functioning of the storage will deteriorate when "particles" start to fill the cavities (which serve to increase the interface area) and to cover the surface of the absorbing material as the cake is formed. This implies that the flow rates into and out of the storage would be impaired, while the amount of water that can be stored would remain about the same. This type of filtration can be modelled on various spatial scales and represents a possible topic for future research, e.g., building on the work of Tien [54], Li and Davis [55], Frishfelds et al. [56], Lundström et al. [57], and Zhou et al. [58].
From the operational point of view, "the cake" needs to be removed at certain time intervals, which is commonly achieved in practice by periodic flushing. The Up-flow SPBs are also exposed to the entry of water carrying various materials (e.g., solids), chemicals (e.g., nutrients, dissolved organic matter), and fecal microorganisms attributed mostly to pets and wildlife droppings [59]. However, these SPBs should be less susceptible to clogging by larger particles, because of gravity forces acting on such particles in the direction opposite to that of flow (away from the filter membrane). In general, the storage structure needs to be designed to sustain its operation and functioning taking into account such influent quality. The first step for enhancing the storage structure operation would be, again, to include some kind of pre-treatment of the incoming stormwater (mostly settling, or screening). Even with pre-treatment, the SPB storage system needs to be able to handle solids of various sizes. Conceivably, fine particles could be allowed to enter into the vertical structures, in a process mimicking deep bed filtration, while larger particles would be filtered away, in the form of cake filtration, or deposition at the SPB structure foot. The advantage of this approach would be that the cake will have a relatively high permeability since it will consist of larger size particles/objects. When allowing smaller particles to enter the storage, the filtration process should not impair the flow rates to the same extent as in the case of cake filtration of such particles [57]. In addition, in this case, in-depth studies may be carried out for the dynamic storage itself, including porous media with various sizes of pore structures [55,60,61]. However, concerns about the porous material surface fouling will need to be addressed. While the treatment of stormwater by filtration in SPBs is a challenge for flow entry and storage filling, it offers a great opportunity for enhancing the stored stormwater quality. This dual functionality, providing both quantity and quality control benefits, would improve cost-benefit considerations of SPB storage facilities and their economic attractiveness. In addition to the storage outflow, several practical issues of operating SPB storage may need to be resolved before starting prototype field testing: rainwater or stormwater filtration upon storage entry through the water/SPB interface, formation of biofilms impeding inflow (fouling), environmental concerns, resistance to damage or deterioration of SPB due to freezing, and safety of designs incorporating SPB storage. For filtration, the conditions at the point of inflow into SPBs are of high importance, i.e., the SPB performance in receiving, storing and treating rainwater or stormwater carrying suspended solids and chemicals. Consequently, when looking for practical applications of SPB storage, one should start with relatively "clean" waters (e.g., rainwater on rooftops). The simulated test cases show that significant volumes of water can be stored relatively quickly in the SPB storage units analyzed and that the geometry of such units could be further optimized. The modelling experiments with relevant inflow rates resulting from short-duration design rainfalls furthermore indicate that the concepts presented could be used to contribute to the restoration of catchment hydrology and reduce runoff even during heavy rainfalls. 
Although the design rainfall events are generally developed for return periods ranging from 2 to 100 years [44], it is conceivable that the SPBs would be best applied within a lower range of return periods, perhaps 2-25 years, and in an integrated manner with other types of storage, existing or planned, in the catchment. For example, the overflows from SPBs could be diverted to conventional storage, and vice versa.
Conclusions
Potential roles of dynamic water storage in sponge-like porous bodies (SPBs) in lot-level stormwater management were proposed and theoretically examined. To this end, mathematical analysis and numerical modelling, based on first principles, demonstrated that SPBs could be designed to fully capture Swedish design rainfalls of 1-h duration and the average return period of 10 years. Furthermore, such analyses demonstrated that the use of absorbing and/or porous media on several scales in self-driven storage designs can be optimized with respect to the theoretical maximum storage capacity and inflow rates. Hence, the potential of the concepts discussed is much greater than what could be presented in this paper. At the same time, it is evident that additional theoretical and experimental work is needed to advance the Technology Readiness Level of the proposed theoretical SPB concept beyond the current level.
Low prevalence of symptomatic thyroid diseases and thyroid cancers in HIV-infected patients Thyroid diseases (TDs) have been widely associated with HIV infection. However, data about TDs prevalence and distribution are controversial, and few published studies are available. The aim of our study was to assess prevalence and risk factors of symptomatic thyroid disturbances, including thyroid cancers, in a large cohort of HIV-infected patients. A retrospective cohort study was performed at the Department of Infectious and Tropical Diseases of the University of Brescia, Italy, in the period 2005–2017. We identified all HIV-positive patients with a diagnosis of symptomatic TD in the electronic database of our Department (HIVeDB); we also operated a record-linkage between our data and the Health Protection Agency database (HPADB) of Brescia Province. Multivariate logistic regression analysis was used to determine risk factors associated with TDs onset; an incidence rate analysis was also performed. During the study period, 6343 HIV-infected patients have been followed at our Department; 123 received a diagnosis of symptomatic TD (1.94% of the entire cohort). In the TDs group, almost half of patients were females (n = 59, 48%), mean age was 47.15 years (SD: 11.56). At TD diagnosis, mean T CD4+ cell count was 491 cell/uL and most patients showed undetectable HIV-RNA (n = 117, 95.12%). Among them, 81 patients were found to have hypothyroidism (63 with Hashimoto’s thyroiditis), 21 hyperthyroidism (17 suffered from Graves’ disease), while 11 subjects were diagnosed with a primitive thyroid cancer. Papillary thyroid cancer was the most frequent histotype (n = 7, 63.63%), followed by medullary (n = 2, 18.18%) and follicular thyroid cancer (n = 1, 9.1%). Male gender was a protective factor for TDs development, especially for hypothyroidism (p < 0.001); age emerged as a variable associated with both hypothyroidism (p = 0.03) and thyroid cancer (p = 0.03), while CD4+ cell nadir <200 cell/mm3 was associated with symptomatic hyperthyroidism (p = 0.005). To conclude, symptomatic thyroid dysfunctions rate in well-treated HIV-infected patients is low. Age and gender are crucial elements in the onset of thyroid abnormalities, together with T CD4+ cell nadir. Interestingly, medullary thyroid cancer seems to be much more frequent in HIV-infected patients compared to the general population. gland, rarely resulting in thyroid dysfunction 6,20 . Antiretroviral therapy can also be implicated in TDs. Stavudine has been linked to lower FT4 levels and hypothyroidism 7,8,11,19 , whereas the role of other regimens, as integrase (INIs) and protease inhibitors (PIs), in impairing thyroid function has yet to be fully investigated. Other drugs, as phenytoin, carbamazepine and lithium, that are frequently used among PLWH for psychiatric disorders, can also modify thyroid hormone levels. Moreover, the improvement of host immune response induced by cART could trigger autoimmune disorders, leading to the appearance of aberrant thyroid autoantibodies. Thus, late after the introduction of antiretroviral therapy, approximately 2% of PLWH develops Graves' disease 14,21,22 . Lastly, Hepatitis C virus (HCV) coinfection is frequently observed in PLWH, due to similar transmission routes. HCV has been associated with different autoimmune conditions, including thyroid disorders, although the actual causative mechanisms have not been entirely determined 23,24 . 
Furthermore, interferon therapy can also promote autoimmune reactions in HCV-infected patients, especially targeting the thyroid gland 25 . To date, few published papers regarding thyroid dysfunctions in HIV-positive patients are available, and the prevalence and distribution of thyroid alterations in PLWH remain controversial, since previous studies are heterogeneous in samples, definitions and outcomes. Moreover, to our knowledge, no studies focusing only on symptomatic thyroid dysfunctions have been performed, and data about primitive thyroid tumors in HIV-infected patients are completely lacking. Thus, the aims of this study were to assess the prevalence of symptomatic thyroid dysfunctions, including primitive thyroid cancers, in a large cohort of HIV-infected patients in the modern cART era and to evaluate their laboratory and clinical characteristics. Finally, we also determined the risk factors associated with the occurrence of symptomatic thyroid abnormalities in this cohort.
Methods
Setting and study population. We performed a retrospective cohort study including all HIV-infected patients aged over 18 years and receiving cART in care at the Department of Infectious and Tropical Diseases, ASST Spedali Civili and University of Brescia, Italy. The period of follow-up ranged from January 2005 to December 2017. We considered as symptomatic TDs all thyroid disorders needing medical or surgical treatment and all primitive thyroid tumors. Symptomatic thyroid dysfunctions were subsequently classified into three groups: hypothyroidism, hyperthyroidism and thyroid cancers (TCs). We identified all HIV-positive patients with a symptomatic TD in the electronic database used for clinical management at our Department (HIVeDB); we also performed a record-linkage between HIV-infected patients followed by our Department and the Health Protection Agency database (HPADB) of Brescia Province, which tracks all services provided by the National Health Service. In this way, we captured all subjects with HIV infection who received in-hospital services and medical treatment for hypo-/hyperthyroidism or had a medical certificate with an ICD9-CM code for TD (including thyroid cancers). Uncertain TD cases were confirmed by phone calls to patients. Patients' demographic, epidemiological, clinical and laboratory (biochemical and viro-immunological) data were recorded in an electronic file. Information about coinfections, comorbidities and medication history (including cART) was also collected. In particular, the following parameters were recorded at TD diagnosis and at the last available visit performed at our Clinic: HIV-RNA (cut-off of 37 copies/ml) and lymphocyte T CD4+ and CD8+ cell counts. The outcomes of the research were TDs, considered both overall and as specified diagnosis-related groups (hypothyroidism, hyperthyroidism and thyroid tumors). They were chosen a priori, considering scientific evidence and the convenience of retrieving data.
Statistical analysis. Continuous variables were reported as mean with standard deviation (±SD), categorical variables as frequency with percentages. We used chi-squared tests and Fisher exact tests for comparison of dichotomous variables, and the t-test or Wilcoxon rank-sum test for comparison of group means (for normal and non-normal distributions, respectively). Odds ratios (ORs) were computed through multivariate logistic regression. Our model computed ORs for all outcomes, considering CD4+ nadir (as a dichotomous variable with cut-off 200 cells/mm³), gender, age at HIV test (per 10 years older) and HCV coinfection. All the OR estimates were reported together with their 95% confidence intervals (95% CIs) and the p value.
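The analyses were performed in Stata 14 (see below); a roughly equivalent sketch of the OR computation in Python/statsmodels is shown here. The variable names and the example data frame are hypothetical and only illustrate how ORs with 95% CIs for the stated covariates (CD4+ nadir < 200 cells/mm³, male gender, age per 10 years, HCV coinfection) would be obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data set: one row per patient (variable names are illustrative).
rng = np.random.default_rng(0)
n = 6343
df = pd.DataFrame({
    "td": rng.integers(0, 2, n),                # 1 = symptomatic thyroid disease
    "male": rng.integers(0, 2, n),
    "cd4_nadir_lt200": rng.integers(0, 2, n),   # CD4+ nadir < 200 cells/mm^3
    "age_per10": rng.normal(4.7, 1.2, n),       # age at HIV test divided by 10
    "hcv": rng.integers(0, 2, n),
})

# Multivariate logistic regression; exponentiated coefficients give the odds ratios.
fit = smf.logit("td ~ male + cd4_nadir_lt200 + age_per10 + hcv", data=df).fit(disp=False)
ors = np.exp(fit.params)
ci = np.exp(fit.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([ors.rename("OR"), ci, fit.pvalues.rename("p")], axis=1).round(3))
```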
We performed the incidence rate analysis for TDs (considered all together) using periods of three years, from 2005 to 2016. Analyses were stratified by gender. Subjects with less than 30 days of follow-up were excluded from the incidence analysis but kept in all the others. We excluded prevalent cases. For this analysis we did not include incident cases during 2017, in order to obtain homogeneous periods. Moreover, two sensitivity analyses, excluding patients with positive HCV antibodies and patients with an AIDS-defining diagnosis, were conducted for both the incidence and the prevalence of TDs. All statistical tests were two-sided at the level of significance p < 0.05. Analyses were performed using Stata software version 14.0 (StataCorp, College Station, TX, USA).
Ethics. This study was conducted according to the Declaration of Helsinki and the principles of Good Clinical Practice (GCP). As this study had a retrospective design and was based on routinely collected data, patients' informed consent was not required according to Italian law (Italian Guidelines for classification and conduction of observational studies, established by the Italian Drug Agency, "Agenzia Italiana del Farmaco - AIFA", on March 20, 2008). Moreover, for this study we used the general authorization of the Italian Guarantor for the use of retrospective demographic and clinical data, which were treated according to current laws.
Results
Characteristics of the study cohort. From 2005 to 2017, 6343 HIV-infected patients were followed at our Department with at least one visit; 71.68% were males, and the mean age at the end of follow-up was 47.30 ± 11.90 years. HCV/HIV coinfection was present in 39.08% of the cohort. A total of 123 patients showed a symptomatic TD in our HIVeDB during the study period (1.94% of the entire cohort, including prevalent and incident cases). All detected subjects were confirmed by the HPADB and no additional cases were identified using this source.
Characteristics of patients with TD (n = 123). All patients were known to be HIV-infected when they developed thyroid dysfunction; mean age at TD diagnosis was 46.6 years (SD: 11.5). Characteristics of the study population at baseline and at the end of follow-up are summarized in Table 1. In the TDs group, almost half of the patients were females (n = 59, 48%), 67 subjects (54.5%) had a CD4+ lymphocyte nadir <200/mm³ and 71.54% (n = 88) had experienced a previous AIDS-defining event. Mean time from HIV to TD diagnosis was 11.04 years (SD: 9.60); at TD diagnosis, mean T CD4+ cell count was 491 cells/µL (range 1-1556 cells/µL) and most patients had undetectable HIV-RNA (n = 117, 95.12%). The logistic regression analysis showed that TDs were negatively associated with male gender (OR = 0.52, 95% CI 0.35-0.77; p < 0.001) and positively associated with age at HIV test per 10-year increase (OR = 1.24, 95% CI 1.06-1.45; p = 0.005). At the end of follow-up, all patients with TD had serum thyroid stimulating hormone (TSH), serum triiodothyronine (T3) and serum thyroxine (T4) levels within the reference ranges (data not shown). Overall, 99 patients (80.49%) received levothyroxine, 17 (13.82%) were treated with methimazole, and 3 (2.44%) took iodine and selenium supplements.
Furthermore, 20 patients (16.26%) had a history of surgical thyroidectomy. Figure 1 shows the distribution of TDs in the cohort considered. Hashimoto's thyroiditis and Graves' disease were the most frequent pathologies of the thyroid gland (51.22% and 13.82% of all TDs, respectively). Demographic and clinical characteristics of patients with TDs (classified into hypothyroidism, hyperthyroidism and TCs) are shown in Table 2. In this analysis we did not include 10/123 patients who suffered from a euthyroid goiter.
Patients with hypothyroidism (n = 81). Hypothyroidism was the most prevalent TD in our cohort (81 out of 113, 71.68%); the majority of patients (n = 63, 77.78%) had Hashimoto's thyroiditis. Mean age was 45.80 ± 11.49 years; slightly more than half were males (n = 42, 51.85%), and overall 74.07% of subjects presented at least one comorbidity. Concerning HIV status, 22 patients (27.16%) had experienced a previous AIDS-defining event, with a mean time from HIV acquisition to TD diagnosis of 11.23 years (SD: 10.20). At TD diagnosis, all patients with hypothyroidism were on stable cART, the majority with a PI-containing regimen (n = 69, 85.19%). At the end of follow-up, all patients with hypothyroidism received a levothyroxine-based treatment. At the multivariate logistic regression analysis, hypothyroidism was negatively associated with male gender (OR = 0.41, 95% CI 0.26-0.65; p < 0.001), while age at HIV test per 10-year increase was borderline positively associated (OR = 1.21, 95% CI 1.00-1.46; p = 0.05).
Patients with hyperthyroidism (n = 21). Overall, 18.5% (21/113) of patients with TD showed symptomatic hyperthyroidism; the majority of them (n = 17, 80.95%) suffered from Graves' disease. Males represented 52.38% of this group (n = 11); mean age was 42.52 years (SD: 12.58). 76.19% of patients presented at least one comorbidity. Mean time from HIV to TD diagnosis was 12.06 years (SD: 8.14), and mean T CD4+ nadir was 105.74 cells/mm³ (SD: 169.30). At TD diagnosis, all patients with hyperthyroidism were receiving cART, 57.14% (n = 12) with a PI-based regimen. At the end of the study, all patients were treated for their thyroid dysfunction (17 with methimazole, 4 with levothyroxine), while 5 underwent thyroidectomy. At the multivariate logistic regression analysis, only T CD4+ cell nadir <200 cells/mm³ was independently associated with symptomatic hyperthyroidism (OR = 3.08, 95% CI 1.06-1.45; p = 0.005).
Patients with primitive thyroid cancer (n = 11). Eleven subjects were diagnosed with TC during the study period. Papillary thyroid cancer was the most frequent TC (n = 7, 63.63%), followed by medullary (n = 2, 18.18%) and follicular thyroid cancer (n = 1, 9.1%). The histotype was not recorded for one patient. In this group, mean age was 48.64 years (SD: 6.30) and patients were equally divided by gender (45.5% females). Almost all subjects (n = 10, 90.91%) presented at least one comorbidity (cardiovascular, diabetes or osteopenia), while 36.36% (n = 4) were coinfected with HCV. TC was diagnosed, on average, 8.69 years after HIV diagnosis (SD: 9.88), and all patients were on stable cART at that time (63.64% showing undetectable viremia, with a mean T CD4+ cell count of 527 cells/mm³). All subjects underwent thyroidectomy and all received replacement therapy with levothyroxine sodium.
One of the patients with papillary thyroid cancer had metastatic disease involving the lungs and bones; he eventually died after six years of chemotherapy and radiotherapy. TC was diagnosed at 47 years of age; the patient remained in good viro-immunological condition until death (at the last available exams: HIV-RNA not detectable, T CD4+ 510 cells/mm³). To date, the remaining patients diagnosed with TC are in good condition. At the multivariate logistic regression analysis, TC diagnosis was independently associated only with age at HIV test per 10-year increase (OR = 1.68, 95% CI 1.05-2.66; p = 0.03).
Incidence of thyroid diseases from 2005 to 2016 in our cohort. For the incidence analysis we limited the period to 2005-2016. We included 5337 patients with at least 2 available visits during this period, excluding prevalent cases (42251.71 person-years of follow-up). We observed 96 incident TD cases, with an overall crude incidence rate of 2.2 per 1000 person-years (95% CI 1.8-2.7); the crude incidence rate was 1.6 per 1000 person-years (95% CI 1.2-2.2) for males and 3.8 per 1000 person-years (95% CI 2.9-5.1) for females. Demographic characteristics and TD incidence rates for the entire follow-up considered (2005-2016) and for each 3-year period are shown in Table 3. As expected, in the sensitivity analyses excluding subjects with positive HCV antibodies and subjects with an AIDS-defining diagnosis, we found only a few differences in both the incidence and the prevalence of TDs, due to the small number of patients with symptomatic thyroid disorders (data not shown).
Discussion
Here we report that TDs, including cancers, are a rare event in PLWH, with an incidence in females twice that in males. Thus, in our cohort, only 1.94% of HIV-infected patients were diagnosed with thyroid disturbances during the follow-up period: 1.28% showed symptomatic hypothyroidism, 0.33% symptomatic hyperthyroidism, while thyroid cancer was documented in 0.17% of the considered population. Other studies reported a significantly higher prevalence of TDs in PLWH, ranging from 16% to 33.1% 11,26-29 , since they also included subclinical TDs in their analysis. If we compare only the prevalence of overt hypo- and hyperthyroidism in HIV-positive individuals, our data are similar to those found by other authors, although their studies were performed in much smaller cohorts 17,[30][31][32] . Moreover, in our cohort all patients were on stable cART at TD diagnosis, confirming that well-treated PLWH are not at an increased risk of thyroid dysfunction, as previously described [14][15][16][17] . However, it is not easy to make comparisons among distinct epidemiologic studies, due to the diversity of populations and end-points analyzed. Differences involve disease definition and severity (e.g., overt vs. subclinical dysfunction), selection criteria, and the different reference ranges and laboratory techniques used to measure serum thyroid hormone levels 33 , besides the influence of age, sex and environmental factors 34 . For these reasons, and considering the high prevalence of subclinical thyroid dysfunctions in HIV-infected subjects, we analyzed only symptomatic diseases. The two major autoimmune TDs, Hashimoto's thyroiditis and Graves' disease, were the most frequent pathologies of the thyroid gland in our cohort.
Although small studies have reported a prevalence of Hashimoto's thyroiditis of up to 2.6% in HIV-positive patients 35 , a large population-based study 36 evaluated the presence of autoimmune diseases among 5186 HIV-infected patients, finding only 1 case of Hashimoto's thyroiditis and 2 cases of Graves' disease. Thus, these TDs seem to be more frequent in our cohort, showing a prevalence comparable to international general population values 34 . Autoimmune TDs are much more common among women than men, with a female to male ratio ranging from 5:1 to 10:1 in the general population 34 . The biological explanation for this gender difference has not been entirely clarified, but X chromosome inactivation could significantly contribute to the high incidence of autoimmune TDs in females 37 ; moreover, it seems that maternal immune responses against fetal antigens can trigger autoimmune processes 38 . In our cohort of PLWH, male gender was confirmed as a protective factor for TD onset, especially for hypothyroidism. TD development is also influenced by ageing in the general population 39 . In our study, age at HIV test emerged as a variable associated with both hypothyroidism and TCs. Notably, we also focused on temporal trends, finding an increase in TD incidence for each 3-year period from 2005 to 2016, attributable to the ageing of our cohort. Thus, increasing age and non-AIDS-related comorbidities (which were highly represented in our cohort, with a peak prevalence of 90.91% for TCs), together with chronic inflammation, immunosenescence and polypharmacy, may play a role in TD occurrence and progression. However, immune restoration is also probably implicated in TD development. As a matter of fact, we found that a T CD4+ cell nadir <200 cells/mm³ was independently associated with symptomatic hyperthyroidism. In this group we observed the most remarkable CD4+ T cell increase from nadir to TD diagnosis (mean CD4+ increase: 437.54 ± 306.13), supporting the previously described hypothesis of Graves' disease as a manifestation of delayed immune reconstitution inflammatory syndrome (IRIS) 21,22 and suggesting that some CD4+ T lymphocyte subsets may affect its occurrence 40 . Finally, we did not find any correlation between HCV coinfection and thyroid abnormalities, although some authors have highlighted the role of both HBV and HCV in increasing the probability of thyroid dysfunction, especially hypothyroidism 20,29 . In the post-HAART period, an increased risk of non-AIDS-defining cancers, higher than in uninfected controls, has been reported in PLWH [41][42][43] . To our knowledge, our study is the first to evaluate TC prevalence and associated risk factors in a population of HIV-infected patients. According to the latest Italian general population tumor report 44 , the risk of developing TC is much higher in females than in males; furthermore, although TC incidence is increasing, the overall mortality remains extremely low. In our study, TC seems to be a very rare condition, with a lower prevalence than that reported for the general population (0.17 vs. 0.33%, respectively). In this population of PLWH, patients diagnosed with TC were equally divided by gender (although females represent less than 30% of the total cohort) and age was the only risk factor associated with thyroid tumor development.
Italian data 44 show that, from 2010 to 2014, 82% of TCs were papillary thyroid cancers, followed by follicular (7%), medullary (4%) and anaplastic (1%) histotypes. As expected, in our cohort of PLWH the most frequent TC was papillary thyroid cancer. Interestingly, medullary thyroid cancer was diagnosed in almost 20% of the patients considered in our study, a percentage significantly higher than in the Italian general population. Although medullary thyroid cancer development has been associated with mutations of the RET tyrosine kinase, no information is available about a possible interaction between HIV and thyroid tissue, as described for EBV 45 . However, due to our limited sample size, more research is needed to confirm our results and to evaluate the real distribution of TCs in a population of HIV-infected individuals. A recent study 46 , performed using data from >6 million US patients from the National Cancer Data Base (NCDB), showed that PLWH were more likely to be diagnosed with advanced-stage TC and to experience higher mortality after TC diagnosis compared to the general population. At TC diagnosis, all individuals considered in our cohort were on stable cART, the majority with a satisfactory viro-immunological profile; moreover, at the end of follow-up, TC was controlled in all patients except one, who died. Our study should be interpreted within its limitations, firstly the retrospective monocentric design, which does not permit a more comprehensive description of the dynamics of HIV and the immune system. Secondly, although every TD diagnosis was accurately verified, there could be some bias in estimating the prevalence of thyroid disturbances in the population considered. Finally, all participants are from the same geographical area and it is reasonable to assume a similar iodine intake; however, our results may not apply to regions with different iodine consumption. The strengths of this study include the large population size, the enrolment of an unselected group of HIV-infected patients and an accurate retrieval of data on TDs thanks to the use of two different sources, the clinic database and the HPADB of Brescia Province.
Conclusions
To conclude, our study is the first to analyze the prevalence of exclusively symptomatic TDs (with accurate retrieval of diagnoses) in a large cohort of HIV-infected patients, confirming age and gender as crucial elements in the onset of thyroid abnormalities and highlighting the role of T CD4+ cell nadir as an additional factor. Despite the high prevalence of thyroid dysfunction described in PLWH, symptomatic TDs remain a rare event. However, in the coming decades we will probably face a progressive increase of this comorbidity, considering that TDs (especially hypothyroidism) are dramatically more prevalent after the age of 40 47 and that PLWH are currently ageing. Periodic screening with measurement of TSH levels could be implemented, in order to provide rapid treatment and to minimize thyroid-related complications. Moreover, we found a markedly higher rate of medullary thyroid cancer in PLWH compared to the general population; however, the distribution of the different histotypes and their extensive characterization remain to be fully described in this category of patients.
Procedural Reading Comprehension with Attribute-Aware Context Flow
Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading comprehension by translating the text into a general formalism that represents processes as a sequence of transitions over entity attributes (e.g., location, temperature). Leveraging pre-trained language models, our model obtains entity-aware and attribute-aware representations of the text by joint prediction of entity attributes and their transitions. Our model dynamically obtains contextual encodings of the procedural text, exploiting information encoded about previous and current states to predict the transition of a certain attribute, which can be identified as a span of text or from a pre-defined set of classes. Moreover, our model achieves state-of-the-art results on two procedural reading comprehension datasets, namely PROPARA and NPN-COOKING.
Introduction
Procedural text describes how entities (e.g., fuel, engine) or their attributes (e.g., locations) change throughout a process (e.g., a scientific process or cooking recipe). Procedural reading comprehension is the task of answering questions about the underlying process in the text (Figure 1). Understanding procedural text requires inferring entity attributes and their dynamic transitions, which might only be implicitly mentioned in the text. For instance, in Figure 1, the creation of the mechanical energy in the alternator can be inferred from the second and third sentences. Full understanding of a procedural text requires capturing the full interplay between all components of a process, namely entities, their attributes and their dynamic transitions.
Figure 1. An example procedural text describing how a generator produces electricity ("An engine must be powered by gas or some fuel source. The fuel source will power an alternator. An alternator will convert mechanical energy into measurable electrical energy. The electrons will run through to the outlets of the generator."), together with sample PROPARA questions: 'What is the process input?', 'What is the process output?', 'What is the location of the entity?'.
Recent work on understanding procedural texts develops domain-specific models for tracking entities in scientific processes or cooking recipes. More recently, Gupta and Durrett [2019b] obtain general entity-aware representations of a procedural text leveraging pre-trained language models, and predict entity transitions from a set of pre-defined classes independent of entity attributes. Pre-defining the set of entity states limits the general applicability of the model, since entity attributes can be arbitrary spans of text. Moreover, entity attributes can be exploited for tracking entity state transitions. For example, in Figure 1, the location of fuel can be effectively inferred from the text as engine without the explicit mention of the movement transition in the first sentence. Moreover, the phrase converted in the third sentence gives rise to predicting two transition actions: the destruction of one type of energy and the creation of the other type. In this work, we introduce a general formalism to represent procedural text and develop an end-to-end neural procedural reading comprehension model that jointly identifies entity attributes and transitions, leveraging dynamic contextual encoding of the procedural text. The formalism represents entities, their attributes, and their transitions across time. Our model obtains attribute-aware representations of the procedural text leveraging a reading comprehension model that jointly identifies entity attributes as a span of text or from a pre-defined set of classes. Our model predicts state transitions given the entity-aware and attribute-aware encoding of the context up to a certain time step, to consistently capture the dynamic flow of contextual encoding through an LSTM model. Our experiments show that our method achieves state-of-the-art results across the various tasks introduced on the PROPARA dataset to track entity attributes and their transitions in scientific processes. Additionally, a simple variant of our model achieves state-of-the-art results on the NPN-COOKING dataset. Our contributions are three-fold: (a) We present a general formalism to model procedural text, which can be adapted to different domains. (b) We develop DYNAPRO, an end-to-end neural model that jointly and consistently predicts entity attributes and their state transitions, leveraging pre-trained language models. (c) We show that our model can be adapted to several procedural reading comprehension tasks using the entity-aware and attribute-aware representations, achieving state-of-the-art results on several diverse tasks.
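To make the formalism concrete, a minimal data-structure sketch is given below. The class names are hypothetical (they are not taken from the paper or the PROPARA release); the transition labels follow the commonly used create/destroy/move/none scheme, and the sketch only illustrates the idea of representing a process as per-step entity attributes plus a transition at each step, where an attribute value may be a text span or a class from a pre-defined set.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional

class Transition(Enum):
    NONE = "none"
    CREATE = "create"
    DESTROY = "destroy"
    MOVE = "move"

@dataclass
class EntityStep:
    """State of one entity after reading one sentence of the procedural text."""
    attributes: Dict[str, Optional[str]]   # e.g., {"location": "engine"}; None = unknown/absent
    transition: Transition = Transition.NONE

@dataclass
class EntityTrace:
    entity: str
    steps: List[EntityStep] = field(default_factory=list)

# Toy traces, hand-written from the generator example in Figure 1 (illustrative only):
# the location of fuel is inferred as "engine" without an explicit move, and the
# conversion sentence implies destroying mechanical energy while creating electrical energy.
fuel = EntityTrace("fuel", [EntityStep({"location": "engine"}, Transition.NONE)])
mech = EntityTrace("mechanical energy", [EntityStep({"location": "alternator"}, Transition.DESTROY)])
elec = EntityTrace("electrical energy", [EntityStep({"location": "alternator"}, Transition.CREATE)])

for trace in (fuel, mech, elec):
    print(trace.entity, [(s.transition.value, s.attributes) for s in trace.steps])
```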
The formalism represents entities, their attributes, and their transitions across time. Our model obtains an attribute-aware representation of the procedural text leveraging a reading comprehension model that jointly identifies entity attributes as a span of text or from a pre-defined set of classes. Our model predicts state transitions given the entity-aware and attribute-aware encoding of the context up to a certain time step, consistently capturing the dynamic flow of contextual encoding through an LSTM model. Our experiments show that our method achieves state-of-the-art results across various tasks introduced on the PROPARA dataset to track entity attributes and their transitions in scientific processes. Additionally, a simple variant of our model achieves state-of-the-art results on the NPN-COOKING dataset. Our contributions are three-fold: (a) We present a general formalism to model procedural text, which can be adapted to different domains. (b) We develop DYNAPRO, an end-to-end neural model that jointly and consistently predicts entity attributes and their state transitions, leveraging pretrained language models. (c) We show that our model can be adapted to several procedural reading comprehension tasks using the entity-aware and attribute-aware representations, achieving state-of-the-art results on several diverse tasks. Related Work Most previous work in reading comprehension [Rajpurkar et al., 2016] focuses on identifying a span of text that answers a given question about a static paragraph. This paper focuses on procedural reading comprehension, which inquires about how the states of entities change over time. Similar to us, several previous works focus on understanding temporal text in multiple domains. Cooking recipes describe instructions on how ingredients change. bAbI [Weston et al., 2015] is a collection of datasets focusing on understanding narratives and stories. Math word problems [Kushman et al., 2014, Hosseini et al., 2017, Amini et al., 2019, Koncel-Kedziorski et al., 2016] describe how the state of entities changes throughout some mathematical procedure. Narrative question answering [Kočiskỳ et al., 2018] requires reasoning about the state of a story over time. The PROPARA dataset is a collection of procedural texts that describe how entities change throughout scientific processes over time, and inquires about several aspects of the process such as the entity attributes or state transitions. Several models (e.g., EntNet [Henaff et al., 2017], QRN [Seo et al., 2017], MemNet [Weston et al., 2014]) have also been introduced to track entities in narratives. The closest work to ours is the line of work focusing on the PROPARA and NPN-COOKING datasets. One line of work uses an attention-based neural network to find transitions in ingredients. Pro-Local and Pro-Global [Mishra et al., 2018] first identify locations of entities using an entity recognition approach and use manual rules or the global structure of the procedural text to consistently track entities. The Pro-Struct model leverages manually defined and knowledge-base-driven commonsense constraints to avoid nonsensical predictions (e.g., entities such as trees do not move to different locations). KG-MRC [Das et al., 2019] maintains a knowledge graph of entities over time and identifies entity states by predicting the location span with respect to each entity while utilizing a reading comprehension model. NCET (Gupta and Durrett [2019a]) introduces a neural conditional random field model to maintain the consistency of state predictions.
Most recently, ET BERT [Gupta and Durrett, 2019b] uses transformers to construct an entity-aware representation of each entity and to predict state transitions from a set of predefined classes. In this paper, we integrate all of these previous observations and develop a model that jointly identifies entities, attributes, and transitions over time. Unlike previous work that is designed to address either attributes or transitions, our model benefits from the clues that are implicitly and explicitly mentioned for both entity attributes and transitions. Leveraging both aspects of procedural reading comprehension leads us to a general and adaptive definition and model for this task that achieves state-of-the-art results on several tasks. Procedural Text Representation Procedural text is a sequence of sentences describing how entities and their attributes change throughout a process. We introduce a general formalism that represents a procedural text as a triple (E, A, T), where E is the list of entities participating in the process, A is the list of entity attributes, and T is the list of transitions. Entities are the main elements participating in the process. For example, in the scientific processes described in PROPARA, entities include elements such as energy, fuel, etc. In the cooking recipe domain, the entities are ingredients such as milk, flour, etc. The entities can be given as part of the task, as in the PROPARA and cooking domains, or they can be inferred from the context (e.g., in math word problems). Attributes are entity properties that can change over time. We model attributes as functions Attribute(e) = val that assign a value val to an attribute of the entity e. The entity state at each time is derived by combining all the attribute values of that entity. Attribute values can either be spans of text or be drawn from a predefined set of classes. For example, in PROPARA an important attribute of an entity is its location, which can be a span of text. The NPN-COOKING dataset introduces several attributes (such as shape and cookedness) for each ingredient. Example attributes of the entities in PROPARA, such as existence and location, are modeled in this way. Model We introduce DYNAPRO, an end-to-end neural architecture that jointly predicts entity attributes and their transitions. Figure 2 depicts an overview of our model. DYNAPRO first obtains the representation of the procedural text corresponding to an entity at each time step (Section 4.1). It then identifies entity attributes for the current and previous time steps (Section 4.2) and uses them to develop an attribute-aware representation of the procedural context (Section 4.3). Finally, DYNAPRO uses the entity-aware and attribute-aware representations to predict the transitions that happen at that time step (Section 4.4). Entity-aware representation Given a procedural text S 0 . . . S k . . . S T and an entity e, DYNAPRO encodes the procedural context X k at each time step k and obtains the entity-aware representation vector R k (e). The procedural context is formed by concatenating the entity name, the query, and a fragment of the procedural text. The entity name and the query are included in the procedural context to capture the entity-aware representation of the context. Since entity attributes change throughout the process, we form the context at each step k by truncating the procedural text up to the kth sentence. More formally, the procedural context is defined as: where [S 0 . . .
S k ] is the fragment of the procedural text up to the k th sentence, Q e is the entity-aware query (e.g., "Where is e?"), [C i ] includes tokens that are reserved for attribute value classes (e.g., nowhere, unknown), and [cls] and [sep] are special tokens to capture sentence representations and separators. DYNAPRO then uses a pre-trained language model to encode the procedural context X k (e) and returns the entity-aware representation R k (e) = BERT (X k (e)). Hereinafter, for the ease of notation we will remove the argument e from the equations. Attribute Identification DYNAPRO identifies attribute values for each entity from the entity-aware representation R k (e) by jointly predicting attribute values from a pre-defined set of classes or extracts them as a text span. Class Prediction Some attribute values can be identified from a set of pre-defined classes. For instance existence attribute of an entity can be identified from {nowhere, unknown, spanoftext}. Our model predicts the probability distribution P class k over different classes of attribute values given the entity-aware representation R k . where R k is the entity-aware representation, g is a non-linear function, f is a linear function and θ 1 are learnable parameters. Span Prediction Defining all attribute values apriori limits the general applicability of procedural text understanding. Some attribute values are only mentioned within spanoftext. For example, the location of an entity may be mentioned in the text, but not as a set of pre-defined classes. For span prediction, we follow the standard procedure of phrase extraction in reading comprehension [Seo et al., 2016] that predicts two probability distributions over start and end tokens of the span. where g is a non-linear function, f is a linear function and θ 2 and θ 3 are the learnable parameters used to find the probability distributions of start and end tokens of the span. In order to capture the transitions of entity attributes, our model captures attributes for time steps k − 1 and k given a procedural context X k . More specifically, we use equations 3 and 4 to compute the probability distributions P class k−1 , P span k−1 , P class k and P span k for both time steps k and k − 1. Attribute-aware Representation DYNAPRO obtains attribute-aware representation R a k of the context to encode entity attributes and their transitions at each time step k using the predicted distributions P span k and P class k for each entity e. The intuition is to assign higher probabilities to the tokens corresponding to the attribute value of the entity at time step k. Where class ∈ {nowhere, unknown, span} are the predefined classification of attributes, P class k and P span k denote the probability distribution of attribute values over predefined classes and the span of text respectively, and are calculated using equations 3 and 4. m class is a vector that masks out the input tokens that do not correspond with a specific class. We model the flow of the context by concatenating attribute-aware representations for time step k and k − 1 as, Transition classification DYNAPRO predicts attribute transitions from entity-aware and attribute-aware representations. In order to make smooth transition predictions and avoid redundant transitions we include a Bi-LSTM layer before the classification of the transition. where h is the hidden vector of sequential layer, θ 4 is the learnable parameter and R seq k is the output of the sequential layer. 
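To make the attribute identification step more concrete, the following is a minimal sketch of the class and span prediction heads described above (equations 3 and 4), assuming PyTorch and a 768-dimensional BERT-base encoder output; the specific layer shapes, the choice of non-linearity, and the use of the first token for class prediction are illustrative assumptions rather than the authors' released implementation.

```python
# Minimal sketch (assumed layer shapes): class and span heads over the
# token-level representation R_k, following the description of equations 3-4.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeHeads(nn.Module):
    def __init__(self, hidden=768, n_classes=3):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())  # non-linear g
        self.f_class = nn.Linear(hidden, n_classes)   # f(.; theta_1): class scores
        self.f_start = nn.Linear(hidden, 1)           # f(.; theta_2): span start scores
        self.f_end = nn.Linear(hidden, 1)             # f(.; theta_3): span end scores

    def forward(self, R_k):                           # R_k: (num_tokens, hidden)
        h = self.g(R_k)
        # P^class_k taken from the first ([cls]-like) token -- an assumption
        p_class = F.softmax(self.f_class(h[0]), dim=-1)
        # P^span_k: distributions over start and end token positions
        p_start = F.softmax(self.f_start(h).squeeze(-1), dim=-1)
        p_end = F.softmax(self.f_end(h).squeeze(-1), dim=-1)
        return p_class, p_start, p_end

heads = AttributeHeads()
p_class, p_start, p_end = heads(torch.randn(40, 768))
```

As described above, such heads would be applied twice for each context X k, once for time step k−1 and once for time step k, so that the two predicted distributions can then be combined into the attribute-aware representation.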
Inference and Training Training Our model is trained end-to-end by optimizing the loss function below: loss total = (loss span + loss class ) k−1 + (loss span + loss class ) k + loss transition k Each loss term is defined as a cross-entropy loss. (loss span , loss class ) k and loss transition k are the losses of the attribute prediction and transition prediction modules at time step k, respectively. Inference At each time step k, the attributes A k and transitions T k are predicted given P span k , P class k , and P transition k . The final output of the model consists of two sets of predictions, the attributes A 0...k and the transitions T 0...k , which are combined to track entities throughout a process given a task-specific objective (see the implementation details). Datasets We evaluate our model on the PROPARA dataset, which has a vocabulary size of 2.5k. This dataset contains over 400 manually written paragraphs of scientific process descriptions. Each paragraph includes an average of 4.17 entities and 6 sentences. The entities are extracted by experts and the transitions are annotated by crowd-workers. We additionally evaluate our model on the NPN-COOKING dataset. This corpus contains 65k cooking recipes. Each recipe consists of ingredients tracked during the process. Training samples are heuristically annotated by string matching and dev/test samples are annotated by crowd-workers. We randomly sample from the training recipes that contain ingredients whose location attribute changes. Tasks and metrics We evaluate DYNAPRO on three tasks in PROPARA and one task in NPN-COOKING. Sentence-level predictions This task, introduced by Mishra et al. [2018], considers questions about the procedural text: Cat-1 asks whether a specific entity is Created/Destroyed/Moved, Cat-2 asks for the time step at which the entity was Created/Destroyed/Moved, and Cat-3 asks about the location at which the entity is Created/Moved/Destroyed. The evaluation metric calculates the score of all transitions in each category and reports the micro and macro averages of the scores over the three categories. Document-level predictions Action dependencies This task was recently introduced by Mishra et al. [2019] to check whether the actions predicted by a model have some role to play in the overall dynamics of the procedural paragraph. The final metric reported for this task is the precision, recall, and F1 scores of the dependency links averaged over all paragraphs. Location prediction in Recipes The task is to identify the location of different entities in the cooking domain. In this domain, the list of attributes is fixed. We evaluate by measuring the change in location and compute F1 and accuracy of the attribute prediction. Implementation Details We use the BERT-base implementation from the huggingface library [Wolf et al., 2019]. We use the cross-entropy loss function. The learning rate for training is 3e−5 and the training batch size is 8. The hidden size of the sequential layer is set to 1000 and 200 for class prediction and transition prediction, respectively. We use the predicted A k−1 to initialize the attribute at time step 0, and at any other time step we use the A k predictions to find the value of the attribute at time step k. In the sentence-level evaluation task, consistency is not required. The inference phase for this task only uses the attribute predictions. For the document-level predictions, we construct the final predictions by favoring the transition predictions.
In case of an inconsistency, where there is no valid attribute prediction to support the transition, we refer to the attribute value to deterministically infer the transition. To adapt the results of DYNAPRO to identifying action dependencies, we postprocess the results using heuristics similar to those described in the original task. To adapt DYNAPRO to the NPN-COOKING dataset, we use a 243-way classification to predict attributes, because the attributes are known a priori. Table 1 and Table 2 compare DYNAPRO with previous work (detailed in Section 2) on the PROPARA and NPN-COOKING tasks. As shown in the tables, DYNAPRO outperforms the state-of-the-art models in most of the evaluations. Document-level task We observe the most significant gain (3 absolute points in F1) in the document-level tasks, indicating the model's ability to understand the procedural text globally through joint prediction of entity attributes and transitions over time. Overall, DYNAPRO predicts transitions with higher confidence, and hence yields high precision in most document-level tasks. Sentence-level task DYNAPRO outperforms the state-of-the-art models on the Ma-Avg and Mi-Avg metrics when comparing the full predictions, and gives comparable results to previous work on change and time step predictions. Note that ET BERT [Gupta and Durrett, 2019b] only predicts actions (Create, Destroy, Move) but fails to predict location attributes as spans. DYNAPRO obtains good performance on Cat-1 and Cat-2 predictions while also learning to answer questions with more complex structure. We speculate that our lower numbers in Cat-1 and Cat-2 are due to DYNAPRO's highly confident decisions, which lead to high precision but a lower prediction rate, noting that Cat-1 and Cat-2 evaluate accuracy. Ablation Studies and Analyses In order to better understand the impact of DYNAPRO's components, we evaluate different variants of DYNAPRO on the document-level task of the PROPARA dataset: • A variant that uses the entity-aware representation R k instead of the attribute-aware representation R a k . • Full procedural input, which uses the full text of the procedure instead of the truncated text X k at time step k. Table 3 shows that removing each component from DYNAPRO hurts the performance, indicating that the joint prediction of attribute spans, classes, and transitions is important in procedural reading comprehension. Moreover, the table shows the effect of attribute-aware representations that incorporate the flow of context by predicting attributes of two consecutive time steps. Finally, the table shows the effect of procedural context modeling by truncating sentences up to a certain time step rather than considering the full document at each time step. Note that the document-level evaluation in PROPARA requires spans of text to be identified; therefore, span prediction cannot be ablated from DYNAPRO.
# | Sentence | Gold | Prediction
1.1 | Blood enters the right side of your heart. | heart | right side of your heart
1.2 | Blood travels to the lungs. | lungs | lungs
1.3 | Carbon dioxide is removed from the blood | lungs | lungs
1.4 | Blood returns to left side of your heart | heart | left side of your heart
2.1 | Blood travels to the lungs | blood | blood
2.2 | Carbon dioxide is removed from the blood. | - | ?
3.1 | Fuel converts to energy when air and petrol mix. | - | air and petrol
3.2 | The car engine burns the mix of air and petrol. | engine | air and petrol
3.3 | Hot gas from the burning pushes the pistons. | piston | air and petrol
3.4 | The resulting energy powers the crankshaft. | |
Error Analysis Qualitative Analyses Table 4 shows the three types of common mistakes in the final predictions. In the first example, DYNAPRO successfully tracks the blood entity while it circulates in the body, yet there is a mismatch in which portion of the text it chooses as the span. In the second example, the model correctly predicts the location of carbon dioxide as blood, but there is not enough external knowledge provided for the model to predict that this entity gets destroyed. In the third example, the model mistakenly predicts air and petrol as a container for the energy, and since the changes explicitly happen to the container, they are not propagated to the entity. Inconsistent Transitions We categorize possible inconsistencies in the transition predictions into three categories (the percentages show how often each inconsistency was observed in the inference step): • Creation (2.0%): when the supporting attribute is predicted as non-existent or the previous attribute shows that the entity already exists. • Move (1.5%): when the predicted attribute has not changed from the previous prediction or it refers to a non-existent case. • Destroy (1.0%): when the predicted attribute for the last time step is non-existent. Conclusion We introduce an end-to-end model that benefits from both entity-aware and attribute-aware representations to jointly predict attribute values and their transitions related to an entity. We present a general formalism to model procedural texts and introduce a model to translate procedural text into that formalism. We show that entity-aware and temporally aware construction of the input helps to achieve better entity-aware and attribute-aware representations of the procedural context. Finally, we show how our model can make inferences about state transitions by tracking transitions in attribute values. Our model achieves state-of-the-art results on various tasks over the PROPARA dataset and the NPN-COOKING dataset. Future work involves extending our method to automatically identify entities and their attribute types and to adapt it to other domains.
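As a concrete illustration of the document-level inference described in the implementation details, where transition predictions are favored and the attribute values are used as a deterministic fallback when no valid attribute supports the transition, here is a minimal sketch; the rule set and the attribute encoding (with "nowhere" for a non-existent entity) are simplified assumptions, not the exact heuristics of the paper.

```python
# Minimal sketch (simplified rules, not the paper's exact heuristics):
# combine per-step attribute predictions with a predicted transition.
def resolve_step(prev_attr, curr_attr, predicted_transition):
    """prev_attr/curr_attr are attribute values such as 'nowhere', 'unknown',
    or a location span; predicted_transition is one of
    {'CREATE', 'DESTROY', 'MOVE', 'NONE'}."""
    # Deterministic inference from attributes, used as the fallback.
    if prev_attr == "nowhere" and curr_attr != "nowhere":
        fallback = "CREATE"
    elif prev_attr != "nowhere" and curr_attr == "nowhere":
        fallback = "DESTROY"
    elif prev_attr != curr_attr:
        fallback = "MOVE"
    else:
        fallback = "NONE"

    # Favor the predicted transition when the attribute values can support it;
    # otherwise fall back to the attribute-derived transition.
    supported = {
        "CREATE": prev_attr == "nowhere",
        "DESTROY": curr_attr == "nowhere",
        "MOVE": prev_attr != curr_attr and "nowhere" not in (prev_attr, curr_attr),
        "NONE": True,
    }
    return predicted_transition if supported.get(predicted_transition, False) else fallback

print(resolve_step("nowhere", "engine", "MOVE"))   # falls back to CREATE
```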
Spectroscopic Observations of High-speed Downflows in a C1.7 Solar Flare In this paper, we analyze the high-resolution UV spectra for a C1.7 solar flare (SOL2017-09-09T06:51) observed by the \textit{Interface Region Imaging Spectrograph} (\textit{IRIS}). {We focus on the spectroscopic observations at the locations where the cool lines of \ion{Si}{4} 1402.8 \AA\ ($\sim$10$^{4.8}$ K) and \ion{C}{2} 1334.5/1335.7 \AA\ ($\sim$10$^{4.4}$ K) reveal significant redshifts with Doppler velocities up to $\sim$150 km s$^{-1}$.} These redshifts appear in the rise phase of the flare, then increase rapidly, reach the maximum in a few minutes, and proceed into the decay phase. Combining the images from \textit{IRIS} and Atmospheric Imaging Assembly (AIA) on board the {\em Solar Dynamics Observatory} ({\em SDO}), we propose that the redshifts in the cool lines are caused by the downflows in the transition region and upper chromospheric layers, which likely result from a magnetic reconnection leading to the flare. In addition, the cool \ion{Si}{4} and \ion{C}{2} lines show gentle redshifts (a few tens of km s$^{-1}$) at some other locations, which manifest some distinct features from the above locations. This is supposed to originate from a different physical process. Keywords: line profiles -magnetic reconnection -Sun: chromosphere -Sun: flares -Sun: UV radiation 1. INTRODUCTION Solar flares are one of the most energetic events on the Sun (e.g., Fletcher et al. 2011), which are generally believed to be associated with magnetic reconnection (Kopp & Pneuman 1976;Masuda et al. 1994;Lin & Forbes 2000). In the standard flare model, magnetic reconnection releases massive energy in the corona, which is pre-stored in a non-potential magnetic structure. The released energy is subsequently transported downward to the lower atmosphere through thermal conduction and/or nonthermal particles. Hence the chromospheric plasma is heated and emits strong radiation that forms flare ribbons. Due to an enhanced thermal pressure, the heated chromospheric material moves upward to the corona and fills the flare loops that are visible in EUV and soft X-ray bands. This process is known as chromospheric evaporation (Brosius & Phillips 2004;Milligan et al. 2006a,b;Doschek et al. 2013). In general, the evaporation is also accompanied by a compression of chromospheric plasma based on momentum balance (Canfield et al. 1990), which is referred to as chromospheric condensation (Fisher et al. 1985;Milligan et al. 2006a;Zhang et al. 2016). There are several ways to investigate the energetics and dynamics of flares, of which the spectroscopic diagnostics are a classical and important one. Based on the fact that each spectral line is formed in a specific atmospheric layer, we could obtain various information on different layers of the atmosphere by using different lines. For example, the cool lines of Si IV and C II that are formed at ∼10 4.8 K and ∼10 4.4 K respectively could be used to diagnose the transition region (TR) and upper chromosphere (e.g., McIntosh & De Pontieu 2009;Tian et al. 2014a;Li et al. 2019). The hot Fe XXI line with a formation temperature of ∼10 7.1 K could reveal the physical properties of the hot corona (Tian et al. 2014b;Battaglia et al. 2015;Graham & Cauzzi 2015;Li et al. 2015;Polito et al. 2015Polito et al. , 2016Young et al. 2015;Dudík et al. 2016;Brosius & Inglis 2018). In particular, the Doppler velocity can be derived from line profiles, which is a good indicator of the plasma flows during a flare. 
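As a brief illustration of how a Doppler velocity follows from a measured line shift, the sketch below converts an observed line-centroid wavelength into a velocity; the Si IV rest wavelength is the value quoted in this paper, and the observed centroid is only a placeholder.

```python
# Minimal sketch: Doppler velocity from an observed line-centroid shift.
C_KMS = 2.998e5          # speed of light, km/s

def doppler_velocity(lambda_obs, lambda_ref):
    """Positive values indicate redshift (a downflow for optically thin lines)."""
    return C_KMS * (lambda_obs - lambda_ref) / lambda_ref

# Si IV 1402.8 A with a centroid shifted to 1403.5 A (placeholder numbers)
print(round(doppler_velocity(1403.5, 1402.8), 1))   # roughly 150 km/s
```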
For optically thin lines, blueshifts are generally due to plasma upflows whereas redshifts imply plasma downflows. There are a large number of studies on the blueshifts/redshifts which result from chromospheric evaporation/condensation in solar flares. For instance, blueshifts with velocities ranging from ∼50 to ∼300 km s −1 were typically observed in the spectra of highly ionized Fe atoms (e.g., Fe XVI to Fe XXIV) from instruments such as Hinode/EIS and the Interface Region Imaging Spectrograph (IRIS) (e.g., Brosius & Phillips 2004;Liu et al. 2006;Chen & Ding 2010;Zhang et al. 2016;Li et al. 2017a). Redshifts with velocities of ∼20-80 km s −1 from the relatively cool lines of He II, O III, O V, and Fe XII were also detected owing to plasma condensation (e.g., Wuelser et al. 1994;Ding et al. 1995;Czaykowska et al. 1999;Brosius 2003;Kamio et al. 2005;Teriaca et al. 2006;Del Zanna 2008;Milligan & Dennis 2009). Note that these blueshifts/redshifts are located on the flare ribbons. It is worth mentioning that there are few observations that have reported the rapid redshifts (say, >100 km s −1 ) in the cool lines on the flare ribbons. Instead, some rapid blueshifts or redshifts were observed on the loops, which could be interpreted as magnetic reconnection outflows. Sadykov et al. (2015) reported a strong jet-like flow with a redshift velocity of ∼100 km s −1 in the chromospheric C II and Mg II lines just prior to a flare, which perhaps comes from the magnetic reconnection region. Reeves et al. (2015) detected intermittent fast downflows (∼200 km s −1 ) in the Si IV line as evidence for magnetic reconnection between the prominence magnetic fields and the overlying coronal fields. Moreover, bidirectional outflows with velocities of tens to hundreds of km s −1 were observed in the Si IV line in terms of a tether-cutting (TC) reconnection (Chen et al. 2016) or a separator reconnection (Li et al. 2017b). In spite of these, spectroscopic observations of magnetic reconnection are still lacking and the detailed physical mechanisms of fast flows appearing in some events are not fully understood yet. Fortunately, IRIS (De Pontieu et al. 2014) provides high-resolution slit-jaw images (SJIs) as well as spectra for a large number of flares. The sub-arcsecond observations from IRIS reveal fine structures of flares and illustrate distinct features of plasma flows. In this work, we detect rapid redshifts with a velocity of ∼150 km s −1 in the cool Si IV and C II lines at some locations during a C1.7 flare. We also observe gentle redshifts of tens of km s −1 in these cool lines at some other locations. Such distinct redshifts are supposed to originate from different processes. In the following, we present the observations in Section 2 and data reduction in Section 3. Then we show the results from IRIS spectral lines in detail in Section 4. In Sections 5 and 6, we give the discussions and conclusions, respectively. (h)). In this work, we select these four locations to study the typical features of the flare region, which can be divided into two groups, i.e., one for locations A and B and the other for locations C and D. These two sets of locations exhibit some distinct spectral features on the moment maps as well as in the line profiles as described in Section 4. The AIA on board SDO obtained UV and EUV images for this flare with a spatial resolution of 1. ′′ 2 (or 0. ′′ 6 pixel −1 ) and temporal resolutions of 24 s and 12 s, respectively. The UV and EUV bands are sensitive to plasmas at different temperatures. 
For example, the 131Å, 193Å, and 171 A bands are for coronal plasmas with their responses peaking at ∼10 MK, ∼1.6 MK, and ∼0.6 MK, respectively, while the 304Å and 1600Å bands are for chromospheric and TR plasmas with formation temperatures of ∼0.05 MK and ∼0.1 MK, respectively. Note that here, we use the AIA 1700Å images and SJIs 2832Å to make a co-alignment, both of which show clear sunspot features. The uncertainty of the co-alignment is estimated to be ∼1 ′′ . We mainly use the Si IV 1402.8Å line that has a formation temperature of ∼10 4.8 K in this work. The C II lines at 1334.5Å and 1335.7Å (∼10 4.4 K), the Mg II k line at 2796.4Å (∼10 4.0 K), and the Fe XXI line at 1354.1Å (∼10 7.1 K) are referred to as well. For the Si IV line, we make a moment analysis to derive the total intensity (the zeroth moment) and Doppler velocity (the first moment) for the following two reasons. (1) The Si IV line profiles are very complicated in this flare, i.e., showing two or even more emission peaks (see Figure 3). (2) The ratio of the two Si IV lines at 1393.8Å and 1402.8Å somewhat deviates from 2 (ranging from ∼1.4-2.1 at locations A-D) during the flare, indicating that the Si IV line suffers from an opacity effect (e.g., Peter et al. 2014;Yan et al. 2015;Kerr et al. 2019). For the optically thin Fe XXI line that is mainly contaminated by the C I 1354.3 A line, we use a double Gaussian function to fit the two lines. Note that we also apply a triplet Gaussian fitting to these two lines when the C I line exhibits a red asymmetry during the flare. In order to calculate the Doppler velocity, here we use some photospheric or chromospheric lines over a relatively quiet region before the flare onset to determine the reference wavelength. (c)). Note that the two C II resonance lines are blended due to a large redshift. By contrast, at locations C and D, the Si IV line profiles show two components or a red asymmetry with the Doppler velocity lower than 100 km s −1 (Figures 3(c) and (d)). In particular, the C II and Mg II lines exhibit a redshifted line core at these two locations (Figures 4(b) and (d)). Moreover, weak but blueshifted emission is detected in the hot Fe XXI line at location C (Figures 4(f) and (h)). However, at location A, the Fe XXI line emission is relatively stronger as well as show evident redshifts with a velocity of a few tens of km s −1 (Figures 4(e) and (g)). Figure 5(a) gives the space-time diagram of the Si IV line intensity. One can see that locations A-D show significant brightenings that exhibit an apparent motion towards the north over time. From the Doppler velocity map of the Si IV line in Figure 5(b), it is seen that all these locations display evident redshift features that correspond to the brightenings (see the overplotted contours). However, the redshift velocities at locations A and B can be as high as 150 km s −1 , which seem to be uncommon in observations. By contrast, the redshift velocities at locations C and D are only a few tens of km s −1 , which are often observed and reported in previous studies. Note that there appear some notable blueshifts at the bottom part, which are supposed to be caused by the flare-accompanied jet eruptions. As shown in the top panel of Figure 6, at location A, the Si IV line intensity starts to rise at ∼06:54:20 UT. It shows two peaks with the first one (at ∼06:55:17 UT) prior to the main peak of the time derivative of GOES SXR emission, and the second one (at ∼06:56:05 UT) slightly after that (see the two vertical dash-dotted lines). 
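The moment analysis described above can be sketched as follows, assuming a calibrated wavelength grid and a background-subtracted line profile; the reference wavelength would in practice be determined from quiet-Sun photospheric or chromospheric lines as stated in the text, and the Gaussian test profile is only a placeholder.

```python
# Minimal sketch: zeroth and first moments of an emission-line profile,
# giving the total intensity and the Doppler velocity of the line centroid.
import numpy as np

C_KMS = 2.998e5          # speed of light, km/s
LAMBDA_REF = 1402.8      # Si IV rest wavelength quoted in the text (Angstrom)

def moment_analysis(wavelength, intensity):
    total = np.trapz(intensity, wavelength)                          # zeroth moment
    centroid = np.trapz(wavelength * intensity, wavelength) / total  # first moment
    v_doppler = C_KMS * (centroid - LAMBDA_REF) / LAMBDA_REF         # > 0: redshift
    return total, v_doppler

# Placeholder profile: a Gaussian line redshifted by ~0.7 Angstrom
wl = np.linspace(1401.5, 1404.5, 300)
profile = np.exp(-0.5 * ((wl - 1403.5) / 0.12) ** 2)
total_int, v = moment_analysis(wl, profile)
print(round(v, 1))        # roughly 150 km/s, the order of the downflows reported here
```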
The temporal variation of the redshift velocity resembles that of the line intensity, only that the velocity increases a little bit earlier than the intensity. The velocity rises rapidly from ∼06:53:40 UT and reaches its maximum (∼150 km s −1 ) at ∼06:55:17 UT. Kinematic Features of the Downflows After the velocity reaches its second peak (∼140 km s −1 ) at ∼06:56:05 UT, it gradually decreases to nearly zero at ∼07:00:05 UT. The variation behaviour at location B (the bottom panel of Figure 6) is similar to that of location A except for a slight difference in timing. The redshifts at location B firstly appear at ∼06:53:10 UT and rise up to the first peak (∼150 km s −1 ) at ∼06:54:30 UT. Like at location A, the Doppler velocity at location B shows two peaks in coincidence with two peaks in intensity (as indicated by the two vertical dash-dotted lines). By comparison, the Doppler velocity at location B reaches its first peak about one minute earlier than that at location A. The second peak of the velocity, however, appears at the same time for both locations A and B. The time profiles of line intensity and Doppler velocity at locations C and D (see Figure 7) are distinct from those mentioned above. It is seen that, at location C, the intensity increases at ∼06:54:00 UT and shows some fluctuations. For the velocity, it starts to rise earlier at ∼06:52:10 UT and decreases to zero at ∼06:58:50 UT, also showing some fluctuations with amplitudes ranging from ∼30 to ∼50 km s −1 . Location D also reveals a fluctuation behaviour in the time profiles of intensity and velocity. Compared with the redshift velocities at locations A and B, the velocities at locations C and D are much smaller. DISCUSSIONS As shown above, the two sets of locations exhibit some distinct spectral features that are supposed to originate from different processes. As the C1.7 flare studied here is small in size and especially complex in morphology, it is somewhat difficult to determine the precise positions of the four selected locations, i.e., whether at flare ribbons or on flare loops, from the present data. In this section, we only provide some possibilities or speculations for the physical origin of the redshifts and blueshifts observed at these locations. For locations A and B, there show up continuum emission and cool narrow lines, which seem to support that they correspond to flare ribbons. However, such high-speed (∼150 km s −1 ) redshifts have scarcely been reported in the cool lines at flare ribbons and seem to be hard to explain using the ribbon scenario. In particular, relatively strong emission as well as evident redshifts are detected in the hot Fe XXI line at these two locations, which most likely originate from flare loops (e.g., Tian et al. 2014b;Young et al. 2015;Tian et al. 2016;Polito et al. 2018). In fact, some loop-like structures can be seen in the SJIs as well as AIA images at these two locations. Based on all of these, we conjecture that the flare loops probably overlap with the flare ribbons along the line of sight at locations A and B. In the following, we consider several possibilities for the origin of the high-speed (>100 km s −1 ) redshifts or downflows at these two locations presuming that they are mainly contributed by flare loops: (1) project effect, (2) filament plasma draining, (3) hot plasma cooling down, and (4) reconnection outflows. Firstly, considering that the flare under study is near the solar limb, we need to check the possible consequence of the projection effect. 
Sometimes, a particular viewing angle could attain redshifts for actual upflows. However, this is not the case here, since the angle between the line of sight and the loop axis seems to be still acute. Secondly, we notice that there appears some filament plasma draining in this C1.7 flare as revealed by AIA 304Å images. However, after a careful check of the images, we find that the draining starts to appear in the FOV of IRIS at ∼06:56 UT, and then moves through the IRIS slit at ∼06:59 UT (Figure 8), when the high-speed redshifts at locations A and B have almost disappeared. Therefore, this filament plasma draining is unlikely responsible for the origin of the high-speed downflows. Thirdly, when some hot plasma, say, the evaporation plasma, cools down, it will produce significant redshifts, particularly in the decay phase of the flare. However, we notice that the downflows mostly appear before the SXR emission peak time, i.e., in the rise phase of the flare. Hence the cooling plasma may not be the cause of the high-speed redshifts in the rise phase. Finally, the remaining possibility is that the high-speed downflows are a result of magnetic reconnection. This is illustrated in Figure 9. At the initial time, two coronal loops L1 and L2 are observed around the IRIS slit (Figure 9(a)). A few minutes later (∼06:54 UT), these two loops approach and magnetic reconnection occur between them, which produces a small flare loop Ls as well as the sigmoid structure S (Figures 9(b) and (c)). The reconnection heats the plasma and drives plasma outflows that move along the newly formed flare loop Ls. One of the outflows is located near the slit positions A and B while the other is out of the slit region. The sigmoid structure then loses its balance and undergoes a subsequent eruption in the corona. The illustration here is in accordance with the tether-cutting (TC) model proposed by Moore et al. (2001). The high-speed redshifts are likely due to the outflow of the magnetic reconnection near the footpoint of Ls. Such high-speed is supposed to take place in the lower atmosphere due to magnetic cancellation. Here it should also be mentioned that the accompanied redshifts in the hot Fe XXI line might be caused by a retracting of hot flare loops (e.g., Tian et al. 2014b) or termination shocks (e.g., Polito et al. 2018;Shen et al. 2018). As regards locations C and D, they might correspond to flare loops and their behaviours could be explained by a loop scenario suitably. The gentle blueshifts in the hot Fe XXI line as well as redshifts in the cool lines of Si IV, C II, and Mg II exhibit some fluctuations throughout the flare time, which are likely caused by hot plasma filling and cool plasma draining in the flare loops, respectively. The cyan curve S in panels (b) and (c) shows the sigmoid structure after reconnection.
Poor Cognitive Function Is Associated with Obstructive Lung Diseases in Taiwanese Adults Previous studies have reported an association between the impairment of cognitive performance and lung diseases. However, whether obstructive or restrictive lung diseases have an impact on cognitive function is still inconclusive. We aimed to investigate the association between cognitive function and obstructive or restrictive lung diseases in Taiwanese adults using the Mini-Mental State Examination (MMSE). In this study, we used data from the Taiwan Biobank. Cognitive function was evaluated using the MMSE. Spirometry measurements of forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) were obtained to assess lung function. Participants were classified into three groups according to lung function, namely, normal, restrictive, and obstructive lung function. In total, 683 patients enrolled, of whom 357 participants had normal lung function (52.3%), 95 had restrictive lung function (13.9%), and 231 had obstructive lung function (33.8%). Compared to the normal lung function group, the obstructive lung function group was associated with a higher percentage of cognitive impairment (MMSE < 24). In multivariable analysis, a low MMSE score was significantly associated with low FVC, low FEV1, and low FEV1/FVC. Furthermore, a low MMSE score was significantly associated with low FEV1 in the participants with FEV1/FVC < 70%, whereas MMSE was not significantly associated with FVC in the participants with FEV1/FVC ≥ 70%. Our results showed that a low MMSE score was associated with low FEV1, low FVC and low FEV1/FVC. Furthermore, a low MMSE score was associated with obstructive lung diseases but not with restrictive lung diseases. Introduction Mild cognitive impairment (MCI) is a transitional stage between normal aging and dementia, especially Alzheimer's disease [1]. It is a neurocognitive disease characterized by cognitive impairment exceeding that expected for age and education level, but not significant enough to interfere with instrumental activities of daily living [2]. The prevalence of MCI differs by age as follows: 6.7% for those aged 60-64 years, 8.4% for those aged 65-69 years, 10.1% for those aged 70-74 years, 14.8% for those aged 75-79 years, and 25.2% for those aged 80-84 years [3]. MCI is associated with a 5 to 10% annual conversion rate to dementia [4,5]. In addition, Petersen et al. reported that the cumulative incidence rate of dementia for people with MCI aged over 65 years was as high as 14.9% after two years of follow-up [3]. The exact cause of MCI is unclear, but some measures to help prevent cognitive decline have been proposed, including aerobic exercise, mental activity, and controlling cardiovascular risk factors in patients with MCI [6]. In addition, current research has focused on improving the early detection and treatment of MCI. Lung diseases are considered to be a determinant of cognitive decline and dementia [7,8]. Pathan et al. concluded that impaired lung function was independently associated with worse cognitive function at baseline and a higher subsequent risk of dementia hospitalization. However, they found no association between lung function and cognitive decline over time [9]. Other studies have reported different and inconsistent results. For example, Weuve et al. provided limited evidence of an inverse association between forced expiratory volume in 1 s (FEV1) and cognitive aging [10]. 
Moreover, several studies have reported that mid-life lung function can predict psychomotor ability, memory, processing speed, and executive function in mid-life, but only a significant decline in psychomotor ability over time [11,12]. Taken together, these findings point to a decline in cognition and FEV1 with age. Other studies have reported an independent association between FEV1 and cognitive function in all age groups, but the correlations were weak [13,14]. Lung diseases and impaired lung function are preventable, and therefore it is important to identify the modifiable risk factors for MCI. However, whether obstructive or restrictive lung diseases have an impact on cognitive function is unknown. Currently, air pollution is a world-wide public health issue that has reported impacts on respiratory, cardiovascular, and neurological systems [15]. Air pollution has been shown to pose threats to health, and it has been linked to many diseases, including cardiovascular diseases, chronic obstructive pulmonary diseases (COPD), and even autoimmune diseases [16]. Emerging evidence has also shown associations between air pollution and cognitive decline and neurological disorders, such as Alzheimer's disease and Parkinson's disease [17], which will increase the burden on our aging society. Therefore, the aim of this study was to investigate associations between cognitive function and different types of lung diseases with confounding factors such as lifestyle and cardiovascular risks. We investigated cognitive function using the Mini-Mental State Examination (MMSE) in individuals with lung diseases using data from the Taiwan Biobank (TWB) to clarify the clinical interpretation. The TWB The TWB was created with the aim of recording lifestyle and genomic data of residents in Taiwan, where it is currently the largest biobank supported by the Taiwanese government [18,19]. The TWB includes the information of community-based volunteers with no prior history of cancer and who are aged between 30 and 70 years. All of these volunteers provide written informed consent, blood samples, and complete questionnaires during face-to-face interviews with researchers from the TWB. In addition, all of the participants in the TWB undergo physical examinations. In this study, we included 5000 participants who were registered in the TWB as of April 2014. The data stored in the TWB include body height, body weight, and personal and lifestyle factors. In this study, we defined regular exercise as participating for at least 30 min three times a week in activities including jogging, hiking, playing a sport, yoga, swimming, cycling, and computer/console-based exercise games. However, work-related activities such as physical or manual work were not classified as being "exercise" in this study. Demographic, Laboratory, and Medical Data Demographic characteristics (sex and age), smoking habits, medical history (hypertension, asthma, emphysema or bronchitis, and diabetes mellitus (DM)), lifestyle factors (regular exercise and midnight snacking habits), systolic blood pressure (SBP), diastolic blood pressure (DBP), education duration, and laboratory data (estimated glomerular filtration rate (eGFR), triglycerides, hemoglobin, fasting glucose, total cholesterol, and uric acid) were recorded at baseline. EGFR was calculated using the modification of diet in renal disease 4-variable equation [20]. Body mass index (BMI) was calculated as weight (kg)/height (m) 2 . 
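The derived variables described above can be written down directly. The sketch below is a minimal illustration, assuming the commonly used IDMS-traceable coefficients for the 4-variable MDRD equation; the exact coefficient set applied in the cited reference [20] is an assumption here, not taken from the paper.

```python
# Minimal sketch of the derived variables described above. The MDRD-4
# coefficients are the commonly cited IDMS-traceable values (an assumption).
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def egfr_mdrd4(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) from the 4-variable MDRD equation."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(bmi(70, 1.65), 1))             # e.g., 25.7 kg/m^2
print(round(egfr_mdrd4(0.9, 64, True), 1)) # illustrative input values only
```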
Evaluation of Cognitive Function We assessed the cognitive function of the subjects using the MMSE [21]. The MMSE is used as a screening tool to assess cognitive impairment, with a low score indicating the need for further evaluations. The total MMSE score was calculated by summing up subscale scores, with a maximum score of 30 points. One hundred and fifty-four participants with complete MMSE measurements during the enrollment period were included in this study. Spirometry Measurements Spirometry measurements (in L) of FEV1 and forced vital capacity (FVC) were obtained. Spirometry tests were conducted using a MicroLab spirometer and Spida 5 software (Micro Medical Ltd., Rochester, Kent, UK) by a trained technician according to the 2005 technical standards of the American Thoracic Society and the European Respiratory Society [22]. We performed three lung function tests in each participant, all of which met the quality criteria standards (i.e., with differences within 5% or 100 mL), and the best result of the three tests was used for analysis. FVC-predicted (or FVC%-predicted) and FEV1predicted (or FEV1%-predicted) values were calculated by dividing the measurements by reference values, which were calculated according to formulas derived from the general population based on Asian ethnicity, sex, age, and height. The formulas for this population were entered into spirometry software, and the details of individual participants were also entered to yield percent-predicted values. There were no post-bronchodilator measurements. A total of 1054 participants were screened for this study, of whom 371 did not have complete spirometry measurements during the enrollment period and were excluded, while the remaining 683 participants with complete spirometry measurements were included. Ethics Statement The Institutional Review Board on Biomedical Science Research/IRB-BM, Academia Sinica, Taiwan, and the Ethics and Governance Council of the TWB, Taiwan, provided ethical approval for the TWB. Each participant provided written informed consent in accordance with institutional requirements, and this study was conducted in accordance with the principles of the Declaration of Helsinki. In addition, the Institutional Review Board of Kaohsiung Medical University Hospital approved this study (KMUHIRB-E(I)-20180242). Statistical Analysis Data are presented as percentages, means ± standard deviations, or median (25 th -75 th percentile) for triglycerides. An MMSE cut-off score of 24 was used to classify the severity of cognitive impairment. The study participants were classified into three groups according to lung function. One-way analysis of variance was used to compare differences among groups, followed by a Bonferroni-adjusted post hoc test. Multivariate stepwise linear regression analysis was used to identify factors associated with FVC, FEV1, and FEV1/FVC. A p value of less than 0.05 was considered to indicate a statistically significant difference. Statistical analysis was performed using SPSS version 19.0 for Windows (SPSS Inc., Chicago, IL, USA). Results The mean age of the 683 enrolled participants was 63.9 ± 2.8 years, and included 337 males and 346 females. The participants were stratified into three groups according to lung function as follows: normal lung function (FEV1/FVC ≥ 70% and FVC predicted ≥ 80%; n = 357, 52.3%), restrictive lung function (FEV1/FVC ≥ 70% and FVC-predicted < 80%; n = 95, 13.9%), and obstructive lung function (FEV1/FVC < 70%; n = 231; 33.8%). 
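The grouping rules and the MMSE cut-off just described can be implemented directly; the following is a minimal sketch with the thresholds taken from the text (FEV1/FVC 70%, FVC %-predicted 80%, MMSE 24).

```python
# Minimal sketch of the classification rules stated in the text.
def lung_function_group(fev1_fvc_ratio_pct, fvc_predicted_pct):
    """Classify lung function from FEV1/FVC (%) and FVC %-predicted."""
    if fev1_fvc_ratio_pct < 70:
        return "obstructive"
    if fvc_predicted_pct < 80:
        return "restrictive"
    return "normal"

def cognitive_impairment(mmse_total):
    """MMSE < 24 is the cut-off for cognitive impairment used in the study."""
    return mmse_total < 24

print(lung_function_group(65, 85))   # obstructive
print(lung_function_group(75, 70))   # restrictive
print(cognitive_impairment(22))      # True
```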
A comparison of the clinical characteristics among these three groups is shown in Table 1. Compared to the participants with normal lung function, those with restrictive lung function were older, had higher SBP, lower FVC, lower FVC-predicted, lower FEV1, and lower FEV1-predicted. On the other hand, compared to the participants with normal lung function, those with obstructive lung function were more predominantly female, and more had an MMSE total score <24, had lower FVC, lower FVC-predicted, lower FEV1, lower FEV1-predicted, and lower FEV1/FVC. Table 2 shows the determinants of FVC, FEV1, and FEV1/FVC in all participants according to multivariable stepwise linear regression analysis after adjusting for sex, age, hypertension, smoking history, DM, a history of asthma, emphysema or bronchitis, lifestyle factors of regular exercise and midnight snacking habits, BMI, total cholesterol, log triglycerides, fasting glucose, hemoglobin, eGFR, uric acid, SBP, DBP and MMSE score. The results showed that an older age, female sex, history of smoking, asthma, emphysema or bronchitis, not participating in regular exercise, high SBP, high eGFR and low MMSE score (unstandardized coefficient β = 0.018; p = 0.008) were significantly associated with low FVC. In addition, an older age, female sex, smoking history, high SBP and low MMSE score (unstandardized coefficient β = 0.021; p = 0.007) were significantly associated with low FEV1. Finally, female sex, high total cholesterol, and low MMSE score (unstandardized coefficient β = 0.475; p = 0.049) were significantly associated with low FEV1/FVC. For the sample size n = 683, the study is almost balanced with respect to Type I and Type II error rates, with α = 0.05 and β = 1 − 0.999 = 0.0001 (power test = 100%). We further performed multivariable analysis after adjustment of education level (o; 6 and >6 years), and found low MMSE score was still significantly associated with low FVC (unstandardized coefficient β = 0.017; p = 0.009) and low FEV1 (unstandardized coefficient β = 0.021; p = 0.008). Table 3 shows the determinants of FEV1 in the study participants with FEV1/FVC < 70% using multivariable stepwise linear regression analysis. The results showed that female sex and low MMSE score (unstandardized coefficient β = 0.019; p = 0.041) were significantly associated with low FEV1. For the sample size n = 231, the study is almost balanced with respect to Type I and Type II error rates, with α = 0.05 and β = 1 − 0.991 = 0 (power test = 99.1%). We further performed multivariable analysis after adjustment of education level, and found low MMSE score was still significantly associated with low FEV1 (unstandardized coefficient β = 0.019; p = 0.041). Table 4 shows the determinants of FVC in the study participants with FEV1/FVC ≥ 70% using multivariable stepwise linear regression analysis. The results showed that older age, female sex, not participating in regular exercise, and high SBP were significantly associated with low FVC, whereas MMSE score was not significantly associated with low FVC. For the sample size n = 452, the study is almost balanced with respect to Type I and Type II error rates, with α = 0.05 and β = 1 − 0.999 = 0.001 (power test = 100%). Discussion In this study, we found that the presence of obstructive, but not restrictive, lung diseases was associated with a higher percentage of cognitive impairment (MMSE < 24). Overall, a low MMSE score was independently associated with worse lung function as indicated by low FVC, low FEV1, and low FEV1/FVC. 
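Before turning to the interpretation of these associations, the multivariable stepwise regression used for Tables 2-4 can be approximated as follows; this is a simplified, p-value-based forward selection using statsmodels on a hypothetical pandas DataFrame of the listed covariates, and it only approximates the stepwise procedure implemented in SPSS.

```python
# Minimal sketch (assumptions: a pandas DataFrame `df` holding the covariates
# listed above and an outcome column such as 'FEV1'; forward selection by
# p-value, which approximates but does not reproduce SPSS's stepwise method).
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, p_enter=0.05):
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            model = sm.OLS(df[outcome], X, missing="drop").fit()
            pvals[var] = model.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    final = sm.OLS(df[outcome], sm.add_constant(df[selected]), missing="drop").fit()
    return selected, final

# Example usage with hypothetical column names:
# selected, fit = forward_stepwise(df, "FEV1", ["age", "sex", "SBP", "MMSE", "smoking"])
# print(selected); print(fit.summary())
```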
Furthermore, a low MMSE score was significantly associated with low FEV1 in the participants with obstructive lung function, whereas MMSE was not significantly associated with FVC in the participants with restrictive lung function. Increasing evidence has shown an association between compromised lung health with dementia and a decline in cognitive ability. Lutsey et al. concluded that lung diseases, and mainly restrictive but also to a lesser degree obstructive diseases, were associated with an increased risk of incident dementia and MCI in a 27-year community-based cohort study. Such findings have also been reported in patients with Alzheimer's disease, vascular dementia, and even nonsmokers [23]. The findings of our study are different from previous studies in that we found that obstructive, but not restrictive, lung diseases were associated with cognitive impairment. For people with restrictive lung diseases, older age, female sex, hypertension, and not participating in regular exercise were associated with worse lung function as indicated by FVC. We focused mainly on the influence of cognitive function rather than incident dementia. In comparison to Lutsey's study, [23] in which the average age of the participants was 54.2 ± 5.8 years, our study included older participants with a mean age of 63.9 ± 2.8 years. In addition, our study participants were East Asian, whereas those in Lutsey's study were mostly Caucasian and 25.9% were African American. Moreover, the neuropsychological assessments used to assess dementia and MCI in Lutsey's study were adopted from the Atherosclerosis Risk in Communities Neurocognitive Study, [24] and involved detailed neurocognitive assessments, neurologic examinations, brain imaging, validated telephone-based cognitive assessments, and modified telephone interviews to evaluate cognitive status or hospitalization codes. In comparison, we used the MMSE as the only cognitive assessment tool which was administered by well-trained staff. These differences may partially explain the difference in results between the two studies. The important finding of the present study was that a low MMSE score was associated with low FVC, low FEV1, and low FEV1/FVC. Furthermore, a low MMSE score was associated with obstructive lung function, but not restrictive lung function. Obstructive and restrictive lung diseases have different pathophysiologies. Obstructive lung diseases include asthma, bronchiectasis, bronchitis, and COPD. Previous studies have reported that COPD was associated with a nearly 80% higher risk of developing MCI over five years [25], and MCI or dementia over 25 years [26]. The duration of COPD has also been associated with the risk of MCI, [25] and a clinical history of COPD has also been associated with a decline in cognitive performance over time [27]. However, Pathan et al. did not find an association between the presence of obstructive lung diseases and a greater risk of dementia hospitalization [9]. COPD and asthma patients generally have higher rate of comorbidities, including cardiovascular-, bone-, and other smoking-related conditions at baseline [28]. These comorbidities in COPD patients are independent of smoking and traditional risk factors, and more cardiovascular events have been shown to contribute to a worse cognitive decline in COPD patients [29][30][31]. Patients with COPD are at higher risk for atherosclerotic disease [32]. As in our study, the cholesterol levels were higher in the obstructive group than in the normal group (205.4 ± 36.8 vs. 
199.6 ± 36.4), which indicated a higher risk for cerebrovascular disease in the obstructive group. Such findings explained the higher risks of impairment of cognitive performance in patients with obstructive lung disease. It is now well established that atherosclerotic disease contributes significantly to both morbidity and mortality in COPD. Shared risk factors for atherosclerotic disease and COPD, such as smoking, low socioeconomic class, and a sedentary lifestyle contribute to the natural history of each of these conditions. Restrictive lung diseases are characterized by limited lung expansion, resulting in reduced lung volume, ventilation-perfusion mismatch, and hypoxemia. Yaffe et al. demonstrated that hypoxemia during sleep increased the risk of MCI or dementia over a 4.7-year follow-up period [33]. Chronic constant hypoxemia due to restrictive or obstructive lung diseases has been reported to affect neurological function through systemic inflammation, oxidative stress, physiologic stress, sympathetic nervous system activation, cerebral arterial stiffness, and small-vessel damage [7,34]. The impact of lung diseases on the risk of dementia and MCI may differ by race. COPD has been associated with the risk of dementia and MCI in black people, whereas restrictive impairment has been associated with cognitive impairment in white people [23]. Differences in the underlying specific pathologies between obstructive and restrictive impairment patterns have also been reported to vary by race [35,36]. This may explain the impact of different lung diseases on the risk of MCI between Caucasians and the East Asian population observed in our study. There are important strengths to our study, including the large community-based sample of healthy individuals with no history of cancer, objective ascertainment of lung function using standard protocols, comprehensive neurocognitive assessment using the MMSE, and controlling for confounding factors including lifestyle, smoking, and cardiovascular risks. However, there are also several limitations. First, this was a cross-sectional study. We did not perform the evaluation for how long patients had the lung disease. Therefore, we could not evaluate the causal relationship between lung disease and cogni-tive function. Further longitudinal studies are warranted to investigate the risk of incident dementia. Second, relatively few cases had restrictive lung diseases, which may have caused bias in the estimations. Third, hypoxemia could be the missing link in our study, however, we lacked data on oxygen levels. Fourth, the MMSE score in our study was not adjusted for sensory impairment. Finally, the use of medications such as hypnotics was not analyzed in this study, and this may have influenced the risk of MCI. In conclusion, we found that cognitive decline was associated with worse lung function as indicated by low FVC, low FEV1, and low FEV1/FVC. Furthermore, cognitive decline was mainly associated with obstructive lung diseases, but not restrictive lung diseases in our Taiwanese adult participants.
Vibrations and Spatial Patterns Change Effective Wetting Properties of Superhydrophobic and Regular Membranes Small-amplitude fast vibrations and small surface micropatterns affect the properties of various systems involving wetting, such as superhydrophobic surfaces and membranes. We review a mathematical method of averaging the effect of small spatial and temporal patterns. For small fast vibrations, this method is known as the method of separation of motions. The vibrations are substituted by effective force or energy terms, leading to vibration-induced phase control. A similar averaging method can be applied to surface micropatterns, leading to surface texture-induced phase control. We argue that the method provides a framework for studying effects typical of biomimetic surfaces, such as superhydrophobicity, membrane penetration, and others. Patterns and vibrations can effectively jam holes and pores in vessels with liquid, separate multi-phase flow, change membrane properties, result in propulsion, and lead to many other multiscale, non-linear effects. Here, we discuss the potential application of these effects to novel superhydrophobic membranes. Introduction Biomimetic functional surfaces find many applications, including various types of membranes for water filtration. Recent advances in nano/microtechnology have made it possible to design biomimetic functional surfaces with micro/nanotopography and various properties, such as non-adhesion and the ability to manipulate liquids at the microscale. In order to understand the structure-property relationships in these novel materials and surfaces, it is important to study how micro/nanotopography changes surface properties, resulting in effective macroscale properties. Since micro/nanotopography usually constitutes a set of periodic spatial patterns, there is a similarity between the effect of small-scale spatial patterns and that of small-amplitude fast vibrations. Using the mathematical method of separation of motions, small fast vibrations can be substituted by an effective force perceived at the larger scale. The simplest mechanical example of this effect is the vibration-induced stabilization of an inverted pendulum on a vibrating foundation, often called the Kapitza pendulum [1][2][3]. The upside-down position of the pendulum is unstable (Figure 1a). However, if the foundation vibrates with a small amplitude and a high frequency (relative to the size and the natural frequency of the pendulum, respectively), the inverted equilibrium position can become stable (Figure 1b). This is perceived, at the macroscale, as an effective stabilizing "spring force" which maintains the pendulum in equilibrium. In other words, small fast vibrations can be substituted by an effective force, which stabilizes the inverted pendulum. This vibration-induced stabilization can be extended to the case of double and multiple pendulums [4]. The effect can be extended to a broad range of mechanical systems, including those involving fluid flow and propulsion [6], as well as to systems that are not purely mechanical. Thus, small fast vibrations can cause shear thickening of non-Newtonian fluids, which is perceived as an effective force acting upon the liquid, leading, for example, to the rise of figurines in cornstarch, sometimes called the "cornstarch monster" trick.
Beyond that, liquid droplets can bounce indefinitely in a non-coalescing state above a vibrating bath of bulk liquid, which is perceived as a "vibro-levitation" effect [7]. Furthermore, small fast vibrations can lead to vibration-induced effective phase transitions, for example, when a granular material flows through a hole in a vessel like a liquid, or when vibrations of a vessel with a liquid jam the hole and prevent liquid penetration. Blekhman [8] suggested that the stability problem of an inverted pendulum on a vibrating foundation has relevance to a diverse class of non-linear effects involving dynamic stabilization of statically unstable systems, ranging from the vibrational stabilization of beams to the transport and separation of granular material, soft matter, bubbles, and droplets, as well as the synchronization of rotating machinery. In these problems, the small fast vibrational motion can be excluded from consideration and substituted by effective slow forces acting on the system and causing the stabilizing effect. It has been suggested that the "effective hardening" of fibers in a composite material can lead to a novel class of "dynamic materials" with effective properties controlled by an externally applied electric field [9]. Surface micro/nanotopography can also change effective material or surface properties.
The micro/nanotopography can be thought of as a spatial pattern, while small fast vibrations constitute periodic temporal patterns. For example, properly controlled micro/nanotopography can affect the wettability of surfaces, as seen in the case of superhydrophobic [10] and non-adhesive surfaces [11], icephobicity [12], liquid flow [6], and filtration [13]. The novel field of texture-induced phase transition has recently emerged from the area of superhydrophobicity [14]. With both spatial and temporal modulations, a small-scale phenomenon, such as the micro/nanotopography or vibrations, can be effectively substituted by an effect perceived at the larger scale, such as the stabilizing or "vibro-levitation" force [15]. In our earlier publication, we discussed how small patterns can be used for liquid flow, including the shark-skin effect and blood flow applications ("haemophobic" or blood-clot-preventing surfaces) [6]. In this review, we focus on how small fast vibrations and micro/nanotopography can affect surface physicochemical properties, with emphasis on superhydrophobic and regular membranes. We will introduce the mathematical method of separation of motions. The stabilization of the inverted pendulum is the model example for the application of this method. We then discuss the analogy between vibrations and spatial patterns. This is followed by a discussion of the effect of topography and vibrations on membrane permeability. Effect of Small Fast Vibrations and Surface Patterns In this section we introduce the method of separation of motions and apply it to determine the stability criteria for the inverted pendulum and numerous similar systems, both in mechanics and in physical chemistry. Then we discuss the mathematical analogies that relate vibrations and spatial patterns. Separation of Motion and Effective Forces The method of separation of motions was first suggested by the Russian physicist Kapitza in 1951 [3] to study the stability of a pendulum on a vibrating foundation. The method was generalized for the case of an arbitrary motion in a rapidly oscillating field and is discussed in the classical textbook on theoretical physics by Landau and Lifshitz [16]. Consider a point mass m in an oscillatory potential field Π(x), where x is the spatial coordinate, with the minimum of the potential energy corresponding to the stable equilibrium. The force acting on the mass is given by −dΠ/dx; therefore, the equation of motion of the system is m d²x/dt² = −dΠ/dx. In addition to the time-independent potential field Π(x), a "fast" external periodic force f cos Ωt acts upon the mass, with a small amplitude f and a high frequency Ω ≫ √((d²Π/dx²)/m), i.e., much higher than the natural frequency. The equation of motion then becomes m d²x/dt² = −dΠ/dx + f cos Ωt (1). The motion of the mass can be represented as the sum of "slow" oscillations X(t) due to the "slow" force −dΠ/dx and small "fast" oscillations ξ(t) due to the "fast" force f cos Ωt, x(t) = X(t) + ξ(t) (2). The mean value ξ̄(t) of the fast oscillation over its period 2π/Ω is zero, whereas X(t) changes only slightly during the same period. Therefore, the mean location of the mass can be written as x̄(t) = X(t) + ξ̄(t) ≈ X(t) (5), and the mean acceleration likewise reduces to d²X/dt² (6). In Equations (2)-(6), quantities with a bar are quantities averaged over the period 2π/Ω. Substituting Equation (2) into Equation (1) and keeping the first-order Taylor terms in powers of ξ gives m(d²X/dt² + d²ξ/dt²) = −dΠ/dX − ξ d²Π/dX² + f cos Ωt (7). The slow and fast terms in Equation (7) must separately be equal.
The second derivative of the small fast oscillations, d²ξ/dt², is proportional to Ω², which is a large term. On the other hand, the terms on the right-hand side of Equation (7) containing the small ξ can be neglected, and the term −dΠ/dX is a slow restoring force. The remaining fast terms can be equated, m d²ξ/dt² = f cos Ωt (8). The slow part can then be written as m d²X/dt² = −dΠ_eff/dX (9), where Π_eff is an effective potential energy given by Π_eff = Π + m⟨(dξ/dt)²⟩/2 (10), with the angle brackets denoting the same time average. Thus, the effect of the fast vibrations ξ, when averaged over the time period 2π/Ω, is equivalent to the additional term m⟨(dξ/dt)²⟩/2 on the right-hand side of Equation (10). This term is the mean kinetic energy of the system under fast oscillations. Thus, small fast vibrations can be substituted by an additional term in the potential energy, resulting in the same effect the oscillations have on the system. The most interesting case is when this term affects the state of equilibrium of a system. Let us say that, in the absence of vibrations, a system has an effective potential energy Π_eff = Π with a local maximum of the potential energy (Figure 1c). Vibrations can bring this system to a stable equilibrium due to the additional term discussed before, creating a local minimum of the potential energy (Figure 1d). In such cases, the small fast vibrations have a stabilizing effect on the state of equilibrium. Blekhman [8] has applied the method of separation of motions to many mechanical systems and suggested what he called "vibrational mechanics" as a tool to describe diverse effects in the mechanics of solid and liquid media, from the effective "liquefying" of granular media, which can flow through a hole like a liquid when on a vibrating foundation, to the opposite effect of "solidifying" a liquid by jamming a hole in a vessel on a vibrating foundation, as well as the vibro-synchronization of the phase of two rotating shafts on a vibrating foundation. Blekhman [8] has also suggested an elegant interpretation of the separation of motions. According to his interpretation, there are two different observers who can look at the vibrating system. One is an ordinary observer in an inertial frame of reference, who can see both the small, ξ, and the large, X, oscillations. The other one is a "special" observer in a vibrating frame of reference, who does not see the small-scale motion ξ, possibly due to a stroboscopic effect or because this observer's vision is not sensitive enough to resolve the small-scale motion. As a result, what is seen by the ordinary observer as an effect of the fast small vibrations is perceived by the special observer as the effect of some new effective force. This fictitious force is similar to the inertia force, which is observed in non-inertial frames of reference. Furthermore, when the stabilizing effect occurs, the special observer attributes the change in effective potential energy to fictitious slow stabilizing forces or moments. The additional slow stabilizing force for the system (or torque, for rotational systems) V is obtained from the time-averaged effective energy (Equation (11)). For an inverted pendulum on a harmonically vibrating foundation (A cos Ωt), where A is the amplitude and Ω is the frequency, the equation of motion can be written in terms of the angular displacement ψ (Equation (12)). The inverted position can become stable if the condition A²Ω² > 2gL holds (Figure 1b). Using Equations (11) and (12), the effective stabilizing torque can be obtained [7,15] as V = (mA²Ω²/4) sin 2ψ (13), which, in the small-angle approximation sin 2ψ ≈ 2ψ, is equivalent to the inverted pendulum being stabilized by a spring with the torsional spring constant k = mA²Ω²/2.
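As a quick numerical sanity check of the criterion A²Ω² > 2gL, the minimal sketch below integrates the full (non-averaged) equation of motion of an inverted pendulum whose pivot oscillates vertically as A cos Ωt. The pendulum length, vibration amplitude, and the factor-of-two margins around the critical frequency are illustrative choices for this example, not values taken from the cited works.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Inverted pendulum whose pivot oscillates vertically as y_p(t) = A*cos(Omega*t).
# With psi measured from the upright position, the equation of motion is
#   L * psi'' = (g - A*Omega**2 * cos(Omega*t)) * sin(psi),
# and the averaged analysis above predicts stabilization when A**2*Omega**2 > 2*g*L.

g, L = 9.81, 0.10        # m/s^2, m   (illustrative pendulum length)
A = 0.005                # m, vibration amplitude (A << L)
psi0 = 0.05              # rad, small initial tilt away from the upright position

def max_tilt(Omega, t_end=10.0):
    """Integrate the full equation and return the largest tilt reached."""
    rhs = lambda t, y: [y[1], (g - A*Omega**2*np.cos(Omega*t))*np.sin(y[0])/L]
    sol = solve_ivp(rhs, (0.0, t_end), [psi0, 0.0], max_step=1e-3)
    return np.max(np.abs(sol.y[0]))

Omega_crit = np.sqrt(2.0*g*L)/A      # frequency at which A*Omega = sqrt(2*g*L)
for Omega in (0.5*Omega_crit, 2.0*Omega_crit):
    ratio = (A*Omega)**2/(2.0*g*L)
    print(f"Omega = {Omega:6.0f} rad/s, A^2*Omega^2/(2gL) = {ratio:4.2f}, "
          f"max |psi| = {max_tilt(Omega):.3f} rad")
# Below the criterion the tilt grows and the pendulum falls away from upright;
# above it the tilt remains bounded close to the initial 0.05 rad.
```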
Similarly, this method can be used to derive the effective stabilizing torques on inverted multiple pendulums on a vibrating foundation, and the increase in stiffness of a vibrating flexible rope that prevents buckling (the "Indian rope trick") [15]. Thus, small fast vibrations can affect the equilibrium and manifest as an effective stabilizing force. The effective stabilizing force in Equation (11) was obtained as an average over time. In the following sections, we use similar averaging over a temporal or spatial variable to study the effect of temporal or spatial patterns on physicochemical properties. Kirchhoff's Analogy between Spatial and Temporal Patterns Similar to how small vibrations can be substituted by an effective force, small-amplitude patterns in space can have the same effect. The so-called Kirchhoff's dynamical analogy establishes a similarity between the static bending shape of a beam and the dynamics of motion of a rigid body [17,18]. Let us consider a slender beam of area moment of inertia I and modulus of elasticity E, whose end points are loaded by an axial compressive force F, as shown in Figure 2a. The slope at any point (x, y) is denoted by the angle ψ. For any small element ds on the beam, dy/ds = sin ψ. The bending moment at (x, y) is given by EI dψ/ds = −Fy. By combining these equations, we obtain a differential equation that describes the spatial bending pattern of the beam [19], EI d²ψ/ds² = −F sin ψ (14). This is similar to the differential equation of oscillation of a simple pendulum of length L, L d²ψ/dt² = −g sin ψ (15). Equation (15) describes the deflection of the pendulum. Note how the spatial variable s in Equation (14) corresponds to the time variable t in Equation (15). Static bending of a beam is a boundary value problem, while the motion of a pendulum is an initial value problem. However, despite this difference, an analogy exists between the motion of a pendulum and the shape of a buckled elastic rod. Now, we will consider the analogy of a beam with an inverted pendulum. Consider bending of a beam under a tensile load F, as shown in Figure 2b. The beam is bent 360°, making an approximate circle. The inset in Figure 2b shows a free-body diagram of a small section of the beam near its end. The equilibrium of the beam corresponds to the value of the bending moment M = F∆y, which is proportional to the displacement ∆y. The expression for the bending moment can be written as EI dψ/ds = F∆y.
Here, we study whether the equilibrium of the beam is stable (straight beam) or unstable (bent beam). Differentiating the expression for the bending moment with respect to s and noting that ∆y/∆s → ψ as ∆s → 0, we obtain EI d²ψ/ds² = Fψ (16), which is similar to the equation of motion of an inverted pendulum for a small angular displacement, d²ψ/dt² = (g/L)ψ (17). The equation for the inverted pendulum has a solution of exponential form. On the basis of Kirchhoff's analogy, the beam under compressive loading corresponds to the stable regular pendulum (Figure 2a), whereas the buckled beam under tensile loading corresponds to the unstable inverted pendulum (Figure 2b). An inverted pendulum can be stabilized by harmonically vibrating its foundation. Similarly, a buckled beam can be stabilized by a spatial periodicity in the geometry of the beam (Figure 2c). If the properties of the elastic rod are changed in a periodic manner, with a small amplitude h ≪ 1 and a frequency Ω in the spatial variable s, about the stationary value EI₀, such that EI(s) = EI₀(1 + h cos Ωs), then Equation (16) attains a form (18) that is similar to Equation (12) for an inverted pendulum on a harmonically vibrating foundation. Equation (18) can be converted into the canonical form of the Mathieu equation (19). The stability and instability of the Mathieu equation can be studied by the Lindstedt-Poincaré perturbation method [20,21]. The resulting stability curves can be represented on the Ince-Strutt stability diagram (Figure 3). The solution of Equation (19) is stable for values of (δ, ε) that lie in the shaded region between the stability curves shown in Figure 3. The negative values of δ are of importance in this discussion because of the tensile nature of the force F. The spatial periodicity of the beam can be interpreted as distributed bending moments along the beam, which can be replaced by an effective stabilizing shear force, V, as shown in Figure 2c. Therefore, the periodicity in the geometry of the beam manifests as an effective shear force. This shear force can stabilize the beam when the corresponding stability condition is satisfied; otherwise, the beam is unstable, with an exponentially increasing slope. We conclude that a pattern on the surface profile of a rod can stabilize the rod, just as small fast vibrations affect the stability of an inverted pendulum.
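The Ince-Strutt picture can be checked numerically with Floquet theory. The minimal sketch below assumes the common normalization ψ'' + (δ + ε cos τ)ψ = 0 for Equation (19); the exact scaling used in the original derivation may differ, and the sample (δ, ε) pairs are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Floquet-type stability check for a Mathieu equation written in the
# normalization psi'' + (delta + eps*cos(tau)) * psi = 0 (one common form;
# the scaling of Equation (19) in the original work may differ).
# A solution is bounded iff |trace(M)| < 2, where M is the monodromy matrix
# obtained by integrating two independent solutions over one period T = 2*pi.

def is_stable(delta, eps, T=2*np.pi):
    rhs = lambda t, y: [y[1], -(delta + eps*np.cos(t))*y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):          # two independent solutions
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)                    # monodromy matrix
    return abs(np.trace(M)) < 2.0

# delta < 0 corresponds to the tensile (buckling) case discussed in the text;
# a suitable modulation amplitude eps moves the point into a stable tongue.
for delta, eps in [(-0.05, 0.0), (-0.05, 0.4)]:
    print(f"delta = {delta:+.2f}, eps = {eps:.2f} -> stable: {is_stable(delta, eps)}")
```

Without modulation (ε = 0) the negative-δ case grows exponentially, whereas with ε = 0.4 the same δ falls inside a stable tongue of the diagram, mirroring the stabilization of the buckled rod by spatial periodicity.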
Wetting and Membranes In the preceding section, we studied how small fast vibrations or small-amplitude spatial structures can be substituted by an effective energy term, which can either lead to an effective force (such as the vibro-levitation force) or affect a mechanical or phase equilibrium. In this section, we will focus on the effect of small vibrations and structures on wetting and, specifically, on filtration. Superhydrophobicity: How Surface Patterns Change Wetting and Phase State Non-wetting can be achieved using temporal patterns, as seen in the case of vibro-levitating droplets. Oil droplets were seen to levitate indefinitely over a vibrating oil surface in the frequency range 35-350 Hz. The thin film of vapor between the droplet and the vibrating surface is stabilized by vibrations [7]. Similarly, non-wetting can be achieved on superhydrophobic surfaces with the help of micro/nano topography. The wettability of a surface is usually characterized by the contact angle (CA), θ, which a droplet of liquid makes with a solid surface. On a hydrophobic surface a water droplet makes θ > 90°, while on a hydrophilic surface a water droplet makes θ < 90°. For an ideally smooth, homogeneous surface, the equilibrium CA (θ₀) of a liquid droplet (say, of water) is given by the Young equation, cos θ₀ = (γ_SA − γ_SW)/γ_WA, where γ_SA, γ_SW, and γ_WA are the surface free energies of the solid-air, solid-water, and water-air interfaces. However, on real surfaces with roughness [22,23] and chemical heterogeneity, the observed CA can be different from θ₀. In such cases, the CAs are estimated by the Wenzel and Cassie-Baxter models [24,25]. The Wenzel model (Figure 4a) gives the effective CA on a rough, chemically homogeneous surface, cos θ_W = R_f cos θ₀ (23), where the roughness factor R_f ≥ 1 is the ratio of the solid surface area to the projected area. We can see from Equation (23) that roughening a hydrophobic surface makes it more hydrophobic (larger CA), while roughening a hydrophilic surface makes it more hydrophilic (lower CA). On a superhydrophilic surface, the water droplet spreads out into a thin film. If a rough surface harbors pockets of air, thus creating chemical heterogeneities, then the CA is given by the Cassie-Baxter model (Figure 4b), cos θ_CB = r_f f_SL cos θ₀ − 1 + f_SL (24), where r_f is the roughness factor of the wet area and 0 ≤ f_SL ≤ 1 is the fractional solid-liquid interfacial area. The air pockets can lead to the surface being superhydrophobic. On superhydrophobic surfaces, water beads up into a near-spherical shape. In both of the above cases, we see that surface texture (roughness) is an essential parameter in determining the wettability (or non-wettability) of a surface. On a superhydrophobic surface, a water droplet effectively "freezes" into a spherical shape. The roughness features of the superhydrophobic surface also harbor and stabilize pockets of air. On a superhydrophilic surface, a water droplet effectively "melts" into a thin film, just like the coalescence of a droplet into a liquid bath. In order to perform averaging of the surface energy, we consider a 2D system: a solid rough surface of length L along the x-axis, with unit width, whose roughness profile is given by F(x) and whose local surface free energy is γ(x). The roughness factor of the surface can be written in integral form (25). Similar to the averaging of small fast vibrations over time in Equation (11), the effect of surface topography and chemical heterogeneity can be incorporated into the effective surface free energies of the interface as an integral over the spatial coordinate x (26). For a chemically homogeneous rough surface, the surface energy is constant (γ(x) = constant), and the averaging in Equation (26) recovers the Wenzel relation (23). The averaging in Equation (26), which uses the average of the product of the surface free energy and the surface profile over a length, is similar to the augmentation of the effective potential energy in Equation (10) with a term averaged over time. The effective surface energy and, thus, the CA can be modified by controlling the surface texture and chemistry. Marmur suggested that appropriate texturing of a surface can lead to stable air films on underwater surfaces, resulting in underwater superhydrophobicity [26]. Later on, Patankar and co-workers studied surface texture-induced phase transitions [14,27,28].
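As a quick numerical illustration of the Wenzel (23) and Cassie-Baxter (24) relations above, the short sketch below computes effective contact angles for an assumed intrinsic angle and assumed roughness parameters; the numbers are illustrative, not measurements from the cited studies.

```python
import numpy as np

# Effective contact angles from the Wenzel (23) and Cassie-Baxter (24) models:
#   cos(theta_W)  = R_f * cos(theta_0)
#   cos(theta_CB) = r_f * f_SL * cos(theta_0) - 1 + f_SL
# The parameter values below are illustrative assumptions.

def wenzel(theta0_deg, R_f):
    c = R_f*np.cos(np.radians(theta0_deg))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def cassie_baxter(theta0_deg, r_f, f_SL):
    c = r_f*f_SL*np.cos(np.radians(theta0_deg)) - 1.0 + f_SL
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

theta0 = 110.0   # deg, intrinsic CA of a smooth hydrophobic coating (assumed)
print(f"Wenzel,        R_f = 1.8:           {wenzel(theta0, 1.8):5.1f} deg")
print(f"Cassie-Baxter, r_f = 1.8, f_SL=0.1: {cassie_baxter(theta0, 1.8, 0.1):5.1f} deg")
# Roughness alone amplifies the intrinsic angle (Wenzel, ~128 deg here), while
# trapped air pockets (small f_SL) push the angle into the superhydrophobic
# range (~164 deg here).
```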
Patankar and co-workers investigated how surface texture affects the Leidenfrost effect [29], manifested by water droplets levitating over a sufficiently hot skillet due to the presence of an evaporating vapor film (Figure 5a). Such a film is formed only when the hot surface is above a critical temperature, whereas at lower temperatures the vapor film collapses. However, the critical temperature can be reduced, and the vapor film collapse can even be completely suppressed [30], when micro-textured superhydrophobic surfaces are used [14]. Their result demonstrated that surface texturing can potentially be applied to control other phase transitions, such as ice or frost formation, and to the design of low-drag surfaces in which the vapor phase is stabilized in the grooves of textures without heating. The concept was further expanded by Jones et al. [27], who showed that surface texturing can stabilize the vapor phase of water even when liquid is the thermodynamically favorable phase. Furthermore, the reverse phenomenon exists, when patterned hydrophilic surfaces keep a liquid water layer at high temperatures at which it would otherwise boil. Thus, nanoscale roughness can be applied to manipulate the phase of water. Molecular dynamics simulations demonstrated that the vapor and liquid phases of water adjacent to textured surfaces are stable. Patankar [28] has also identified the critical value of roughness below which the vapor phase is sustainable and/or trapped gases are kept in roughness cavities or valleys, thus maintaining the immersed surface dry. Linke et al. [31] demonstrated that surfaces with a small asymmetric texture (saw-tooth profile) can induce self-propulsion in Leidenfrost droplets, and in the process, the droplets climb over the steep sides of the surface texture [32]. Leidenfrost droplets levitate on a thin film of vapor formed when the droplet contacts a surface whose temperature is much greater than the boiling point of the liquid. The vapor phase is expelled from under the droplet due to the pressure gradient in the film between the peaks and the valleys of the surface profile.
Due to the inherent asymmetry of the surface texture, the vapor leaks out asymmetrically from under the droplet, causing a net directional flow of vapor. The resultant viscous forces entrain the droplet in the same direction (Figure 5b) [33][34][35]. The self-propulsion effect has potential applications, such as in a sublimation heat engine [36]. The viscous force generated per tooth of the saw-tooth profile is obtained [32] from a momentum balance in the vapor film (27), where η is the viscosity of the vapor, U is the velocity of the vapor flow, h_F is the average thickness of the vapor film, r_c is the contact radius of the droplet, and λ is the tooth length. If there are N teeth below the droplet, the net propulsion force is obtained by summing over the teeth (28). For the values η = 1.9 × 10⁻⁵ Pa s, U = 0.2 m s⁻¹, h_F = 10 µm, r_c = 2.5 mm, λ = 1 mm, and N = 5, the force is F = 4.75 µN. The summation of forces over an area due to surface patterns in Equation (28) is similar to the integration of small fast vibrations over time in Equation (11). The vibrations can be substituted by an effective stabilizing force; similarly, the surface topography manifests as a propulsion force. We saw how asymmetric surface patterns can be substituted by an effective force that spontaneously propels a Leidenfrost droplet over steep inclines. In general, the phenomenon of surface texture-based phase transition can be described as suppressing the boiling point and, thus, is similar to the superheating or subcooling of water. Similar to the vibration-induced phase transitions, the effect of the small spatial pattern is to change the phase state of the material.
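The quoted propulsion force can be checked with a short order-of-magnitude script. Since the display equations (27) and (28) are not reproduced here, the per-tooth force is modeled as ηU r_c λ/h_F, an assumed form that is dimensionally consistent with the momentum-balance argument and reproduces the 4.75 µN value for the listed parameters; it should be read as a sketch, not as the published formula.

```python
# Order-of-magnitude estimate of the net propulsion force on a Leidenfrost
# droplet over a saw-tooth texture. The per-tooth viscous force is taken as
#   F_tooth ~ eta * U * r_c * lam / h_F,
# an assumed form consistent with the parameter list and the 4.75 uN figure
# quoted in the text; the published expression may differ in prefactors.

eta = 1.9e-5     # Pa*s, viscosity of the vapor
U   = 0.2        # m/s, vapor flow velocity in the film
h_F = 10e-6      # m, average vapor-film thickness
r_c = 2.5e-3     # m, contact radius of the droplet
lam = 1e-3       # m, tooth length
N   = 5          # number of teeth under the droplet

F_tooth = eta * U * r_c * lam / h_F      # viscous drag per tooth
F_net   = N * F_tooth                    # summed over the teeth (Equation (28))

print(f"per tooth: {F_tooth*1e6:.2f} uN, net propulsion: {F_net*1e6:.2f} uN")
# -> per tooth: 0.95 uN, net propulsion: 4.75 uN
```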
Water Flow through a Vibrating Pipe with Hysteresis In this section, we study the effects of small fast vibrations on the flow through a hole. First, let us consider a macroscopic flow of a fluid through a pipe, as shown in Figure 6a, with the mean flow velocity v related to the pressure loss ∆P by the nonlinear relation ∆P = av² (29), where a is a constant. Note that for laminar flow the dependency between the pressure and the flow velocity is linear. However, in non-ideal situations non-linearity can emerge, represented by the quadratic dependency in Equation (29), which may be a consequence of various factors, such as turbulence, non-linear viscosity, or asymmetric variations in the pipe profile. The non-linearity is essential, since it results in hysteresis [8]. To apply the averaging method, let us assume a slow velocity v₀ which changes negligibly over a time period 2π/Ω. If the pipe is subjected to a fast external vibration (Figure 6b) in the form of x = h cos Ωt, where h is a small constant amplitude, then the additional fast component of the velocity is dx/dt = −hΩ sin Ωt. The standard assumption of the method of separation of motions is that the flow velocity is small in comparison with the amplitude of the velocity of vibrations, hΩ. The flow velocity can be written as the sum of the slow and fast components, v = v₀ − hΩ sin Ωt (30). Substituting Equation (30) into Equation (29) gives ∆P = a(v₀ − hΩ sin Ωt)² (31), that is, ∆P = av₀² + a(hΩ sin Ωt)² − 2av₀hΩ sin Ωt (32), where ∆P₀ = av₀² is the pressure loss due to v₀, which changes negligibly over 2π/Ω. Averaging Equation (32) over the period 2π/Ω, similarly to the temporal averaging in Equation (11), gives ⟨∆P⟩ = ∆P₀ + a(hΩ)²/2 (34). In Equation (34), the effect of the fast vibrations is perceived as the additional pressure difference ∆P_v = a(hΩ)²/2, which can intensify or weaken the fluid flow through the pipe. Equation (34) is similar to Equation (10) in that the vibrations augment the potential energy of the system, and at certain values of hΩ the vibrations can effectively stop the fluid flow. The dependency of the pressure difference on the value hΩ, based on Equation (34), is shown in Figure 6c. Since the dependency is non-linear, for any small change in velocity ±δv due to the external vibrations, the corresponding total change in ∆P is non-zero, as shown. The pressure difference ∆P₂ for a small increase in velocity is greater than the pressure difference ∆P₁ for a small decrease in velocity. This hysteresis can affect the flow in the pipe and, under certain conditions, even stop the flow. Using values of ∆P₀ = 1 kPa and a = 700, 1000, and 1200 kg m⁻³ (similar to the densities of gasoline, water, and glycerin, respectively), a plot of Equation (34) is shown in Figure 6d. If the hydrostatic pressure driving the flow is ∆P_in, the fluid in the vibrating pipe ceases to flow when condition (35), which compares ∆P_in with the vibration-induced pressure, is satisfied. Instead of a vibrating pipe, if we consider a vibrating fluid container with a hole at the bottom, the velocity of the fluid drainage is related to the static pressure head H of the fluid in the container as v ∝ √(2gH). Therefore, the nonlinearity in Equations (29) and (35) still holds. Thus, the drainage of fluid through the hole can be stopped by controlling the amplitude and frequency of the vibrations. We showed how small fast vibrations can affect fluid flow through a hole and, under certain conditions, effectively act as a valve. Next, we extend this principle to the case of vibrating membranes on the micro/nanoscale.
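A minimal numerical sketch of the vibration-induced pressure term in Equation (34) is given below. It assumes that the stopping condition (35) amounts to the vibration term a(hΩ)²/2 reaching the available driving pressure ∆P_in, which is our reading of the text rather than the exact published form; the driving pressure and vibration parameters are illustrative.

```python
import numpy as np

# Vibration-induced pressure term from the time-averaged relation (34),
#   <deltaP> = deltaP_0 + a*(h*Omega)**2/2,
# together with the stopping estimate read into condition (35): the slow flow
# ceases once a*(h*Omega)**2/2 reaches the driving pressure deltaP_in.
# The driving pressure and vibration parameters below are illustrative.

deltaP_in = 1.0e3                     # Pa, available driving pressure
for a in (700.0, 1000.0, 1200.0):     # kg/m^3, coefficients quoted in the text
    hOmega_crit = np.sqrt(2.0*deltaP_in/a)
    print(f"a = {a:6.0f} kg/m^3 -> flow stops for h*Omega >= {hOmega_crit:.2f} m/s")

# Example: h = 1 mm at 300 Hz gives h*Omega ~ 1.9 m/s, above the ~1.4 m/s
# critical value for the water-like coefficient a = 1000 kg/m^3.
h, f_vib = 1.0e-3, 300.0
hOmega = h*2.0*np.pi*f_vib
print(f"h*Omega = {hOmega:.2f} m/s -> vibration pressure "
      f"{0.5*1000.0*hOmega**2:.0f} Pa")
```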
Liquid Penetration through Pores in Vibrating or Patterned Membranes Semipermeable membranes (e.g., biological cell membranes) allow only certain molecules or ions to pass through. Osmosis is the transport of solvent molecules, such as water, through a semipermeable membrane from a region of higher to lower solvent chemical potential until the chemical potentials equilibrate. Osmosis is driven by the concentration gradient of the solute across the membrane or, in other words, by the chemical potential difference of the solvent across the membrane. The excess external pressure that must be applied to prevent the osmotic flow is called the osmotic pressure π. The osmotic pressure is given by the van't Hoff equation, π = c_solute RT (36), where c_solute is the molar concentration of the solute in the solution, R is the gas constant, and T is the absolute temperature. When an external pressure greater than the osmotic pressure π is applied to reverse the flux of solvent molecules, the process is called reverse osmosis (RO). A novel principle of phase separation (e.g., of water and oil from their mixture) has already been suggested using membranes which are hydrophilic but oleophobic, or hydrophobic but oleophilic. Note that oil, like other organic non-polar liquids, typically has a surface energy much lower than that of polar water. Because of this, hydrophilic materials are usually also oleophilic. However, dealing with underwater oleophobicity, one can find materials which are hydrophilic but still repel oil when immersed in water. In the previous section, we saw how vibrations can manifest as a pressure affecting the fluid flow. For a vibrating membrane consisting of several holes, the vibrations manifest as an effective pressure a(hΩ)²/2, as seen in Equation (34) (Figure 7: vibrations can change the permeability of membranes, which may lead to a reverse osmotic flux even when the applied pressure is less than the osmotic pressure [6]). The vibrations can change the effective membrane permeability if condition (37), which compares this vibration-induced pressure with the pressure differences driving transport through the pores, is satisfied. The RO process is commonly used to desalinate water. RO membranes are porous structures used in the RO process. Solvents usually take a tortuous path through RO membranes. Note that RO is used for the separation of a solvent from a solution. However, a completely different principle can be used to separate liquid mixtures using patterned superhydrophobic surfaces. One of the recent applications of surfaces with tailored wettability is the separation of oil-water mixtures [37]. Porous media/meshes, which are selectively wetted by either water or organic solvents, can be used in this process. These porous materials are analogous to the RO membranes used in desalination. The common terminology associated with the wetting of a surface by oil is defined as follows.
Oleophilic surfaces display an oil CA of less than 90°. Oleophobic surfaces (oil CA greater than 90°) used for oil-water separation need to operate in the three-phase solid-oil-water system instead of the usual solid-water-air system. This calls for underwater oleophobic surfaces [38], which exhibit an oil CA greater than 90° in the solid-oil-water system. Surfaces that are superhydrophobic and oleophilic, or hydrophilic and underwater oleophobic, can be used to separate oil from water. Natural and artificial materials have been used for oil-water separation. Kapok plant fiber, which is naturally hydrophobic and oleophilic, was seen to separate diesel oil from water. Kapok, which is wetted by diesel due to capillary rise, can be dried and reused [39]. Artificial membranes are made by using porous/meshed structures with specific pore sizes, which may be roughened and coated with a surface agent to tailor their wetting properties. The wetting properties depend on the pore size, the surface roughness, and the surface agent used. Stainless steel and copper meshes, and filter paper [40], were commonly used to separate mixtures in which oil is layered over water. If one of the phases in an oil-water mixture is dispersed in the other as small droplets (smaller than the pore size), the meshes become ineffective. Hydrophobic porous media have been developed for the separation of oil-water emulsions with and without surfactants [41][42][43][44]. Table 1 summarizes the literature discussing various types of oil-water filtering membranes. The rough surface of a mesh for oil-water separation, having pores of radius w, is wetted partially by oil, water, and air. The effective surface free energy of a rough, chemically heterogeneous surface is given by Equation (26). The solid-liquid interface area in any single pore is augmented by the factor f_SL (r_f)_L. The capillary pressure P_cap across the interface of the liquid L is given by the force balance P_cap w² = 2w f_SL (r_f)_L γ_LV cos θ₀, which simplifies to P_cap = 2 f_SL (r_f)_L γ_LV cos θ₀ / w (38), where γ_LV is the surface free energy of the liquid-vapor interface. Note that the effect of the surface micro/nanotopography is incorporated into Equation (38) via the roughness factor. The roughness factor is the surface profile averaged over an area. This is similar to the effect of vibrations averaged using the temporal integral in Equation (11). The capillary pressure at a solid-oil interface is (P_cap)_oil = 2 f_oil (r_f)_oil γ_oil cos θ_oil / w (39), whereas the capillary pressure at a solid-water interface is (P_cap)_water = 2 f_water (r_f)_water γ_water cos θ_water / w (40), where γ_oil, γ_water, θ_oil, and θ_water are the surface free energy of the oil-vapor interface, the surface free energy of the water-vapor interface, the equilibrium CA of oil, and the equilibrium CA of water, respectively. The capillary pressure, given by Equation (38), determines whether the liquid spontaneously flows through the mesh. For a hydrophobic, oleophilic mesh, (P_cap)_water is negative, whereas (P_cap)_oil is positive, and as a result oil selectively permeates through the pores; water permeates only if an external pressure is applied to overcome (P_cap)_water. For example, for a mesh of pore size w = 10 µm with the values r_f = 2.0, γ_water = 72 mN m⁻¹, θ_water = 107°, f_water = 0.19, γ_diesel = 23 mN m⁻¹, θ_diesel = 60°, and f_diesel = 1.0, we obtain (P_cap)_water = −1.6 kPa and (P_cap)_diesel = 4.6 kPa. The mesh will stop the water while allowing diesel through the pores. In the case of a mesh used for oil-water separation, micro/nanotopography augments the capillary pressure and is therefore a critical factor.
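The water/diesel example above can be reproduced directly from relation (38); the short sketch below uses exactly the parameter values quoted in the text.

```python
import numpy as np

# Capillary pressures from relation (38),
#   P_cap = 2 * f_SL * r_f * gamma * cos(theta0) / w,
# evaluated for the water/diesel example quoted above (w = 10 um mesh).

def p_cap(f_SL, r_f, gamma, theta0_deg, w):
    return 2.0*f_SL*r_f*gamma*np.cos(np.radians(theta0_deg))/w

w = 10e-6                                             # m, pore size
p_water  = p_cap(0.19, 2.0, 72e-3, 107.0, w)          # hydrophobic: cos < 0
p_diesel = p_cap(1.00, 2.0, 23e-3, 60.0, w)           # oleophilic:  cos > 0

print(f"water : {p_water/1e3:+.1f} kPa")   # ~ -1.6 kPa, water is held back
print(f"diesel: {p_diesel/1e3:+.1f} kPa")  # ~ +4.6 kPa, diesel wicks through
```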
Equation (35) is the critical condition for flow through a vibrating pipe or out of a vessel, while Equation (37) is the critical condition for the permeability of a vibrating membrane. The vibrations were averaged over time. In Equation (38), the surface micro/nanotopography was averaged over the area. In essence, micro/nanotopography can affect the mesoscale transport through porous media, while small fast vibrations can affect the molecular transport through porous media. Note the similarity between Equation (36) for the osmotic pressure and Equation (38) for the capillary pressure. Additionally, note that the osmotic pressure is independent of the membrane properties, whereas the capillary pressure for wetting depends on the surface characteristics. The effect discussed in this section is different from that of classical osmosis. Osmosis is a molecular-scale effect, and the expression for the osmotic pressure in Equation (36) is derived from thermodynamics. The pattern-induced liquid separation, which we suggest referring to as "pseudo-osmosis", is a mesoscale effect with a characteristic length scale (that of the superhydro/oleophobic/philic surface pattern, on the order of nanometers). Conclusions We discussed how small fast vibrations (temporal patterns) and micro/nanotopography (spatial patterns) can affect physicochemical properties. We used Kirchhoff's dynamical analogy, which draws parallels between spatial patterns and vibrations. We also applied Kapitza's method of separation of motions as a tool to find an effective force that can be substituted for small fast vibrations. We applied this tool to several examples, including the flow of liquid through vibrating pipes and membranes. In all these cases, we derived an expression for an effective force that can be substituted for vibrations or patterns. Novel biomimetic membranes which are hydrophilic but oleophobic, or hydrophobic but oleophilic, can be developed using this principle. The separation of an oil-water mixture using selectively wettable membranes/meshes is similar to the molecular osmotic transport across a semipermeable membrane; however, the principle is different, since the phenomenon is not at the molecular scale. It is important to note that, in all the cases discussed in this paper, vibrations or surface patterns lead to some nonlinearity or hysteresis, which results in peculiar behavior such as stabilization and propulsion. Thus, spatial and temporal patterns can affect material and surface properties. Potential applications include smart materials with tunable properties. The approach developed in our paper allows estimating system design and performance from the properties of small-scale vibrations and patterns. More importantly, we suggest a general method to study how small patterns affect macroscale wetting properties, with superhydrophobicity and oil-water separating membranes being examples of where the method can be applied. Conflicts of Interest: The authors declare no conflict of interest.
Analysis on the Possibility of Gree Company Resisting "Barbarians at the Door" After the Mixed Ownership Reform In 2019, Gree Electric Appliances, Inc. of Zhuhai (Gree Inc.) carried out a new round of mixed-ownership reform. The main purposes of this reform are to introduce new strategic investors into the company and, at the same time, to provide the management of Gree Inc with unprecedented power. This paper discusses the significance of the mixed-ownership reform of Gree Inc from the perspective of the ability to resist barbarians in the capital market, which is groundbreaking. In the analysis, the current shareholding structure of Gree is compared with that of China Vanke Co., Ltd (Vanke) in 2014, which once suffered from a hostile takeover, to demonstrate how the management of Gree Inc can defend against barbarians by taking advantage of shares. After a comprehensive analysis, it can be concluded that, after the reform, Gree Inc has more advantages in resisting the barbarians of the capital market than Vanke had in 2014. Specifically, the management of Gree Inc is able to utilize more than 30% of the shares, including the shares of the largest shareholder, when faced with a hostile takeover. This is strong enough to prevent barbarians from passing any material resolution in the general meeting of shareholders, which ensures the normal running of Gree Inc and the original board of directors. INTRODUCTION As is widely known, a multitude of enterprises in China is owned by the State-Owned Assets Supervision and Administration Commission (SASAC). But after the promotion of the share-trading reform, the shareholding structure of many state-owned enterprises gradually became dispersed. For all its merits, such reform also provided barbarians in the capital market with opportunities to attack state-owned enterprises by purchasing their stocks on the open market [1]. The hostile takeover of Vanke by Shenzhen Baoneng Investment Group Co., Ltd (Baoneng Group) in 2015 is a case in point. On the other hand, in 2019, Gree Inc carried out a new mixed-ownership reform. The 2 main purposes of this reform are to introduce a new strategic investor, Hillhouse Capital, and to expand the power of Gree's management. At the same time, the shareholding structure of Gree Inc has become highly dispersed [2]. Since 2019, there have been many papers on the mixed-ownership reform of Gree Inc. But instead of discussing the ability to resist barbarians, some papers explained the significance of this reform from the aspects of governance structure, the incentive mechanism of management, and the future performance of Gree Inc [2]. Although there are some papers discussing the strategies of Gree Inc for defending against barbarians, their research is set in the context in which the mixed-ownership reform had not yet taken place [3] [4]. This paper discusses whether Gree Inc will be able to resist barbarians even though the shareholding structure of Gree Inc is highly dispersed after the mixed-ownership reform. Based on the similarities between Gree Inc and Vanke, the shareholding structure of Gree Inc will be compared to that of Vanke in the following parts. In addition, the power given to the management of Gree Inc will also be analyzed to discuss whether the company is capable of resisting attacks from barbarians after the reform. The purpose of this paper is to let more state-owned enterprises see the value of this brilliant reform.
Although the shareholding structure of Gree Inc has become more dispersed after the mixed-ownership reform, its ability to resist barbarians has not been weakened, which means that there is no absolute correlation between equity diversification and the risk of losing control. Therefore, other companies can also learn from the way Gree Inc carried out its mixed-ownership reform. However, because of the expanded power of the management, this method of mixed-ownership reform may breed agency problems in Gree Inc. Thus, other companies should pay attention to this while learning from it. Gree Electric Appliances, Inc. of Zhuhai Gree Electric Appliances, Inc. of Zhuhai was established in 1989 and listed on the Shenzhen Stock Exchange in 1996. Its products cover two major areas, including household appliances such as air conditioners and industrial equipment such as advanced manufacturing equipment. The shareholding structure of Gree Inc has undergone three stages. During the first stage, more than 50% of the shares in Gree Inc were controlled by Gree Group, which is 100% controlled by the SASAC of Zhuhai. During the second stage, only 18% of the total shares were held by Gree Group as a result of the share-trading reform. During the last stage, Gree Inc became a company without a controlling shareholder or actual controller due to the mixed-ownership reform. The proportion of shares held by Gree Group was reduced to 3.22%. Instead, Zhuhai Mingjun has become the largest shareholder, holding 15% of the shares. In addition, in order to optimize the incentive mechanism, Gree Inc also set up a new incentive plan providing the management with 4% of the shares. As shown, the shareholding structure of Gree Inc has become highly dispersed. When it comes to performance, the net profit of Gree Inc (Figure 1) has maintained a high growth rate since 2006. After Dong Mingzhu became the chairman of the board in 2012, the net profit of Gree Inc even surpassed that of its rival, Midea Group. In addition, the dividend payout rate of Gree Inc (Figure 2) has always ranked first among all A-share listed companies, standing at more than 40%. With a highly dispersed shareholding structure and impressive performance, Gree Inc is an ideal target for barbarians in the capital market. Barbarians 'Barbarians' is borrowed from the title of "Barbarians at the Gate", a book depicting the story of the largest and most notable LBO of the decade, the hostile takeover of RJR Nabisco in 1988 by Kohlberg Kravis Roberts [4]. Barbarians are acquiring firms that make hostile acquisitions using relatively small amounts of funds. Companies with a highly dispersed shareholding structure and relatively low share prices are more likely to become targets. Also, the target companies acquired are often in another industry, so the barbarians are unfamiliar with the business model of the target companies [1]. However, the purpose of their acquisition is not to run the target company but to obtain short-term benefits. Thus, many decisions they make after acquisitions act against the interests of the companies and other shareholders (Liu Ruojiao, 2019). Scramble for Control over Vanke The scramble for control over Vanke is considered the "deal of the decade" in China. During this battle, the private company Baoneng Group played the role of the barbarian, while Vanke, the leading real estate brand in China, was the target company.
Baoneng used more than 40 billion yuan of funds to acquire 25% of the shares of Vanke from 2015 to 2016 and became the largest shareholder of Vanke. During this period, the management of Vanke turned to China Resources Co. Limited, its former largest shareholder, for help, requesting it to increase its holdings of shares and to cooperate with Vanke in implementing a private placement plan. However, China Resources did neither, which made the situation worse. Fortunately, the battle came to an end when Baoneng Group was punished by the Chinese insurance regulator due to its violation of insurance regulations, and China Resources finally gave in, transferring all of its shares to Shenzhen Metro Group Co., Ltd [5]. In fact, it was the negative attitude of China Resources that aggravated the conflict between Vanke and Baoneng. Because of this, many scholars believed that the scramble between Baoneng and Vanke had developed into a scramble between China Resources and Vanke by the end of 2015. Therefore, the relationship between the largest shareholder and the management can determine whether a company can defend against barbarians. The Management of Gree Inc Can Utilize the Shares of the Largest Shareholder After the mixed-ownership reform, the shareholding structure of Gree Inc and that of Vanke in 2014 have a lot in common (Table 1 and Table 2). For example, both of them are highly dispersed, and both of their largest shareholders hold approximately 15% of the total shares. Back to Gree Inc: if we only focus on the number of shares, Zhuhai Mingjun and China Resources are alike. However, after scrutinizing the shareholding structure of Zhuhai Mingjun, one finds that the management of Gree Inc has a close relationship with Zhuhai Mingjun, and this relationship is strong enough to help the management resist barbarians. According to Figure 3, the actual controller of Zhuhai Mingjun is Zhuhai Yuxiu Investment Management Co., Ltd. (Zhuhai Yuxiu). The shareholders of Zhuhai Yuxiu consist of 4 entities which are subordinate to the management of Gree Inc, Hillhouse Capital, and Beijing MaoYuan Real Estate Co., Ltd. (Beijing MaoYuan). Taking advantage of this complex structure, the management of Gree Inc is able to become an important part of Zhuhai Mingjun without spending much money. On the other hand, it is Hillhouse Capital and its subsidiaries that contributed the most to building this structure while enjoying the least power, acting as limited partners (limited partners are responsible for capital contributions but only enjoy usufruct). Therefore, it can be concluded that Hillhouse Capital paid the cost of enlarging the power of the management. This is compelling evidence that Hillhouse Capital believes in the management of Gree Inc and will not intervene in the decision-making of the management. Thus, if Gree Inc were under attack by barbarians, Hillhouse Capital would stand with the management and enable the management to utilize the shares of Zhuhai Mingjun in order to defend against hostile takeovers. It remains to be explained how these shares can be turned into voting rights. According to the "Shares Transfer Agreement" signed by Zhuhai Mingjun on the day it became the largest shareholder, how to exercise the 15% voting rights at the general meeting of shareholders of Gree Inc is actually decided by the board of directors of Zhuhai Yuxiu, which comprises 3 members who are respectively appointed by the management, Hillhouse Capital, and Beijing MaoYuan.
They must vote to decide how to exercise the 15% voting rights, and any decision must be approved by more than 2 directors. Therefore, if Hillhouse wanted to stop the management from using the shares, it could simply vote against the management of Gree Inc together with Beijing MaoYuan. But is it really necessary for Hillhouse Capital to do that? Hillhouse Capital is a strategic investor. It pays more attention to the long-term development of Gree Inc. In contrast, barbarians are anxious for short-term benefits. Their objectives contradict each other. If barbarians were finally able to control Gree Inc, Hillhouse Capital would also suffer huge losses. Therefore, Hillhouse Capital would not risk betraying the management of Gree Inc. To sum up, after the mixed-ownership reform, the management of Gree Inc is able to proactively utilize the shares of the largest shareholder when facing barbarians, which is huge progress compared with Vanke. The Management of Gree Inc Can Utilize Their Own Shares and Those of Other Shareholders Although Vanke and Gree Inc have a lot in common, a careful comparison reveals that a larger amount of shares is directly held by the management of Gree Inc. After the mixed-ownership reform, the management was granted a 4% stock incentive plan. In addition, Dong Mingzhu has held 0.74% of the shares since 2012. In total, the management of Gree Inc directly holds 4.74% of the shares in Gree Inc, while only 0.2% of the shares in Vanke were held by the management of Vanke, according to Vanke's annual report for 2014. When it comes to the second largest shareholder, Hebei Jinghai Guarantee Investment Co., Ltd was actually initiated and founded by Zhu Jianghong, who is also the founder of Gree Inc. Hebei Jinghai is generally considered to be a party acting in concert with Dong Mingzhu. In other words, whatever decisions the management makes, Hebei Jinghai will always stand with them. Based on the 2 paragraphs above, apart from the 15% of shares from Zhuhai Mingjun, the management of Gree Inc is still able to directly utilize 13.65% (4.74% + 8.91%) of the shares. But so far, the 3.22% of shares held by Gree Group has not been discussed. It cannot be denied that, when Gree Group was still the largest shareholder, the management would have found it difficult to gain support from Gree Group, which represents the SASAC of Zhuhai. This is mainly because Gree Group and the management of Gree Inc used to have different objectives for the development of Gree Inc. While the management cared about the sustainable development of Gree Inc, Gree Group placed more importance on political performance, hoping that Gree Inc would be able to generate a huge amount of value in the short term. But nowadays, only 3.22% of the shares are held by Gree Group, and Gree Inc has become one of the Fortune 500 companies. Gree Group would rather act as a state-owned investor and enjoy the "tempting" dividends paid by Gree Inc than act against the management, which would be meaningless. In addition, after the scramble for control over Vanke, our country has become aware of the threats caused by barbarians. Therefore, as one of the state agencies, the SASAC would not turn a blind eye to Gree Inc if it suffered a hostile takeover by barbarians. Instead, the SASAC would cooperate with the management to defend against barbarians in the capital market.
Quantitative Analysis

According to the previous two sections, the management of Gree Inc is able to utilize 31.87% (15% + 8.91% + 3.22% + 4.74%) of the shares, belonging respectively to Zhuhai Mingjun, Hebei Jinghai, Gree Group and the management themselves (referred to as the "Alliance" in what follows). This part discusses to what extent the management of Gree Inc can resist barbarians by taking advantage of these 31.87% of shares. According to the Articles of Association of Gree Inc, any SPECIAL resolution of the shareholders' meeting must be adopted by shareholders representing more than two thirds of the voting rights of the shareholders IN PRESENCE. The "Alliance" controls 31.87% of the shares, which is close to one third. In other words, the "Alliance" can already prevent barbarians from pushing through any special resolution at the shareholders' meeting, which ensures the survival of Gree Inc. Ordinary resolutions, however, are more complicated. According to the Articles of Association of Gree Inc, any ORDINARY resolution of the shareholders' meeting must be adopted by shareholders representing more than half of the voting rights of the shareholders IN PRESENCE, and a rigorous calculation is needed to test the ability of the management to defend against barbarians. The calculation rests on some basic assumptions. First, no member of the "Alliance" sells its shares and all of them attend the general meeting of shareholders. Second, the attendance rate of minority shareholders at the general meeting remains the same as in the past. Third, minority shareholders side with the barbarian. From the minutes of past general meetings of shareholders, the following ratios can be calculated ("Table 3"). The first is the proportion of voting shares represented by the minority shareholders in presence to the total voting shares of Gree Inc, labeled α. The second is the proportion of voting shares represented by all minority shareholders to the total voting shares of Gree Inc, labeled β. The third is the attendance rate, which equals α/β. Over the past ten shareholders' meetings the average attendance rate works out at 23.33%, which is subsequently used to calculate the point at which a barbarian would outweigh the "Alliance" at the shareholders' meeting.

Table 3. Calculation of average attendance rate
Table 4. Calculation of the extent to which barbarians would outweigh the "Alliance"

Before any further explanation, some important ratios in "Table 4" need to be introduced. The first is the proportion of voting shares represented by the "Alliance" to the total voting shares of Gree Inc, labeled x. The second is the proportion of voting shares represented by all minority shareholders to the total voting shares of Gree Inc, labeled y. The third is the proportion of voting shares represented by the barbarian to the total voting shares of Gree Inc, labeled z. The fourth is the proportion of voting shares represented by the minority shareholders in presence to the total voting shares of Gree Inc, labeled k, which is calculated as y multiplied by the attendance rate. The last ratio is γ, the "Alliance's" share of the votes actually present, calculated as x/(x + z + k).
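As a sketch of this calculation under the stated assumptions (x = 31.87%, average attendance rate r = 23.33%, and all minority shareholders in presence voting with the barbarian), the break-even barbarian stake, denoted here z* purely for illustration, follows from setting γ equal to 50%:

γ(z) = x / [x + z + (1 − x − z) · r]

γ(z*) = 0.5  ⇒  x = z* + (1 − x − z*) · r
0.3187 = z* + (0.6813 − z*) × 0.2333 = 0.7667 z* + 0.1590
z* = (0.3187 − 0.1590) / 0.7667 ≈ 0.208

That is, a barbarian would need roughly 20-21% of Gree's shares before it could carry an ordinary resolution against the "Alliance", which is the threshold referred to below.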
Since it is assumed that no member of the "Alliance" sells its shares and that all of them attend the general meeting of shareholders, x stays fixed at 31.87%. As the barbarian increases its stake in Gree Inc, the amount of shares held by minority shareholders declines, but together these always account for 68.13% (100% − 31.87%). γ is the ultimate indicator of whether the "Alliance" can still outweigh the barbarian at the general meeting of shareholders: if γ is larger than 50%, the "Alliance" plays the dominant role when the shareholders' meeting passes an ordinary resolution, and if γ is lower than 50%, the barbarian dominates instead. The calculation shows that the barbarian must hold at least 20% of Gree's shares; otherwise it cannot push ordinary resolutions through the shareholders' meeting. At the current stock price of Gree Inc, a barbarian would need to spend roughly 60 billion yuan to acquire 20% of the shares, much more than the 44.1 billion yuan paid by Baoneng Group for 25% of Vanke. Moreover, regulation in China has become far more comprehensive, and it would be extremely difficult for a barbarian to raise 60 billion yuan through inappropriate methods of the kind Baoneng Group used. In addition, it is assumed here that minority shareholders side with the barbarian; in reality a certain proportion of minority shareholders would stand with the management, which would make it even more challenging for a barbarian to dominate the shareholders' meeting. According to the Company Law of the People's Republic of China, ordinary resolutions include the appointment and removal of members of the board of directors and the board of supervisors. Thus, as long as the barbarian's stake stays below 20%, the original board of directors remains unchanged, and this gives the original board the opportunity to adopt strategies such as introducing new strategic investors or a private placement plan to deter the barbarian from further purchases.

Discussion and Summary

In 2019, Gree Inc carried out a mixed-ownership reform, and the management of Gree Inc was granted unprecedented power. Because of this power, Gree Inc is better placed to resist the barbarians of the capital market than Vanke was. Specifically, the management of Gree Inc can utilize more than 30% of the shares, belonging respectively to Zhuhai Mingjun, Hebei Jinghai, Gree Group and the management themselves. The calculations above show that 31.87% of the shares is enough to prevent barbarians from passing any material resolution at the general meeting of shareholders, which ensures the normal running of Gree Inc and keeps the original board of directors in place. As long as the original board can function normally, defensive strategies can be adopted in a timely manner to prevent further hostile takeovers by barbarians. The mixed-ownership reform of Gree Inc is a creative model that other state-owned enterprises can learn from. But every coin has two sides: granting the management such great power can trigger a principal-agent problem. If the management of a company is as diligent and conscientious as that of Gree Inc, this method of mixed-ownership reform is ideal; but if the management is self-interested, it can open a Pandora's box.
Therefore, other companies must pay attention to this point when learning from Gree Inc, and it would be better if they designed an additional mechanism to contain the power of the management when necessary.

CONCLUSION

After a rigorous analysis, it can be concluded that Gree Inc does have a better defense mechanism against a hostile takeover by barbarians than Vanke did. Specifically, Gree's management can more proactively utilize the shares of other shareholders to prevent a barbarian from making decisions that are detrimental to the future development of Gree Inc. It cannot be denied, however, that this paper still has some limitations. For example, only the shareholding structure is considered in discussing whether Gree Inc can defend against barbarians. Future research could take more factors into account, such as the composition of the board of directors, the corporate culture within Gree Inc, and the introduction of new strategic investors. In addition, the quantitative analysis assumes that the future attendance rate of minority shareholders at the shareholders' meeting equals the past average. This is not always the case in reality, since the adoption of online voting will encourage more minority shareholders to attend, in which case Gree Inc will find it more difficult to defend against barbarians by relying on its shareholding structure. The attitude of minority shareholders should therefore be taken into account in future research.

AUTHORS' CONTRIBUTIONS

This paper was completed independently by Junshen Lin.

ACKNOWLEDGMENTS

Firstly, I would like to express my deepest gratitude to Professor Liebenau from LSE, who enabled me to gain a deeper understanding of socio-economics, the management of companies and finance. I would also like to thank Miss Zou, my tutor, who provided valuable guidance at every stage of the writing of this thesis. Without their enlightening instruction and impressive kindness, I could not have completed my thesis.
Roles and relevance of mast cells in infection and vaccination

Abstract

In addition to their well-established role in allergy, mast cells have been described as contributing to the functional regulation of both innate and adaptive immune responses in host defense. Mast cells are of hematopoietic origin but typically complete their differentiation in tissues, where they exert immune regulatory functions by releasing diverse mediators and cytokines. Mast cells are abundant at mucosal tissues, which are portals of entry for common infectious agents in addition to allergens. Here, we review the current understanding of the participation of mast cells in defense against infection. We also discuss possibilities of exploiting mast cell activation to provide the adjuvant activity that is needed for high-quality vaccination against infectious diseases.

Introduction

Classically, mast cells are considered critical effector cells in allergy by virtue of their potential to secrete a variety of allergic mediators. The number of mast cells is increased at sites of allergic inflammation, and there is a correlation between mast cell density in the tissue and the severity of allergic symptoms [1]. In allergy, plurivalent antigens bind and crosslink IgE molecules bound to the high-affinity IgE receptor (FcεRI) expressed on mast cells, resulting in cell degranulation and release of proinflammatory mediators. Three major categories of mast cell mediators have been described: (1) preformed granule-associated mediators such as histamine and serotonin; (2) newly generated lipid mediators such as leukotrienes and prostaglandins; (3) de novo synthesized cytokines, including chemokines. IgE-mediated activation of mast cells initiates the early phase of allergic responses, resulting in pathologies including greater epithelial permeability, mucus production, smooth muscle contraction, vasodilation and neurogenic inflammation. The immediate response is followed by recruitment of a variety of other immune cells that participate in the late phase of the reaction, further exacerbating allergic pathology [1]. Mast cells are derived from hematopoietic progenitors in the bone marrow, which migrate via the blood to tissues all over the body where they further differentiate and mature into different phenotypes depending on the local microenvironment. Stem cell factor (SCF), also known as steel factor, KIT ligand, or mast cell growth factor, is the primary growth and differentiation factor for mast cells [2]. The cellular receptor for SCF is the product of the c-kit proto-oncogene. In addition to SCF, mast cell growth and differentiation can be facilitated by several other cytokines including IL-3. For example, expansion of tissue mast cells upon nematode infection requires IL-3 [3][4]. Immature mouse mast cells can be differentiated in vitro from bone marrow precursor cells in the presence of IL-3 without SCF [5]. Mast cells are enriched in the skin, around blood vessels, and in mucosal membranes such as the respiratory and gastrointestinal tracts. Most notably, mast cells are highly enriched in the skin and at the mucosal barriers of the body, where they serve as a first line of defense. It is noteworthy that mature mast cells are capable of differentiating both phenotypically and functionally as a consequence of tissue-specific stimulation under defined microenvironmental conditions.
For example, inflamed lungs are reported to contain more tryptase/chymase-producing mast cells than non-inflamed lung tissue, in which tryptase-producing mast cells are dominant [6][7].

Mast cell subtypes

Two major subtypes of rodent mast cells have been characterized, i.e. connective tissue mast cells (CTMC) and mucosal mast cells (MMC), based on their tissue localization [8][9][10][11]. For instance, skin mast cells and mast cells residing in the peritoneal cavity are CTMC, whereas mast cells located in the respiratory or gastrointestinal tracts are usually characterized as MMC. In addition to tissue localization, other properties such as protease and cytokine profiles, membrane receptor distribution, and growth factor requirements also distinguish these two types of mast cells. In addition to residing in connective and serosal tissues, CTMC in mice have been found in the submucosa of the stomach [12] and in nasal tissue [13]. In contrast, human mast cells are usually grouped based on the expression pattern of two mast cell-specific proteases, i.e. tryptase and chymase. According to this classification, two major human mast cell subgroups have been proposed. Mast cells that contain only tryptase are referred to as MCT, whereas those that contain both tryptase and chymase are termed MCTC. In terms of correlation with their murine counterparts, MCT are found mainly in mucosal tissues, resembling mouse MMC, while MCTC, which reside in sites such as the skin and small intestinal submucosa, are more closely related to mouse CTMC [14], although the tissue localization is less stringent for human "CTMC" and "MMC". Similar to mouse mast cells, human mast cells also differ in their requirements for growth and differentiation factors. Specifically, SCF is needed for the survival of both types, whereas IL-4 is indispensable for MCTC, but not for MCT [15]. In addition to IgE- and FcεRI-mediated cell activation, mast cells can be activated by a variety of other stimuli, such as IgG immune complexes, cytokines, complement components, neuropeptides, chemical agents, and physical stimuli, as mast cells express a broad range of surface receptors including Fc receptors, complement receptors, and receptors for pathogen-associated molecular patterns (PAMP), such as Toll-like receptors (TLR). These observations, together with the description of a wide spectrum of mast cell mediators, provide a basis for proposals implicating mast cells in almost all aspects of immune responses. Therefore, mast cells have been postulated to be modulators of numerous physiological and pathological responses beyond their classically defined role in allergies mediated mainly through FcεRI. These multifunctional properties of mast cells have been more extensively reviewed elsewhere [16][17]. It has to be pointed out that the overwhelming majority of research findings addressing the roles of mast cells have relied on the use of mast cell-deficient, KIT mutant mice, which have other phenotypic abnormalities in addition to mast cell deficiency. These data await further experimental verification using KIT-independent mast cell-deficient models to eliminate the confounding elements resulting from the KIT mutation [18].

The roles of mast cells in host defense

The earliest observation of a beneficial role of mast cells is their potential in defense against parasitic infection [19][20]. The MMC pool expands extensively during nematode infection, a process dependent on IL-3 [3][4].
Both IgE and mouse mast cell protease-6 (mMCP-6) are required for chronic immune responses against Trichinella spiralis infections [21]. In a helminth infection model, mast cells contribute to pathogen clearance by migrating to the draining lymph nodes and producing IL-6 and IL-4 [22]. Interestingly, mast cells have also been described to be critical for Th1 response-mediated defense against oral infection with Toxoplasma gondii [23]. In addition to defense against helminth infections, mast cells have also been described to be protective in bacterial infections. One of the classic examples of mast cell-dependent anti-bacterial defense is provided by the cecal ligation and puncture (CLP) model of acute peritonitis, which is dependent on tumour necrosis factor (TNF) [24] and on the ability of mast cells to lower neurotensin levels [25]. Mast cells harbour antimicrobial peptides including cathelicidin in their secretory granules [26]. Furthermore, β-hexosaminidase, which is abundantly contained in mast cell granules, has recently been reported to have bactericidal activity [27]. Roles for mast cells in defense against viral and fungal infections have also been suggested [28][29]. Pathogen-mediated mast cell activation can be achieved through several mechanisms. Mast cells can be activated, through their TLR, by direct recognition of microbial components such as bacterial lipopolysaccharide (LPS) and peptidoglycan, resulting in distinct outcomes [30][31][32]. Mast cells can also respond to microbial stimuli through surface proteins such as CD48 [33][34]. Furthermore, mast cells can be stimulated by endogenous inflammatory factors such as cytokines and complement components secondary to infection [35][36]. Indirect interaction of mast cells with pathogens can also be achieved through the recognition of pathogen-antibody complexes by Fc receptors, including FcεRI and Fcγ receptors, expressed on mast cells [37][38][39]. Fc receptor-mediated mast cell activation may also be triggered in the presence of certain pathogen-derived proteins that can bind immunoglobulins in an antigen-independent manner. A classic example of such a bacteria-derived superantigen is protein A from Staphylococcus aureus, which can activate human and mouse tissue mast cells [40][41][42], as the FcεRI molecules on these mast cells are most likely to have already been occupied with IgE, resulting in crosslinking of FcεRI upon protein A binding. However, the pathophysiological roles of such superantigen-mediated mast cell activation in defense against infection await further clarification. Similar to mast cell activation in other circumstances, activation by pathogens is also believed to include both degranulation of pre-formed granular contents and selective de novo mediator production, for example of cytokines and lipid mediators, the patterns of which differ greatly depending on the stimulus encountered. These mast cell-associated products, such as TNF, IL-4, OX40 ligand and mMCP-6, are important for the recruitment and stimulation of other innate immune participants, e.g. neutrophils, macrophages, natural killer (NK) cells and eosinophils, contributing to the clearance of pathogens [21,30,[43][44]. Mast cells not only interact with cells in the immediate vicinity where the infection first takes place but also influence distant targets, e.g. cells in lymph nodes, through the mediators that they release [45].
It has also been reported that mast cells can kill bacteria by producing extracellular traps that contain antimicrobial mediators [46]. In addition to contributing to innate immune responses by virtue of their large spectrum of granular products, mast cells also form a link between innate and adaptive immunity. Mast cells modulate the phenotype and function of key players in adaptive immunity, such as dendritic cells (DC), B cells, and T cells. Mast cells have been shown to functionally interact with professional antigen presenting cells (APC) such as DC and to regulate their function, mainly through mast cell-derived granular products. For example, histamine is capable of regulating the chemotaxis of immature DC [47][48] and the cross-presentation of extracellular antigens [49]. TNF produced by mast cells is critical for DC migration [50][51][52]. TLR7 ligand-mediated mast cell activation promotes the migration and maturation of Langerhans cells [53]. Maturation and activation of immature DC through direct mast cell-DC contact results in the activation of T cells that release IFN-γ and IL-17, promoting Th1 and Th17 responses, respectively [54]. Mast cells provide essential signals such as IL-6 and CD40L to enhance the proliferation of B cells and drive their differentiation toward IgA-secreting plasma cells [55]. Mast cells can enhance the activation of T cells by providing costimulatory signals and secreting TNF [56][57][58]. Mast cells also contribute to the recruitment of T cells to sites of viral infection by secreting chemotactic molecules [59][60]. One of the key processes in achieving successful adaptive immunity is the presentation of microbial antigens to T lymphocytes. Whether or not mast cells are capable of acting as antigen-presenting cells is still controversial [61][62][63][64][65]. This is largely because mast cells in the steady state do not seem to constitutively express major histocompatibility complex class II (MHC-II) or co-stimulatory molecules such as CD86 [63][64]. In contrast, mast cells upregulate expression of MHC-II and costimulatory molecules following stimulation by inflammatory factors such as IFN-γ and LPS [63][64]. Therefore, mast cells may have the potential to directly present antigens to T cells, at least under certain circumstances, for example in inflamed tissues, to initiate adaptive immune responses. Mast cells have also been demonstrated to present antigen to and activate CD8+ T cells through MHC-I molecules [66][67]. Alternatively, mast cells have been reported to participate in antigen cross-presentation to T cells [68]. Cross-presentation refers to a process, most typically following intracellular microbial infection, during which professional APC ingest infected cells and display the antigens of the microbes originally engulfed by the infected cells for recognition by T lymphocytes [69]. This is an efficient mechanism for presenting the antigens of microbes that have infected host cells that may not provide all the signals, e.g. MHC-II recognition and costimulation, needed to initiate T cell activation. The professional APC that have ingested infected cells may present the microbial antigens to both CD4+ and CD8+ T lymphocytes, depending on the processing and presentation routes. Morphological changes of the host cells as a result of, e.g., microbial infection, apoptosis, or tumourigenesis facilitate ingestion by APC.
In principle, any type of cell that has internalized antigens can participate in cross-presentation upon ingestion by APC. Importantly, mast cells have been implicated in the phagocytosis of various types of antigens [70][71][72][73]. Various mechanisms have been reported by which mast cells internalize bacterial pathogens [74][75][76]. Indeed, mast cells can serve as an antigen reservoir and participate in antigen cross-presentation [68]. In vitro cultured bone marrow-derived mast cells (BMMC) can internalize IgE-bound chicken ovalbumin (OVA) protein and are then engulfed by DC, which process and present the OVA peptide to T cells that carry specific receptors for the OVA peptide [68]. Induction of BMMC apoptosis is documented to be critical for efficient presentation by DC to T cells of the antigen originally phagocytosed by mast cells [68]. Owing to the fact that mast cells are capable of participating in both innate and adaptive immunity, and that they are enriched at the mucosal and skin barriers between the body and the external environment, mast cells, similar to skin Langerhans cells, tissue-resident DC and epithelial cells, are believed to be sentinel cells that are probably among the first responders to a threat, within seconds. Equipped with their immunologic armory of mediators, mast cells may exert a pivotal role in the surveillance and elimination of pathogens through diverse mechanisms. While mast cells have thus been given a more positive image in health, new findings also implicate mast cells or their released products negatively in infection. Although mast cell-associated TNF has been reported to be critical in a CLP model of acute peritonitis [24], it has to be pointed out that mast cell-derived TNF is not always protective in acute peritonitis, especially in models of severe CLP [77]. The detrimental effects of mast cells in severe peritonitis have also been ascribed to the release of IL-4, which inhibits the phagocytic potential of macrophages [78]. Mast cell degranulation may contribute to vascular leakage that can exacerbate dengue virus infection [38]. Even the potential of mast cells to recruit other immune effector cells during an infection is not always protective, as this has been found to promote Chlamydia pneumoniae infection [79]. Interestingly, mMCP-4, the mouse counterpart of human mast cell chymase, can degrade TNF, thus dampening the severity of inflammation associated with sepsis and limiting the damage caused by TNF [80], suggesting antagonism between mast cell mediators that favours protection. Therefore, the implication and relevance of mast cells in host defense is a complex issue, and the net outcome may depend on many antagonistic factors.

The implication of mast cells in vaccination

A vaccine is a biological preparation that stimulates an immune response against specific antigens that either are derived from the pathogen itself or resemble the structure of the pathogen. Ever since the first documented vaccination attempt by Edward Jenner for the prevention of smallpox in 1796, vaccines have played a crucial role in protecting people against many infectious diseases [81][82]. The eradication of smallpox and the effective control of polio represent two classic success stories of how vaccines can play a major role in improving global health.
Nevertheless, the demand for better and more effective vaccines against many infectious diseases is still growing, especially as infections such as tuberculosis, HIV, dengue fever and malaria still present enormous global problems. From a societal point of view, vaccination remains the most effective intervention in the control of infectious diseases and for the improvement of global health. There are two principal forms of vaccines: live attenuated vaccines, and killed whole-pathogen or subcomponent vaccines. An advantage of live attenuated vaccines is that they usually stimulate long-term immune responses similar to those induced by natural infection. However, live attenuated vaccines always carry a risk of reversion into more virulent organisms that could cause adverse reactions or more severe infections. In contrast, killed vaccines or subcomponent vaccines are more predictable and, therefore, safer. Another concern that makes live attenuated vaccines less practical is the need for a cold chain for storing and transporting them. Therefore, killed vaccines are still much in use, even though they are weaker and usually do not promote long-term memory responses as strongly. To make killed vaccines more effective, adjuvants are needed: substances that enhance immune responses and stimulate long-lasting, robust protective immunity. An adjuvant included in the vaccine contributes greatly to the efficacy of vaccination by affecting the immune responses both quantitatively and qualitatively. Importantly, protective immunity following vaccination may be generated with lower amounts of antigen and a reduced dosing frequency after addition of an adjuvant [83]. Of all currently available adjuvants, aluminium salts (alum) have the longest history in practical vaccination. Alum-based vaccines have a good safety record and are capable of inducing early, high-titer, long-lasting protective immunity. At present, alum is still the most widely used adjuvant in both veterinary and human vaccines. Its mechanism of action has been proposed to depend on a depot effect, enabled by physical adsorption of antigen onto the alum depots. Furthermore, alum is reported to have direct immunostimulating effects [84]. The relevance of mast cells to alum-mediated adjuvanticity has been explored [84]. Interestingly, mast cells were found to respond to alum stimulation by releasing histamine and a panel of cytokines including IL-5 and IL-1β. Although experiments using mast cell-deficient Kit W/W-v mice demonstrated that mast cells are not required for the priming of endogenous CD4 and CD8 T cells [84], this does not formally exclude a contribution of mast cells to the adjuvant activity of alum in wild-type mice, as redundant pathways may exist. However, alum does not seem to be effective for mucosal immunisation, a route that has appreciable advantages compared with routes that require needle injections, i.e. intramuscular or subcutaneous delivery of vaccines. Needle-free mucosal vaccination can be achieved via oral, intranasal, sublingual, or intravaginal routes [85][86]. The obvious benefits of mucosal immunisation include avoidance of blood-borne contamination through re-use of syringes and needles, as well as the fact that no trained professional personnel are required for vaccine delivery. Furthermore, mucosal immunisation can generate both systemic and mucosal immune responses [85][86].
Strikingly, mucosal immunisation can generate effective secretory IgA even at mucosal sites distant from where the vaccine is delivered [87][88]. For example, nasal immunisation can generate protective mucosal antibodies in the genital tract mucosa, which underlines the advantage of nasal vaccination. As most pathogens enter the body through mucosal surfaces, local mucosal immune responses are critically important in defense against invading pathogens. Therefore, achieving strong local protection has become one of the major goals of vaccine development. As the mucosal route of vaccination, as opposed to the parenteral route, often results in the development of immune tolerance, potent adjuvants are much needed. The selection of a strong mucosal adjuvant for effective vaccination is therefore vital and possibly as important as the vaccine antigens themselves [85]. A number of strategies have been proposed for designing mucosal adjuvants. TLR agonists have been tested, including the TLR4 ligand monophosphoryl lipid A [89][90], the TLR9 ligand CpG oligodeoxynucleotides (ODN) [91] and the TLR5 ligand flagellin [92]. Bacterial enterotoxins, which include cholera toxin (CT) and Escherichia coli heat-labile toxin (LT), constitute another major group of experimental mucosal adjuvants [93]. Both CT and LT are composed of five B-subunits (CTB and LTB) and a single copy of the A subunit (CTA or LTA) [94]. The CTA subunit is produced as a single polypeptide chain that is post-translationally modified through the action of a Vibrio cholerae protease to form two chains, CTA1 and CTA2, which remain linked by a disulphide bond. CTA1 is enzymatically active, ADP-ribosylating the cell membrane-bound Gsα-protein, whereas CTB binds to GM1-gangliosides present on virtually all nucleated cells [95]. CTA2 is responsible for linking CTA into the CTB pentamer [96]. DC are believed to play a central role in the presentation of antigens to naïve T cells, which is a critical process for the development of adaptive immunity following natural infection [97]. As adjuvants are expected to mediate the same consequences as natural infections, quite a number of adjuvant studies have focused on the interaction of adjuvants with DC. Other types of cells have also been described to contribute to adjuvanticity. For example, B cells [98][99], macrophages [100], NK cells and NKT cells [101][102][103] have also been implicated as targets for vaccine adjuvants. Given the accumulating evidence for a functional interplay between mast cells and other immune cells such as DC, T cells and B cells in adaptive immune responses, mast cells have also been implicated in adjuvant functions. Indeed, mast cell activators such as c48/80 have been reported to exert a mucosal adjuvant function [104]. More specifically, c48/80 has been demonstrated to be an efficient adjuvant that mobilizes DC to the draining lymph nodes through production of TNF. Successful vaccinations in several animal infection models using c48/80 as adjuvant have now been reported [105][106][107][108][109]. Retention of c48/80 and antigen on mucosal surfaces by chitosan-based nanoparticles can further promote mucosal immunisation [110]. IL-1 family cytokines such as IL-1, IL-18 and IL-33 have been shown to exert an adjuvant function capable of augmenting protection against influenza virus infection [111].
Interestingly, the effect of IL-18 and IL-33 is suggested to be mast cell-dependent [111], which is not surprising as both cytokines can activate mast cells, resulting in proinflammatory cytokine production. IL-18 together with IL-2 is potent in expanding the mucosal mast cell pool and the production of mMCP-1, which is critical for parasite expulsion [112]. IL-33 has been described as a danger signal that can alert mast cells [113], and keratinocyte-derived IL-33 can stimulate mast cells to produce TNF and IL-6, cytokines critical for defense against herpes simplex virus infection [114]. Polymyxins, which are clinically approved antibiotics, can activate mast cells and boost immunisation [115]. In a QuilA-adjuvanted cattle vaccination model for protection against nematode infection, mast cells are most likely involved in the mechanism of adjuvanticity through the production of granzyme B and granulysin [116]. Synthetic particles harboring TNF, mimicking mast cell granules, have been reported to be a powerful adjuvant in a mouse model of influenza [117]. Furthermore, it has been suggested that the gold-standard mucosal adjuvant CT may stimulate the release of IL-6 from mast cells, boosting humoral immune responses [118]. Although bacterial enterotoxins have been demonstrated experimentally to be powerful mucosal adjuvants, these substances are precluded from clinical use because of their toxicity and, hence, have very limited applicability in human vaccines [119][120]. Extensive studies have, however, focused on the detoxification of these molecules using various approaches. For example, site-directed mutagenesis has generated detoxified mutants, such as CT112K, LTG192, LTR72, or LTK63, with little or no enzymatic activity but with retained adjuvant function in experimental models [121][122][123][124]. A drastically different approach was taken by Lycke and co-workers, who developed an adjuvant based on the intact CTA1 molecule without the B-subunit. CTA1 is genetically linked to a dimer of the D-fragment of Staphylococcus aureus protein A, forming the CTA1-DD adjuvant. CTA1-DD thus retains the adjuvant function while the molecule cannot bind to GM1-ganglioside, rendering it nontoxic [125]. In contrast to CT, intranasal administration of CTA1-DD results in neither inflammation nor accumulation in nervous tissues, as is found with CT or LT [126]. The adjuvanticity of CTA1-DD has been well documented in various infectious disease models, including Chlamydia trachomatis, influenza, HIV, Mycobacterium tuberculosis, and Helicobacter pylori [127][128][129][130][131][132][133]. The ADP-ribosyltransferase activity is central to the adjuvant effect [134]. In addition, mechanistic studies have identified several mechanisms of action that may explain the adjuvanticity of CTA1-DD in vivo. As the DD domain binds to all immunoglobulins, CTA1-DD can target B cells through the B cell receptor, i.e. surface-bound immunoglobulins, and promote B cell activation and germinal center development [135]. Moreover, the adjuvant also enhances T cell-independent immune responses [135]. Importantly, CTA1-DD stimulates germinal center formation, effectively generating long-lived plasma cells and long-lived memory B cells [136]. Furthermore, follicular DC and complement activation have also been found to be essential elements for the function of this adjuvant [137].
In contrast to intact Staphylococcus aureus protein A, which can activate mast cells [40,42], CTA1-DD fails to activate mast cells [138]. However, as the double D domains derived from protein A have binding sites for immunoglobulins, CTA1-DD can bind all immunoglobulins including IgG [139][140]. We demonstrated that CTA1-DD and IgG may form complexes that are able to activate mast cells through Fcγ receptors, resulting in degranulation and the production of TNF and IL-6. Intranasal immunisation with CTA1-DD and IgG as an adjuvant can enhance antigen-specific immune responses compared with CTA1-DD alone. Importantly, this enhancement is dependent on mast cells [138]. Furthermore, we demonstrated that only CTMC, but not MMC, can be activated by immune complexes composed of CTA1-DD and IgG. This effect is mediated by FcγRIIIA, an activating receptor that is confirmed to be expressed only on CTMC. Indeed, CTMC are found in the nasal submucosa, and these cells have been demonstrated to express FcγRIIIA [13]. As MMC are not activated in response to stimulation by IgG immune complexes because they lack FcγRIIIA [13], it was intriguing to investigate whether MMC could nevertheless contribute to adaptive immune responses, perhaps through another mechanism. We have recently reported that IgG immune complex-primed MMC can mediate enhanced antigen-specific activation of T cells, possibly providing a cross-presentation mechanism to boost mucosal vaccination [141]. In practical immunisation, this may happen when IgG immune complex-containing vaccine formulations are used. The development of adjuvants that enhance the potency of subunit vaccines formulated for administration through mucosal routes is much desired. Dissecting and revealing the molecular mechanisms through which mast cells precisely control adaptive immune responses to combat microbial infections may have implications for the rational design of mucosal vaccine formulations. We propose that IgG immune complex-induced mast cell activation may be considered as one of the components of mucosal vaccine adjuvants. Fig. 1 summarizes the current knowledge regarding strategies for the selection of vaccine formulations that target mast cells to enhance immune responses. One of the challenges associated with mast cell-mediated immune enhancement, of course, lies in overcoming the complexity of safety issues in the clinical development of such vaccines. The constant threats posed by infectious diseases over millions of years may have exerted evolutionary pressure to retain mast cells in humans, despite their adverse properties, e.g. in causing allergy, so that their beneficial functions in host defense can be exploited. Our immune system has evolved mechanisms to balance the positive and negative contributions of mast cells to health. It is worth exploring strategies to make use of the adjuvant properties of mast cells to provide high-quality vaccination while minimizing any health-compromising factors.
Design, Development and Delivery of Rasagiline Mesylate from Monolithic Drug-in-Adhesive Matrix Patches

The purpose of this research was to prepare and evaluate monolithic drug-in-adhesive patches of Rasagiline Mesylate (RM) containing a penetration enhancer and having a seven-day wear property. Preformulation studies, including solubility in permeation enhancers, a compatibility study, a transmission study, an uptake study and a crystallization study of Rasagiline Mesylate in various pressure-sensitive adhesive (PSA) polymers, were performed. The transdermal system was prepared by the solvent casting method. The effects of various permeation enhancers (propylene glycol, oleic acid, isopropyl palmitate, and lauryl lactate) on the ex-vivo transcutaneous absorption of Rasagiline Mesylate through human cadaver skin were evaluated in a modified Franz diffusion cell system. Ex-vivo transcutaneous absorption of the prepared transdermal patch was measured using different concentrations of lauryl lactate (3%, 5%, and 7%). In-vitro adhesion testing (peel, tack, shear, etc.) was performed on patches of different dry GSM (grams per square meter), namely 80, 100 and 150 GSM. The final transdermal patches were tested for appearance, weight of matrix, thickness, % assay of drug content, in-vitro adhesion, cold flow and ex-vivo skin permeation. Based on the crystallization study and adhesion testing, Durotak-4098 (14% drug concentration) was selected as the pressure-sensitive adhesive. The patch containing lauryl lactate showed the highest cumulative permeation compared with the other permeation enhancers, and the patch containing 5% lauryl lactate showed the greatest transdermal flux (2.36 µg/cm²/hr). The patch with 150 dry GSM showed promising adhesion properties. The backing film Scotchpak 9723 and the release liner Saint Gobain 8310 were selected based on the transmission and uptake studies of Rasagiline Mesylate. The stability study indicates that the developed formulation remains stable. In conclusion, the present research confirms the practicability of developing a Rasagiline Mesylate transdermal system.

Introduction

A transdermal system (TDS) is intended to release the drug into the systemic circulation through the skin in order to treat disorders at locations far away from the site of application. TDS of defined shape and size are available for systemic action and are intended for the treatment or prevention of systemic disease. Drug released from the TDS is absorbed through the skin into the blood circulation and reaches the target tissues to achieve a therapeutic effect. [1] The TDS has many merits over conventional dosage forms: it improves bioavailability, enhances therapeutic efficacy, avoids the limitations of the first-pass effect, and maintains a steady plasma level of the drug. [2,3] There are mainly two types of TDS: matrix type and reservoir type. A matrix-type TDS contains the drug in a pressure-sensitive adhesive, whereas a reservoir-type system may contain the drug in solution or in a PSA, with a rate-controlling membrane managing the delivery of the drug. [4] RM is used to treat early Parkinson's disease and is an irreversible inhibitor of monoamine oxidase type B (MAO-B). [5] RM was selected as a model drug for TDS development because it has low oral bioavailability (36%) and undergoes extensive first-pass metabolism. RM has the ideal properties for developing a TDS, namely a low dose (1 mg/day), a short half-life (3 hours), a low molecular weight (267.35 g/mol), and a partition coefficient log P of 2.24.
[6] The basic components of a TDS are a backing film to support the adhesive matrix, PSAs as a matrix to control release and provide adhesion, the rate-controlling membrane, and a release liner to protect the patch during storage. [7,8] In the present research study, an industrial approach was adopted for the development of the TDS. Various preformulation studies, such as solubility of the drug in various solvents and permeation enhancers, transmission and uptake studies, a drug-excipient compatibility study and a crystallization study in different types of PSA, were performed. Selection and optimization of the PSA and permeation enhancers were based on in-vitro adhesion testing and penetration studies on human cadaver skin using a modified Franz diffusion cell system. A total of thirteen different formulations were prepared to select and optimize the PSA, the permeation enhancer, and the dry GSM of the matrix. The final transdermal patches were tested for appearance, weight of matrix, thickness, percentage assay of drug content, in-vitro adhesion, cold flow, ex-vivo skin permeation, and stability.

Fabrication of Rasagiline Mesylate Drug-in-Adhesive Patches

Precisely weighed RM was dissolved in ethyl acetate under mixing. A permeation enhancer was added and mixed for 10 minutes. The PSA was then added slowly under mixing, and the blend was mixed for a further 10-15 minutes to ensure uniform mixing of all ingredients. The blend was coated at uniform thickness onto the silicone-coated surface of the release liner to achieve 150 dry GSM and dried in a Mathis-I lab coater for 20 minutes at 80°C. Dried sheets were laminated with the backing film using a benchtop laminator. The prepared laminates were die-cut using a 15 cm² die. Die-cut patches were packed in paper pouches and stored at controlled room temperature until further evaluation (Table 1).

Preformulation Studies

Solubility studies in different permeation enhancers were carried out by preparing a saturated solution of the drug in the different permeation enhancers and solvents. In the drug-excipient compatibility study, the drug was mixed with various excipients (PSA, permeation enhancer) in a 1:1 ratio. The mixtures were kept in glass vials, then properly capped and sealed with Teflon tape. Two vials of each mixture were stored at controlled room temperature (25°C) and in a hot air oven at 40°C for a one-month period. A transmission study was done in order to check the permeability of the drug through the backing film and liner. This study is critical for the selection of the backing and liner: if the drug is permeable through either of these, it may cause formulation error, and the drug content of the final formulation may decrease as some drug may leach or transmit through the backing or liner during formulation. An uptake study was done in order to determine the percentage of drug absorbed by the backing and liner. For the transmission study, a saturated solution of RM was prepared in lauryl lactate. Several pieces of backing film/release liner were cut to a size of 12.56 cm² and mounted on the Franz diffusion cell. The donor cell was then mounted above, and the two cells were clamped tightly so that the backing film/release liner mounted on the receptor was sandwiched between the donor and receptor cells. The receptor cell of the diffusion apparatus was filled with phosphate-buffered saline pH 7.4, and 2 mL of a saturated solution of RM was placed in the donor compartment using a syringe. A magnetic stirrer was used to keep the receptor medium under stirring, and the temperature was maintained at 40°C.
The receptor phase was sampled initially and at 24, 48, 96, 120, and 168 hours, and the withdrawn samples were analyzed for RM content. For the uptake study, a saturated solution of RM was prepared in lauryl lactate. Several 1 cm² pieces of backing film/release liner were cut, immersed in the RM-lauryl lactate solution and stored at 40°C/75% RH and 25°C/60% RH. The backing film/release liner was sampled at intervals of 2, 4, and 6 weeks; each sample was wiped carefully and analyzed for RM content. For the crystallization study, patches were prepared with different concentrations of API and stored under different stability conditions as follows: freeze-thaw cycling, accelerated stability, 90% RH, and controlled room temperature for 60 days. The final transdermal patches were evaluated for appearance, weight of matrix, thickness, % assay of drug, in-vitro adhesion, cold flow, and ex-vivo skin permeation.

Appearance

The prepared TDS were inspected visually for appearance.

Weight of Matrix

The total weight of each patch was measured individually, the weight of the matrix was calculated by subtracting the tare weight of the release liner and backing film, and the mean value was calculated. [9,10]

Thickness of Patch

The thickness of the patches was measured using a micrometer at five different places on each patch, and mean values were calculated. [11]

Rasagiline Mesylate (RM) Content by High-Performance Liquid Chromatography (HPLC)

The RM content was determined by a developed high-performance liquid chromatographic method. [12,13] Methanol and phosphate buffer (pH 3.0) were used as the mobile phase. The following chromatographic conditions were used for the analysis of RM content in the TDS.

In-vitro Adhesion Testing (Peel, Tack, and Shear)

The in-vitro adhesion properties of a patch are characterized by peel strength (the minimal force required to remove the patch from its surface), shear strength (the resistance the patch offers against flow/detachment) and tack strength (the ability of the patch to form a bond with the contact surface under light pressure and brief contact). [14] Tack is the initial physical bonding of the patch onto the skin, which normally occurs within a few seconds. Tack was measured using a Lloyd (AMETEK) instrument. A patch was cut to a size of 1 inch square, the release liner was removed, and the patch was applied to the test panel such that the adhesive side faced upward toward the hole. The machine was started at a speed of 610 mm/min to bring the probe into contact with the adhesive side of the patch. After a contact time of 1 second, the probe was removed from the adhesive at the same speed, and the maximum force required to remove the probe from the patch was noted. Shear was measured using a Chem Instruments 10-bank shear tester. A patch was cut to a width of 0.5 inch and adhered to the stainless steel plate up to the 0.5-inch mark of the release liner. A roller was rolled over it, and the assembly was allowed to stand for 15 minutes. The other end was attached to a hook, and after the required dwell time a 100 g weight was hung on the hook. The time required for the patch to fall was measured.

Cold Flow Study of Rasagiline Mesylate Patch

Cold flow is the movement of adhesive beyond the borders of a TDS during storage. Cold flow was measured microscopically by viewing the patch under a microscope initially and after 1 month of storage at 40°C/75% RH. [15][16][17]

Ex-vivo Permeation Study of Rasagiline Mesylate Patch

Hairless human cadaver skin was used for the permeation studies.
Dermatomed human cadaver skin was received from the vendor. The stratum corneum was separated from the dermatomed skin by immersing the skin in hot water at 55 ± 5°C for 2-3 hours and gently separating it with a cotton swab. The separated stratum corneum was dried overnight at controlled room temperature and then stored at 2-8°C until further use. The ex-vivo permeation study was performed using a modified Franz diffusion cell. The excised human cadaver skin was cut to a size of 2.0 cm² using a die cutter and placed between the receptor and donor compartments so that the dermal side of the skin faced the receptor fluid. The release liner was removed from the patch, and the patch was applied to the skin. The specified volume of diffusion medium was filled into the Franz diffusion cell. The Franz diffusion cell was placed on a magnetic stirrer, a small magnetic bead was added, and the contents were stirred at 500 RPM to keep them well mixed. Aliquots of diffusion medium were removed at the specified sampling points and filtered through Whatman® grade 41 filter paper. The amount of drug was determined by HPLC using the method discussed earlier. The same volume of fresh diffusion medium was returned to the diffusion cell after each withdrawal to maintain sink conditions. The study was continued up to 168 hours. Each study was carried out on three skin samples (n = 3), and the mean flux was calculated. [18][19][20][21]

Stability Study

The Evaluation of Rasagiline Mesylate Transdermal Patches

Appearance

All patches of the different formulations were rectangular with rounded edges, comprising a rectangular transparent release liner with a beige-colored backing. Upon removal of the transparent release liner, a uniform matrix layer on the backing is visible, substantially free of external particulate matter and bubbles.

Weight of Matrix Uniformity

The weight of matrix of all formulations is uniform; the values vary from 121.6 ± 2.4 mg (80 GSM) to 227.0 ± 1.1 mg (150 GSM) (Tables 2 and 3).

Thickness of Patch

The thickness measurements confirmed uniformity of thickness in all developed formulations, with values ranging from 241.2 ± 1.3 µm (80 GSM) to 264 ± 1.4 µm (150 GSM) (Tables 2 and 3).

Rasagiline Mesylate Content by High-Performance Liquid Chromatography

The values range from 97.9 ± 1.4% to 103.6 ± 1.1% (Tables 2 and 3). From the thickness of the matrix, the weight of the matrix and the drug content, it was confirmed that all formulations are uniform and that the proposed manufacturing method gives reproducible results.

In-vitro Adhesion Testing (Peel, Tack, and Shear)

Effect of PSAs on adhesion properties: The data in Table 2 show that formulation FD-3, which contains Durotak 87-4098, has a higher peel value than formulations FD-1 and FD-2, which contain Durotak 6908 and Bio PSA-4202, respectively. Thus, the acrylate polymer is suitable for a long-wear TDS. Formulation FD-2 showed higher tack but low shear, while formulation FD-1 showed low peel and tack and higher shear.

Effect of permeation enhancer on adhesion properties: The data in Table 3 show that as the permeation enhancer level increases, the adhesive forces (peel and tack values) increase and the cohesive force (shear) decreases (FD-8 and FD-9). This indicates that the permeation enhancer has a plasticizing effect due to alteration of the Tg value of the PSA in the system. [22]

Effect of dry GSM on adhesion properties: The data in Table 3 show that as the GSM of the matrix increases, a continuous increase in peel and tack is naturally associated with a decrease in shear adhesion.
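Before turning to the cold flow and permeation results, the sketch below illustrates how cumulative permeation and flux are commonly derived from Franz-cell sampling of the kind described above, applying the usual correction for drug removed with each replaced aliquot. It is only an illustration: the function name and all numerical values (receptor volume, aliquot volume, concentrations) are placeholders, not data from this study.

```python
# Hedged sketch: cumulative permeation (Q, µg/cm²) and interval flux (µg/cm²/h)
# from Franz diffusion cell samples, correcting for the drug removed with each
# aliquot that is replaced by fresh medium to maintain sink conditions.

def cumulative_permeation(times_h, conc_ug_ml, v_receptor_ml, v_sample_ml, area_cm2):
    """Return (cumulative amount per area, flux between successive samples)."""
    q_per_area, flux = [], []
    removed = 0.0                      # µg already withdrawn in earlier aliquots
    prev_t, prev_q = 0.0, 0.0
    for t, c in zip(times_h, conc_ug_ml):
        amount = c * v_receptor_ml + removed   # total drug permeated so far (µg)
        removed += c * v_sample_ml             # drug lost with the current aliquot
        q = amount / area_cm2
        q_per_area.append(q)
        flux.append((q - prev_q) / (t - prev_t) if t > prev_t else 0.0)
        prev_t, prev_q = t, q
    return q_per_area, flux

# Illustrative (made-up) numbers: 2.0 cm² skin area, 12 mL receptor, 0.5 mL aliquots
times = [24, 48, 96, 120, 168]          # sampling times, h
concs = [4.0, 9.5, 20.0, 26.0, 40.0]    # measured receptor concentrations, µg/mL
Q, J = cumulative_permeation(times, concs, 12.0, 0.5, 2.0)
print(Q[-1], max(J))                    # cumulative µg/cm² at 168 h and peak flux
```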
Cold Flow Study of Rasagiline Mesylate Patch

As shown in Tables 2 and 3, increasing the tack of a patch increased its cold flow, and increasing the dry GSM of the patch also increased the cold flow. Patches with high tack and low shear values showed the highest cold flow.

Ex-vivo Permeation Study of Rasagiline Mesylate Patch

The ex-vivo permeation study of the formulated transdermal patches was performed through human cadaver skin using a Franz diffusion cell. Maximum thermodynamic activity is achieved at the highest drug concentration in the matrix of the patch. Of FD-1 (2% API), FD-2 (5% API) and FD-3 (14% API), the highest average cumulative amount of RM permeated across the skin over 168 hours was obtained with formulation FD-3 (without any permeation enhancer). The total quantity of RM transported in 168 h was 287.33 ± 13.43 μg/cm², 256.00 ± 11.27 μg/cm², 219.00 ± 1.00 μg/cm² and 206.0 ± 2.65 μg/cm² for lauryl lactate, oleic acid, isopropyl palmitate, and propylene glycol, respectively. The greatest skin flux of RM (2.36 μg/cm²/h, at 12 and 24 hours) was observed for the formulation containing lauryl lactate among the various formulations evaluated in this study under similar experimental conditions. In addition, lauryl lactate gave a higher flux over the 168-hour period than the other three permeation enhancers. Tables 2 and 3 show the penetration rate of RM in the presence of the different permeation enhancers, which were included to enhance drug permeation. RM permeation was significantly enhanced by the addition of permeation enhancers, and among those evaluated the permeation rate decreased in the order lauryl lactate > oleic acid > isopropyl palmitate > propylene glycol (Fig. 5). The data also show that as the dry GSM (matrix thickness) of a formulation increases, the cumulative amount permeated and the rate of drug permeation increase. There is only a slight increase in flux for the 100 GSM formulation compared with 80 GSM, but a significant increase for the 150 GSM formulation. For formulation FD-13, the permeation rate is maintained up to 96 hours and then starts to decrease, whereas for formulations FD-11 and FD-12 the permeation rate exceeds that of FD-13 only up to 12 and 24 hours, respectively, after which it falls to a level similar to that of FD-13. These data indicate that increasing the GSM of the TDS increases the delivery rate at later time points, mostly because the concentration gradient is maintained over the entire period.

Stability Study

A stability study of formulation FD-13 was performed at 25°C/65% RH and 40°C/75% RH storage conditions for up to 3 months. As per the stability plan, patches were withdrawn and tested for appearance, % assay of drug content, related substances, in-vitro adhesion, and cold flow. Stability results at controlled room temperature and under accelerated storage conditions were found to be satisfactory for up to three months.

Discussion

A systematic development approach was followed to develop a drug-in-adhesive rasagiline mesylate transdermal system (ARMTS), with various parameters examined in order to arrive at an effective delivery system. The effects of various formulation factors such as the PSA, skin permeation enhancers, and solubility were examined. Based on the results, development of the RM TDS was successful with the Durotak 87-4098 PSA, which can provide sustained drug delivery over a 168-hour period.
Based on the crystallization study, RM has a higher solubility in the acrylate adhesive than in the polyisobutylene and silicone adhesives; the drug has around 14% w/w solubility in Durotak 87-4098. Release liner SG-8310 and backing film Scotchpak 9723 were selected based on the uptake and transmission studies. Different permeation enhancers were evaluated, and the formulation with lauryl lactate showed a higher flux than those with the other permeation enhancers (oleic acid, isopropyl palmitate, and propylene glycol). The formulation with 5% lauryl lactate shows appropriate flux and adhesion properties (peel, tack, and shear) for a 7-day wear period. Matrix thickness also plays an important role in determining the in vitro adhesion characteristics; as matrix thickness (GSM) increases, a continuous increase in peel is accompanied by a reduction in shear adhesion. The ex-vivo permeation study showed that the enhancer level and the GSM of the formulation are directly proportional to the cumulative drug amount permeated and the skin flux. The formulation with Durotak 87-4098 containing 14% drug and 5% lauryl lactate as penetration enhancer showed a good permeation rate (2.36 ± 0.07 µg/cm²/hr), appropriate adhesion properties (peel, tack, and shear) for a 7-day wear period, low cold flow, and satisfactory stability for up to 3 months at 25°C/60% RH and 40°C/75% RH.
Classifying AGN by X-ray Hardness Variability. The physics behind the dramatic and unpredictable X-ray variability of Active Galactic Nuclei (AGN) has eluded astronomers since it was discovered. We present an analysis of Swift XRT observations of 44 AGN with at least 20 Swift observations. We define the HR-slope as the change of Hardness Ratio (HR) with luminosity ($L$). This slope is measured for all objects in order to: 1. classify different AGN according to their HR–HR-slope relation, and 2. compare HR-$L/L_\mathrm{Edd}$ trends with those observed in X-ray binaries for the 27 AGN with well measured black hole masses. We compare results using a count-based HR definition and an energy-based HR definition. We observe a clear dichotomy between Seyferts and radio loud galaxies when considering the count-based HR, which disappears when considering the energy-based HR. This, along with the fact that no correlation is observed between the HR parameters and radio loudness, implies that radio loud and radio quiet AGN should not be discriminated by their HR behavior. We provide schematic physical models to explain the observed transitions between energy-defined HR states. We find that Seyferts populate the high, hard phase of the HR-$L/L_\mathrm{Edd}$ diagram, as do three radio loud objects, while two LINERs populate the low, soft part of this diagram. Finally, radio loud objects are concentrated around small positive HR-slopes, while Seyferts track across the HR phase diagram, which may provide clues to the geometry of the corona. INTRODUCTION. The X-ray emitting corona of Active Galactic Nuclei (AGN) has been studied extensively, yet many questions remain. Open questions range from the heating mechanism that creates the ∼10⁹ K source, through the geometry and location of this plasma in the near-AGN environment, to the explanations of the hourly and daily variations in both X-ray flux and spectral shape. Several components are identified in the X-ray spectra of AGN, tying the spectral shape to physical phenomena. Most spectra have a dominant powerlaw, attributed to a corona reprocessing seed accretion disk photons into X-rays through Comptonization. Two more components determine the hardness of a spectrum: a soft excess below 1 keV whose origin is still debated (Done & Nayakshin 2007; D'Ammando et al. 2008; Boissay et al. 2016), and a hard component attributed to X-rays reflecting off the disk, which manifests primarily in a ∼20-100 keV Compton hump (George & Fabian 1991); only its tail is observed below 10 keV. Beyond these prevalent components, suppression of the soft band is observed in some AGN, attributed to photoelectric obscuration. Examples are Seyfert 2s (Kinkhabwala et al. 2002) and transient obscurers in Seyfert 1s, observed only recently at high resolution (Kraemer et al. 2005; Kaastra et al. 2014; Mehdipour et al. 2017), but which may not be rare (Markowitz et al. 2014). Many observations focus on high resolution spectroscopy in order to extract accurate measurements of the X-ray emission. A complementary method for probing spectral variability is to use broadband spectra along with extensive monitoring in order to study changes in the properties of the corona. The Hardness Ratio (HR) provides a quantitative description of the spectral shape: HR = (H − S)/(H + S), (1) where H and S are the count rates of a given telescope in the defined hard (H) and soft (S) bands. Through analysis of the HR, along with information on its variability, more insight can be garnered on the X-ray emitting mechanisms.
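Since eq. (1) is evaluated per observation throughout the analysis that follows, a minimal helper is sketched below; the Gaussian error propagation mirrors what is described later for the band sums, and the example numbers are arbitrary rather than taken from the sample.

```python
import numpy as np

def hardness_ratio(H, S, dH=0.0, dS=0.0):
    """HR = (H - S) / (H + S) for hard/soft band rates or fluxes, with
    first-order error propagation:
      dHR/dH = 2S/(H+S)^2,  dHR/dS = -2H/(H+S)^2
    """
    H, S = np.asarray(H, float), np.asarray(S, float)
    total = H + S
    hr = (H - S) / total
    dhr = np.sqrt((2.0 * S * dH) ** 2 + (2.0 * H * dS) ** 2) / total ** 2
    return hr, dhr

# Example: arbitrary hard and soft band values with 1-sigma uncertainties
hr, dhr = hardness_ratio(0.012, 0.030, dH=0.001, dS=0.002)
print(f"HR = {hr:.2f} +/- {dhr:.2f}")
```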
In addition to illuminating the physics of the X-ray source, hardness surveys also probe the further environment of the AGN such as absorbers (Suchkov et al. (2006) use HR to identify absorbed sources). HR and BHXRBs When discussing HR, it is interesting to see whether AGN cycle through the same phase states as those observed in Black Hole X-ray Binaries (McHardy 2006). A comprehensive overview of BHXRBs by Remillard & McClintock (2006) covers emission states of BHXRBs in Section 4. One of the main results presented in Remillard & McClintock (2006) is the BHXRB spectral cycle, transitioning through a soft, thermally dominated state, a steep powerlaw dominated state, and a hard powerlaw state. These are associated with the emergence and dissipation of jets, corroborated by simultaneous radio observations. Fig. 4 in Fender et al. (2005) demonstrates this cycle in a HR phase diagram. Spectral hardening is observed with a steep rise in intensity associated with a jet, followed by a shock softening of the spectrum and finally the jet dissipates and intensity drops to the quiescent state. The HR is used to classify these states quantitatively and in a model-independent manner. Both Remillard & McClintock (2006) and Fender et al. (2005) describe the soft part of the spectrum as a hot accretion disk, emitting thermally at 1 keV. This component is then shadowed by optically thick, hard, and non-thermal X-ray emission associated clearly with radio emission, the hallmark of a jet. Following the increasing intensity and the disappearance of the disk component, the hard part becomes more optically thin allowing the soft thermal component, the disk, to shine through. This happens while intensity remains maximal, exhibiting both non-thermal and disk components. Fender et al. (2005) associate this state with a second, faster jet. In any case the powerlaw in this state is steep, similar to that observed in the soft state, perhaps indicative of a bright disk. Wu & Gu (2008) measure a break in the LX /L Eddpowerlaw slope relation (where LX and L Edd are the X-ray and Eddington luminosities) in 6 BHXRBs, pointing to a possible transition between a radiatively inefficient accretion flow to standard disk accretion. This gives a connection of the BHXRB observed spectral states to physical models explaining these transitions, and associating the accretion behavior with that of the HR. HR and AGN An important connection between BHXRBs and AGN has been identified by McHardy (2006) who finds a break in the power spectral density in both AGN and BHXRB. He ties this time-scale with the size of the accretion disk in both cases, which correlates with the black hole mass for stellar and AGN scales. Searches for HR states in AGN analogous to BHXRB have been carried out for 20 years. McHardy et al. (1999) measure softening of spectral slope with intensity in two AGN, MCG 6-30-15 and NGC 5506, drawing an analogue with BHXRB. Two more examples are Emmanoulopoulos et al. (2012) which find a harder-when-brighter behavior for NGC 7213, and Mallick et al. (2017) who find a softer-whenbrighter behavior for Ark 120. Unlike BHXRBs, due to the longer timescale no single AGN can been observed to transition in a full HR cycle. Gu & Cao (2009) showed in a large compilation of low luminosity AGN that the powerlaw slope flattens with L Bol /L Edd between AGN (L Bol is the bolometric luminosity). The authors associate this harder-when-brighter behavior with the hard phase of the BHXRB phase diagram. 
Interestingly, when looking at the much more luminous PG quasars with L/L Edd > 0.1, Shemmer et al. (2006) observe a reversal, i.e. softer-when-brighter behavior, again by comparing different AGN. This change of behavior is further observed between the luminous sample of Shemmer et al. (2008) and the low luminosity LINERS presented in Younes et al. (2011). When going into the highest energy regimes, TeV Blazars have been seen in a few works to conform to a harder-when-brighter behavior (Brinkmann et al. 2003;Ravasio et al. 2004;Pandey et al. 2017Pandey et al. , 2018. Recently a comprehensive study by Connolly et al. (2016) measured the relation between the HR behavior of the X-ray spectrum and the intensity of the AGN in a Palomar selected sample of 24 AGN observed using Swift XRT, defining the soft band up to 2 keV, and the hard above. They find that primarily the selected AGN display a harderwhen-brighter behavior, with only 6 showing no correlation or a softening with luminosity, though this may be due to varying absorption. The authors find that low luminosity Seyferts belong in the luminous hard or intermediate part of the BHXRB phase diagram. This is in line with the model of Falcke et al. (2004), who attempt to unify the BHXRB and AGN picture by suggesting low luminosity AGN as the hard state counterpart of the BHXRB, with the spectrum dominated by a non-thermal powerlaw component. HR Caveats One has to be careful when defining the bands for HR. For example, Rani et al. (2017) performed a recent HR study of a few AGN, BL LACs and Seyferts observed by NuStar. They define the soft band as 3-10 keV, and hard up to 79 keV, which is useful for measuring the effects of the Compton reflection bump. They find no correlation of HR with flux. Another example is Sobolewska & Papadakis (2009), who fit powerlaw slopes to the AGN hard (> 2 keV) band observed by RXTE. They find a softer slope with increasing luminosity unlike the Swift sample of Connolly et al. (2016). The difference can be explained by the different choice of bands, where Sobolewska & Papadakis (2009) are mostly sensitive to changes in the Compton reflection bump, and they do measure large reflection factors on average. The main goal in this paper is to characterize AGN through their spectral states. This is done using the HR and its dependence on luminosity in order to identify physical explanations for these states and their transitions. In addition, we consider the AGN sample as a tracer of a single state cycle, and compare it with a single BHXRB cycle. SAMPLE AND DATA REDUCTION Our main goal is to analyze the change of HR with flux. In this work we define Hard (H) as 2-10 keV and soft (S) as 0.4-2 keV in the rest frame. Usually the soft band is defined on Swift XRT observations starting from 0.3 keV. Here we use 0.4 keV to allow for objects z 0.3 to be analyzed consistently. We reduce Swift XRT observations 1 in the PC mode using xselect within the heasoft 2 software package. We extract the source spectrum using a 20 pixel circle around the coordinates taken from Simbad 3 (Wenger et al. 2000), and background using an annulus up to 30 pixels. (ii) At least 20 observations in Swift XRT PC mode have: (a) more than 500 source photons (b) at least one pixel with more than 3 photons Criterion 2(a) ensures that the hard and soft part of the band contain more than 10 photons in all used observations, such that the Poisson uncertainty of each band in each observation is at most 30%, usually much better. 
Criterion 2(b) eliminates high-background and smeared observations. Of these we drop 8 objects with z > 0.3 (the smallest of these has z = 0.36), and are left with 44 objects, as detailed in Table 1. Some of these have had their HR analyzed before; for example, five objects (NGC 3227, NGC 4151, M 87, M 81, and NGC 5548) have been studied by Connolly et al. (2016), and 3C273 has been examined in McHardy (2006). The AGN in Table 1 are mostly Seyfert 1 and BL LAC objects, with 3 Blazars, two LINERs, and two Seyfert 2s. These span orders of magnitude in distance, luminosity, L/L_Edd, and X-ray radio loudness (L_R/L_X). That said, these objects do not constitute a statistically well defined sample, since they have been selected for extensive monitoring with Swift for different reasons. These are objects that are interesting to the community, and their biases include being among the brightest and/or nearest AGN, perhaps selected through their radio activity, or thought to be highly variable. Nonetheless, the wide range of AGN parameters, their variability, and the fact that the sample includes a significant population of both radio loud and radio quiet AGN make it interesting for analyzing spectral variability in AGN. Initial look - Fvar. In the final column of Table 2 we present the excess variance parameter (Markowitz & Edelson 2004): F_var = sqrt(σ² − ⟨σ_i²⟩) / ⟨C_i⟩, (2) where the lightcurve variance is σ² = ⟨C_i²⟩ − ⟨C_i⟩², σ_i² are the count rate variances of each individual observation (i), representing its uncertainty, C_i are the individual observation count rates, and angle brackets (an overline in the original notation) denote an average across all observations. Fvar is a measure of how much an object varies in excess of the observational uncertainty. The sample spans values of Fvar from 7% to 73%. In Fig. 1 the soft band Fvar is plotted against the hard band Fvar, dividing the AGN into two classes. Moreover, these objects are distinct to begin with, as the AGN above the line are radio loud, and those below are Seyferts. Different processes likely dictate the variability of objects below and above the line, for example jet versus outflow variability. This measurement already yields a separation of these objects through their hardness behavior. Analysis using Fvar is complementary to the following HR analysis, where we study the variability of the hardness state with luminosity. Count HR. In contrast with the common definition of HR, in this work we use flux-based definitions. Instead of considering count rates, in each channel (energy) we divide the count rate by the effective area. This allows for an instrument-independent analysis that can be compared with past and future telescopes. After correcting all incoming photon energies for redshift, we sum count fluxes (counts s⁻¹ cm⁻²) from 0.4 keV to 2 keV and from 2 keV to 10 keV as emitted in the rest frame, corrected for nominal galactic absorption (Wilms et al. 2000) in each channel individually. The flux in each energy channel is calculated as F_i = C_i / (T_i A_i), (3) where C_i are the count rates in the channel, T_i is the galactic transmission at the channel energy as calculated by tbabs⁴ (Wilms et al. 2000; Kalberla et al. 2005) using abundances from Asplund et al. (2009), and A_i is the effective area of the channel. The sums of these below and above 2 keV constitute the soft and hard bands used in eq. 1. Count uncertainties are propagated to the H and S band sums using Gaussian error propagation. We do this for each observation, and plot the HR against the mean count luminosity of the observations, as seen in the two top panels of Fig. 2; the entire sample is available on-line⁵.
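The excess variance of eq. (2) above can be computed directly from a monitoring light curve; the sketch below is illustrative (the array names and values are invented, and this is not the authors' pipeline).

```python
import numpy as np

def fractional_variability(rates, rate_errors):
    """Fvar as in eq. (2): light curve variance in excess of the mean
    measurement variance, normalized by the mean count rate."""
    rates = np.asarray(rates, float)
    errs = np.asarray(rate_errors, float)
    var = np.mean(rates ** 2) - np.mean(rates) ** 2   # sigma^2 of the light curve
    excess = var - np.mean(errs ** 2)                 # subtract mean measurement variance
    if excess <= 0:
        return 0.0                                    # variability consistent with noise
    return np.sqrt(excess) / np.mean(rates)

# Arbitrary light curve (counts/s) with 1-sigma errors
rates = [0.81, 1.10, 0.65, 1.42, 0.97, 1.25]
errors = [0.05, 0.06, 0.05, 0.07, 0.05, 0.06]
print(f"Fvar = {fractional_variability(rates, errors):.0%}")
```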
We next use the Orthogonal Distance Regression (ODR) package (Boggs et al. 1989), as implemented in Python 3 (Scipy⁶), to obtain a best-fit HR-slope. Using an ODR rather than a standard regression relaxes the assumption that HR is the dependent variable, taking into account the uncertainties on both axes. We employ the HR-slope to quantify the behavior of HR with luminosity, and ignore the intersection with the HR axis, as HR is not defined at zero luminosity. The hardness of an object is defined as the HR-mean across all observations. Details of the fit are listed in the 3rd and 4th columns of Table 2. A positive slope indicates a harder-when-brighter behavior, and a negative one softer-when-brighter. While a linear fit is not always a good one, it distinguishes between these two generic scenarios and shows how drastically an object becomes harder-when-brighter or softer-when-brighter. Usually the trend is clear and the HR-slope is inconsistent with 0. In section 3 we discuss the results and these trends. Energy HR. Since the analysis is redshift corrected, we can analyze the HR with a more physical definition of hard and soft. Instead of taking pure counts, for each energy bin we multiply the count flux in Eq. 3 by the rest-frame energy associated with the channel, E_i, and divide by the width of the energy bin, ΔE_i, to obtain the energy flux density: F_E,i = C_i E_i / (T_i A_i ΔE_i). (4) This method, while uncommon, should give a more physical view of the HR behavior, as it represents changes in the energy content of the AGN, which govern the interaction with its surrounding environment. Consequently, this definition yields higher HR values compared to the count-based definition, as more weight is given to the higher energies. Results are given in columns 5 and 6 of Table 2. Two examples are presented in the two bottom panels of Fig. 2, and the complete sample is available on-line⁷. HR BEHAVIOR OF THE SAMPLE. The summary of our main results is presented in Fig. 3. These four plots show all objects on a HR-mean-slope diagram, and are naturally divided by the HR-slope = 0 line. On the right side of each diagram are harder-when-brighter objects, on the left softer-when-brighter. The bottom half contains softer objects, and the upper half harder objects. Two plots are given for each definition of HR, counts (top) and energy (bottom): one focusing on most objects (left), and a zoomed-out plot showing the outliers as well (right). The two LINERs are omitted from these plots due to steep slopes with large error bars (see Table 2). The two solid horizontal lines are the HR values of AGN spectral photon powerlaw indices of Γ = 1.5 (top) and Γ = 2 (bottom). The dashed line shows a Γ = 2 powerlaw absorbed with a neutral column of 10²² cm⁻². Most AGN are contained, as expected, between the two powerlaws, with obscured objects such as NGC 5548 and NGC 1365 near and above the dashed line (right plots). NGC 4151, in the top far-right panel, is a clear obscured outlier. The two top panels are dedicated to the count-based definition of HR. This is the classic value used and, interestingly, when considering all objects it provides a clear dichotomy between Seyferts and radio loud AGN (top left). Radio loud AGN are harder-when-brighter and Seyferts are softer-when-brighter, though perhaps it is more sensible to call the latter behavior harder-when-fainter, as Seyferts often have changing ionized absorbers that can attenuate the soft band counts much more.
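The HR-slope fit described at the start of this section uses orthogonal distance regression; a minimal sketch with scipy.odr and invented data points is given below to make the procedure concrete. This is not the authors' code, and the luminosity units and values are placeholders.

```python
import numpy as np
from scipy import odr

# Illustrative per-observation values: luminosity (arbitrary units), HR, and 1-sigma errors
L = np.array([1.0, 1.4, 1.9, 2.5, 3.2, 4.1])
L_err = 0.05 * L
HR = np.array([-0.42, -0.38, -0.35, -0.30, -0.27, -0.22])
HR_err = np.full_like(HR, 0.03)

def linear(beta, x):
    # beta[0] is the HR-slope; the intercept beta[1] is fitted but not used,
    # since HR is not defined at zero luminosity.
    return beta[0] * x + beta[1]

data = odr.RealData(L, HR, sx=L_err, sy=HR_err)   # uncertainties on both axes
fit = odr.ODR(data, odr.Model(linear), beta0=[0.0, -0.3]).run()

slope, slope_err = fit.beta[0], fit.sd_beta[0]
print(f"HR-slope = {slope:.3f} +/- {slope_err:.3f}")
print(f"HR-mean  = {HR.mean():.3f}")
```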
It may be hard to attribute a physical meaning to the count HR, so we consider an energy-based HR (bottom panels in Fig. 3), and the picture changes somewhat. Seyferts, while maintaining the bottom-left to top-right orientation on the plot, are shifted both in slope and hardness compared to the count diagram. While a hardening of all objects is an obvious consequence of the new energy definition, a steeper HR-slope is not immediately implied. The radio loud AGN, for example, remain centered around HR-slope = 0, remaining harder-when-brighter, though this may be a consequence of being near an HR-slope of 0. Note that NGC 4151 is no longer as peculiar in the bottom right panel, now that the hard counts are given a greater weight in the HR. The plot can be divided into 4 quadrants by the Γ = 1.5 line, with absorbed objects above it, and perhaps even into 6 regions with Γ = 2 as a second horizontal axis. With the energy definition of HR we can attribute a clear physical meaning to each region. Numbering the 4 quadrants from the top right counter-clockwise we have: Q-1 Harder-when-brighter objects that are hard to begin with. Any harder-when-brighter behavior can most easily be attributed to an injection of hot electrons into the X-ray emitting gas, whether through magnetic reconnection in the corona or through the jets attributed to radio loud AGN. The Seyferts in this quadrant may comprise the objects with the most active coronae in the sample. NGC 4151, the outlier in this quadrant, has a completely absorbed soft band, so any change in HR must be attributed to a change in the hard band. Q-2 Hard objects that become softer-when-brighter. Perhaps a better name is harder-when-fainter, as these objects are dominated by variable obscuration (NGC 5548, NGC 3783, NGC 1365). Q-3 Objects with soft emission dominating and softer-when-brighter behavior. This quadrant could fit with the coronal cooling paradigm detailed by Haardt & Maraschi (1991), where a hot corona brightens due to increasing UV disk photons and softens. Q-4 Soft objects with harder-when-brighter behavior. This quadrant is dominated by radio loud AGN, and may be attributed to a jet emerging and dominating a previously quiescent soft emitter, as in the BHXRB picture. Akn 564 is the only Seyfert in this quadrant, which is interesting as it is not associated with any special radio activity. These 4 quadrants are depicted in Fig. 4. The very different emission mechanisms inferred from the full spectral energy distributions of radio loud and radio quiet AGN, along with the fact that the two mix in this diagram, suggest that the two types of objects should not be unified by this diagram. Seyferts span a large portion of the diagram, suggesting that their coronae can be classified according to it, while the radio loud objects are concentrated in the same region, with the mean hardness alone a good classifier of their X-ray spectral behavior. The HR track. In this section we consider only the HR in terms of energy. Looking at Fig. 3(d), since the radio loud objects seem to be classified completely by their mean HR, it makes sense to focus separately on the Seyferts. The sample is small, but there is a possible track in the phase diagram, as shown in Fig. 5. In this figure the first, lower branch is fit with a simple linear regression, yielding a slope of 6.7, R² = 0.73, and a p-value of 0.007. Akn 564, the Seyfert at the bottom right, is much softer than the rest of the Seyfert population and is excluded from this fit.
The track may continue in multiple ways into the absorbed-AGN region in the top left. One option is shown by the dashed lines, which are not fits. NGC 3227 is an outlier with an energy HR-slope of -0.7, not plotted in Fig. 5, and thus the track could be further skewed to the left. In this proposed track the objects become harder-when-dimmer with increasing hardness, until saturation and a migration towards the hard part (right hand side) of the plot. Considering in particular the first, main branch of the track, Compton cooling dominates its beginning, and the branch transitions into coronae dominated by energy injection. Assuming the coronae are similar in nature between Seyferts, such that the heating and cooling mechanisms are ubiquitous, geometry seems a simple explanation for the gradual difference. In this case the softer-when-brighter start of the branch would correspond to objects with coronae mostly above and around the black hole, a la the lamppost picture. In this scenario the corona is exposed to remote and cool parts of the disk. As the track trends harder and harder-when-brighter (right and up along this branch), this could be due to the corona dropping towards the black hole. In this part of the branch, coronal heating becomes significant, dominating the reduced cooling. Variations in flux will then be mostly due to the energetics of the corona itself and the hardening of the Compton scattering. Whether this is a plausible explanation or not needs to be backed up by a dynamical physical model of the corona. Beyond the first branch, the few remaining Seyferts track back (to the left, top of the diagram), as obscuration of the AGN becomes important. This could be due to the hard objects, with a low corona, giving rise to a significant outflow coming from the inner disk. This branch reaches an observed saturation at HR ≈ 0.8, where the soft band is almost completely obscured. The track then turns back to the right, maintaining this saturated mean HR but an increasingly harder-when-brighter behavior, up to NGC 4151. Could there be an AGN cycle? There is a possibility that AGN follow a cycle similar to that of BHXRBs. On the other hand, a simple scaling of the relevant times, such as the viscosity time needed to disperse an accretion ring, which scales with the size of the accreting system and is ∼10⁶ years for AGN (e.g., Duschl et al. 2000), means we will not see such a full cycle. Thus, a statistical approach should be used to try to positively discern any such cycles in AGN. Considering that the overall luminosity of an object speaks both to its size and to its accretion efficiency, perhaps a better way to compare different objects and their HR is through L_X/L_Edd. We present in the top panel of Fig.
6 a plot of the best fit energy HR tracks for the 27 objects that have measured black hole masses (See Table 1). The three top red, dashed lines are radio loud AGN with mass measurements, 3C273, S5 0716+71, and H1426+428. These Viewed this way there does not seem to be very much of a cycle, as most objects populate the same region of the plot, Seyferts and radio loud AGN. This coexistence on the phase diagram hampers a physical distinction between jetted and non-jetted AGN in terms of HR behavior. We compare this plot with BHXRB data from Fig. 2 of Fender et al. (2005), plotted in the bottom panel of Fig. 6. Viewed in this way it would seem that all objects presented in Fig. 6 are in the hard state extending to the intermediate state as defined by Fender et al. (2005). Hardness is defined differently in the two panels. We use the definition of HR from equation 1 with flux calculated from equation 4. Fender et al. (2005) define the X-ray color as the counts in the 6.3-10.5 keV band over the counts in the 3.8-6.3 keV band, though different authors may use different bands. Since BHXRBs are all thought to follow the same track on this diagram, we assume the full HR axis as we defined it, normalized between -1 and 1 (top panel), should match the total X-ray color axis (bottom panel). Note also the different energy bands used to define LX in the two panels. The appearance of M 87 and M 81 in the diagram at low luminosity is interesting as it may provide tentative evidence for AGN in the low state, if there is indeed a cycle. Markoff et al. (2015) show that the same model, scaled, fits emission in both V404 Cyg and M 81, a stellar mass black hole and a super-massive one, both with similar L/L Edd . Not many BHXRBs are observed in such a low accretion state, and this analysis implies that a reverse analogy from LINER AGN may give a better understanding to an extremely low accretion state in BHXRBs. Finally, note rise and reversal of slope in the luminous top of Fig. 6. This tip has been observed in several BHXRBs, including in XTE J1550-564 shown in the bottom panel. GRO J1655-40, which is known for its 2005 flare, displays an even more striking tip (Debnath et al. 2008, Fig. 2). Seyferts in Fig. 6 mostly occupy the hard part of the diagram as also found by Connolly et al. (2016), though now we address energetics as well. This also strengthens the claim by Falcke et al. (2004), that Seyfert AGN are the hard or intermediate counterpart of BHXRBs. On the other hand, the hard state of BHXRBs has more radio emission (associated with a jet) than the soft state, while Seyferts are known to be radio quiet. The disk in AGN emits predominantly in the UV, compared to BHXRBs where it emits as soft X-rays. As a consequence, Körding et al. (2006) uses simultaneous X-ray and UV observations to estimate HR, or a disk-fraction (UV) -luminosity diagram. Their normalized (between 0 and 1) definition captures the hardness state of the entire disk + corona system in a more precise way. Nonetheless, they find low luminosity AGN occupy the hard state of the diagram, as in Fig. 6. A short note on other relations In the course of analyzing the hardness behavior of the sample we attempted to find relations of the quantities measured here, Fvar, HR, and HR-slope with both L/L Edd and radio loudness (LR/LX ). There is only a weak softer-whenbrighter trend of mean HR with LX /L Edd seen only in Seyferts, see Fig. 6. This lack of correlation also holds for the HR slopes. 
Both of these are true when considering either counts or energy for the HR. Finally, none of these quantities show any correlation with X-ray radio loudness. SUMMARY AND CONCLUSIONS We analyze 44 AGN observed 20 times and above with Swift XRT, and measure their HR when considering both count and energy definitions. All HR definitions are flux based (cm −2 s −1 ) and consistently use the same hard (2-10 keV) and soft (0.4-2 keV) rest frame bands in all AGN. It is important that the soft band encapsulates the soft X-ray excess observed in many AGN, below 2 keV. These definitions are instrument-independent and can be compared with measurements in other X-ray instruments. We find that using counts HR provides a clear separation of the harder-when-brighter radio loud and the softer-when-brighter radio quiet AGN. The radio loud X-ray variability is dominated by the hard band, in contrast to Seyferts where it is dominated by the soft band, as evident by comparing Fvar in the hard and soft bands. When using energy flux in the HR definition this dichotomy disappears, and radio loud AGN are mixed with Seyferts. Radio loud AGN remain harder-when-brighter or consistent with a flat change of spectrum (see also Brinkmann et al. 2003;Ravasio et al. 2004;Pandey et al. 2017Pandey et al. , 2018, while Seyferts track back and forth across the HR phase diagram (Fig. 3 and 5). This energetic analysis implies radio loud and radio quiet AGN should not be discriminated by their HR behavior. This is expected as the physical origin of the X-ray emission is likely different in the two populations, as the X-ray emission of radio loud AGN is dominated by a jet. This is also seen in Trichas et al. (2013) and Svoboda et al. (2017), who through simultaneous observations in UV and X-ray find no true dichotomy of radio loud and radio quiet AGN in the HR diagram (or equivalent), with all objects populating the hard and luminous part of the diagram. We do not have a good physical explanation for the dichotomy observed when considering count HR (Upper left panel of Fig. 3). Considering energetics allows us to attribute physical scenarios to different regions of the HR phase diagram, such that harder-when-brighter objects can be considered objects with active and variable coronae, and soft, softerwhen-brighter objects have Compton cooling coronae. The HR behavior trend can be interpreted in terms of the location of the corona in the Seyfert system above the disk plane. While the Seyferts are complex and show diverse behavior, radio loud AGN are completely characterized by their mean HR as might be expected for jet dominated objects. We attempt to place the 27 AGN with measured black hole mass on a HR-LX /L Edd diagram (Fig. 6), comparing with the similar BHXRB diagram. While all Seyferts populate the hard, luminous state of the diagram, three radio loud AGN are observed inseparably from the Seyferts, somewhat hampering claims that the two can be separate branches of a unified cycle. The only tentative evidence for a cycle are two LINERs, M 81 and M 87, that are observed in the soft, dimmer part of the diagram, and may provide a counterpart to quiescent BHXRBs, if considered with the Seyferts in a unified scheme. Finally, we suggest a possible track on the HR-HR-slope phase diagram for Seyferts, when defined in energy (Fig. 5). This track may describe a transition from coronae above their host black hole to coronae which compromise the inner part of a thick disk. 
The present analysis shows that the fast changes on daily and shorter timescales of the flux and spectral shape of the X-ray emitting region of AGN provides a window to the nature of these coronae. Future works need to populate the HR phase diagram with a statistically complete sample of AGN, that is not X-ray selected, and test the hypothesis of a HR track for Seyferts.
Prevalence of poor and rapid metabolizers of drugs metabolized by CYP2B6 in North Indian population residing in Indian national capital territory Identification of poor and rapid metabolizers for the category of drugs metabolized by cytochrome P450 2B6 (CYP2B6) is important for understanding the differences in clinical responses of drugs metabolized by this enzyme. This study reports the prevalence of poor and rapid metabolizers in North Indian population residing in the National Capital Territory. The prevalence of poor and rapid metabolizers was determined in the target population for the category of drugs metabolized by CYP2B6 by measuring plasma bupropion, a drug metabolized by CYP2B6, and its metabolite. Bupropion (75 mg) was administered to 107 volunteers, and the drug (bupropion) and its metabolite (hydroxybupropion) were determined simultaneously by LCMS/MS in the plasma. CYP2B6 activity was measured as hydroxybupropion/bupropion ratio, and volunteers were categorized as rapid or poor metabolizers on the basis of cutoff value of log (hydroxybupropion/bupropion). Significant differences were observed between the mean metabolite/drug ratio of rapid metabolizers (Mean = 0.59) and poor metabolizers (Mean = 0.26) with p<0.0001. Results indicate that 20.56% individuals in the target population were poor metabolizers for the category of drugs metabolized by CYP2B6. Cutoff value defined in this study can be used as a tool for evaluating the status of CYP2B6 using bupropion as a probe drug. The baseline information would be clinically useful before administering the drugs metabolized by this isoform. Introduction Human cytochrome P450 2B6 (CYP2B6) is involved in the biotransformation of a variety of clinically important drugs such as the antiretroviral nevirapine (NVP) and efavirenz (EFV), which are used to treat AIDS and/or stop the spread of HIV infection (Erickson et al. 1999;Ward et al. 2003), antimalarial drug artemisinin (Simonsson et al. 2003;Mehlotra et al. 2006) and other drugs including cyclophosphamide, tamoxifen, diazepam, and bupropion (Lang et al. 2001;Wang and Thompkins 2008). However, the rates with which these drugs are metabolized vary considerably in individual hepatic microsomes, and this variation is believed to be caused by CYP2B6 isoforms, besides the environmental factors such as the enzyme inducers. Clinical importance of genetic variations and role of ethnicity of CYP2D6, CYP2C19, CYP2C9, and CYP2D6 are well known (Adithan et al. 2003;Anitha and Banerjee 2003;Kumar et al. 2010;Lamba et al. 1998a, b) but CYP2B6 has only recently been recognized to code for a highly variable enzyme of potential clinical importance (Lang et al. 2001;Lamba et al. 2003). More than 100 DNA variations have been reported in CYP2B6 gene, and many of them show extensive linkage disequilibrium giving rise to distinct haplotypes. The spectrum of functional consequences of these variations is wide and includes null alleles with no detectable function and/or expression (alleles CYP2B6*8, *12, *15, *18, *21), alleles with partially reduced function/expression (CYP2B6*5, *6, *7, *11, *14, *19, *20, *21) (Lamba et al. 2003; and alleles with increased expression (CYP2B6*22) (Zukunft et al. 2005). Clinical relevance of CYP2B6 variation has been demonstrated for the anti-HIV drug efavirenz. Common clinical practice of administering the same dose to all patients leads to profound differences in drug plasma concentration, which is correlated with patient genotype (Tsuchiya et al. 2004;Novoa et al. 
2005). Patients with high drug concentrations are at risk of developing concentration related central nervous system toxicity, including insomnia, fatigue, and headache, which often lead to discontinuation of therapy. Thus, for a drug such as efavirenz, dose adjustment based on CYP2B6 genotype could prevent administration of too-high doses, and increase the safety and efficacy of therapy. Further, CYP2B6 variant genotyping at baseline may allow clinicians to identify patients who are at risk of treatment failure or drug toxicity Ramachandran et al. 2009). Some of these variations are rare, but many are common, with allele frequencies between 10% and almost 50%, depending on the population Solus et al. 2004). Ethnic or racial interindividual CYP2B6 polymorphism in various populations has been reported in Caucasians (Lang et al. 2001), Japanese (Hiratsuka et al. 2002 andHiratsuka et al. 2004), African-American-Hispanic (Lamba et al. 2003;Hesse et al. 2004), Korean (Cho et al. 2004), Mongolian (Davalkham et al. 2009), Spain , and South Indians (Ramachandran et al. 2009), but not in North Indian population, and, hence, CYP2B6 was selected in this study. Aim of the study This study was aimed at to find out the prevalence of poor and rapid metabolizers for the category of drugs metabolized by CYP2B6 in the target population by measuring plasma bupropion, a drug metabolized by CYP2B6, and its metabolite. Clinical study Study protocol and corresponding informed consent form (ICF) were reviewed by the Institutional Review Board, and procedures were in accordance with the Helsinki Declaration of 1975, as revised in 2000. Subjects were informed before initiation of the study through an oral presentation regarding the purpose of the study, procedures to be carried out, potential hazards and rights of the subjects. Subjects (170) were selected randomly from the volunteer bank of clinical pharmacology unit of Ranbaxy Laboratories Limited. The volunteer bank comprises of healthy volunteers from the Indian National Capital Region (INCR), which includes the metropolitan area encompassing the entire national capital territory (Delhi) and urban areas of neighboring states of Haryana, Uttar Pradesh and Rajasthan. Subjects were selected on the basis of inclusion and exclusion criteria after obtaining written informed consent. Medical histories and demographic data were recorded. Each subject underwent physical examination and laboratory tests of hematology, hepatic and renal function. Hematological parameters were analyzed on fully automated five part differential count autoanalyzer, Sysmex XT 1800xi, procured from Transasia Co. The biochemical parameters, which included plasma glucose, serum blood urea nitrogen, serum creatinine, serum total bilirubin, serum alkaline phosphatase, serum alanine and aspartate aminotransferases, serum cholesterol, and urine drug of abuse, were analyzed on a fully automated biochemistry analyzer, Dimension Rxl (Seimens Diagnostics, USA), according to manufacturer's instruction. Urinalysis, routine and microscopic examination, was done by manual dipstick method (Multistick from Seimens Diagnostics). Rejection or selection of subjects was based on specific clinical and medical examination as shown in Table 1. Subjects were kept under medical supervision in the clinical pharmacology unit of Ranbaxy Laboratories Limited, New Delhi. 
Bupropion (Wellbutrin R , GlaxoSmithKline, USA) (75 mg) was administered orally to selected (107) volunteers along with 240 ml of water under the supervision of a trained Medical Officer. The EDTA blood sample (6 ml each) was collected at 0, 1, 3, 6 and 10 h after drug administration. Number of volunteers was calculated statistically based on the prevalence of percentage poor metabolizers present worldwide. Sample size of 100 volunteers was calculated statistically. The prevalence of CYP2B6 in Indian population is~40%, and therefore, a sample size of 100 subjects was calculated and found sufficient to estimate the prevalence with expected 95% binomial confidence interval ranging 30 to 50%. Based on the simulation with higher sample size (200 or 300), not much benefit was found in precision with a sample size of 100 subjects. Determination of bupropion and its metabolite by LCMS/MS CYP2B6 activity was determined by calculating hydroxybupropion/bupropion ratio in plasma by LCMC/MS on Waters Quattro premier mass spectrometer. Samples were analyzed using a set of calibration standards spiked in human plasma. Three levels of quality control samples were distributed through each batch of study samples assayed to monitor the performance of testing. Experiments were carried out by liquid-liquid extraction with ethyl acetate selected as an optimum extraction solvent for the estimation of both bupropion and its metabolite. Briefly, 100 μl of 0.5N sodium carbonate was added to 100 μl of plasma and 50 μl of internal standard dilution (5 μg/ml diazepam solution) in a clean test tube. Sodium carbonate was added to the extraction buffer to maintain the drug and metabolite in un-ioned state. The mixture was vortexed for a minute, and 4 ml extraction solution (ethyl acetate) was added to it followed by centrifugation at 4,000 rpm for 5 minutes. The organic layer (3.5 ml) was removed and transferred to a fresh tube and mixed with 50 μl of 0.1N HCl. Supernatant was vortexed for 10 seconds and kept in an evaporator under nitrogen at 50°C for 10 minutes, and then reconstituted in 250 μl diluent consisting of 80 parts of water and 20 parts of acetonitrile. Reconstituted solution was injected into LCMS for bupropion and hydroxy bupropion estimation. Six replicates of aqueous dilutions of bupropion, hydroxybupropion, and diazepam were injected and their peak response ratios were recorded to check the interferences or specificity. The calibration ranges for bupropion (1-500 ng/ml) and hydroxybupropion (5-2500 ng/ml) were selected. Calibration curve was accepted if the back-calculated concentrations of minimum 75% of calibration standard (without including standard zero) were within 85% and 115% of the nominal concentration. Coefficient of correlation of linear regression (r 2 ) of calibration curve was 0.98. Six different batches of biological matrix (9204, 122412, 123892, 123852, 123202 and 122501) were analyzed for selectivity exercise. Six blank samples spiked with LLOQ (lower limit of quantification) were processed by sample preparation procedure. Peak area was evaluated at the retention time of analyte and internal standard. Selectivity was accepted only if the peak area in blank at retention time of analyte was <20% of mean peak area of analyte at LLOQ and <5% of mean peak of internal standard in calibration standard for internal standard. Long-term stability of analyte was evaluated using low and high QC samples stored below −50°C in deep freezer for a period of 15 days. 
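The sample-size reasoning above (an expected prevalence of about 40% and a 95% binomial confidence interval of roughly 30-50% for 100 subjects) can be reproduced with a simple normal-approximation interval; the sketch below is illustrative and is not the statistical plan actually used in the study.

```python
import math

def wald_ci(p, n, z=1.96):
    """Approximate 95% binomial (Wald) confidence interval for prevalence p with n subjects."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

for n in (100, 200, 300):
    lo, hi = wald_ci(0.40, n)
    print(f"n = {n}: 95% CI = {lo:.2f} - {hi:.2f}")
# n = 100 already gives roughly 0.30-0.50; larger n narrows the interval only modestly.
```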
Six replicates of low and high quality control samples were used for each stability exercise. The stored QC samples were analyzed against a freshly spiked calibration curve. Data analysis. The population was categorized into poor and rapid metabolizers for the group of drugs metabolized by CYP2B6 on the basis of CYP2B6 activity. The enzyme activity was determined by evaluating the hydroxybupropion/bupropion ratio. The concentrations of bupropion and hydroxybupropion were quantified from the blood samples drawn at pre-dose and at 1, 3, 6, and 10 h post-dose from the subjects, who were kept in-house until 10 h post-dose. Based on the concentration of the parent drug at these time points, a t_max (time to reach the highest drug concentration) of 3 h was selected for the evaluation of CYP2B6 activity. A frequency histogram was constructed of log(metabolite/drug at t_max) versus the number of volunteers. The presence of different categories of individuals is indicated if the frequency histogram deviates from a normal distribution. On visual inspection of the frequency histogram, the approximate antimode position was established as the point on the graph where the two modes are separated; the exact antimode, however, was derived by probit plot analysis. This is a graphical method in which the standard deviates of a normal distribution are plotted against log(metabolite/drug); deviations from linearity in probit plots have been interpreted as evidence of polymorphism. A scatter chart was prepared with log(metabolite/drug) on the x-scale and the probit on the y-scale. Trendlines were added to the plot to obtain the best fit, and a polynomial regression equation was obtained from the selected trendline. The intercept on the x-axis was taken as the antimode. Individuals having a log(metabolite/drug) ratio less than the antimode were classified as poor metabolizers. The mean metabolite/drug ratios of poor and rapid metabolizers were compared by Student's t-test to evaluate the significance of the difference between the two groups, and a P value of less than 0.05 was accepted as statistically significant. Results. None of the volunteers reported any undesirable effect or adverse event during or after the study. No interference was observed at the retention times of bupropion or hydroxybupropion; retention times of 1.16 min (bupropion), 1.13 min (hydroxybupropion), and 1.70 min (diazepam) were obtained. The sensitivity of the estimation of bupropion and hydroxybupropion, expressed as the percent coefficient of variation at the LLOQ, was 6.59% and 6.87%, respectively. The estimation procedure was specific, as no interfering peak was observed in six different batches of biological matrix. The coefficient of correlation of the linear regression (r) was 0.9982 for bupropion and 0.9982 for hydroxybupropion. Calibration curves of bupropion and hydroxybupropion are shown in Figure 1. The percent accuracy of the calibrators and quality control samples was between 85 and 115%. Precision for bupropion and hydroxybupropion, estimated as the mean %CV (coefficient of variation), was between 3.9 and 5.7% for all three levels of quality control samples; these values were within the FDA-defined limit of <15%. The frequency histogram and probit plot analysis showed bimodality of the studied population with respect to log(metabolite/drug) (Figures 2 and 3). Regression analysis of the probit plot yielded a best fit at R² = 0.938, giving the trendline equation y = −3.318x² + 3.747x + 1.429.
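The probit-plot procedure just described (rank-based normal deviates plotted against log(metabolite/drug), a polynomial trendline, and the antimode read off where the fit crosses the x-axis) can be sketched as follows. The data, the random seed, and the choice of a second-order polynomial are illustrative assumptions, not study values.

```python
import numpy as np
from scipy.stats import norm

def antimode_from_probit(log_ratio, degree=2):
    """Fit a polynomial to (log ratio, normal deviate) pairs and return its real
    roots inside the data range; per the described procedure, such a root is
    taken as the antimode."""
    x = np.sort(np.asarray(log_ratio, float))
    n = len(x)
    probit = norm.ppf((np.arange(1, n + 1) - 0.5) / n)   # rank-based normal deviates
    coeffs = np.polyfit(x, probit, degree)
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    return real[(real >= x.min()) & (real <= x.max())]

# Illustrative log(hydroxybupropion/bupropion) values for a mixed population
rng = np.random.default_rng(1)
log_ratio = np.concatenate([rng.normal(-0.6, 0.15, 22), rng.normal(-0.2, 0.15, 85)])
print("candidate antimode(s):", antimode_from_probit(log_ratio))
```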
On solving the equation, intercept at x-axis, which was actually an antimode, was found to be 0.5 [log (hydroxybupropion/bupropion)]. Individuals having log ratio of hydroxybupropion/bupropion <0.5 were categorized as poor metabolizers. Based on the antimode value, 20.56% of population was categorized as poor metabolizer for the category of drugs metabolized by CYP2B6. Significant difference was observed between the mean ratio of metabolite/drug of rapid metabolizers (Mean = 0.59) and poor metabolizers (Mean = 0.26) with P<0.0001 using student t-test (Table 2). Discussion This study reports the prevalence of poor and rapid metabolizers for the category of drugs metabolized by CYP2B6 in the target population. Interest in CYP2B6 has been developed by an ever-increasing list of substrates metabolized by this isoform as well as polymorphic and ethnic variations in the expression and activity of CYP2B6. Previous in vitro heterologous expression studies have shown that the polymorphism found in alleles CYP2B6*5, *6, *7, and *9 can alter the expression and/or activity of the enzyme (Ariyoshi et al. 2001;Iwasaki et al. 2004;Jinno et al. 2003). The functional significance of CYP2B6 variants has been shown for a variety of drugs. For example, in AIDS clinical studies, CYP2B6 variants have been associated with 2to 4-fold higher plasma EFV and NVP (Haas et al. 2004;Rotger et al. 2005;Rodriguez-Novoa et al. 2005;Tsuchiya et al. 2004) in HIV patients; ≥2-fold higher plasma EFV concentration is associated with neuropsychological adverse effects (Haas et al. 2004;Rotger et al. 2005;Marzolini et al. 2001;Hasse et al. 2005). Besides the antiretroviral drugs, CYP2B6 variants have also been found to influence the metabolism and pharmacokinetics of bupropion (an antidepressant) (Hesse et al. 2004) and cyclophosphamide (an anticancer and immunosuppressive) (Xie et al. 2003). This study is the first attempt to identify poor and rapid metabolizers of the drugs metabolized by CYP2B6 in north Indian population residing in the national capital. Bupropion is widely used in phenotyping of CYP2B6 (Faucette et al. 2000;Kirchheiner et al. 2003;Rotger et al. 2007;Chung et al. 2011) and has been found to be a safe and tolerable drug. We did not report adverse effect of the drug during the clinical trial, and found it a safe, suitable and tolerable drug. Bupropion and its metabolites were measured in the plasma by LCMS/MS. Validation parameters were within the acceptable limits as recommended in FDA. Analysis of the results based on frequency histogram and probit analysis revealed that 20.56% of the target population was poor metabolizer. The prevalence of poor metabolizers, which we observed in this study, was comparatively lower than West Africa (54%) (Malhotra et al. 2006), Papua New Guinea (63%) (Malhotra et al. 2006 ) and Koreans (23.9%) . In India, percentage of poor metabolizers was 40% in South Indian population (Ramachandran et al. 2009). In comparison, North Indian population reported 20.56% poor metabolizers, which is considerably lower. The difference might be attributed to the life style and genetics of these two diverse groups of populations in India. In a study by Rendic (2002), nutrition has been reported to play an important role in drug metabolism and affect some of the CYP isoforms including 1A1, 1A2, 1B1, 2A6, 2B6, 2C8, 2C9, 2C19, 2D6, 3A4 and 3A5. Similarly, occupational exposure to hazardous chemicals is reported to affect CYP1A1 and CYP2E1 (Nan et al. 2001). 
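Once the antimode is fixed, the classification and the Student's t-test reported at the beginning of this passage are straightforward to reproduce in outline; the per-subject values below are hypothetical, and only the 0.5 cutoff on log(hydroxybupropion/bupropion) is taken from the text.

```python
import numpy as np
from scipy.stats import ttest_ind

ANTIMODE = 0.5   # cutoff on log(hydroxybupropion/bupropion) reported in the text

# Hypothetical per-subject log metabolic ratios at t_max (not study data)
log_ratio = np.array([0.12, 0.31, 0.44, 0.58, 0.66, 0.72, 0.81, 0.47, 0.95, 1.02])

poor = log_ratio[log_ratio < ANTIMODE]
rapid = log_ratio[log_ratio >= ANTIMODE]
print(f"poor metabolizers:  {len(poor)} ({100 * len(poor) / len(log_ratio):.1f}%)")
print(f"rapid metabolizers: {len(rapid)} ({100 * len(rapid) / len(log_ratio):.1f}%)")

# Two-sample Student's t-test (applied here to the hypothetical log ratios of the groups)
t_stat, p_value = ttest_ind(rapid, poor)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")   # p < 0.05 taken as significant
```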
In this study, we could not evaluate the correlation of phenotype with genotype, which would be advantageous to understand the genetic background of the difference in poor and rapid metabolizers. However, the prevalence of 20.56% of poor phenotype for CYP2B6 reported in this study cannot be ignored because of its involvement in the metabolism of drugs commonly used for the treatment of cancer, HIV infection and depression, where the treatment is usually long term, and these drugs may be toxic due to poor metabolism. Conclusion The antimode or cutoff defined in this study can be used as a tool for evaluating the status of CYP2B6 activity using bupropion as a probe drug. The baseline information would be clinically useful before administering the drugs metabolized by this isoform.
Long-range contributions to double beta decay revisited We discuss the systematic decomposition of all dimension-7 (d=7) lepton number violating operators. These d=7 operators produce momentum enhanced contributions to the long-range part of the neutrinoless double beta decay amplitude and thus are severely constrained by existing half-live limits. In our list of possible models one can find contributions to the long-range amplitude discussed previously in the literature, such as the left-right symmetric model or scalar leptoquarks, as well as some new models not considered before. The d=7 operators generate Majorana neutrino mass terms either at tree-level, 1-loop or 2-loop level. We systematically compare constraints derived from the mass mechanism to those derived from the long-range double beta decay amplitude and classify our list of models accordingly. We also study one particular example decomposition, which produces neutrino masses at 2-loop level, can fit oscillation data and yields a large contribution to the long-range double beta decay amplitude, in some detail. I. INTRODUCTION Majorana neutrino masses, lepton number violation and neutrinoless double beta decay (0νββ) are intimately related. It is therefore not surprising that many models contributing to 0νββ have been discussed in the literature, see for example the recent reviews [1,2]. However, the famous black-box theorem [3] guarantees only that -if 0νββ decay is observed -Majorana neutrino masses must appear at the 4-loop level, which is much too small [4] to explain current oscillation data [5]. Thus, a priori one does not know whether some "exotic" contribution or the mass mechanism dominates the 0νββ decay rate. Distinguishing the different contributions would not only be an important step towards determining the origin of neutrino masses, but would also have profound implications for leptogenesis [6,7]. In terms of only standard model (SM) fields, ∆L = 2 terms can be written as nonrenormalizable operators (NROs) of odd mass dimensions. At mass dimension d = 5, there is only one such operator, the famous Weinberg operator [8], O W = 1 Λ (LLHH). At tree-level the Weinberg operator can be understood as the low-energy limit of one of the three possible seesaw realizations [9][10][11][12][13]. All other ∆L = 2 operators up to d = 11 -excluding, however, possible operators containing derivatives -have been listed in [14]. When complemented with SM Yukawa interactions (and in some cases SM charged current interactions), these higher dimensional operators always also generate Majorana neutrino masses (at different loop-levels), leading again to the Weinberg operator 1 at low energies. All ∆L = 2 operators also contribute to 0νββ decay. From the nuclear point of view, the amplitude for 0νββ decay contains two parts: the long-range part and the short-range part. The so-called long-range part [23] describes all contributions involving the exchange of a light, virtual neutrino between two nucleons. This category contains the mass mechanism, i.e. the Weinberg operator sandwiched between two SM charged current interactions, and also contributions due to d = 7 lepton number violating operators. 2 The short-range part of the 0νββ decay amplitude [24], on the other hand, contains all contributions from the exchange of heavy particles and can be described by a certain subset of the d = 9 ∆L = 2 operators in the list of [14]. 
In total there are six d = 9 operators contributing to the shortrange part of the amplitude at tree-level and the complete decomposition for the (scalar induced) operators has been given in [25]. The relation of all these decompositions with neutrino mass models has been studied recently in [26]. 3 The general conclusion of [26] is that for 2-loop and 3-loop neutrino mass models, the short-range part of the amplitude could be as important as the mass mechanism, while for tree-level and 1-loop models one expects that the mass mechanism gives the dominant contribution to 0νββ decay. 4 In this paper we study d = 7 ∆L = 2 operators, their relation to neutrino masses and the long-range part of the 0νββ decay amplitude. We decompose all d = 7 ∆L = 2 operators and determine the level of perturbation theory, at which the different decompositions (or "proto-models") will generate neutrino masses. Tree-level, 1-loop and 2-loop neutrino mass models are found in the list of the decompositions. We then compare the contribution from the mass mechanism to the 0νββ decay amplitude with the long-range d = 7 contribution. Depending on which particular nuclear operator is generated, limits on the new physics scale Λ > ∼ g eff (17 − 180) TeV can be derived from the d = 7 contribution. Here, g eff is the mean of the couplings entering the (decomposed) d = 7 operator. This should be compared to limits of the order of roughly Λ > ∼ √ Y eff 10 11 TeV and Λ > ∼ Y 2 eff 50 TeV, derived from the upper limit on m ν for tree-level and 2-loop (d = 7) neutrino masses. (Here, Y eff is again some mean of couplings entering the neutrino mass diagram. We use a different symbol, to remind that Y eff is not necessarily the same combination of couplings as g eff .) Thus, only for a certain, well-defined subset of models can the contribution from the long-range amplitude be expected to be similar to or dominate over the mass mechanism. Note that, conversely a sub-dominant contribution to the long-range amplitude always exists also in all models with mass mechanism dominance. We then give the complete classification of all models contributing to the d = 7 operators in tabular form in the appendix of this paper. In this list all models giving long-range contributions to 0νββ decay can be found, such as, for example, supersymmetric models with R-parity violation [32,33] or scalar leptoquarks [34]. There are also models with non-SM vectors, which could fit into models with extended gauge sectors, such as the left-right symmetric model [35][36][37]. And, finally, there are new models in this list, not considered in the literature previously. We mention that our paper has some overlap with the recent work [38]. The authors of this paper also studied d = 7 ∆L = 2 operators. 5 They discuss 1-loop neutrino masses induced by these operators, lepton flavour violating decays and, in particular, LHC phenomenology for one example operator in detail. The main differences between our work and theirs is that we (a) focus here on the relation of these operators with the long-range amplitude of 0νββ decay, which was not studied in [38] and (b) also discuss tree-level and 2-loop neutrino mass models. In particular, we find that 2-loop neutrino mass models are particularly interesting, because the d = 7 long-range contribution dominates 0νββ only in the class of models. The rest of this paper is organized as follows. 
In the next section we lay the basis for the discussion, establishing the notation and recalling the main definitions for ∆L = 2 operators and the 0νββ decay amplitude. In the following section we then discuss one example each of tree-level, 1-loop and 2-loop neutrino mass models. In each case we estimate the contribution to the mass mechanism and the constraints from the long-range amplitude. We study a 2-loop d = 7 model in some more detail, comparing also to oscillation data, and discuss the constraints from lepton flavour violating processes. In section IV we then discuss a special case, where a d = 9 operator can give an equally important contribution to the 0νββ decay amplitude as a d = 7 operator. The example we discuss is related to the left-right symmetric extension of the standard model and is thus of particular interest. We then close the paper with a short summary. The complete list of decompositions for d = 7 operators is given as an appendix. (Decompositions of d = 7 operators were also discussed in [39,40].)

II. GENERAL SETUP

The 0νββ decay amplitude can be separated into two pieces: (a) the long-range part [23], including the well-known mass mechanism, and (b) the short-range part [24] of the decay rate describing heavy particle exchange. Here, we will concentrate exclusively on the long-range part of the amplitude. The long-range part of the amplitude exchanges a light, virtual neutrino between two point-like vertices. The numerator of the neutrino propagator involves two pieces, (m_{ν_i} + p/). If the interaction vertices contain standard model charged current interactions, the m_{ν_i} term is projected out. This yields the "mass mechanism" of 0νββ decay. However, if one of the two vertices involved in the diagram produces a neutrino in the wrong helicity state, i.e. (ν_L)^c, the p/ term is picked from the propagator. Since the momentum of the virtual neutrino is typically of the order of the Fermi momentum of the nucleons, p_F ≃ 100 MeV, the 0νββ amplitude from the operators proportional to p/ is enhanced by a factor p_F/m_ν ∼ O(10^8) with respect to the amplitude of the standard mass mechanism. Consequently, any operator proportional to p/ will be tightly constrained by the non-observation of double beta decay. Following [23] we write the effective Lagrangian for the 4-fermion interactions as

L = (G_F/√2) [ j^μ_{V−A} J^†_{V−A,μ} + Σ_{α,β} ǫ^β_α j_β J^†_α ] + h.c. ,   (1)

where the leptonic (hadronic) currents j_β (J_α) are defined as

j_β = ē O_β ν ,   J^†_α = ū O_α d ,   (2)

with the Lorentz structures O running over S ± P, V ± A and T_{L/R}, and where γ^{μν} is defined as γ^{μν} = (i/2)[γ^μ, γ^ν]. The first term of Eq. (1) is the SM charged current interaction; the other terms contain all new physics contributions. We normalize the coefficients ǫ^β_α relative to the SM charged current strength G_F/√2. Recall that P_{L/R} = (1 ∓ γ_5)/2, and we will use the subscripts L and R for left-handed and right-handed fermions, respectively. Note also that all leptonic currents with (1 − γ_5) will pick m_{ν_i} from the propagator, leading to an amplitude proportional to ǫ^{β_L}_α × m_ν (β_L ∈ {S − P, V − A, T_L}), which is always smaller than the standard mass mechanism contribution and thus is not very interesting. Thus, only six particular ǫ^β_α can be constrained from 0νββ decay. For convenience, we repeat the currently best limits, all derived in [1], in Table I; for 136Xe they range from 3.3 · 10^−10 to 3.9 · 10^−7, depending on the Lorentz structure. Eq. (1) describes long-range 0νββ decay from the low-energy point of view. From the particle physics point of view, these ∆L = 2 currents can be described as being generated from d = 7 operators.
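As a quick numerical illustration (a sketch of ours, not part of the original analysis), the scales quoted so far can be checked in a few lines of Python: the p-slash enhancement p_F/m_ν, the naive tree-level (Weinberg-operator) scale of order 10^{11} TeV quoted in the introduction, and the conversion of the Table I limits on ǫ into Λ_7 of order (17 − 180) TeV. The matching relation ǫ ≃ (1/4)(v/Λ_7)^3, the choice v ≃ 174 GeV and the representative neutrino mass are assumptions on our part; the exact numbers depend on conventions and on the couplings g_eff, which are set to one here.

```python
import numpy as np

# Back-of-the-envelope checks of the scales quoted in the text (a sketch, not the
# authors' code). Assumptions: v ~ 174 GeV, all couplings set to one, and the
# schematic matching eps ~ (1/4) * (v / Lambda_7)^3 for the d = 7 coefficients.

p_F = 100e6                      # Fermi momentum of the nucleons, in eV
m_nu = 1.0                       # representative light-neutrino mass scale, in eV (assumption)
print(f"p-slash enhancement p_F/m_nu ~ {p_F / m_nu:.0e}")        # ~10^8

v = 174.0                        # GeV (assumed convention)
m_nu_GeV = 0.1e-9                # 0.1 eV expressed in GeV
lam_tree = v**2 / m_nu_GeV       # Weinberg-operator scale for O(1) couplings
print(f"tree-level scale ~ {lam_tree / 1e3:.0e} TeV")            # ~10^11 TeV

def lambda7_TeV(eps):
    # invert eps = (1/4) * (v / Lambda_7)^3 for Lambda_7, returned in TeV
    return v / (4.0 * eps) ** (1.0 / 3.0) / 1e3

for eps in (3.9e-7, 3.3e-10):    # least / most stringent 136Xe limits of Table I
    print(f"eps = {eps:.1e}  ->  Lambda_7 ~ {lambda7_TeV(eps):.0f} TeV")
# roughly reproduces the Lambda_7 ~ (17-180) TeV range quoted in the introduction
```

Agreement at the factor-of-two level is all one can expect from such dimensional estimates; the precise limits require the nuclear matrix elements behind Table I.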
Disregarding the d = 7 "Weinberg-like" operator O W × (H † H), there are four of these operators in the list of Babu & Leung [14]: Here, O 2 is included for completeness, although it is trivial that the mass mechanism will be the dominant contribution to 0νββ decay for this operator, since it does not involve any quark fields. We will therefore not discuss the detailed decomposition of O 2 , which can be found in [38]. The operators O 3b,4a,8 will contribute to the long-range amplitudes j β J α , and the coefficient of the amplitudes is described as where Λ 7 is the energy scale from which the d = 7 operators originate, and ǫ d=7 is one of (or a combination of two of) the ǫ β α of Table 1. The factor 1/4 is included to account for the fact that Eq. (2) is written in terms of (1 ± γ 5 ) while chiral fields are defined using P L/R . This leads to the numerical constraints on the scale Λ 7 mentioned in the introduction, taking the least/most stringent numbers from Table I. All ∆L = 2 operators generate Majorana neutrino masses. However, operators O 3a and O 4b will generate neutrino mass matrices without diagonal entries, since L i L j ǫ ij = 0 within a generation. Neutrino mass matrices with such a flavour structure result in very restricted neutrino spectra, and it was shown in [42] that such models necessarily predict sin 2 (2θ 12 ) = 1 − (1/16)(∆m 2 21 /∆m 2 31 ) 2 . This prediction is ruled out by current neutrino data at more than 8 σ c.l. [5]. Models that generate at low energies only O 3a or O 4b can therefore not be considered realistic explanation of neutrino data. 6 Flavour off-diagonality of O 3a and O 4b does also suppress strongly their contribution to long-range double beta decay, in case the resulting leptonic current is of type j S+P (see appendix 7 ). This is because the final state leptons are both electrons, while the virtual neutrino emitted from the L in O 3a,4b is necessarily either ν µ or ν τ . In the definition of the "effective" ǫ β α , then neutrino mixing matrices appear with the combination j U ej U * µj (or U ej U * τ j ), which is identically zero unless the mixing matrices are non-unitary when summed over the light neutrinos. Departures from unitarity can occur in models with extra (sterile/right-handed) neutrinos heavier than about ∼ 1 GeV. While the propagation of the heavy neutrinos also contributes to 0νββ, the nuclear matrix element appearing in the amplitude of the heavy neutrino exchange is strongly suppressed, when their masses are larger than 1 GeV [44,45]. Consequently, the heavy neutrino contribution is suppressed with respect to the light neutrino one and the sum over j U ej U * µj is incomplete, appearing effectively as a sum over mixing matrix elements which is non-unitary. Current limits on this non-unitary piece of the mixing are of the order of very roughly percent [46][47][48][49], thus weakening limits on the coefficients for O 3a and O 4b (for j S+P ), compared to other operators, by at least two orders of magnitude. 6 However, models that produce these operators usually allow to add additional interactions that will generate O 5 (O 6 ) in addition to O 3a (O 4b ), as for example in the model discussed in [43]. These constructions then allow to correctly explain neutrino oscillation data, since O 5 /O 6 produce non-zero elements in the diagonal entries of the neutrino mass matrix. 7 Decomposition #8 of O 3a also generates j TR which can contribute to 0νββ without the need for a nonunitarity of the mixing matrix. To the list in Eq. 
(3) one can add two more ∆L = 2 operators involving derivatives: We mention these operators for completeness. As shown in [50], tree-level decompositions of O Dµ 1 always involve one of the seesaw mediators, and thus one expects this operator to be always present in tree-level models of neutrino mass. As we will see, if neutrino masses are generated from tree-level, the mass mechanism contribution in general dominates 0νββ, and consequently the new physics effect from O Dµ 1 cannot make a measurable impact. The second type of the derivative operators, O Dµ 2 , has also been discussed in detail in [50] with an example of tree-level realization, we thus give only a brief summary for this operator in the appendix. III. CLASSIFICATION In this section we will discuss a classification scheme for the decompositions of the ∆L = 2 operators of Eq. (3), based on the number of loops, at which they generate neutrino masses. We will discuss one typical example each for tree-level, 1-loop and 2-loop models. The complete list of decompositions for the different cases can be found in the appendix. A. Tree level If the neutrino mass is generated at tree-level, one expects m ν ∝ v 2 /Λ, which for coefficients of O(1) give Λ ∼ 10 14 GeV for neutrino masses order 0.1 eV. The amplitude of the mass mechanism of 0νββ decay is proportional to . The d = 7 contribution is therefore favoured by a factor p F / m ν , but suppressed by (v/Λ) 3 . Inserting Λ ∼ 10 14 , the d = 7 amplitude should be smaller than the mass mechanism amplitude by a huge factor of order O(10 −27 ). However, this naive estimate assumes all coefficients in the operators to be order O(1). Since these coefficients are usually products of Yukawa (and other) couplings in the UV complete models, this is not necessarily the case in general and much smaller scales Λ could occur. To discuss this in a bit more detail, we consider a particular example based on O 3 , decomposition #4, where two new fields, (1) a Majorana fermion ψ with the SM charge (SU(3) c , SU(2) L , U(1) Y ) = (1, 1, 0) and (2) a scalar S with (3, 2, 1/6), are introduced to decompose the effective operator, see Table III and Fig. 1. The Lagrangian for this model contains the following terms: Here, we have suppressed generation indices for simplicity. The first term in Eq. (6) will generate Dirac masses for the neutrinos. The Majorana mass term for the neutral field ψ (equivalent to a right-handed neutrino) can not be forbidden in this model. We will discuss first the simplest case with only one copy of ψ and comment on the more complicated cases with two or three ψ below. The contribution to 0νββ decay can be read off directly from the diagram in Fig. 1 on the left. It is given by With only one copy of ψ, the effective mass term contributing to 0νββ decay is m ν = (Y ν ) 2 e v 2 /m ψ and we can replace (Y ν ) e by m ν to arrive at the rough estimate of the constraint derived from the d = 7 contribution to 0νββ: Eq. (8) shows that the upper limit on the Yukawa couplings disappears as m ν approaches zero. When the masses are greater than roughly m ψ ≃ m S ∼ 10 TeV, the Yukawa couplings must be non-perturbative to fulfil the equality in Eq. (8). This implies that the mass mechanism will always dominate the 0νββ contribution for scales Λ larger than roughly this value, independent of the exact choice of the couplings. We briefly comment on models with more than one ψ. 
As is well-known, neutrino oscillation data require at least two non-zero neutrino masses, while a model with only one ψ leaves two of the three active neutrinos massless. Any realistic model based on Eq. (6) will therefore need at least two copies of ψ. In this case Eq. (7) has to be modified to include the summation over the different ψ i and In this case, one still expects in general that limits derived from the long-range part of the amplitude are proportional to m ν . However, there is a special region in parameter space, where the different contributions to m ν cancel nearly exactly, leaving the long-range contribution being the dominant part of the amplitude. Unless the model parameters are fine-tuned in this way, the mass mechanism should win over the d = 7 contribution for all tree-level neutrino mass models. The tables in the appendix show, that all three types of seesaw mediators appear in the decompositions of O 3 , O 4 and O 8 : ψ 1,1,0 (type-I), ψ 1,3,0 (type-III) and S 1,3,1 (type-II). In order to generate a seesaw mechanism, for some of the decompositions one needs to introduce new interactions, such as S † 1,3,1 HH, not present in the corresponding decomposition itself. However, in all these cases, the additional interactions are allowed by the symmetries of the models and are thus expected to be present. One then expects for all tree-level decompositions that the mass mechanism dominates over the long-range part of the amplitude, unless (i) the new physics scale Λ is below a few TeV and (ii) some parameters are extremely fine-tuned to suppress light neutrino masses, as discussed above in our particular example decomposition. B. One-loop level We now turn to a discussion of one-loop neutrino mass models. For this class of neutrino mass models, naive estimates would put Λ at Λ ∼ O(10 12 ) GeV for coefficients of O(1) and neutrino masses of O(0.1) eV. Thus, in the same way as tree neutrino mass models, the mass mechanism dominates over the long-range amplitude, unless at least some of the couplings in the UV completion are significantly smaller than O(1), as discussed next. As shown in [51], there are only three genuine 1-loop topologies for (d = 5) neutrino masses. Decompositions of O 3 , O 4 or O 8 produce only two of them, namely Tν-I-ii or Tν-Iiii. We will discuss one example for Tν-I-ii, based on O 3 decomposition #2, see Table III and Fig. 2. The underlying leptoquark model was first discussed in [34,52], and for accelerator phenomenology see, e.g., [53]. The model adds two scalar states to the SM particle content, S(3, 1, −1/3) and S ′ (3, 2, 1/6). The Lagrangian of the model contains interactions with SM and the scalar interactions and mass terms: Lepton number is violated by the simultaneous presence of the terms in Eq. (9) and the first term in Eq. (10) [52]. Electro-weak symmetry breaking generates the off-diagonal element of the mass matrix for the scalars with the electric charge −1/3. The mass matrix is expressed as in the basis of (S −1/3 , S ′−1/3 ), which is diagonalized by the rotation matrix with the mixing angle θ LQ that is given as The neutrino mass matrix, which arises from the 1-loop diagram shown in Fig. 2, is calculated to be where N c = 3 is the colour factor. The loop-integral function ∆B 0 is given as with the eigenvalues m 2 1,2 of the leptoquark mass matrix Eq. (11) and the mass m d k of the down-type quark of the k-th generation. 
Due to the hierarchy in the down-type quark masses, it is expected that the contribution from m b dominates the neutrino mass Eq. (13). and this gives roughly The constraint on the effective neutrino mass m ν 0.2 eV is derived from the combined KamLAND-Zen and EXO data [41], which is T 1/2 ≥ 3.4 × 10 25 ys for 136 Xe. The same experimental results also constrain the coefficient of the d = 7 operator generated from the Lagrangians Eqs. (9) and (10) as ǫ T R T R < ∼ 5.6 × 10 −10 (cf. Table I), which gives Therefore, for (λ S ) e1 (λ D ) 1e ≃ (λ S ) e3 (λ D ) 3e , the mass mechanism and the d = 7 contribution are approximately of equal size withM ≃ 750 GeV. Since m ν ∝M −2 , while ǫ O 3,#2 ∝M −4 , the mass mechanism will dominate 0νββ decay forM larger thanM ≃ 750 GeV, unless the couplings (λ S ) e1 (λ D ) 1e are larger than (λ S ) e3 (λ D ) 3e . We note that, leptoquark searches by the ATLAS [54,55] and the CMS [56-58] collaborations have provided lower limits on the masses of the scalar leptoquarks, depending on the lepton generation they couple to and also on the decay branching ratios of the leptoquarks. The limits derived from the search for the pair-production of leptoquarks are roughly in the range 650 − 1000 GeV [54][55][56][57][58], depending on assumptions. The other 1-loop models are qualitatively similar to the example discussed above. However, the numerical values for masses and couplings in the high-energy completions should be different, depending on the Lorentz structure of the d = 7 operators, see also the appendix. C. Two-loop level We now turn to a discussion of 2-loop neutrino mass models. As shown in the appendix, in case of the operators O 3 and O 4 , 2-loop models appear only for the cases O 3a and O 4b . As explained in section II, these operators alone cannot give realistic neutrino mass models. We thus base our example model on O 8 . The 2-loop neutrino mass models based on O 8 are listed in Tab. V in the appendix. In this section, we will discuss decomposition #15 in detail, which has not been discussed in the literature before. In this model, we add the following states to the SM particle content: With the new fields, we have the interactions which mediate O 8 operator, as shown in the left diagram of Fig. 3. Here, i runs over the three quark generations. While Y d i LαS k and Y u i ψH could be different for different i, for simplicity we will assume the couplings to quarks are the same for all i and drop the index i in the following. We will comment below, when we discuss the numerical results, on how this choice affects phenomenology. For simplicity, we introduce only one generation of the new fermion ψ, while we allow for more than one copy of the scalar S 3,2,1/6 . Note that, in principle, the model would work also for one copy of S 3,2,1/6 and more than one ψ, but as we will see later, the fit to neutrino data becomes simpler in our setup. The fermion ψ 2/3 mixes with the up-type quarks through the following mass term: Due to the strong hierarchy in up-type quark masses, we have assumed the sub-matrix for the up-type quarks in Eq. (21) is completely dominated by the contribution from top quarks. The mass matrix Eq. (21) is diagonalized with the unitary matrices V L and V R as and the mass eigenstates Ψ 2/3 i are give as where the index a for the interaction basis takes a ∈ {t, ψ}. The interactions are written in the mass eigenbasis as follows: The 2-loop neutrino mass diagram generated by this model is shown in Fig. 3. 
Using the formulas given in [59], one can express the neutrino mass matrix as . (26) Here N c = 3 is the colour factor and I(z k,i , r i , t i ) is the loop integral defined as The dimensionless parameters z k,i , r i , t i are defined as and loop momenta q and k are also defined dimensionless. Due to the strong hierarchy in down-type quark masses, we expect that neutrino mass given in Eq. (26) is dominated by the contribution from bottom quark. If we assume in Eq. (26) that all Yukawa couplings are of the same order, then the entries of the neutrino mass matrix will have a strong hierarchy: (m ν ) ee : (m ν ) µµ : (m ν ) τ τ = m e : m µ : m τ . Such a flavor structure is not consistent with neutrino oscillation data. Therefore, in order to reproduce the observed neutrino masses and mixings, our Yukawa couplings need to have a certain compensative hierarchy in their flavor structure. Since the neutrino mass matrix, and thus the Yukawa couplings contained in the neutrino mass, have a non-trivial flavour pattern, these Yukawas will be also constrained by charged lepton flavour violation (LFV) searches. Here we discuss only µ → eγ which usually provides the most stringent constraints in many models. In order to calculate the process µ → eγ we adapt the general formulas shown in [60] for our particular case. The amplitude for µ → eγ decay is given by Here, ǫ α is the photon polarization vector and q β is the momentum of photon. Three different diagrams contribute to the amplitude for µ → eγ, which are finally summarized with the two coefficients σ R and σ L given by where Here, we have assumed that both the ψ −2/3 and the ψ −5/3 have the same mass M ψ . This neglects (small) mass shifts in the ψ −2/3 state, due to its mixing with the top quark. Due to the large value of M ψ , that we use in our numerical examples, this should be a good approximation. Note also, that the contribution from the top quark is negligible for those large values of M ψ used below. The functions F 1 (x) and F 2 (x) are defined in Eqs (40) and (41) in [60] as The branching ratio for µ → eγ can be expressed with the coefficients σ R and σ L as where Γ µ is the total decay width of muon. Later, we will numerically calculate the branching ratio to search for the parameter choices that are consistent with the oscillation data and the constraint from µ → eγ. Before discussing constraints from lepton flavour violation, we will compare the longrange contribution to 0νββ with the mass mechanism in this model. This model manifestly generates a d = 7 long-range contribution to 0νββ. The half-life of 0νββ induced by the long-range contribution is proportional to the coefficient ǫ V +A V +A which is expressed in terms of the model parameters as Here, we use the limit on ǫ V +A V +A from non-observation of 136 Xe 0νββ decay, see Table I. With one copy of the new scalar, the bound of Eq. (37) is directly related to the effective neutrino mass Eq. (26) and places the stringent constraint: where we have used the approximate relation with z k,1 = (m S k /m t ) 2 , r 1 = (m b /m t ) 2 , t 1 = (M W /m t ) 2 , and I(z k,1 , r 1 , t 1 ) ∼ 5 × 10 −2 for a scalar mass of m S = 10 TeV and M ψ ≃ 0.8 TeV. Note that this parameter choice is motivated by the fact that the model cannot fit neutrino data with perturbative Yukawa couplings with scalar masses larger than m S > ∼ 10 TeV. As one can see from Eq. (38), the long-range contribution to 0νββ clearly dominates over the mass mechanism in this setup. 
In short, this neutrino mass model predicts large decay rate of 0νββ but tiny m ν . This implies that, if future neutrino oscillation experiments determine that the neutrino mass pattern has normal hierarchy but 0νββ is discovered in the next round of experiments, the 0νββ decay rate is dominated by the long-range part of the amplitude. Recall that O 8 contains e c . This implies that the model predicts a different angular distribution than the mass mechanism, which in principle could be tested in an experiment such as Super-NEMO [61]. Note that, to satisfy the condition Eq. (38), cancellations among different contributions to m ν are necessary. This can be arranged only if we consider at least two generations of the new particles in the model (either the scalar S or the fermion ψ). Here we discuss more on the consistency of our model with the neutrino masses and mixings observed at the oscillation experiments. Instead of scanning whole the parameter space, we illustrate the parameter choice that reproduces the neutrino properties and is simultaneously consistent with the bound from lepton flavour violation. To simplify the discussion we use the following ansatz in the flavour structure of the Yukawa couplings: with a dimensionless parameter y. With Eq. (40), the neutrino mass matrix Eq. (26) is reduced to where Λ is defined as and I is given as We introduce three copies of the new scalar S −1/3 k . The resulting mass matrix Eq. (41) has the same index structure as that of the type-I seesaw mechanism, and therefore, the matrix Λ can be expressed as following the parameterization developed by Casas and Ibarra [62]. Here,m ν is the neutrino mass matrix in the mass eigenbasis, and the mass matrix m ν is diagonalized with the lepton mixing matrix U ν as for which we use the following standard parametrization Here c ij = cos θ ij , s ij = sin θ ij with the mixing angles θ ij , δ is the Dirac phase and α 21 , α 31 are Majorana phases. The matrix R is a complex orthogonal matrix which can be parametrized in terms of three complex angles as Note that it is assumed in this procedure that the charged lepton mass matrix is diagonal. After fitting the neutrino oscillation data with the parametrization shown above, there remain y, Y uψH and the masses M ψ , m S k for k = 1, 2, 3 as free parameters. For simplicity, we assume a degenerate spectrum of the heavy scalars m S = m S k . In Fig. 4-(a), we plot the half-life T 0νββ 1/2 as a function of m ν 1 for fixed values of the coupling Y uψH = 0.6 and the masses M ψ = 800 GeV and m S = 10 TeV. The parameter y is taken to be 10 −3 , since this minimizes the decay rate of µ → eγ, as we will discuss below. We have used oscillation parameters for the case of normal hierarchy. The region enclosed by the red curves is d = 7 long-range contribution to 0νββ, and the blue curves correspond to the mass mechanism contribution only, which is shown for comparison. The gray region is already excluded by 0νββ searches, and for the model under consideration only the cyan region is allowed. As one can see from Fig. 4-(a), the total contribution to 0νββ is dominated by the d = 7 long-range contribution. Note that the mass mechanism and the long-range contribution are strictly related only under the assumption that Y uψH and Y dLαS k are independent of the quark generation i. This is so, because the 2-loop diagram is dominated by 3rd generation quarks, while in 0νββ decay only first generation quarks participate. 
If we were to drop this assumption and set the first generation couplings to Y_{u_1ψH} ≲ 10^{-2} × Y_{u_3ψH} and Y_{d_1L_αS_k} ≲ 10^{-2} × Y_{d_3L_αS_k}, the half-life for the long-range amplitude would become comparable to the mass mechanism, without changing the fit to oscillation data. Note that non-zero Majorana phases are necessary to allow for cancellations among the mass mechanism contributions, so as to make m_ν small as required by Eq. (38). In Fig. 4-(b), we plot the half-life T^{0νββ}_{1/2} as a function of the scalar mass m_S. Here we fixed the oscillation parameters to m_{ν_1} = 1.23 × 10^{-3} eV, α_{21} = 0, α_{31} = π/2, s²_{23} = 1/2 and s²_{12} = 1/3, and the remaining oscillation parameters ∆m²_{31} and ∆m²_{21} to their best-fit values for the case of normal hierarchy.
[Fig. 4 caption: The decay rate versus m_{ν_1} (left) and m_S (right). The gray region is the current lower limit on the 0νββ decay half-life of 136Xe. In the plot to the left, the region between the red curves is the one allowed by the long-range contribution to the decay rate of 0νββ, calculated scanning over oscillation parameters for the case of normal hierarchy and m_S = 10 TeV. We also show the allowed region of the half-life for the mass mechanism as blue lines for comparison. The cyan region corresponds to the parametric region where our model can be consistent with current 0νββ experimental data. In the plot to the right, the red curve is the long-range contribution to the decay rate for the fixed oscillation parameters m_{ν_1} = 1.23 × 10^{-3} eV, α_{21} = 0, α_{31} = π/2, s²_{23} = 1/2 and s²_{12} = 1/3, with the remaining oscillation parameters ∆m²_{31} and ∆m²_{21} fixed at their best-fit values for the case of normal hierarchy.]
The plot assumes that the matrix R is equal to the identity. The plot shows that the half-life increases to reach approximately T^{0νββ}_{1/2} ∼ 10^{26} yr for m_S = 10 TeV. Now we discuss the constraint from the lepton flavour violating process µ → eγ. In Fig. 5, we show Br(µ → eγ) as a function of the scalar mass m_S and the parameter y for fixed values of the coupling Y_{uψH} = 0.6 and the fermion mass M_ψ = 800 GeV, which is the same parameter choice adopted in Fig. 4. These plots show that the current experimental limits on Br(µ → eγ) put strong constraints on the model under consideration. In Fig. 5-(a), we plot Br(µ → eγ) for different values of the parameter y = {10^{-1}, 10^{-2}, 10^{-3}}. We have used again the parameters m_{ν_1} = 1.23 × 10^{-3} eV, α_{21} = 0, α_{31} = π/2, s²_{23} = 1/2 and s²_{12} = 1/3, fixing the remaining oscillation parameters ∆m²_{31} and ∆m²_{21} at their best-fit values for the case of normal hierarchy. With the choice of y = 10^{-1}, the entire region of m_S is inconsistent with the current experimental limits. On the other hand, we can easily avoid the constraint from µ → eγ by setting the parameter y to be roughly smaller than 10^{-2}. Note that the curves with y = 10^{-1} and y = 10^{-3} do not cover the full range of m_S. This is because the fit to neutrino data would otherwise require Yukawa couplings outside the perturbative regime. (We define the boundary of perturbativity as at least one entry in the Yukawa matrix reaching √(4π).) It is necessary to have smaller values of the parameter y to obey the experimental bound. This feature is also shown in Fig. 5-(b), where we plot Br(µ → eγ) as a function of y for different values of the mass m_S = {1, 5, 10} TeV. As shown, for y ≲ 10^{-2} it is possible to fulfil the experimental limit, with Br(µ → eγ) reaching a minimum around y = 10^{-3}.
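As a small cross-check of the parameter point just quoted (our sketch, not the authors' code), one can evaluate the standard effective Majorana mass ⟨m⟩ = |Σ_i U²_{ei} m_i| that controls the mass-mechanism (blue) curves in Fig. 4. We assume δ = 0 and take approximate best-fit values for θ_{13} and the mass splittings; θ_{23} does not enter ⟨m⟩.

```python
import numpy as np

# Effective Majorana mass <m> = | sum_i U_ei^2 m_i | at the parameter point used in
# Figs. 4 and 5 (our sketch, not the authors' code). Assumptions: normal hierarchy,
# delta = 0, and approximate best-fit values for theta_13 and the mass splittings.

m1 = 1.23e-3                              # eV, as quoted in the text
dm21, dm31 = 7.5e-5, 2.5e-3               # eV^2, approximate best-fit values (assumption)
m2, m3 = np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)

s12sq, s13sq = 1/3, 0.022                 # s12^2 from the text; s13^2 approximate (assumption)
a21, a31 = 0.0, np.pi / 2                 # Majorana phases used in the text

Ue1sq = (1 - s12sq) * (1 - s13sq)
Ue2sq = s12sq * (1 - s13sq) * np.exp(1j * a21)
Ue3sq = s13sq * np.exp(1j * a31)          # delta = 0 assumed

m_ee = abs(Ue1sq * m1 + Ue2sq * m2 + Ue3sq * m3)
print(f"<m> ~ {m_ee * 1e3:.1f} meV")      # a few meV, far below the ~0.2 eV sensitivity
```

The result of a few meV, far below current 0νββ sensitivities, makes explicit the statement above that the model can produce a sizeable decay rate through the long-range amplitude while ⟨m⟩ itself stays tiny.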
Because of the perturvative condition, the curves with m S = 5 TeV and m S = 10 TeV end in the middle of the y space. The reason for the strong dependence of Br(µ → eγ) on the parameter y can be understood as follows: As shown in Eq. (42) the Yukawa couplings Y dLαS k and Y eαψS k are related in the neutrino mass fit, but only up to an overall constant, 1 y . For values of y of the order of 10 −3 both Yukawas are of the same order and this minimizes Br(µ → eγ). If y is much larger (much smaller) than this value Y dLαS k (Y eαψS k ) becomes much larger than Y eαψS k (Y dLαS k ) and since the different diagrams contributing to Br(µ → eγ) are proportional to the individual Yukawas (and not their product) this leads to a much larger rate for Br(µ → eγ). In summary, for all 2-loop d = 7 models of neutrino mass, which lead to O 8 , the longrange part of the amplitude will dominate over the mass mechanism by a large factor, unless there is a strong hierarchy between the non-SM Yukawa couplings to the first and third generation quarks. Such models are severely constrained by lepton flavour violation and 0νββ decay. We note again, that these models predict an angular correlation among the out-going electrons which is different from the mass mechanism. IV. LEFT-RIGHT SYMMETRIC MODEL: d = 7 VERSUS d = 9 OPERATOR Writing new physics contributions to the SM in a series of NROs assumes implicitly that higher order operators are suppressed with respect to lower order ones by additional inverse powers of the new physics scale Λ. However, there are some particular example decompositions for (formally) higher-order operators, where this naive power counting fails. We will discuss again one particular example in more detail. The example we choose describes the situation encountered in left-right symmetric extensions of the standard model. Consider the following two Babu-Leung operators: O 8 can be decomposed in a variety of ways, decomposition #14 (see Table V) is shown in Fig. 6 to the left. The charged vector appearing in this diagram couples to a pair of right-handed quarks and, thus, can be interpreted as the charged component of the adjoint of the left-right symmetric (LR) extension of the SM, based on the gauge group SU(3) C × SU(2) L × SU(2) R × U(1) B−L . In LR right-handed quarks are doublets, Q c = Ψ3 ,1,2,−1/6 , the ψ 1,1,0 can be understood as the neutral member of L c , i.e. the right-handed neutrino, and the Higgs doublet is put into the bidoublet, Φ 1,2,2,0 . The resulting diagram for 0νββ decay is shown in Fig. 6 on the right. Fig. 6 gives a long-range contribution to 0νββ decay. We can estimate the size of ǫ O 8 from these diagrams: The first of these two equations shows ǫ O 8 for Fig. 6 on the left (notation for SM gauge group), the second for Fig. 6 on the right (notation for gauge group of the LR model). Here, g 1 and g 2 could be different, in principle, but are equal to g R in the LR model. v SM is the SM vev, fixed by the W -mass. In the LR model, the bi-doublet(s) contain in general two vevs. We call them v d and v u here and v 2 SM = v 2 d + v 2 u . In Eq. (49) only v u = v SM sin β, with tan β = v u /v d , appears. Note that we have suppressed again generation indices and summations in Eq. (49). We will come back to this important point below. Now, however, first consider O 7 . From the many different possible decompositions we concentrate on the one shown in Fig. 7. The diagram on the left shows the diagram in SM notation, the diagram on the right is the corresponding LR embedding. 
It is straightforward to estimate the size of these diagrams; the result is given in Eq. (50). Arbitrarily, we have called the 4-point coupling in the left diagram g²_3. In the LR model the couplings are again fixed to g_L and g_R. In the last relation in Eq. (50) the dependence on the new physics scale is only 1/(Λ³ v²_SM), even though the contribution originates from a d = 9 operator. This a priori counter-intuitive result is a simple consequence of the decomposition containing the SM W_L boson. Any higher-order operator which can be decomposed in such a way will behave similarly, i.e. 1/Λ^5 ⇒ 1/(Λ³ v²_SM). 8 We note that in this particular example the contribution of O_7 is actually more stringently constrained than the one from O_8. This is because O_8 leads to a low-energy current of the form (V + A) in both the leptonic and the hadronic indices, i.e. the limit corresponds to ǫ^{V+A}_{V+A}. O_7, on the other hand, leads to ǫ^{V+A}_{V−A}, which is much more tightly constrained due to the contribution from the nuclear recoil matrix element [64]; compare the values in Table I. We note that one can identify the diagrams in Fig. 6 and Fig. 7 with the terms proportional to ⟨λ⟩ and ⟨η⟩ in the notation of [64], used by many authors in 0νββ decay. For the complete expressions for the long-range part of the amplitude, one then has to sum over the light neutrino mass eigenstates, taking into account that the leptonic vertices in the diagrams in Figs. 6 and 7 are right-handed. Defining the mixing matrices for light and heavy neutrinos as U_{αj} and V_{αj}, respectively, as in [64], the coefficients ǫ_{O_8} and ǫ_{O_7} of the d = 7 and d = 9 operators are then given by the effective couplings of Eq. (51) [64]. Orthogonality of U_{ej} and V_{ej} leads to Σ_{j=1}^{6} U_{ej} V_{ej} ≡ 0. However, the sum in Eq. (51) runs only over the light states; it therefore does not vanish exactly, but rather is expected to be of the order of the light-heavy neutrino mixing. In left-right symmetric models with seesaw (type-I), one expects this mixing to be small, of the order of the square root of the ratio of the light to the heavy neutrino masses. In this case one expects the mass mechanism to dominate over both ⟨λ⟩ and ⟨η⟩, given current limits on W_L − W_R mixing [65] and lower limits on the W_R mass from the LHC [66,67]. However, as in the LQ example model discussed previously in section III A, contributions to the neutrino mass matrix contain a sum over the three heavy right-handed neutrinos. In the case of severe fine-tuning of the parameters entering the neutrino mass matrix, the connection between the light-heavy neutrino mixing and m_ν can be avoided, see section III A. In this particular part of parameter space, the incomplete sum Σ_{j=1}^{3} U_{ej} V_{ej} could in principle be larger than the naive expectation. Recall that the current bound on non-unitarity of U is of the order of 1% [49]. For Σ_{j=1}^{3} U_{ej} V_{ej} as large as ∼ O(10^{-2}), ⟨λ⟩ and/or ⟨η⟩ could dominate over the mass mechanism, even after taking into account all other existing limits. We stress again that this is not the natural expectation. In summary, there are some particular decompositions of d = 9 operators containing the SM W or Higgs boson. In those cases the d = 9 operator scales as 1/(Λ³ v²_SM) and can be as important as the corresponding decomposition of the d = 7 operator.

V. SUMMARY

We have studied d = 7 ∆L = 2 operators and their relation with the long-range part of the amplitude for 0νββ decay. We have given the complete list of decompositions for the relevant operators and discussed a classification scheme for these decompositions, based on the level of perturbation theory at which the different models produce neutrino masses.
For tree-level and 1-loop neutrino mass models we expect that the mass mechanism is more important than the long-range (p/-enhanced) amplitude. We have discussed how this conclusion may be avoided in highly fine-tuned regions of parameter space. For 2-loop neutrino mass models based on d = 7 operators, the long-range amplitude usually is more important than the mass mechanism. To demonstrate this, we have discussed in some detail one example decomposition based on O_8, which generates neutrino masses at the 2-loop level, can fit oscillation data and yields a large contribution to the long-range amplitude. We also discussed the connection of our work with previously considered long-range contributions in left-right symmetric models. This served to point out some particularities of the operator classification that we rely on, in cases where higher order operators, such as d = 9 (O_9 ∝ Λ^{-5}_{LNV}), are effectively reduced to lower order operators, i.e. d = 7 operators scaling as 1/(Λ³ v²_SM). Our main results are summarized in tabular form in the appendix, where we give the complete list of possible models which lead to contributions to the long-range part of the amplitude for 0νββ decay. From this list one can deduce which contractions can lead to interesting phenomenology, i.e. models that are testable also at the LHC. In the tables of the appendix, the mediators needed for the corresponding decompositions are listed in the "Mediators" column. The symbols S and ψ represent the Lorentz nature of the mediators: S^(′) is a scalar field, and ψ_{L(R)} is a left(right)-handed fermion. The charges of the mediators under the SM gauge groups are identified and expressed in the format (SU(3)_c, SU(2)_L)_{U(1)_Y}. The contributions of the effective operators to neutrinoless double beta decay can be read off from the "Projection to the basis ops." column. The basis operators are defined with all indices written explicitly: α, β for lepton flavour, the lower (upper) I for the 3 (3̄) of SU(3) colour, i, j, k, l for the 2 of SU(2)_L, ρ, σ for Lorentz vector indices, and a, b, c, d (ȧ, ḃ) for left(right)-handed Lorentz spinor indices. The lowest-loop (i.e., dominant) contributions to neutrino masses are indicated in the "m_ν" columns. We are mainly interested in decompositions (= proto-models) where new physics contributions to 0νββ can compete with the mass mechanism contribution mediated by the effective neutrino mass m_ν. An annotation "w. (additional interaction)" is given in the "m_ν @1loop" column for some decompositions. This indicates that one can draw the 1-loop diagram by putting together the interactions that appear in the decomposition and the additional interaction. The additional interactions given in the tables are not included in the decomposition, but they are not forbidden by the SM gauge symmetries, nor can they be eliminated by any (abelian) discrete symmetry without removing at least some of the interactions present in the decomposition. For example, using the interactions appearing in decomposition #11 of Babu-Leung operator #8 (see Tab. V), one can construct two 2-loop neutrino mass diagrams mediated by the Nambu-Goldstone boson H^+, whose topologies are T2B2 and T2B4 of [59]. This also corresponds to the 2-loop neutrino mass model labelled O^1_8 in [38]. However, to regularize the divergence in diagram T2B4, the additional interaction (Q^c)_{aIi} (iτ_2)_{ij} (L)_{aj} S^I is necessary, and this interaction generates a 1-loop neutrino mass diagram. Consequently, this decomposition should be regarded as a 1-loop neutrino mass model. 9 We also indicate in brackets the 1-loop neutrino mass models that require an additional interaction with an additional field (a second Higgs doublet H′). 10
The two contributions to 0νββ are compared in Sec. III with some concrete examples. The comparison is summarized in Tab. II. [Tab. II compares the estimated long-range contribution A_LR to 0νββ with that, A_MM, of the mass mechanism, for the decompositions grouped by the loop level at which they generate neutrino masses. When the neutrino mass is generated at the tree or one-loop level, the new physics scale Λ_7 must be sufficiently high to reproduce the correct size of the neutrino masses; consequently, the long-range contributions A_LR are suppressed and the mass mechanism dominates the contribution to 0νββ. As usual in such an operator analysis, these estimates do not take into account that some non-SM Yukawa couplings appearing in the ultraviolet completion of the operators could be sizably smaller than one, which would lead to lower scales Λ_7. Also, for loop models the scales could be overestimated, since the estimates neglect loop integrals. The neutrino masses generated at the two-loop level from the decompositions of the Babu-Leung #8 operator should be estimated with the d = 7 LLHHHH† operator (as illustrated in sect. III C); in addition, they receive additional suppression from the lepton Yukawa coupling y_{ℓ_β}, which further lowers the new physics scale Λ_7. Note that in particular for the 2-loop d = 7 models, as the concrete example in sect. III C shows, the estimate for A_MM/A_LR can vary by several orders of magnitude, depending on parameters. However, both the estimate shown here and the explicit calculation in sect. III C give A_MM/A_LR ≪ 1, such that the long-range contribution always dominates over the mass mechanism for these decompositions.] In short, the mass mechanism dominates 0νββ if neutrino masses are generated at the tree or the 1-loop level. When neutrino masses are generated from 2-loop diagrams, new physics contributions to 0νββ become comparable with the mass mechanism contribution and can be large enough to be within reach of the sensitivities of next-generation experiments. However, the 2-loop neutrino masses generated from the decompositions of the Babu-Leung operators #3 and #4 are anti-symmetric with respect to the flavour indices, as in the original Zee model, and thus are already excluded by oscillation experiments. Therefore, if we adopt those decompositions as neutrino mass models, we must extend the models to make the neutrino masses compatible with oscillation data. In such models, the extension part controls the mass mechanism contribution and also the new physics contribution to 0νββ, and consequently, we cannot compare the contributions without a full description of the models including the extension. Nonetheless, it might be interesting to point out that decomposition #8 of Babu-Leung #3 contains the tensor operator O^{ten.}_{3a}(e, e), which gives a contribution to 0νββ and generates neutrino masses with the (e, e) component at the two-loop level. On the other hand, 2-loop neutrino mass models inspired by decompositions of Babu-Leung #8 possess a favourable flavour structure. This possibility has been investigated in Sec. III C with a concrete example. There is another category of lepton-number-violating effective operators, not contained in the catalogue by Babu and Leung: operators with covariant derivatives D_ρ. These have been intensively studied in Refs. [39,40,50]. The derivative operators with mass dimension seven are classified into two types by their ingredient fields: one is D_ρ D^ρ LLHH and the other is D_ρ L γ^ρ e_R HHH. With the full decomposition, it is straightforward to show that the tree-level decompositions of the first type must contain one of the seesaw mediators. Therefore, the neutrino masses are generated at the tree level and the mass mechanism always dominates the contributions to 0νββ. The decompositions of the second type also require the scalar triplet of the type II seesaw mechanism when we do not employ vector fields as mediators, and the new physics contributions to 0νββ become insignificant again compared to the mass mechanism. In Ref. [50], the authors successfully obtained the derivative operator (e_R^c γ^ρ L iτ_2 τ W_ρ H′)(H iτ_2 H′) at the tree level and simultaneously avoided the tree-level neutrino mass with the help of a second Higgs doublet H′(1, 2)_{+1/2} and a Z_2 parity which is broken spontaneously. Here we restrict ourselves to the ingredients obtained from the decompositions and do not discuss such extensions. Within our framework, the derivative operators are always associated with tree-level neutrino masses. In this study, we have mainly focused on the cases where the new physics contributions have a considerable impact on the 0νββ process. Therefore, we do not go into the details of the decompositions of the derivative operators. [Appendix table captions: the contributions to 0νββ are given as combinations of the basis operators in the "Projection to the basis ops." column; the tensor operators O^{ten.} play an important role in the long-range contribution.]
Compressed spectral screening for large-scale differential correlation analysis with application in selecting Glioblastoma gene modules Differential co-expression analysis has been widely applied by scientists in understanding the biological mechanisms of diseases. However, the unknown differential patterns are often complicated; thus, models based on simplified parametric assumptions can be ineffective in identifying the differences. Meanwhile, the gene expression data involved in such analysis are in extremely high dimensions by nature, whose correlation matrices may not even be computable. Such a large scale seriously limits the application of most well-studied statistical methods. This paper introduces a simple yet powerful approach to the differential correlation analysis problem called compressed spectral screening. By leveraging spectral structures and random sampling techniques, our approach could achieve a highly accurate screening of features with complicated differential patterns while maintaining the scalability to analyze correlation matrices of $10^4$--$10^5$ variables within a few minutes on a standard personal computer. We have applied this screening approach in comparing a TCGA data set about Glioblastoma with normal subjects. Our analysis successfully identifies multiple functional modules of genes that exhibit different co-expression patterns. The findings reveal new insights about Glioblastoma's evolving mechanism. The validity of our approach is also justified by a theoretical analysis, showing that the compressed spectral analysis can achieve variable screening consistency. Differential expression and co-expression analysis High-throughput RNA sequencing (RNA-seq) data have recently drawn great attention in genomic studies [Anders and Huber, 2010, Soneson and Delorenzi, 2013, Zhang et al., 2014. As a powerful tool to quantify the abundance of mRNA transcripts in a sample, RNA-seq data have increasingly been used to identify differentially expressed genes associated with specific biological and clinical phenotypic variations [Krupp et al., 2012, Lonsdale et al., 2013. For example, differential gene expression analysis can be adopted to detect mRNA transcripts with varying abundances in tumor samples versus normal tissue samples [Wan et al., 2015, Li et al., 2016. RNA-seq data therefore represent a popular alternative to microarrays in such work [Tarazona et al., 2011, Costa-Silva et al., 2017. Conventional differential expression analysis focuses on comparing marginal gene expression levels between conditions or experimental groups [Wang and He, 2007, Li and Tseng, 2011, Trapnell et al., 2012, Sun et al., 2015, Zhao et al., 2017, Dadaneh et al., 2018. A complete understanding of the molecular basis of phenotypic variation also requires characterizing the interactions between genetic components [Ballouz et al., 2015, Van Der Wijst et al., 2018. Many clustering algorithms have been proposed to identify groups of co-expressed genes (see Sarmah and Bhattacharyya [2021] for a review of popular tools used to analyze RNA-seq data). In many cases, researchers' primary interest lies in discerning co-expression pattern variation among genes. For example, [Hudson et al., 2009] discovered that the key myostatin gene is not differentially expressed between two cattle breeds but exhibits distinct co-expression structures of the involved causal mutations. Statistical methods appropriate for this type of analysis remain understudied. 
A widely used strategy for statistical differential co-expression analysis is modeling differential networks (see Shojaie [2021] for a detailed review). In particular, many approaches involve fitting Gaussian graphical models with certain sparsity assumptions [Friedman et al., 2008, Yuan, 2010, Cai et al., 2011. One method entails jointly estimating multiple precision matrices [Chiquet et al., 2011, Guo et al., 2011, Saegusa and Shojaie, 2016, wherein the individual matrices are often assumed to be sparse. Danaher et al. [2014] introduced the fused graphical lasso approach to encourage fusion over entries of estimated precision matrices. Zhao et al. [2014] extended Cai et al. [2011]'s constraint 1 -minimization algorithm to directly estimate the differential network, that is, the difference of two precision matrices. Xia et al. [2015] proposed a testing framework to identify the difference between two partial dependence networks. Yuan et al. [2017] proposed a D-trace loss function with Lasso penalty to estimate a differential network with sparsity. Despite its relative popularity, this class of approaches suffers from many drawbacks in our application scenario. First, the partial correlation structure is usually estimable based on the Gaussian assumption of data. However, non-normal distributions of gene expression data have been widely documented [Marko and Weil, 2012], and growing evidence suggests a non-linear relationship between genes [Yang et al., 2021]. Second, partial dependence interpretation is less straightforward than marginal dependence and thus appears less frequently in biological and medical studies. Third, due to the noisy nature of data, medical researchers typically prefer to adopt robust versions of dependence metrics [Chatrath et al., 2020], such as the Spearman correlation coefficient; less is known about statistical methods related to robust partial dependence. Finally, differential analysis based on partial dependence tends to be computationally expensive and unable to tackle the scale of our problem. We therefore focus on the differential analysis of marginal co-expression dependence in this paper. The differential analysis of marginal correlations has been considered in several contexts, mainly when testing variations. Schott [2007] and Li and Chen [2012] consider testing two covariance matrices that are different in most entries, while Cai et al. [2013], Cai and Zhang [2016], and Chang et al. [2017] introduced methods appropriate for cases when differential patterns are sparsely distributed. In our problem (to be introduced in the next section), we aim to discover differentially co-expressed genes between cancer patients and healthy subjects. This differential pattern does not exist everywhere in a correlation matrix. Yet when considering the correlation structure, each gene simultaneously affects all its correlation entries. The expected differential pattern of the correlation matrices should hence be globally sparse but locally dense. Therefore, neither of the above two scenarios reasonably approximates our case. Zhu et al. [2017] rrevealed that spectral properties are far more effective for detecting this type of differential signal. In particular, Zhu et al. [2017] proposed using the spectral norm of differential covariance matrices as the test statistic for the differential hypothesis. However, differential gene selection is not studied in their work. 
In this paper, we consider a general correlation matrix which enables us to adopt more flexible data-based correlation measurements and thus better capture the co-expression structure. Our proposed method is grounded in spectral methods. Compared with prior work on the differential analysis of covariance/correlation matrices, our primary contributions are two-fold. Firstly, we introduce a "spectral screening" method, which identifies the differentially co-expressed genes instead of simply testing whether there are differences. Our method considers block differences, which are more informative for our application than the aforementioned studies. Our approach can also be seen as a submatrix localization method. However, different from existing matrix localization methods, we do not enforce additional assumptions on the differential pattern (e.g., the sign constraint in Butucea et al. [2015], or the constant difference in Chen and Xu [2016], , Liu and Arias-Castro [2019]). This generality affords us appreciable advantages in analyzing real-world datasets. Second, we present a simple yet powerful randomized version of the localization method that significantly improves the scalability of our spectral screening approach to handle large-scale differential analysis; the previous mentioned methods cannot do so. Specifically, we calculate a small proportion of correlation differences and then use the spectral properties of the incomplete differential matrix to localize differential variables. This strategy possesses much lower computational complexity while maintaining sound accuracy and theoretical guarantees. Motivating application and the data set The Cancer Genome Atlas (TCGA) is a large set of tumor and normal tissue samples collected from over 10,000 patients cataloging molecular abnormalities observed across 33 cancer types [Network, 2015]. The Genotype Tissue Expression (GTEx) project is a large-scale sequencing project including more than 9,000 samples of 53 different tissues from over 500 healthy individuals [Lonsdale et al., 2013, Consortium et al., 2015. Scholars have identified differentially expressed genes in multiple tumor types via differential expression analysis by comparing tumor samples from TCGA to normal tissue samples from GTEx after batch-effects correction. However, gliomas were excluded from these studies due to a lack of normal brain samples in TCGA . Additionally, it is impossible to normalize the samples for differences in batch effects across studies when using standard approaches given this absence of normal brain samples. This paper focuses on differential patterns of correlation rather than expression levels. As such, our approach offers an alternative to conventional differential analysis to discern meaningful differential patterns when normalization is not possible through standard methods. Gliomas are highly invasive primary brain tumors that are challenging to resect neurosurgically without substantial patient morbidity. Approximately 20,000 patients are diagnosed with gliomas each year in the United States. Gliomas can be divided into several grades according to severity. Grade II and Grade III gliomas are lower-grade gliomas; Grade IV gliomas are also known as glioblastoma multiforme. Glioblastoma multiforme is highly lethal and has a five-year survival rate of less than 5%. 
Although lower-grade gliomas do not progress as quickly as glioblastoma multiforme, they are essentially uniformly lethal and can progress to glioblastoma multiforme; the median period of survival for lower-grade gliomas is seven years. Typically, lower-grade gliomas and glioblastoma multiforme are first managed surgically, followed by chemotherapy and radiation therapy [Network, 2015, Bauman et al., 2009, Yan et al., 2012, Louis et al., 2016. The 2016 World Health Organization's guidelines for brain tumors suggest that gliomas should be classified based on molecular characteristics [Louis et al., 2016]. Given recent recognition of the roles of molecular characteristics in classifying gliomas and the lethality of these tumors, it is crucial to identify genes of interest warranting further study. We compare glioblastoma multiforme samples from the TCGA dataset to normal brain samples from the GTEx dataset using the approach described in this paper. As mentioned, lower-grade gliomas (Grade II or III) have a better prognosis than glioblastoma multiforme (Grade IV) but can progress to glioblastoma multiforme. We hypothesize that lower-grade gliomas would show an intermediate degree of dysregulation between glioblastoma multiforme and normal brain tissue. Specifically, we use lower-grade gliomas as a validation set in this study, as suggested in prior research [Lonsdale et al., 2013, Network, 2015, Consortium et al., 2015, Yan et al., 2012, Louis et al., 2016. The key challenge in this setting lies in the dimensionality of the problem: we have a few hundred observations among 51,448 genes. It would be impossible to calculate all correlation coefficients for this number of genes -never mind performing in-depth differential analysis. A computationally feasible analysis would remain elusive even with basic filtering. Our method is intended to handle differential correlation analysis of such large-scale problems. A detailed analysis is presented in Section 5. Organization of the paper. Section 2 introduces the proposed methodology. Practical considerations such as parameter tuning and implementation, and complexity analysis are discussed in Section 3. Section 4 illustrates the model performance by simulation studies. Section 5 presents our real data analysis findings. Section 6 provides theoretical results. Section 7 concludes with further discussions. Methodology Notations. Given a positive integer p, define [p] = {1, 2, · · · , p}. Let R p×p be the set of all p × p matrices, and S p×p + be the set of all p × p positive definite matrices. Given a square matrix M , denote its spectral norm and Frobenius norm by M and M F , respectively. For two sequences {a n } and {b n }, we write a n = O(b n ) if a n /b n is bounded for all sufficiently large n. We also write a n = o(b n ) if a n /b n → 0 as n goes to ∞, in which case we may also use the notation a n b n . Given a p × p matrix M and two sets G 1 , G 2 ⊂ [p], we write M G1,G2 as the submatrix from constraining on rows in G 1 and columns in G 2 . We will describe our algorithms assuming the statistic of interest is the covariance matrix. The study of correlation matrix just needs a rescaling step, so will not be distinguished conceptually. Assume we have a size-n 1 random sample x i , i = 1, · · · , n 1 from a distribution with mean µ 1 and covariance Σ 1 and a size-n 2 sample y i , i = 1, · · · , n 2 from a distribution with mean µ 2 and covariance Σ 2 . Here, µ 1 , µ 2 ∈ R p and Σ 1 , Σ 2 ∈ S p×p + . 
We do not assume special structures (e.g., sparsity) for the individual matrix Σ_1 (or Σ_2). We are interested in the situation where max(n_1, n_2) ≪ p. Even loading the matrices Σ_1 and Σ_2 into a computer's memory can be challenging, never mind the associated calculation. Fortunately, in many differential correlation analyses, it is reasonable to assume that only a small set of coordinates G ⊂ [p], with |G| = m ≪ p, leads to different correlations. That is, the difference D = Σ_1 − Σ_2 is zero outside the m × m block indexed by G. Our main objective in this paper is to identify G. The challenge lies in the problem scale. It is infeasible to examine G even by calculating all pairwise correlations: our example in Section 1.2 involves more than 50,000 genes, yielding about 2.5 × 10^9 marginal correlations for the two samples. Therefore, we need a rapid screening method that can reduce the problem to a more manageable size for downstream analysis with precise detection. Specifically, let Σ̂_1 and Σ̂_2 be the sample covariance matrices of X_1 and X_2, respectively. Define D = Σ_1 − Σ_2 and D̂ = Σ̂_1 − Σ̂_2. We will first introduce a simple screening method for G based on D̂ in Section 2.1. Then, in Section 2.2, we will approximate D̂ without computing the full matrix for screening purposes. Spectral screening Spectral algorithms constitute a family of methods intended to handle structures in large-scale datasets, ranging from classical clustering analysis [Shi and Malik, 2000, Ng et al., 2002] to more complicated data structures such as text data [Anandkumar et al., 2015], time series [Hsu et al., 2012], and network data [Rohe et al., 2011, Lei and Rinaldo, 2014, Li et al., 2020c, Miao and Li, 2021]. From a macro perspective, our approach echoes the network method of Miao and Li [2021]. The primary difference is that we are working with differential correlation matrices instead of a single network. More importantly, in our case, the dataset has a computationally prohibitive scale; this analysis thus calls for a different spectral method design. We begin with a discussion of our problem's spectral properties. Denote the rank of D by K. We have K ≤ m in the current context. Let D = UΛU^T be the eigen-decomposition of D, where Λ = diag(λ_1, · · · , λ_K) is a square diagonal matrix of all the nonzero eigenvalues of D (with non-increasing magnitude), and U = (u_1, · · · , u_K) consists of the corresponding eigenvectors. Without loss of generality, throughout this paper, we assume G is the set of the first m variables; that is, G = [m]. In this case, let U_1 be the matrix of the first m rows of U and U_2 be the matrix of the last p − m rows. Because D is zero outside its leading m × m block, the rows of D outside G satisfy 0_{(p−m)×p} = U_2 Λ U^T, and multiplying on the right by U Λ^{−1} leads to U_2 = 0_{(p−m)×K}. This relation suggests a simple strategy to identify G based on the rows of U, as these rows fully capture the sparsity pattern of the differential correlation structure. More notably, this property does not depend on any additional structural assumption about D_{G,G}, and can hence potentially be more general than many other methods. The algorithm based on the above idea is summarized as follows: Algorithm 1 (Spectral screening SpScreen(D̂, K)). Given the differential correlation matrix D̂ and a positive integer K: 1. Calculate the eigen-decomposition D̂ = ÛΛ̂Û^T up to rank K. 2. Compute the spectral score s_i = ||Û_{i·}||_2 for each variable i ∈ [p]. 3. Return the spectral scores {s_i}. 4. (Optional) If a threshold vector ∆ ∈ R^p_+ is available, select all variables with s_i > ∆_i. We expect K to be an integer that is approximately the rank of D. It is usually unknown and will be treated as a tuning parameter.
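To make the screening step concrete, the following R sketch (our own illustration, not the authors' released code; all object names are hypothetical) implements Algorithm 1 on a toy problem: it forms the sample covariance difference, takes the top-K eigenvectors by eigenvalue magnitude, and scores every variable by the Euclidean norm of its row of eigenvector loadings.

# Minimal sketch of Algorithm 1 (spectral screening); illustrative only.
sp_screen <- function(D_hat, K) {
  eig <- eigen(D_hat, symmetric = TRUE)                   # eigen-decomposition
  top <- order(abs(eig$values), decreasing = TRUE)[1:K]   # top K by magnitude
  U   <- eig$vectors[, top, drop = FALSE]
  sqrt(rowSums(U^2))                                      # spectral score s_i = ||U_i.||_2
}

# Toy data: p = 200 variables, only the first m = 20 are differentially correlated.
set.seed(1)
n1 <- n2 <- 100; p <- 200; m <- 20
X1 <- matrix(rnorm(n1 * p), n1, p)
X2 <- matrix(rnorm(n2 * p), n2, p)
X2[, 1:m] <- X2[, 1:m] + 0.8 * X2[, 1]                    # extra correlation within the block
scores <- sp_screen(cov(X1) - cov(X2), K = 2)
head(order(scores, decreasing = TRUE), m)                 # mostly indices 1..m

The rank argument K in this sketch plays exactly the role of the tuning parameter discussed next.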
The strategy for tuning K will be discussed in Section 3. Fast approximation by random sampling Algorithm 1 requires the input D̂ = Σ̂_1 − Σ̂_2. In the scenario of our application problems, p is large, and both Σ̂_1 and Σ̂_2 tend to be dense; it would therefore be impractical to calculate these matrices. Alternatively, we resort to approximation methods for screening. Our solution is based on the following two observations: 1) Algorithm 1 only needs the leading eigenspaces; 2) the rank of D̂ is at most n_1 + n_2, which is much smaller than p. Because low-rank matrices can be represented efficiently with far fewer entries than O(p^2) [Chatterjee, 2015, Li et al., 2020b, Abbe et al., 2017, Chen et al., 2019], it becomes possible to approximate D̂ or its eigenspace without knowing all of its entries. Specifically, we sample a subset of entries of D, with each entry sampled independently with a pre-specified probability ρ. Instead of calculating the full D̂ over all p(p − 1)/2 pairs, we calculate the sample covariance values on these sampled positions only. Depending on the problem size, using a small ρ (e.g., ρ < 0.05) can greatly conserve computational time and memory. Let D̃ be the incomplete matrix with values only on the subsampled entries. We then approximate the spectral structure of D based on the sparse approximation D̃ with only those sampled entries filled. Analogous to matrix completion problems, we impute the missing entries by zeros and then calculate the corresponding eigenstructures as our approximation. The independent sampling strategy allows for accurate eigenstructures. Our two-step screening algorithm is summarized below. In light of its connections with compressed sensing and matrix completion problems, we call it Compressed Spectral Screening (CSS). Algorithm 2 (Compressed spectral screening). Given data matrices X_1 and X_2, a sampling probability ρ, and a positive integer K: 1. Sample each pair (i, j) independently with probability ρ and compute the entries of D̂ on the sampled positions only, setting all unsampled entries to zero to obtain the sparse matrix D̃. 2. Return SpScreen(D̃, K). This subsampling strategy can greatly accelerate spectral screening for large-scale datasets, assuming a well-designed implementation. The subsampling and covariance calculation are each simple to implement in a parallel setting; as such, the computational speed can be further improved when a distributed system is available. The CSS algorithm also removes the crucial memory constraint of the originally infeasible problem and renders the computation more scalable. 3 Practical considerations: tuning and computation 3.1 Tuning parameter selection Tuning the rank K. K can be tuned by cross-validation, as in many matrix completion problems [Chi and Li, 2019]. However, given the scope of our problem, a full-scale cross-validation is infeasible. We therefore replace the K-fold cross-validation procedure with basic random sampling validation, coupled with Algorithm 2. Each time, in the random sampling step of the CSS algorithm, we sample additional pairs of variables and validate prediction performance under different tuning parameters by running the eigen-decomposition only once. This procedure is shown in Algorithm 3. Algorithm 3 (Compressed spectral screening with tuning). Given observation matrices X_1, X_2, sampling proportion ρ, validation proportion τ, and K_l < K_u, initiate zero sparse matrices D̃, D̄, Ω, Ω̄, Φ ∈ R^{p×p} and do the following: 1. Sample the screening positions Ω together with an additional set of validation positions Ω̄, and compute the corresponding covariance difference entries (notice that, marginally, the Ω_{ij} can be seen as random Bernoulli samples with probability ρ). 2. Calculate the partial eigen-decomposition up to rank K_u for D̃, denoted by D̃ = ŨΛ̃Ũ^T. For each K between K_l and K_u: (a) use the rank-K approximation to predict the entries at positions (i, j) such that Ω̄_{ij} = 1 and Ω_{ij} = 0; (b) record the prediction error on these validation positions and select the rank K̂ with the smallest error. 3. Return SpScreen(D̃, K̂).
Note that this step can reuse the partial decomposition from Step 2. In Algorithm 3, the estimate D̃^(K) is based only on the pairs with Ω_{ij} = 1 and Ω̄_{ij} = 1, while the pairs with Ω_{ij} = 0 and Ω̄_{ij} = 1 constitute the validation set. By default, we use τ = 0.1 in all experiments. It is easy to see that, in the current context of two data matrices, the natural lower end of the range is K_l = 2. Also, since we observe only a proportion ρ of the entries of D̂, which has rank at most n_1 + n_2, we can set K_u = ρ(n_1 + n_2) to ensure a reasonable signal-to-noise ratio. Empirically, these choices give a very effective range in all of our evaluations. Additional constraints from side information can be further enforced to reduce the computational cost and the model selection variance. Selection of the sampling proportion ρ. The larger ρ is, the more information we can include for screening. We would always prefer to use a larger ρ within the affordable computational limit. It can also be expected that if ρ is too small, we will eventually not have sufficient information for meaningful screening. Here we introduce an ad hoc rule, based on our theoretical analysis (see Section 6), for choosing the lower bound of ρ; it works well in all of our experiments. The matrix D̂ has p(p + 1)/2 entries calculated from the (n_1 + n_2)p entries of the raw data. Intuitively, we should therefore sample more than (n_1 + n_2)p entries of D̂ to retain a reasonable amount of information and keep the recovery problem feasible; accordingly, ρ p(p + 1)/2 > (n_1 + n_2)p, that is, ρ > 2(n_1 + n_2)/(p + 1). Determining the selection threshold. In the previous algorithms, e.g., Algorithm 1, we did not specify how to determine the cutoff of the spectral scores used to identify G; this decision may be case-specific. In many situations, data analysts already have a sense of how many variables should be selected. Similarly, when CSS is used for initial screening to reduce the problem size for a refined analysis (as in our analysis in Section 5), the threshold can be set to produce a feasible number of variables for the downstream algorithm. In other scenarios, we need an automatic, data-driven strategy to determine differential variables based on the spectral scores. Resampling methods have been widely applied to determine reasonable selections in many problems [Meinshausen and Bühlmann, 2010, Lei, 2020, Le and Li, 2020]. We adopt a similar angle and propose a bootstrap procedure [Efron, 1979] to determine ∆. Because the spectral scores of differential variables are large, we should base our selection on an upper bound for the scores that the non-differential variables can be expected to have. Although these scores are unknown, we can use bootstrapping to create a null distribution in which all variables follow the same covariance structure. The procedure is shown below: Algorithm 4 (Stability selection for spectral screening). Given data X_1 ∈ R^{n_1×p}, X_2 ∈ R^{n_2×p}, the number of bootstrap replications B, and the parameters K and ρ of the spectral screening algorithm: 1. Apply the spectral screening to (X_1, X_2) to obtain the observed spectral scores {s_i}. 2. For b = 1, 2, · · · , B, generate the bootstrap samples: (a) sample n_1 rows from X_2 with replacement to stack into a matrix X̃_1^(b), and n_2 rows from X_2 with replacement to stack into a matrix X̃_2^(b); (b) apply the same spectral screening to (X̃_1^(b), X̃_2^(b)) and record the resulting null scores. 3. Set each threshold ∆_i according to the upper range of variable i's null scores across the B replications (e.g., their maximum or a high quantile), and select the variables with s_i > ∆_i. It is worth noting that the thresholds for the p variables generally differ, because the bootstrap procedure adapts to the variability of each variable. This method is quite effective in our evaluation (see Section 4).
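A minimal sketch of the bootstrap thresholding idea is given below (again our own illustration; drawing both null samples from X_2 and taking the maximum bootstrap score as the per-variable threshold are assumptions on our part, since only part of Algorithm 4 is reproduced above).

# Sketch of a bootstrap threshold in the spirit of Algorithm 4; illustrative only.
spectral_scores <- function(D_hat, K) {                     # same scoring as Algorithm 1
  eig <- eigen(D_hat, symmetric = TRUE)
  U   <- eig$vectors[, order(abs(eig$values), decreasing = TRUE)[1:K], drop = FALSE]
  sqrt(rowSums(U^2))
}

bootstrap_threshold <- function(X1, X2, K, B = 50) {
  boot_scores <- replicate(B, {
    Xb1 <- X2[sample(nrow(X2), nrow(X1), replace = TRUE), ] # null pair: both resampled
    Xb2 <- X2[sample(nrow(X2), nrow(X2), replace = TRUE), ] # from the same group
    spectral_scores(cov(Xb1) - cov(Xb2), K)
  })                                                        # p x B matrix of null scores
  apply(boot_scores, 1, max)                                # per-variable threshold Delta_i
}

# Usage (hypothetical objects): keep variables whose observed score exceeds its threshold.
# delta  <- bootstrap_threshold(X1, X2, K = 2)
# chosen <- which(spectral_scores(cov(X1) - cov(X2), K = 2) > delta)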
The stability selection requires running spectral screening B times, which carries a high computational cost. Therefore, for large-scale differential analysis, we recommend the following strategy: (1) apply compressed spectral screening to conduct a rough screening and significantly reduce the problem size; (2) use the full version of spectral screening on the reduced dataset; (3) adopt stability selection to determine the final selection. We use this strategy for the glioblastoma gene analysis in Section 5. Sampling implementation and complexity analysis The algorithms introduced thus far require efficient implementation to be scalable in practice. In this section, we provide additional computational details and a corresponding complexity analysis. We do not consider the extra tuning step because, as discussed, the tuning cost is of lower order. For simplicity, we assume that n_1 = n_2 = n in this section; if not, all of the results in this section still hold after replacing n with n_1 + n_2. The sampling step for Ω: a naive implementation that draws a Bernoulli variable for every pair takes O(p^2) operations, which is excessive when p is large. Instead, the sampling step should be based on generating geometric random numbers. Take any ordered list of the index pairs and map them to 1, 2, · · · , p(p − 1)/2; denote this mapping by π. Notice that the gap between two consecutively sampled indices under Bernoulli sampling follows a geometric distribution. Therefore, instead of sampling the status of each pair, we can use the geometric distribution to generate the sequence of sampled positions directly: starting from zero, repeatedly add independent geometric gaps until the running index exceeds p(p − 1)/2, and return the visited indices Y as the sampled positions, so that π^{−1}(Y) gives the sampled pairs of the original matrix. According to Bringmann and Friedrich [2013], the complexity of generating a geometric random number is O(log(1/ρ)). The procedure for generating Ω according to the algorithm above is therefore O(p^2 ρ log(1/ρ)). Compared with naive sampling, the cost is reduced by a factor of ρ log(1/ρ). The sampling step also involves computing the covariance values, which needs O(p^2 ρ n) operations. The next computational chunk is the rank-K eigen-decomposition. Notice that the matrix D̃ is now a sparse matrix with roughly ρp^2 nonzero entries. The computational complexity of this step depends on the signal distribution of the matrix, but it is generally estimated [Saad, 2011] to be of order O(Kρp^2 + K^2 p). The remaining algorithm steps have lower-order complexity. Let d = p·ρ be the expected number of observed entries in each row of D̃. In summary, the expected complexity for the whole procedure is O(p^2 ρ log(1/ρ) + p^2 ρ n + Kρp^2 + K^2 p). (1) Regarding the order of ρ, a natural requirement is that we should observe at least some entries in each row (or column) of the full matrix D̂; otherwise, there is no chance of recovering the corresponding information about the missing variable. Therefore, we cannot use an arbitrarily small ρ. It is known [Erdös and Rényi, 1960] that we should ensure d > 2 log p (or, equivalently, ρ > 2 log p/p) to guarantee, with high probability, that every row (or column) of D̂ has observed entries; we always assume this. A few other natural constraints also exist: K should always be smaller than 2n, and for a reasonable differential recovery we consider only the cases where log p ≪ n. Our recommended setup ρ ≈ n/p from the previous section satisfies all of these requirements.
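The geometric-gap sampler can be sketched in a few lines of R (our illustration; the mapping back from linear indices to index pairs via π^{−1} is omitted). Note that rgeom() counts failures before the first success, so adding one gives the gap to the next sampled index.

# Sketch of sampling pair positions via geometric gaps instead of p^2 Bernoulli draws.
sample_pairs <- function(p, rho) {
  n_pairs <- choose(p, 2)
  pos <- integer(0)
  cur <- 0
  repeat {
    cur <- cur + rgeom(1, rho) + 1       # gap to the next sampled linear index
    if (cur > n_pairs) break
    pos <- c(pos, cur)
  }
  pos                                    # linear indices in 1..choose(p, 2)
}

set.seed(1)
idx <- sample_pairs(p = 1000, rho = 0.01)
length(idx) / choose(1000, 2)            # empirical sampling rate, close to rho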
Combining these constraints, the complexity (1) is of order O(n^2 p) for the sequential implementation. The whole procedure also stores O(np) numbers in memory, with the help of a sparse matrix data structure. In comparison, the naive method requires O(p^2 n) computational complexity and O(p^2) memory. The CSS procedure thus yields a saving factor of n/p in both timing and memory in its sequential implementation, which is a huge gain in high-dimensional settings. The timing advantage is even larger with parallel computing. Simulation In this section, we use simulated data to evaluate the proposed method for differential analysis based on both Pearson-type and Spearman-type correlations. We also compare the proposed method with several recent methods for submatrix localization in both small-scale problems (n_1 = n_2 = 100, p = 2000) and large-scale problems (n_1 = n_2 = 100, p = 40000). In the first setting, calculating the full correlation matrices is still computationally feasible, and all of the submatrix localization methods can be used. We specifically include a spectral projection method for submatrix localization, which is based on a previously studied model property, and we embed this method into our subsampling strategy to extend its scalability. Moreover, we include the adaptive large average submatrix (LAS) algorithm proposed in Liu and Arias-Castro [2019], as an improved version of Shabalin et al. [2009], along with the Golden Section Search algorithm (from the same paper) as two other benchmarks. However, both of these methods require a full correlation matrix as input and are hence inapplicable to large-scale problems. The data x_i and y_i (i = 1, · · · , 100) are generated from multivariate normal distributions with zero mean and covariance matrices given by the so-called spiked covariance model [Johnstone and Lu, 2009], where v_1 has its first 50 entries generated from N(1, 0.2) and the rest set to zero, and v_2 has its 51st to 100th entries generated from N(1, 0.2) and the rest set to zero. Therefore, the two models have different correlations only on the first 100 coordinates. The spiked covariance model includes the differential correlation structures of the models in Chen and Xu [2016] and Liu and Arias-Castro [2019] as special cases, but it is more general because the differential components do not have to be constant. We evaluate the screening (or differentially correlated variable selection) accuracy on the true differential variables by sensitivity and specificity. Sensitivity refers here to the proportion of differential variables that are retained, while specificity reflects the proportion of null variables that are filtered out. For our method and the spectral projection method, we can generate the full ROC curve with respect to sensitivity and specificity by varying the number of selected variables, which, as mentioned, represents a core advantage in large-scale analysis. The adaptive LAS and Golden Section Search algorithms automatically produce a final separation of the data and therefore only yield a single point in the ROC plane for each instantiation of the data. The timing of the computation is another important aspect we wish to evaluate, given our focus on computationally feasible methods for large-scale problems. We implement our approach and the spectral projection algorithm in R. The adaptive LAS and Golden Section Search algorithms are based on the implementation provided in Liu and Arias-Castro [2019], available for download at https://github.com/nozoeli/biclustering.
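For reference, the simulated data can be generated as follows. This is a sketch under explicit assumptions: the displayed model (3) is not reproduced in the text above, so we take the standard spiked form Sigma_k = I_p + v_k v_k^T and read N(1, 0.2) as mean 1 and variance 0.2.

# Sketch of the simulation design (assumed spiked form; illustrative only).
library(MASS)                                      # for mvrnorm

set.seed(1)
n <- 100; p <- 2000
v1 <- c(rnorm(50, mean = 1, sd = sqrt(0.2)), rep(0, p - 50))
v2 <- c(rep(0, 50), rnorm(50, mean = 1, sd = sqrt(0.2)), rep(0, p - 100))
Sigma1 <- diag(p) + tcrossprod(v1)                 # spike on coordinates 1-50
Sigma2 <- diag(p) + tcrossprod(v2)                 # spike on coordinates 51-100
X1 <- mvrnorm(n, mu = rep(0, p), Sigma = Sigma1)
X2 <- mvrnorm(n, mu = rep(0, p), Sigma = Sigma2)
# Sigma1 - Sigma2 is nonzero only on the first 100 coordinates, as described above.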
In all configurations, we repeat the analysis for 50 independent instantiations. Results for small-scale problems For the small-scale problems with n_1 = n_2 = 100 and p = 2000, we test the full-data versions of our approach and the spectral projection method, as well as the corresponding subsampled versions with ρ = 0.05, 0.1, 0.2. We consider both Pearson's correlation and Spearman's correlation as the statistic for the analysis. Table 1 shows the average area under the ROC curve (AUC) over the 50 replications for all three values of ρ and the full version (ρ = 1). The findings in the table indicate that the methods' performance is fairly robust to the three sampling proportions in this case. Moreover, no observable differences in performance emerge when using either Pearson's or Spearman's correlation as the metric. Figure 1: Sensitivity-specificity ROC plots for all four methods (our method, spectral projection, adaptive LAS, Golden Section Search) on the small-scale problems (n_1 = n_2 = 100, p = 2000), with panel (a) for Pearson's correlation and panel (b) for Spearman's correlation; the "+" signs are the average points of the single estimates of each method. Next, we use the ρ = 0.1 variant of our approach and the spectral projection method for comparison with the adaptive LAS and Golden Section Search algorithms based on the full correlation matrices. Figure 1 depicts the average ROC curves of our method and the spectral projection method, with the 50 individual ROC curves in the background. The adaptive LAS and Golden Section Search algorithms can produce single selections but not ROC curves; thus, we show the 50 individual points in the background, with their averaged sensitivity and specificity indicated by "+". Spectral projection can also produce a single selection, based on its clustering properties. The 50 single instantiations and their average appear in the figure as small background circles and a solid "+". For our method, we use the bootstrap procedure to produce a single selection; the 50 individual selection points and their average are shown in the same way. Both the adaptive LAS and Golden Section Search algorithms offer good specificity, but they tend to be overly conservative and do not allow for the flexibility of choosing a threshold. Spectral projection accommodates subsampling and produces a full ROC curve. Its performance is similar to the other methods in the early stages; however, it suffers from a poor trade-off between sensitivity and specificity if one wants to raise the power beyond the Golden Section Search. These disadvantages of the three methods may be due to their restrictive constant assumption on the signal structure.
By contrast, our method, even when using only 10% of the data, affords the flexibility necessary for full-range detection. Additionally, its ROC curve is better than those of the three other methods. The single-selection results (i.e., the green dots) significantly outperform the other methods as well, conveying the effectiveness of the bootstrap method in identifying a good threshold. Finally, we compare the computational time for each instantiation, with parameter tuning included in the timing. The findings are listed in Table 2. When the full differential matrix is used, spectral projection is the most efficient. The computational difference between our method and spectral projection solely involves tuning, as we have an extra step to select the rank; this difference is negligible, and both methods demonstrate comparable timing performance. The adaptive LAS is much slower than the other methods. If we use our bootstrapping approach to locate the best model, our method takes longer: an average of 64.88 sec. and 80.14 sec. for Pearson's and Spearman's correlations, respectively. However, given its effectiveness in model selection, the bootstrap remains worthwhile as long as the problem size is not overly large. For large-scale problems, we still need to use compressed spectral screening to reduce the dimensionality first, as is evaluated next. Also, for this small-scale problem, the full version may be faster than the subsampled version purely due to implementation efficiency; that is, the subsampling and correlation calculation are implemented in R, whereas the full correlation matrix calculation uses R's default functions implemented in C. Results for large-scale problems In this section, we evaluate our method on problems with a size more commonly seen in gene differential analysis. Specifically, we now consider the scenario of n_1 = n_2 = 100 and p = 40000. The scale of this problem makes our subsampling strategy necessary. Therefore, we focus on our CSS method and spectral projection based on our subsampling technique. When we use 5% of the entries, the CSS performance is impressive and much better than subsampled spectral projection; a nontrivial proportion of curves work nearly perfectly. When the sampling proportion drops to 2%, the ROC curves become noisier. The advantage of CSS over subsampled spectral projection degrades as well but still remains significant. When ρ declines further to 1%, the difference between the two methods becomes negligible. In practice, as addressed before, the choice of ρ should depend on the computational capacity; a larger ρ is always preferable. Our rule of thumb, ρ > 2(n_1 + n_2)/(p + 1), gives ρ > 0.01 in this setting. This indicates that ρ = 0.01 is too low for an accurate estimation, matching our observation in Figure 2. Finally, to determine whether the effective performance at ρ = 0.05 is computationally feasible for the current problem, we include the timing results in Table 3. The average computational time for our method with ρ = 0.05 is about 438 seconds. The spectral projection is roughly 5% faster but exhibits significantly lower accuracy (Figures 2a and 2b). The configuration of ρ = 0.02 is faster and gives reasonable accuracy as well. In our view, delivering such accurate screening results within 438 seconds is noteworthy given the size of the problem. The proposed method can therefore significantly expand the scalability of differential correlation analysis.
5 Gene co-expression differential analysis for Glioblastoma We now demonstrate the compressed spectral screening method by applying it to analyze genes for glioblastoma. As mentioned, the original dataset contains 51,448 genes, with 139 observations in the GBM group, 254 in the normal group, and 507 in the LGG group. After removing deficiently expressed genes (with a median expression level of less than 0.25) from the three groups, 23,296 genes remain for analysis. We focus on identifying a subset of genes with significant differences in their correlation matrices between the GBM patient group and the normal group (GTEx). Our analysis is mainly based on the difference between the GBM group and the normal group, while the LGG group acts as the validation set. Due to potential skewness and the noisy nature of gene expression data, we apply a logarithmic transformation and use Spearman's correlation for our analysis. The problem is still too large for direct correlation analysis, so we begin by applying compressed spectral screening to the 23,296 genes. In the tuning procedure, K = 2 is selected as the approximating rank. We select 2000 genes through this process to reduce the problem to a manageable size for all benchmark methods. Next, we narrow our study to a smaller subset by running the exact benchmark algorithms on the resulting 2000 × 2000 correlation matrices. Our spectral screening method effectively identifies a subgroup of genes with clearly separated, larger spectral scores, indicating effective detection of a subgroup of differential correlations, as shown in Figure 3. The other three benchmark methods fail to achieve a similar level of effectiveness for selection and downstream analysis (Appendix C). Based on the proposed bootstrap selection method, 254 genes appear to be differentially correlated. Among the selected genes, the average correlation value is 0.628 for the GBM group, 0.41 for the LGG group, and 0.84 for the normal group. In addition to overall magnitude differences between the GBM and normal groups, we are interested in understanding the differential patterns in how genes are correlated. We therefore analyze these data using network analysis on differential networks, constructed so as to remove the differential effects of marginal magnitude. We next describe our analysis of the selected genes. Network analysis has been widely employed to evaluate biological and genetic relations [Gambardella et al., 2013]. One main advantage of using a network structure to represent correlations is that this data structure is potentially more robust and can remove magnitude effects. We first construct a network between genes according to Spearman's correlation in the normal group by treating correlation entries larger than a threshold as edges. The threshold is selected based on the strategy of Gambardella et al. [2013], wherein the network degree distribution best matches a power-law, or scale-free, distribution, which is generally believed to be a principled structure for biological networks [Barabási and Bonabeau, 2003, Barabási, 2009]. To ensure a meaningful comparison, we then select truncation thresholds for the GBM and LGG groups such that the resulting networks have the same density as the normal group's network. Finally, we select the largest connected component of each of the three networks, thereby obtaining three networks with 198 genes and an average degree of around 4.82.
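The network construction step can be sketched as follows (an assumed implementation using the igraph package; the function and object names are ours, and for the normal group the threshold would instead be chosen by the power-law criterion described above rather than a fixed target density).

# Sketch: correlation network at a density-matched threshold, largest component kept.
library(igraph)

cor_network <- function(expr, density_target) {
  # expr: samples x genes matrix of log-transformed expression
  R <- abs(cor(expr, method = "spearman"))
  diag(R) <- 0
  thr <- quantile(R[upper.tri(R)], probs = 1 - density_target)  # density-matched cutoff
  g <- graph_from_adjacency_matrix((R >= thr) * 1, mode = "undirected", diag = FALSE)
  comp <- components(g)
  induced_subgraph(g, which(comp$membership == which.max(comp$csize)))
}

# Usage (hypothetical objects):
# g_gbm <- cor_network(expr_gbm, density_target = edge_density(g_normal))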
Magnitude effects have already been eliminated from these networks, and the structural differences between them reflect only relative differences in the networks' connection patterns (Figure 4). Next, we leverage network analysis methods to explore the differential patterns between these networks. Given two networks on the same set of nodes, the networks can be represented by their adjacency matrices A_1 ∈ {0, 1}^{n×n} and A_2 ∈ {0, 1}^{n×n}. Their differential adjacency matrix can be defined as A ∈ {0, 1}^{n×n} with A_{ij} = I(A_{1,ij} ≠ A_{2,ij}); the matrix A thus records the dyads at which the two networks differ. In particular, we take the differential adjacency matrix between the GBM network and the normal network. We want to summarize differential patterns based on multiple gene modules; that is, the differential patterns should be similar for genes within the same module. We apply the hierarchical community detection (HCD) method of Li et al. [2020a], which can automatically determine the module partition as well as the hierarchical relations between modules. The module labels returned by the HCD algorithm are color-coded in Figure 4; the corresponding hierarchical relations between the detected gene modules (communities) appear in Figure 5. The modules and hierarchy provide informative distinctions regarding connection patterns. For example, in the normal network, Modules 1 and 2 are densely intraconnected, whereas both modules have sparse within-module connections in the GBM network. In contrast, Module 3 is densely intraconnected in the GBM network but has few within-module connections in the normal network. Modules 1 and 2 also exhibit different outgoing connection patterns: the connection between Modules 2 and 3 is much weaker in the GBM network than in the normal network, whereas this pattern does not hold for the connection between Modules 1 and 3. These distinct variation patterns align consistently with the hierarchical structure in Figure 5, indicating that Modules 1 and 2 may be treated as a meta-module with higher similarity compared with Module 3. Moreover, the most notable changes in connection pattern can be validated on the LGG network. To quantify this finding, Table 4 presents the three networks' within-module edge densities; these densities match the normal-LGG-GBM ordering perfectly. As our analysis is performed solely on the GBM and normal groups, this consistency offers compelling evidence for the effectiveness of our analysis. Figure 4: The networks of the GBM, LGG, and normal groups and the three gene modules (module 1, module 2, module 3) identified by the HCD algorithm on the differential network between the GBM and normal groups. The identified modules are also biologically meaningful. Specifically, we characterize each of the three gene modules via gene ontology analysis to identify their biological processes, molecular functions, and cellular components [Mi et al., 2019]. The main results are summarized below. • Module 1: The ontology analysis suggests that this module is enriched in genes exhibiting protein threonine kinase activity, protein serine kinase activity, and ATP binding function, and in genes coding for proteins with catalytic complex activity. Numerous genes coding for proteins with kinase activity have been shown to be overexpressed in gliomas; suppressing the activity of these overexpressed kinases is an area of active investigation [El-Khayat and Arafat, 2021, Kai et al., 2020].
The low connectivity in GBM relative to normal samples is likely reflective of this dysregulation. • Module 2: This module is enriched in genes involved in the regulation of chromatin organization, histone methylation, RNA localization, and transcription coregulator activity. Chromatin organization, histone methylation, and transcription factor activity have been found to differ substantially between glioma and normal samples [Bacolod and Barany, 2021, Shi et al., 2020, Ciechomska et al., 2020, Jia et al., 2020]. Gene dysregulation in this module could contribute to aberrations in the processes that control gene expression, leading to the substantial differences in gene expression profiles seen between glioma and normal samples. • Module 3: Genes in this module are enriched for gene sets involving the regulation of RNA metabolic processing and nuclear cellular components. Naturally, nucleic acids and nuclear components are essential to the rapid cell division characteristic of Grade IV gliomas [Krupp et al., 2012]. It therefore seems logical that within-module connectivity is increased in the GBM group. The detailed ontology enrichment categories are presented in Appendix B. Theoretical properties In this section, we present basic theoretical properties of the proposed method. We first study the complete version of spectral screening and show that the method can perfectly identify the differential variables under proper models; we call this a strong consistency property. In the second part, we study the compressed spectral screening and show that, in this case, at most a vanishing proportion of differential genes will be falsely removed; we call this a weak consistency property. We start by introducing a few more notations for the theoretical discussion. For a matrix M, we use M_{i·} to denote its ith row and M_{·j} to denote its jth column. Define ||M||_max = max_{ij} |M_{ij}|, ||M||_∞ = max_i ||M_{i·}||_1 = max_i Σ_j |M_{ij}|, and ||M||_{2,∞} = max_i ||M_{i·}||_2. For a positive semidefinite matrix M, define its stable rank as τ(M) = tr(M)/||M||, and note that τ(M) ≤ rank(M). Assumption 1. Assume x_1, · · · , x_{n_1} are i.i.d. sub-Gaussian random vectors with zero mean and covariance Σ_1, and y_1, · · · , y_{n_2} are i.i.d. random vectors with zero mean and covariance Σ_2. Moreover, max(||x_i||_{ψ_2}, ||y_i||_{ψ_2}) ≤ σ^2 for some constant σ^2 > 0, where ||·||_{ψ_2} is the Orlicz norm for sub-Gaussian distributions. In particular, this is equivalent to requiring a sub-Gaussian tail bound on ⟨v, x_i⟩ for any v ∈ R^p with ||v||_2 = 1, any t > 0, and 1 ≤ i ≤ n_1, with the same property holding for each y_i as well. As mentioned, our main structural assumption is that D = Σ_1 − Σ_2 has zero entries except within a diagonal block. Without loss of generality, we take the first m genes as the differential variables. In addition, we need D to be low-rank. We formulate these intuitions in the following assumption. In addition, we will need sufficient regularity for the problem to be solvable. A common property involved in spectral methods is the so-called incoherence assumption, originally introduced by Candès and Recht [2009]. It is believed to be necessary for controlling entrywise errors in matrix recovery [Fan et al., 2018, Abbe et al., 2017, Cape et al., 2019]. In our situation, U is zero in most rows, so the standard incoherence assumption is not meaningful. Instead, we constrain the assumption to only the nonzero rows corresponding to the differential variables, with a slightly stronger requirement.
Essentially, we are assuming that none of the differential variables has a row U_{i·} of negligible magnitude compared with the others. The next theorem shows that spectral screening on the complete data set is guaranteed to separate the differential variables from the rest according to the spectral scores. Theorem 1 (Strong consistency of the complete spectral screening). Under Assumptions 1, 2 and 3, let Û be the matrix of the top r leading eigenvectors of D̂ = Σ̂_1 − Σ̂_2, where Σ̂_1 and Σ̂_2 are the sample covariance matrices of {x_i}_{i=1}^{n_1} and {y_i}_{i=1}^{n_2}, respectively. For any constant c > 0, there exists a constant C depending only on K, c and µ such that if |λ_K| ≥ C σ^2 µ p √(log p/n), then for sufficiently large n, with probability at least 1 − 2n^{−c}, the spectral scores of the differential variables exceed those of all null variables, where n = min(n_1, n_2). Next, we introduce the theoretical guarantee for the compressed spectral screening, based on randomly subsampling the entries of D̂. Intuitively, since we use only a subset of the data, we may not be able to achieve exactly the same level of accuracy as with the complete data set. However, since we are using a randomly sampled subset, our result should still be correct in an average sense. This intuition can be formally justified by the following weak consistency of the compressed spectral screening. We state this property in terms of the number of differential variables that are not among the top m variables with the largest spectral scores. Definition 1. Let Ũ be the top r eigenvector matrix of D̃ in Algorithm 2. Let η_Ũ(i) be the rank of ||Ũ_{i·}||_2 among all p rows, in decreasing order. Define the confusion count of the spectral screening to be q_Ũ = |{i : 1 ≤ i ≤ m, η_Ũ(i) > m}|. Notice that there is a gap between Theorem 1 and Theorem 2, because even if ρ = 1 in Theorem 2, the result is still weaker than Theorem 1. We believe this is an artifact of our proving strategy and leave the refinement for future work. We now discuss the implications of Theorem 1 and Theorem 2 in a simplified case where the results are more interpretable. In particular, we consider a special case of the spiked covariance model (3), with n_1 = n_2 = n and u_1 satisfying the incoherence Assumption 3. Assume that τ(Σ_2) ≤ n/log p and max_j Σ_{2,jj} = 1. In this case, following Assumption 2, we can set σ^2 = C max(1, 1 + |λ_1|µ/m). Therefore, Theorem 1 indicates that strong consistency can be achieved if C p √(log p/n) ≤ |λ_1| ≤ m/µ. For example, when m = κp for some constant κ, we require log p ≪ n to have a nontrivial regime for λ_1. Similarly, for weak consistency we need |λ_1|/m = O(1). Combining these requirements, we need to ensure ρ ≳ n/p, which coincides with our rule of thumb about the lower bound of ρ in Section 3. Empirically, this selection gives a reasonable range of ρ according to our experiments. Discussion This paper proposes spectral screening algorithms to identify variables with differential patterns between two covariance/correlation matrices, motivated by the need to assess a large-scale gene expression dataset related to glioblastoma. Our method assumes that the differences are constrained within a block of the differential matrix but makes no additional assumptions about the differential patterns. By leveraging spectral properties, we can incorporate random sampling to significantly boost computational efficiency, such that the method can easily handle analyses of high-dimensional covariance matrices that cannot even be loaded into computer memory.
We have demonstrated the effectiveness of our spectral screening method in terms of variable selection accuracy and computational efficiency. A detailed differential co-expression study of TCGA and GTEx data demonstrates how the spectral screening method can clarify glioblastoma mechanisms. Our method is also well suited to a much broader range of applications, and its performance is theoretically guaranteed. Of note, this spectral screening method can identify differential genes but cannot provide further insight into specific differential patterns. More in-depth studies of differential patterns require downstream analysis. In our glioblastoma example, we use a hierarchical community detection algorithm from Li et al. [2020a] to extract precise differential patterns. The properties of this step are unknown with respect to differential correlation settings. As such, it would be useful to incorporate these two analytical steps into one systematic modeling procedure, given that both are based on the spectral properties of the data. The design and theoretical study of such a method presents an intriguing avenue for future work. A Proofs Proof of Theorem 1. Under Assumption 1, by Lemma 1 in Ravikumar et al. [2011], we have max(||Σ̂_1 − Σ_1||_max, ||Σ̂_2 − Σ_2||_max) ≤ σ^2 √(2c log p/n) (6) with probability at least 1 − 2n^{−c}. Under this event, by Lemma 1, there is an orthogonal matrix O ∈ R^{r×r} such that the ||·||_{2,∞} perturbation bound of Lemma 1 holds between Û and U O, as long as the eigengap condition of Lemma 1 is met. Notice that the zero pattern and the row norms do not change from U to U O for any orthogonal matrix O; therefore, we do not distinguish U and U O, for notational simplicity. Using Assumption 3, this yields a lower bound on the row norms of Û over the differential variables. In particular, when 112 µ √(2c) σ^2 (p/|λ_K|) √(log p/n) < 1, which indicates (8), the separation of the spectral scores holds because of (4). The claim of the theorem directly follows. Proof of Theorem 2. Let D̃ = (1/ρ) D̂ ∘ Ω, where Ω is the matrix whose entries Ω_{ij} indicate whether entry (i, j) was sampled. Let Ũ be the matrix of the top r eigenvectors of D̃, ordered by decreasing magnitude of its eigenvalues. By the Davis-Kahan theorem [Yu et al., 2014], there exists an orthogonal matrix Ô ∈ R^{r×r} such that the corresponding sin-theta bound holds, where C is a constant depending on r. We now proceed to bound both the numerator and the denominator of that bound. First, notice that Ω can be seen as the adjacency matrix of an Erdös-Rényi random graph with edge probability ρ. Because of the independence between Ω and our data, we can treat D̂ as fixed when calculating probabilities with respect to the randomness of Ω. By Lemma 2 and Assumption 1, we obtain a spectral-norm bound on D̃ − D̂ with probability at least 1 − p^{−c}. Considering the joint event of the above and the event of (6), and assuming 2c log p/n < 1, we have ||D̃ − D̂||_2 ≤ C √(p(n_1 + n_2)/ρ) (2σ^2 + 2σ^2 √(2c log p/n))^2 ≤ 16C σ^4 √(p(n_1 + n_2)/ρ) (10) with probability at least 1 − 2n^{−c} − p^{−c}. Second, by Weyl's theorem, we also have a corresponding bound on the eigenvalues of D̃. Standard spectral analysis of sub-Gaussian random vectors (e.g., Vershynin [2018]) gives a bound on ||Σ̂_1 − Σ_1|| with probability at least 1 − 2n^{−c}, in which τ(Σ_1) = tr(Σ_1)/||Σ_1|| is the stable rank. Combining the above results with (9) yields the stated bound with probability at least 1 − 4n^{−c} − p^{−c}. Note that every differential variable i ∈ [1, m] that falls out of the top m rows of Ũ (by row norm) implies that one null variable enters the top m. Without loss of generality, assume the first q_Ũ differential variables fall out of the top m spectral scores. We can then match these q_Ũ differential variables with q_Ũ null variables (this mapping is not unique); denote this one-to-one mapping by i → t(i). Under the current event, according to the proof of Theorem 1, we have ||Û_{i·}||_2 > ||Û_{t(i)·}||_2 + 2∆.
Lemma 1 (Theorem 4.2 of Cape et al. [2019]). Let X and E be two p × p symmetric matrices with rank(X) = r, and let the eigen-decomposition of X be X = UΛU^T, with Λ containing the eigenvalues in decreasing order of magnitude. Further, let Û be the matrix of the top r eigenvectors of X + E. If |Λ_{rr}| ≥ 4||E||_∞, then there exists an orthogonal matrix O ∈ R^{r×r} such that the rows of Û are uniformly close to those of U O, in the sense of a ||·||_{2,∞} bound. Let G ∈ {0, 1}^{n×n} be the adjacency matrix of an Erdös-Rényi random graph in which all edges appear independently with probability ρ, and let Z ∈ R^{n×n} be a symmetric matrix. Let Z ∘ G be the Hadamard (element-wise) matrix product of the two matrices. Lemma 2 (Lemma 2 of Li et al. [2020b]). Let G ∈ {0, 1}^{p×p} be a p × p adjacency matrix of an Erdös-Rényi random graph whose upper-triangular entries are generated by independent Bernoulli distributions with expectation ρ, and assume ρ ≥ C log p/p for a constant C. Then, for any c > 0 and any fixed matrix Z ∈ R^{p×p} with rank(Z) ≤ K, the corresponding spectral-norm bound on (1/ρ) Z ∘ G − Z holds. B Ontology results about hierarchical gene modules C Additional results about the differential correlation analysis of genes Figure 6 shows the spectral projection scores on the 2000 genes. It can be seen that no clear cluster pattern is observed (compared with Figure 3); running the K-means step of the spectral projection method would therefore not yield a meaningful separation. The adaptive LAS selects 350 genes. The GSS selects 175 genes, which form a strict subset of the adaptive LAS selection. Next, we mainly compare our results with the adaptive LAS selection. The average correlation level for the selected genes is 0.54 in the GBM group and 0.48 in the normal group; compared with the selection of our method, this gap is much smaller. Furthermore, we repeat the same type of network analysis on the 350 genes selected by the adaptive LAS. The networks and the detected modules are shown in Figure 7. The variation patterns of modules 3 and 4 between the GBM and normal groups are not consistently reflected in the LGG group. Therefore, this result is considered inferior to the result based on the spectral screening selection. Figure 7: The networks of the GBM, LGG, and normal groups and the four gene modules (module 1 through module 4) identified by the HCD algorithm based on the 350 genes selected by the adaptive LAS algorithm.
Novel Candidate Genes Associated with Hippocampal Oscillations The hippocampus is critical for a wide range of emotional and cognitive behaviors. Here, we performed the first genome-wide search for genes influencing hippocampal oscillations. We measured local field potentials (LFPs) using 64-channel multi-electrode arrays in acute hippocampal slices of 29 BXD recombinant inbred mouse strains. Spontaneous activity and carbachol-induced fast network oscillations were analyzed with spectral and cross-correlation methods and the resulting traits were used for mapping quantitative trait loci (QTLs), i.e., regions on the genome that may influence hippocampal function. Using genome-wide hippocampal gene expression data, we narrowed the QTLs to eight candidate genes, including Plcb1, a phospholipase that is known to influence hippocampal oscillations. We also identified two genes coding for calcium channels, Cacna1b and Cacna1e, which mediate presynaptic transmitter release and have not been shown to regulate hippocampal network activity previously. Furthermore, we showed that the amplitude of the hippocampal oscillations is genetically correlated with hippocampal volume and several measures of novel environment exploration. Introduction The hippocampus is critical for a wide range of emotional and cognitive behaviors. Changes in hippocampal oscillatory activity have been established during hippocampus dependent behaviors, such as anxiety-related behavior and spatial orientation [1,2,3]. Furthermore, an increase in amplitude of gamma oscillations in the hippocampus has been associated with memory retrieval in humans [4] and rats [5]. Together, these data suggest an important role for gamma oscillatory activity in hippocampal function. Oscillations can be pharmacologically induced in ventral hippocampal slices of rodents by applying the acetylcholine receptor agonist carbachol [6,7]. This in vitro activity, which we will refer to as "fast network oscillations", shares many characteristics with gamma oscillations in vivo [8,9]. In particular, the amplitude of in vitro ventral hippocampal oscillations correlates with in vivo gamma amplitude and performance in a memory task [10]. Moreover, we recently reported differences among eight common inbred mouse strains in traits of carbachol-induced fast network oscillations in hippocampal slices, which implies the contribution of genetic variation to these traits [11]. Therefore, in vitro hippocampal activity is a physiologically relevant source of information to identify genetic variants affecting hippocampal function. Here, we aimed at identifying genes that underlie variation in hippocampal spontaneous activity and carbachol-induced oscillations in vitro, using a population of 29 BXD recombinant inbred mouse strains [12]. The BXD strains were derived from an intercross of the common inbred mouse strains C57BL/6J and DBA/2J, which differ in many neurophysiologic hippocampal traits and hippocampus-related behavioral traits. For example, C57BL/6J outperforms DBA/2J in spatial memory tasks [13,14,15], which has been associated with their differences in synaptic plasticity [16] and hippocampal mossy fiber projections [17,18,19]. The BXD strains, therefore, form an excellent resource to identify the segregating genetic variants that affect hippocampus-related traits, and they enabled us to identify quantitative trait loci (QTLs) associated with these traits.
These QTLs contained many candidate genes; we therefore used gene expression data to identify genes whose expression is linked to hippocampal activity. Using this approach, we identified three genes previously linked to hippocampal activity and five novel candidate genes. In addition, we questioned whether genetic predisposition for a certain level of amplitude, frequency or coherence of hippocampal activity affects behavior. To address this, we computed correlations between the hippocampal activity traits and the behavioral phenotypes assembled in the GeneNetwork database (www.genenetwork.org). We found that several behavioral traits and hippocampal activity parameters were correlated in the mouse strains used, indicating a shared genetic component. Results To identify genes that affect hippocampal activity, we measured local field potentials (LFPs) in hippocampal slices from 29 BXD recombinant inbred strains. Measurements were performed using 60-channel multi-electrode arrays that covered the entire hippocampal cross-section in the slice (Fig. 1A), and the electrodes were classified as located in one of nine anatomical subregions (Fig. 1B). In the first condition, slices were perfused with artificial cerebrospinal fluid (ACSF), which gave rise to asynchronous activity characterized by 1/f-like amplitude spectra (Fig. 1C-E). We computed the integrated amplitudes in the frequency bands 1-4, 4-7, 7-13, 13-25, 25-35, and 35-45 Hz. These amplitudes differed considerably across mouse strains, as illustrated with the two extreme mouse strains in Figure 2A. Following the ACSF condition, we applied the acetylcholine receptor agonist carbachol (25 µM) to pharmacologically induce fast network oscillations (see Fig. 1C-E and Materials and Methods). The amplitude of these oscillations also differed conspicuously between strains (Fig. 2B). To selectively analyze the effect of carbachol on hippocampal activity, we divided the value of a trait in the carbachol condition by that obtained in the ACSF condition and computed heritability scores and genetic correlations. Hippocampal activity traits exhibit prominent heritability and genetic correlations The analysis of amplitude, peak frequency and inter-areal correlations (see Materials and Methods) for the two conditions in the nine hippocampal subregions defines a total of 198 trait values per slice. Several traits were observed to exhibit prominent variation across the mouse strains; e.g., the peak amplitude in the presence of carbachol varied by a factor of three (Fig. 3A) in the CA1 stratum pyramidale. P-values from F statistics (ANOVA) and heritability scores were calculated for every trait (Tables S1 and S2). The heritabilities ranged from 1 to 25%. Amplitude of carbachol-induced oscillations shows prominent genetic correlation with hippocampal volume and locomotion traits Studying hippocampal activity in BXD strains opens up the exciting possibility to relate genetic variation in brain activity to that of phenotypes from the GeneNetwork database, which contains more than 2000 behavioral, anatomical and physiological traits from previous studies on BXD strains. We computed genetic correlations between the hippocampal activity traits and two subsets of phenotypes from the GeneNetwork database (Materials and Methods). See Tables S3 and S4 for descriptions of the phenotypes in the subsets.
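To illustrate how such genetic correlations can be computed, the following R sketch (our own, hypothetical implementation; the study's exact correlation type and multiple-testing procedure are not fully specified in the text, which only reports a false discovery rate) correlates the strain means of one activity trait with a matrix of strain-level phenotypes and applies Benjamini-Hochberg FDR adjustment.

# Sketch: correlate strain means of a trait with a set of phenotypes, with FDR adjustment.
genetic_correlations <- function(trait, pheno_matrix) {
  # trait: named vector of strain means; pheno_matrix: strains x phenotypes
  strains <- intersect(names(trait), rownames(pheno_matrix))
  res <- apply(pheno_matrix[strains, , drop = FALSE], 2, function(ph) {
    ct <- cor.test(trait[strains], ph)
    c(r = unname(ct$estimate), p = ct$p.value)
  })
  out <- as.data.frame(t(res))
  out$p_adj <- p.adjust(out$p, method = "BH")   # false discovery rate control
  out[order(out$p), ]
}

# Toy usage with simulated strain means for 29 strains and 35 phenotypes.
set.seed(1)
trait  <- setNames(rnorm(29), paste0("BXD", 1:29))
phenos <- matrix(rnorm(29 * 35), 29, 35,
                 dimnames = list(paste0("BXD", 1:29), paste0("pheno", 1:35)))
head(genetic_correlations(trait, phenos))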
The first subset (n = 35) consisted of physiological traits of the hippocampus, such as the weight or volume of different subregions of the hippocampus. Interestingly, the trait amplitude 1-45 Hz (CCH) was negatively correlated with the volume of the hippocampus. The four phenotypes from the subset with the most significant correlations with amplitude 1-45 Hz (CCH) were two measures of hippocampus volume (GeneNetwork ID 10457: r = -0.68, p < 0.002 (Fig. 5A) and ID 10456: r = -0.66, p < 0.002 [20]), and two measures of ventral hippocampus volume (ID 10756: r = -0.57, p < 0.01 and ID 10757: r = -0.53, p < 0.05 [21]; uncorrected p-values). The four correlations were significant at a false discovery rate of 0.125. The second subset (n = 351) consists of a selection of behavioral traits from the database (see Materials and Methods). We found strong negative correlations between peak amplitude and four traits representing locomotion in a novel environment (ID 11510: r = -0.62, p < 0.0005 (Fig. 5B), ID 10916: r = -0.83, p < 0.0005, ID 10037: r = -0.76, p < 0.001, ID 10416: r = -0.89, p < 0.005; uncorrected p-values). The four locomotion traits were strongly correlated with each other, despite having been measured in different studies [22,23,24,25], which reflects that locomotion is a very reproducible trait [26]. The four locomotion traits were part of the top-10 strongest correlations, which were all significant at a false discovery rate of 0.125. Interestingly, we also found a high positive correlation of peak amplitude with performance in the Morris water maze task [27] (ID 10816, n = 7, r = 0.74, p < 0.05, uncorrected p-value). This correlation, however, did not survive correction for multiple testing, possibly because of the low number of observations. We then measured locomotion in a novel open field in several BXD strains in our own laboratory. We used SEE software to dissect locomotion into lingering and progression segments (see Materials and Methods). Peak amplitude correlated negatively with total distance moved (r = -0.52, p < 0.005, data not shown), as was also found using the GeneNetwork database (Fig. 5B), and with the duration of progression segments (r = -0.54, p < 0.005, Fig. 5C), but positively with the duration of lingering segments (r = 0.48, p < 0.01, Fig. 5D). Taken together, these findings indicate that the inverse relation between peak amplitude and locomotion in a novel environment is a robust effect. Amplitude 1-45 Hz (ACSF) and correlation (ACSF) had overlapping QTLs located on chromosome four (Fig. 6A, B). Amplitude 1-45 Hz (CCH), peak amplitude, and correlation (CCH) had overlapping QTLs on chromosome five; the one from peak amplitude overlapped a QTL from amplitude 1-45 Hz (ACSF) (Fig. 6A, C-E and Fig. S3, S11, S14 and S17). Also, we identified for each trait one or more suggestive QTLs that were not found for other traits. The partially shared QTLs suggest that the traits share genetic components in addition to having unique genetic component(s). For example, peak frequency (Fig. 6F) had no QTLs in common with other traits. This suggests a dissimilar genetic underpinning of peak frequency and, e.g., peak amplitude. Correlation with gene expression data points to candidate genes The nineteen suggestive or significant QTLs identified (see above) varied in length from 2 to 19 Mb and contained between 6 and 155 genes each. In order to evaluate these genes, we correlated the hippocampal activity traits with expression data from the hippocampus of BXD mice (see Materials and Methods).
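The significance of these trait-expression correlations was judged by permutation tests; a minimal sketch of such a test is shown below (an assumed implementation with hypothetical inputs: strain-level vectors of one activity trait and one gene's hippocampal expression).

# Sketch: trait-expression correlation with a permutation-based two-sided p-value.
perm_cor_test <- function(trait, gene_expr, n_perm = 10000) {
  r_obs  <- cor(trait, gene_expr)
  r_perm <- replicate(n_perm, cor(trait, sample(gene_expr)))  # permute expression values
  p_val  <- (1 + sum(abs(r_perm) >= abs(r_obs))) / (n_perm + 1)
  c(r = r_obs, p = p_val)
}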
For each of the six main traits, we selected genes within the QTLs of the trait and correlated the expression of these genes with the trait. The significance of the correlations was determined with permutation tests (see Materials and Methods). Table 1 gives an overview of the eight genes from these nineteen QTLs that had significant expression correlations. Peak amplitude was associated with Plcb1 (phospholipase C, beta 1) and Cacna1b, the gene coding for calcium channel alpha1B. The gene coding for calcium channel alpha1E (Cacna1e) was linked to amplitude 1-45 Hz (CCH). Plcb1 is known to influence hippocampal oscillations [28]. Cacna1b and Cacna1e have been implicated in hippocampal LTP [29,30], but not in the formation of synchronous network activity. For peak frequency, we identified Eps15-homology domain protein 3 (Ehd3), which, like the other genes identified (Creb3, Psmc2, Dctn3, and Ralgps2), has not yet been related to hippocampal activity. Discussion Neuronal oscillations have been implicated in cognitive and emotional behavior [1,31,32] and are heritable [11,33,34], which makes quantitative traits derived from oscillatory activity potentially useful in gene-finding strategies. Here, we searched for genes that underlie variation in hippocampal network activity in vitro based on 29 recombinant inbred strains from the BXD population [35]. QTL mapping pointed to regions on the genome associated with variability in the amplitude of oscillatory and non-oscillatory activity, as well as in the functional coupling between hippocampal areas. To evaluate genes in the QTLs for a potential contribution to hippocampal activity, we correlated their expression in the hippocampus with the hippocampal activity traits, and identified eight candidate genes. Hippocampal activity traits have relatively low heritability in BXD strains The heritability estimates of amplitude and functional coupling ranged from 1 to 25%, which is similar to what we found in a population of eight inbred mouse strains [11]. Higher-order statistical measures of oscillatory dynamics, such as long-range temporal correlations [36] and markers from Langevin dynamics [37], exhibit low, albeit significant, heritability and were not included in the present QTL analysis [38]. To our knowledge, heritability of in vivo hippocampal gamma-band amplitude has not been estimated yet, but EEG studies in humans show that the early auditory gamma-band response has a heritability of 65% [39], and heritability of amplitude in the classical delta-, theta-, alpha- and beta-frequency bands ranges from 40 to 90% [33]. Thus, the heritability we observed here may be considered low. This may be explained by the environmental noise introduced by the experimental procedure, e.g., the slicing of the hippocampus. Moreover, heritability depends also on the population in which it is measured; the heritability we estimated holds for the offspring of the strains C57BL/6J and DBA/2J, which obviously does not encompass the genetic variation present in the human population. Reduction of traits inspired by cluster analysis We used cluster analysis to evaluate the genetic correlations of the 198 hippocampal activity traits. The clusters showed which traits are strongly correlated and, therefore, could be merged. The clusters we identified exhibited a great overlap with six main classes of traits representing experimental conditions and types of analyses.
The clustering in (A) largely corresponds to six classes of traits as indicated by the color-coding in (C), i.e., the peak frequency and peak amplitude for the carbachol (CCH) condition, and the broad-band amplitude (1-45 Hz) and the inter-regional correlations for both conditions. We based the QTL mapping on the mean of the traits within these six classes (see labeling below the cluster diagram). doi:10.1371/journal.pone.0026586.g004 classes of traits representing experimental conditions and type of analyses. We chose to supervise the merging of traits by using the classes instead of the exact clusters. This approach had the advantage over commonly used unsupervised methods, such as principal component analysis, that the resulting traits have a straightforward analytic and physiological interpretation. By collapsing the information on hippocampal subregions and frequency bands, we reduced the amount of traits to six. The cluster analysis showed that the genetic correlation between the traits measured during the ACSF condition and those during the CCH condition is relatively low. The QTL mapping, however, showed that this correlation is substantial: the traits from the ACSF condition have some overlapping and some non-overlapping QTLs with the traits from the carbachol condition, suggesting a partially unique and partially shared genetic architecture. Therefore, it is also likely that partially shared and partially unique downstream mechanisms underlie the traits from the two conditions.'' Genetic correlations with behavioral traits from the GeneNetwork The negative genetic correlation between hippocampal volume and hippocampal activity traits suggests that there are genes that influence both traits. In two subsequent BXD studies [20,40], a QTL for hippocampal volume was reported at chromosome 1, which overlaps with one of the QTLs we identified for amplitude 1-45 Hz (CCH). This QTL might contain genes that influence both hippocampal activity and hippocampal volume. Recently it has been reported that tenascin-C deficient mice have smaller hippocampal subregions and higher gamma oscillation amplitude compared to wild-type mice [41], which corroborates our finding that small hippocampal volume is associated with high amplitude oscillations. Locomotion in a novel open field is a complex trait used as a measure for, e.g., exploration, anxiety and hyperactivity. The locomotor behavior of a mouse that is placed in a novel environment can be divided in lingering and progressing segments [42]. During lingering, the animal is actively gathering information about the environment by sniffing, rearing and looking around. During progression, the animal moves from one location to the next. We observed that peak amplitude was negatively correlated with the duration of the progression segments, but positively with the duration of the lingering segments. Future studies should test whether the same relation holds between locomotion and network oscillations in freely behaving mice. This is not unlikely, because hippocampal oscillations in the 20-40 Hz range are prominent when mice enter a novel environment [43], and gamma oscillations have been associated with novelty in rats [44]. The positive correlation between the performance in the Morris water maze and the peak amplitude suggests that BXD strains capable of producing high-amplitude gamma have good spatial memory. 
Elevated activity of gamma oscillations during encoding and retention of information in working memory has been reported in humans [4,45] and in rodents [5]. Our results, however, provide the first indication that genetic predisposition for high-amplitude gamma oscillations is beneficial for working-memory performance. Genes previously associated with carbachol-induced hippocampal oscillations Genetic influences on hippocampal carbachol-induced oscillations in vitro have been studied extensively, and these studies have pointed to several genes, including Chrm1 [46], Gabra5 [47], Gabrb2 [48], and Plcb1 [28]. Plcb1 is essential for the genesis of carbachol-induced oscillations, as indicated by the inability to induce oscillations with carbachol in the hippocampus of Plcb1 knockout mice [28]. Plcb1 is one of the candidate genes we identified, which can be regarded as an internal validation of our experimental and statistical procedures. Our paradigm did not reveal other genes previously associated with hippocampal oscillations. A reason for this may be that the influence of such a gene may be caused by only a few single-nucleotide polymorphisms (SNPs). If C57BL/6J and DBA/2J do not differ in these SNPs, the paradigm we followed would not have revealed these genes. Moreover, most of the studies that try to link genes to brain activity use knockout mice, in which the effect of the particular gene is likely to be stronger than in the BXD population. Also, the effect sizes of the genes known to be involved in hippocampal oscillations may be too small to be detected by our analysis. Novel candidate genes associated with hippocampal activity Our combined use of QTL mapping and correlation with expression data has some notable advantages. The QTL mapping was merely used to select stretches of the genome for further analysis, which justifies the use of suggestive significance. We qualified our findings with the significance level of the correlation with the expression data of genes within the QTLs. Because the QTLs are relatively small, only a limited number of genes is tested, which strengthens the significance of these correlations. We identified two candidate genes for shaping hippocampal network activity that code for calcium channels: the alpha1b subunit (Cacna1b), and the alpha1e subunit (Cacna1e). Calcium channels mediate synaptic transmission [49], and are essential in the formation of thalamo-cortical gamma band activity [50]. Also, Cacna1e and Cacna1b facilitate hippocampal long-term potentiation (LTP) [29,30], and the Cacna1b knock-out mouse exhibits impaired long-term memory and LTP [51]. Thus, Cacna1e and Cacna1b are interesting candidates for playing a role in hippocampal oscillations. Moreover, Cacna1b has been associated with schizophrenia in three recent linkage studies [52,53,54]. Thus, we may hypothesize that alterations in Cacna1e and Cacna1b affect hippocampal network activity such as to impair memory performance in, for example, schizophrenia patients known to suffer from memory impairment. In a QTL for correlations (ACSF) we identified the gene Creb3, coding for the transcription factor cAMP responsive element-binding protein 3. Creb1 plays an important role in (spatial) memory [55]; increasing the expression level of Creb1 in the hippocampus facilitates long-term memory [56]. Therefore, it might well be that Creb3 is involved in hippocampal activity as well. The other gene identified for this trait is Dctn3, which has a function in the cytoskeleton [57]. Peak frequency was linked to Ehd3, which is involved in endosome to Golgi transport [58].
Psmc2, associated with amplitude 1-45 Hz (ACSF), is involved in developmentally programmed cell death [59]. Ralgps2, linked to amplitude 1-45 Hz (CCH), affects neurite outgrowth [60]. In summary, we identified eight candidate genes for influencing different aspects of hippocampal network activity. Future research, by means of knockout mice or pharmacological manipulations, should reveal the mechanisms by which these genes affect hippocampal activity and related cognitive functions.
[Figure 6 legend: QTL mapping of six hippocampal activity traits peaks at 19 different locations. The LRS scores (y-axis) quantify the relation between genomic markers (x-axis) and six traits: integrated amplitude between 1 and 45 Hz at the ACSF (A) and carbachol (CCH) condition (C), peak amplitude (D) and frequency (F) at the carbachol condition, and inter-regional correlation at the ACSF (B) and carbachol (E) condition.]
Animals, hippocampal slice preparation and extracellular recording All experiments were performed in accordance with the guidelines and under approval of the Animal Welfare Committee of the VU University Amsterdam. BXD strains were originally received from Jackson Lab, or from Oak Ridge Laboratory (BXD43, BXD51, BXD61, BXD65, BXD68, BXD69, BXD73, BXD75, BXD87, BXD90), and were bred by the NeuroBsik consortium. In this study we used in total 586 slices from 322 animals (62% male), from 29 BXD strains. Slice selection and subregion classification For each experiment a photograph was taken of the slice in the recording unit, to visualize the locations of the electrodes in the hippocampus (Fig. 1A). The hippocampus consists of three main anatomical regions: CA1, CA3 and dentate gyrus (DG). We divided CA3 and CA1 into the subregions stratum oriens, stratum pyramidale and stratum radiatum/lacunosum-moleculare, and DG into stratum moleculare, stratum granulosum and hilus (Fig. 1B). To classify electrode locations into one of these nine subregions, we used an in-house written interactive Matlab procedure based on the photograph of the electrode grid. Using Fourier analysis (see below), we determined for each electrode whether oscillatory activity was present. A slice was excluded from further analysis if none of the 60 electrodes showed oscillations. For each condition, in order to detect electrodes producing noisy signals and transient artifacts before the quantitative trait analysis, each slice recording was subjected to a principal component analysis. If noisy signals were present, then the first few spatial components had high values only for one or a few of these signals. These signals were identified and excluded. The time series of the remaining signals were averaged; this average was used to identify noisy segments. Samples from this average with absolute values exceeding five times the standard deviation of the averaged signal were excluded from each signal before the analysis. Experimental protocol to measure hippocampal network activity After placing the slices in the recording units with ACSF, 15 minutes of spontaneous activity was recorded (see Fig. 1C). These first 15 minutes will be referred to as the "ACSF condition". Then, carbachol (25 µM) was bath applied to the slice. Carbachol-induced oscillations at around 20 Hz were initially unstable in frequency and amplitude, but stabilized after 45 minutes. After this 45-minute wash-in period, fast network oscillations were recorded for a period of 30 minutes, which will be referred to as the "carbachol condition".
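As an illustration of the channel screening described above under "Slice selection and subregion classification", the sketch below applies a principal component analysis across electrodes, flags channels that dominate one of the leading spatial components, and marks samples of the channel-averaged signal that exceed five standard deviations. This is a minimal Python sketch under an assumed data layout (channels × samples); the component count and loading threshold are illustrative choices, not the parameters of the original Matlab procedure.

```python
import numpy as np

def screen_slice_recording(lfp, n_components=3, load_thresh=0.9, sd_factor=5.0):
    """Illustrative artifact screening for a multi-electrode slice recording.

    lfp: array of shape (n_channels, n_samples).
    Returns the indices of retained channels and a boolean mask of bad samples.
    """
    centered = lfp - lfp.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered))      # channel covariance
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]          # sort by explained variance

    # A channel is flagged as noisy when one of the leading spatial components
    # loads almost exclusively on that single channel.
    noisy = set()
    for comp in eigvecs[:, :min(n_components, lfp.shape[0])].T:
        weights = comp ** 2 / np.sum(comp ** 2)
        if weights.max() > load_thresh:
            noisy.add(int(np.argmax(weights)))

    keep = [ch for ch in range(lfp.shape[0]) if ch not in noisy]
    avg = lfp[keep].mean(axis=0)

    # Samples of the channel average exceeding sd_factor standard deviations
    # are marked for exclusion from every channel before trait analysis.
    bad_samples = np.abs(avg - avg.mean()) > sd_factor * avg.std()
    return keep, bad_samples
```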
In Figure 1C a time-frequency representation of a representative signal is shown for a complete recording. Example LFP traces for the two conditions are shown in Figure 1D. The frequency of oscillations increased with temperature (Fig. 1F), which has been observed previously [61,62]. Thus, the oscillations at around 20 Hz, which were recorded at 30 °C in the present experiments, are expected to have frequencies in the gamma range (>30 Hz) at the physiological temperature of 36.9 °C. However, the amplitude of oscillations at higher temperatures was markedly lower than at 30 °C, resulting in an unfavorable signal-to-noise ratio. Therefore, all experiments were performed at 30 °C. Fourier analysis For the two conditions (ACSF and carbachol), and for each electrode that was classified into one of the nine regions, we calculated the Fourier amplitude spectrum using Welch's method [63]. Figure 1E shows representative spectra in the two conditions. For the ACSF condition, we calculated the integrated amplitude in the frequency bands 1-4, 4-7, 7-13, 13-25, 25-35, and 35-45 Hz. In the carbachol condition, we observed oscillations at around 20 Hz, which is similar to previous reports using a temperature of around 30 °C in mouse hippocampus [11,64,65]. We calculated the amplitude and the frequency of these oscillations, which we will refer to as the peak amplitude and the peak frequency, respectively. Moreover, a 1/f curve was fitted to the spectrum outside the interval at which the peak occurred, and from this curve we calculated the integrated amplitude in the frequency bands 1-4, 4-7, 7-13, 13-25, 25-35, and 35-45 Hz. For each of these measures, the traits we used for the cluster analysis (see below) were the mean trait values across electrodes per anatomical subregion (n = 54 traits for the ACSF condition, n = 72 traits for the carbachol condition). To establish whether oscillations were detected at a given electrode, we applied the following procedure. First, a frequency interval in which the peak of the spectrum occurred was determined visually, e.g., for the spectrum in Figure 1E this interval would be from 10 to 25 Hz. Next, a 1/f curve was fitted to the spectrum outside this interval. This 1/f curve was then subtracted from the original spectrum. Finally, a Gaussian curve was fitted to the remaining spectrum. If the peak of this Gaussian curve did not exceed the 95% confidence interval of the fitted 1/f curve, we classified the signal as not oscillating. Slices were excluded from further analysis when none of the electrodes detected oscillations. Interaction between hippocampal regions To quantify the interaction between two hippocampal subregions, e.g., between CA1 stratum oriens and CA3 stratum oriens, we calculated a suitable cross-correlation measure (as described below) between signals from all possible pairs of electrodes from these subregions, and used the mean over these pairs for the cluster analysis (see below). Oscillatory activity was not observed in the ACSF condition and, therefore, we quantified cross-correlations between subregions in this condition using Pearson's linear correlation of the LFPs. Prior to this analysis, the signals were filtered between 5 and 40 Hz to remove the fairly large amount of noise outside this interval. Thus, for every pair of subregions, the mean correlation over all possible electrode-pairs from the subregion-pair was used (n = 36 traits). In the carbachol condition, in contrast, the signals were strongly oscillatory.
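A rough sketch of the spectral pipeline just described (Welch amplitude spectrum, band-integrated amplitudes, and the 1/f-plus-peak oscillation check) is given below. Function names and the hard-coded peak interval are assumptions, and the final two-standard-deviation criterion is a simplified stand-in for the 95% confidence-interval test used in the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.optimize import curve_fit

BANDS = [(1, 4), (4, 7), (7, 13), (13, 25), (25, 35), (35, 45)]

def amplitude_spectrum(x, fs):
    # Welch power spectral density, converted to an amplitude spectrum
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    return f, np.sqrt(pxx)

def band_amplitudes(f, amp, bands=BANDS):
    # Integrated amplitude per frequency band (trapezoidal rule)
    out = {}
    for lo, hi in bands:
        sel = (f >= lo) & (f < hi)
        out[(lo, hi)] = np.trapz(amp[sel], f[sel])
    return out

def one_over_f(f, a, b):
    return a / f + b

def has_oscillation(f, amp, peak_band=(10, 25)):
    """Fit a 1/f curve outside the candidate peak interval and test whether the
    residual peak clearly exceeds the fitted background (simplified criterion)."""
    outside = (f > 0) & (f <= 45) & ((f < peak_band[0]) | (f > peak_band[1]))
    popt, _ = curve_fit(one_over_f, f[outside], amp[outside], p0=(1.0, 0.0))
    inside = (f >= peak_band[0]) & (f <= peak_band[1])
    residual = amp[inside] - one_over_f(f[inside], *popt)
    background_sd = np.std(amp[outside] - one_over_f(f[outside], *popt))
    return residual.max() > 2.0 * background_sd
```

In the same spirit, the peak amplitude and peak frequency of a carbachol oscillation would correspond to the height and location of a peak fitted to the residual spectrum within the peak interval.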
Because the carbachol-condition signals were strongly oscillatory, we calculated the phase-locking factor (PLF) between signals in this condition. The PLF is a well-established measure for quantifying the interaction between two oscillating signals that can be out of phase and possibly have independent amplitude fluctuations [66,67]. To reduce volume conduction effects, the current-source density of the LFPs was computed [9,68]. After this transformation, we computed the phase-locking factors between signals that were band-pass filtered in a 4 Hz band around the peak frequency of the fast network oscillations, for every subregion pair (n = 36 traits). Normalization To specifically analyze the effect of carbachol, we normalized the amplitude and correlation traits of carbachol-induced oscillations by dividing them by the same traits from the ACSF condition, except for the peak frequency, because there was no peak in the amplitude spectrum during ACSF. For the same reason, the peak amplitude from the carbachol condition was normalized by the integrated amplitude between 15 and 25 Hz from the ACSF condition. The PLF traits from the carbachol condition were divided by the correlation traits from the ACSF condition. Thus, the normalized traits express the relative sensitivity to experimental manipulations. ANOVA To determine whether a given trait differed significantly between mouse strains, we performed a one-way ANOVA and the corresponding F-test with the trait as dependent variable and the mouse strain as factor. The null hypothesis of this test is that the trait means are equal across strains; it is rejected when the trait mean of at least one strain differs from the trait means of the other strains. Where necessary, the data were transformed with the natural logarithm in order not to violate the normality assumption for ANOVA. Heritability The observed value of a trait (e.g. peak amplitude) from a given slice is the result of both genetic and environmental influences, including measurement noise. To quantify the extent to which a trait is influenced by genetic factors, we computed its heritability. The heritability of a trait is a measure of the proportion of the total variance of the trait that is caused by genetic variation. The remainder of the variance is assumed to be due to environmental factors. For inbred strains the heritability h² of a trait can be defined as h² = s²G / (s²G + s²E), where s²G is the component of variance between strains, and s²E the component of variance within strains [69]. The value of h² ranges between 0 and 1, where 0 means no genetic contribution to the trait, and 1 means that the trait is controlled only by genetic factors. We estimated heritability as described in detail in Jansen et al. (2009). Genetic correlation between traits To reveal the extent to which two traits share genetic factors, we studied the correlation between the genetic effects of the two traits, the so-called genetic correlation. For inbred strains, we can estimate the genetic correlation between two traits as the Pearson's linear correlation between the 29 mouse strain means of one trait and the 29 mouse strain means of the other trait [69,70]. The mouse strain means were taken over all slices from a given mouse strain. The estimated genetic correlations were used in a cluster analysis, as explained below. Cluster analysis of traits In order to identify clusters of genetically correlated traits, hierarchical clustering was performed on the complete set of n = 198 traits. In this analysis, traits are clustered based on a distance measure between the traits.
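The phase-locking factor, the heritability estimate h² = s²G / (s²G + s²E), and the genetic correlation described above can be written compactly, as in the sketch below. This is illustrative only: the data layout (slice-level values grouped per strain) and function names are assumptions, and the between-strain variance component is obtained from a standard one-way ANOVA decomposition rather than the exact procedure of Jansen et al. (2009).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_factor(x, y):
    """PLF between two band-pass-filtered signals:
    the magnitude of the mean phase-difference vector."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def heritability(values_per_strain):
    """h2 = s2_G / (s2_G + s2_E) from a one-way ANOVA variance decomposition.
    values_per_strain: dict mapping strain name -> list of slice-level values."""
    groups = [np.asarray(v, dtype=float) for v in values_per_strain.values()]
    k, n_i = len(groups), np.array([len(g) for g in groups])
    grand_mean = np.concatenate(groups).mean()
    ms_between = sum(n * (g.mean() - grand_mean) ** 2
                     for n, g in zip(n_i, groups)) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_i.sum() - k)
    n0 = (n_i.sum() - (n_i ** 2).sum() / n_i.sum()) / (k - 1)   # effective group size
    s2_g = max((ms_between - ms_within) / n0, 0.0)              # between-strain variance
    return s2_g / (s2_g + ms_within)                            # s2_E = ms_within

def genetic_correlation(trait_a, trait_b):
    """Pearson correlation between the strain means of two traits."""
    strains = sorted(set(trait_a) & set(trait_b))
    return np.corrcoef([np.mean(trait_a[s]) for s in strains],
                       [np.mean(trait_b[s]) for s in strains])[0, 1]
```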
To measure the distance between two traits, we subtracted the estimated genetic correlation between the two traits from 1, so traits with high genetic correlation are close to each other. No strong negative correlations were present: using the absolute genetic correlation yielded similar results. Average linkage was used as the clustering method. This method starts with as many clusters as there are traits, and then sequentially joins the two clusters that are closest to each other in terms of the mean of distances between all possible pairs of traits in the two clusters; the procedure ends when all traits are joined in one cluster. A particular classification of traits into clusters is obtained by setting a threshold for the minimal distance that the clusters are allowed to have between them. The result of the cluster analysis was visualized in a dendrogram, in which the sequential union of clusters was depicted together with the distance value (the height of the horizontal lines that connect the objects or clusters) leading to this union. The threshold procedure can be visualized by a horizontal line in the dendrogram; the clusters under this line correspond to that particular threshold. BXD recombinant inbred strains and QTL mapping The BXD strains were created by crossing the inbred mouse strains C57BL/6J and DBA/2J and by inbreeding several groups of the crossed offspring [35]. It is one of the largest mammalian recombinant inbred strain panels currently available. Genetically, each of these BXD strains is a unique combination of the C57BL/6J and DBA/2J strains. The chromosomes of the BXD strains consist of haplotypes (stretches of chromosomes inherited intact from the parental strains). Each BXD strain was genotyped at 3795 markers covering the entire genome; each marker was classified as originating from C57BL/6J or DBA/2J. In order to compute the correlation between a trait and these markers, the markers were encoded as −1 for the DBA/2J version of the marker and 1 for the C57BL/6J version of the marker. Markers that correlate with a trait are called QTLs. We used WebQTL (www.genenetwork.org) to compute and visualize the QTL interval mapping. In WebQTL, the correlation between a marker and a trait was transformed into likelihood ratio statistics (LRS) in the following way: LRS = N*log(1/(1 − r²)), where N is the number of strains, and r the correlation [71]. For intervals with unknown genotype, LRS scores of flanking markers were linearly interpolated. Thresholds for significant LRS scores were computed using a permutation test: the N strain means from the trait were permuted, and for this permutation the maximum LRS score over all markers was computed, which resulted in an observation of the null distribution. Significance of LRS scores was computed by comparing them with the empirical null distribution. LRS scores were termed significant if p < 0.05, and suggestive if p < 0.63. The QTL mapping was used to select regions of the genome for further analysis, which justifies the use of suggestive significance. The QTL intervals were determined with the 1 LOD drop-off method [72]; the interval ends where the LRS score drops more than 4.61 LRS (= 1 LOD) with respect to the maximum LRS score in the interval. As in previous studies using BXD strains [73,74], we did not use the parental strains for QTL mapping. Correlations with traits from the GeneNetwork phenotype database The GeneNetwork database (www.genenetwork.org) contains more than 2000 phenotypes from previous studies using BXD strains.
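A minimal sketch of the LRS statistic, the genome-wide permutation threshold, and the 1-LOD (4.61 LRS) drop-off interval described above is given below. The −1/+1 genotype coding follows the text; array shapes and function names are assumptions, and WebQTL's interpolation between flanking markers is omitted.

```python
import numpy as np

def lrs_scores(trait_means, genotypes):
    """trait_means: (n_strains,) strain means of one trait.
    genotypes: (n_markers, n_strains) matrix coded -1 (DBA/2J) / +1 (C57BL/6J).
    LRS = N * log(1 / (1 - r^2)) for each marker, as in the text."""
    n = len(trait_means)
    t = (trait_means - trait_means.mean()) / trait_means.std()
    g = (genotypes - genotypes.mean(axis=1, keepdims=True)) / genotypes.std(axis=1, keepdims=True)
    r = g @ t / n                                  # Pearson correlation per marker
    return n * np.log(1.0 / (1.0 - r ** 2))

def permutation_threshold(trait_means, genotypes, n_perm=1000, alpha=0.05, rng=None):
    """Genome-wide LRS threshold: permute strain means, keep the maximum LRS."""
    rng = np.random.default_rng(rng)
    null_max = np.array([lrs_scores(rng.permutation(trait_means), genotypes).max()
                         for _ in range(n_perm)])
    return np.quantile(null_max, 1 - alpha)

def one_lod_interval(lrs, positions_mb):
    """1-LOD (4.61 LRS) drop-off interval around the peak marker."""
    peak = int(np.argmax(lrs))
    inside = lrs >= lrs[peak] - 4.61
    left = peak
    while left > 0 and inside[left - 1]:           # walk outwards until the score drops
        left -= 1
    right = peak
    while right < len(lrs) - 1 and inside[right + 1]:
        right += 1
    return positions_mb[left], positions_mb[right]
```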
We computed genetic correlations between the hippocampal activity traits and two subsets of phenotypes from the GeneNetwork database. By using subsets, the correction for multiple testing is reduced. To further reduce the risk of chance correlations, we only included phenotypes from the database that were reported for more than six BXD strains that were also used in the present study. The first subset (n = 35) contained physiological traits of the hippocampus. The second subset (n = 351) contained the behavioral traits that do not involve pharmacological manipulations. See Tables S1 and S2 for trait descriptions and GeneNetwork IDs of both subsets. To correct the significance for multiple testing, we used the false discovery rate (FDR) [75,76]. The FDR controls the expected proportion of erroneously rejected hypotheses. It is the number of falsely rejected hypotheses divided by the total number of rejected hypotheses. In our case, the total number of rejected hypotheses is the number of observed correlations with p-values lower than a threshold. The number of falsely rejected hypotheses was estimated with a permutation paradigm. The hippocampal activity trait was permuted a thousand times across strains, and the correlation between the permuted trait and the traits from the subsets was computed. The number of falsely rejected hypotheses was estimated as the average number of correlations with p-values smaller than the threshold. Gene expression data Data on gene expression in hippocampal tissue of adult mice, measured with Affymetrix Mouse Exon 1.0 ST Arrays, were accessed through GeneNetwork (UMUTAffy Hippocampus Exon (Feb09) RMA, accession number GN206, from www.genenetwork.org). The original data set contained over 1.2 million probe sets at exon level uniformly spread over the entire genome. Each probe set consisted of the RMA-summarized [77] value of the collective probes each targeting 25 base pairs, measured in adult mice from BXD strains [78]. For our analysis, we removed data from probe sets targeting regions that contain SNPs that differed between the two parental strains (according to databases snp_celera_b37 and snp_perlegen_b37 (2008) downloaded from http://phenome.jax.org). Probe sets targeting introns and intergenic regions were also removed, which reduced the number of probe sets to 340,318. We analyzed the expression per gene by taking the mean over all probes that target the same gene. For each hippocampal activity trait, we only calculated correlations with expression of genes from the QTLs of the trait. Significance levels for these correlations were determined with a permutation test; the hippocampal activity trait was permuted across strains, and the maximum of the correlations between the permuted trait and the expression of the genes was computed. This was done a thousand times; the thousand maxima so obtained formed the empirical null distribution against which the significance of a correlation was tested. Subjects for locomotion in open field test Six-week-old male mice (n > 10 per strain, see section "Animals, hippocampal slice preparation and extracellular recording" for strain names) arrived in the facility in different batches in a period spanning 2 years. Mice were housed individually in Macrolon cages on sawdust bedding, which were, for the purpose of animal welfare, enriched with cardboard nesting material and a curved PVC tube. Food (Harlan Teklad) and water were provided ad libitum. All mice were habituated to the facility for at least 7 days before testing started.
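The two permutation procedures described in this section, the FDR estimate for correlations with database phenotypes and the maximum-correlation null distribution for genes within a trait's QTLs, might be sketched as follows. Handling of missing strains, the requirement of more than six shared strains, and all names are assumptions of this illustration.

```python
import numpy as np
from scipy import stats

def permutation_fdr(trait_means, phenotype_matrix, p_threshold=0.05, n_perm=1000, rng=None):
    """Estimate the FDR for correlations between one activity trait and a set of
    database phenotypes. trait_means: (n_strains,); phenotype_matrix: (n_pheno, n_strains)."""
    rng = np.random.default_rng(rng)

    def n_significant(t):
        pvals = [stats.pearsonr(t, ph)[1] for ph in phenotype_matrix]
        return sum(p < p_threshold for p in pvals)

    observed = n_significant(trait_means)
    # Average number of "significant" correlations obtained with permuted traits
    false = np.mean([n_significant(rng.permutation(trait_means)) for _ in range(n_perm)])
    return false / observed if observed else np.nan

def max_correlation_null(trait_means, expression, n_perm=1000, rng=None):
    """Null distribution of the maximum |correlation| between a permuted trait and
    the expression of all genes within its QTLs (expression: genes x strains)."""
    rng = np.random.default_rng(rng)
    e = (expression - expression.mean(axis=1, keepdims=True)) / expression.std(axis=1, keepdims=True)

    def max_abs_r(t):
        z = (t - t.mean()) / t.std()
        return np.max(np.abs(e @ z / len(t)))

    return np.array([max_abs_r(rng.permutation(trait_means)) for _ in range(n_perm)])
```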
Prior to the open field testing described below, mice had been exposed to novelty tests in the home cage, an elevated plus maze and a light-dark box apparatus, as described previously [79]. Housing and testing rooms were controlled for temperature, humidity and light-dark cycle (7 AM lights on, 7 PM lights off; testing during the light phase). Locomotion in open field All experimental procedures were approved by the local animal research committee and complied with the European Council Directive (86/609/EEC). Mice were introduced into a corner of a white square open field (50 × 50 cm, walls 35 cm high) illuminated with a single white fluorescent light bulb from above (130 lx), and exploration was tracked for 10 minutes (12.5 frames/s; EthoVision 3.0, Noldus Information Technology). The SEE software (Strategy for the Exploration of Exploration) [42,80] was used to smooth the path shape and to calculate the total distance moved. Furthermore, SEE uses the distribution of speed peaks to parse the locomotor data into lingering segments (slow local movements) and progression segments, which together constitute the total distance moved. Supporting Information Table S1 Heritability scores (h) and P-values from F statistics from the ANOVAs of all the traits derived in the ACSF condition (spontaneous activity). The trait names are coded: Amplitude a_b_Hz_c indicates the integrated amplitude between a and b Hz, in region number c; Corr(a,b) indicates the correlation of activity between region a and region b. The numbers refer to the following regions: 1 = CA3 stratum radiatum/lacunosum moleculare, 2 = CA3 stratum pyramidale, 3 = CA3 stratum oriens, 4 = CA1 stratum radiatum/lacunosum moleculare, 5 = CA1 stratum pyramidale, 6 = CA1 stratum oriens, 7 = Dentate Gyrus hilus, 8 = Dentate Gyrus stratum granulosum, 9 = Dentate Gyrus stratum moleculare. (XLS) Table S2 Heritability scores (h) and P-values from F statistics from the ANOVAs of all the traits derived in the carbachol condition (oscillations). The trait names are coded: Amplitude a_b_Hz_c indicates the integrated amplitude between a and b Hz, in region c. Amplitude_a is the peak amplitude in region a, Frequency_a indicates the peak frequency in region a. PLF(a,b) is the phase locking factor of the activity between region a and region b. The numbers refer to the following regions: 1 = CA3 stratum radiatum/lacunosum moleculare, 2 = CA3 stratum pyramidale, 3 = CA3 stratum oriens, 4 = CA1 stratum radiatum/lacunosum moleculare, 5 = CA1 stratum pyramidale, 6 = CA1 stratum oriens, 7 = Dentate Gyrus hilus, 8 = Dentate Gyrus stratum granulosum, 9 = Dentate Gyrus stratum moleculare. (XLS)
Interleukin-6 Modulates Colonic Transepithelial Ion Transport in the Stress-Sensitive Wistar Kyoto Rat Immunological challenge stimulates secretion of the pro-inflammatory cytokine interleukin (IL)-6, resulting in a variety of biological responses. In the gastrointestinal tract, IL-6 modulates the excitability of submucosal neurons and stimulates secretion into the colonic lumen. When considered in the context of the functional bowel disorder, irritable bowel syndrome (IBS), where plasma levels of IL-6 are elevated, this may reflect an important molecular mechanism contributing to symptom flares, particularly in the diarrhea-predominant phenotype. In these studies, colonic ion transport, an indicator of absorption and secretion, was assessed in the stress-sensitive Wistar Kyoto (WKY) rat model of IBS. Mucosa-submucosal colonic preparations from WKY and control Sprague Dawley (SD) rats were mounted in Ussing chambers and the basal short circuit current (ISC) was electrophysiologically recorded and compared between the strains. Exposure to IL-6 (1 nM) stimulated a secretory current of greater amplitude in WKY as compared to SD samples. Furthermore, the observed IL-6-mediated potentiation of secretory currents evoked by veratridine and capsaicin in SD rats was blunted in WKY rats. Exposure to IL-6 also stimulated an increase in transepithelial resistance in both SD and WKY colonic tissue. These studies demonstrate that the neuroexcitatory effects of IL-6 on submucosal plexi have functional consequences with alterations in both colonic secretory activity and permeability. The IL-6-induced increase in colonic secretory activity appears to be neurally mediated. Thus, local increases in IL-6 levels and subsequent activation of enteric neurons may underlie alterations in absorpto-secretory function in the WKY model of IBS. INTRODUCTION The functional gastrointestinal (GI) disorder irritable bowel syndrome (IBS) is characterized by episodic bouts of abdominal pain, bloating, and altered bowel habit including diarrhea, constipation, or both. Although the pathophysiological changes underlying IBS are still being investigated, stress has been attributed a role in the initiation, exacerbation, and persistence of IBS symptom flares (Lydiard et al., 1993;Spiller, 2004;Fitzgerald et al., 2008). Additionally, a growing body of data implicates local activation of gut immune factors in the development and persistence of IBS symptoms (Quigley, 2006;O'Malley et al., 2011c). Mucosal biopsies from IBS patients express higher levels of T-cells, lymphocytes, and mast cells (Chadwick et al., 2002) and plasma samples from IBS patients exhibit altered pro-inflammatory cytokine profiles (Macsharry et al., 2008). Indeed, interleukin (IL)-6 has reproducibly been found to be elevated in plasma samples from IBS patients (Dinan et al., 2006;Liebregts et al., 2007;Clarke et al., 2009;Scully et al., 2010;McKernan et al., 2011). As yet, the mechanisms that link altered cytokine profiles with the development of functional GI disorders such as IBS are poorly understood. However, there is growing evidence that IBS patients have altered GI permeability (Camilleri et al., 2012) and most pro-inflammatory cytokines have the capacity to influence intestinal epithelial permeability (Al-Sadi et al., 2009).
Indeed, the importance of cytokines in neuromuscular dysfunction in the inflamed intestine has been demonstrated (Hurst et al., 1993;Ruhl et al., 1994), thus, with particular relevance to post-infective IBS, immunomodulation of enteric neurons by cytokines released from within the GI milieu may be important in the persistence of IBS symptomatology (Ruhl et al., 2001). Increased IL-6 synthesis following administration of a cholinesterase inhibitor has been correlated with increased abdominal pain and bloating and IL-6 can modulate mucosal ion transport and epithelial permeability, in addition to enhancing cholinergically mediated neurotransmission in rodents (Natale et al., 2003). Moreover, both IL-1β and IL-6 act as excitatory neuromodulators in a subset of myenteric neurons via presynaptic inhibition of acetylcholine release (Kelles et al., 2000). IL-6 has also been shown to suppress nicotinic and noradrenergic neurotransmission in guinea-pig submucosal neurons (Xia et al., 1999). Previous studies from our group have shown expression of IL-6 receptors on a subset of rat colonic submucosal neurons. Exposure of these neurons to recombinant IL-6 results in increases in intracellular calcium [(Ca 2+ ) i ] levels, which in turn results in increased colonic secretion (O'Malley et al., 2011b). The Wistar Kyoto (WKY) rat has been validated (Greenwood-Van Meerveld et al., 2005;Gibney et al., 2010;O'Malley et al., 2010a) as an appropriate pre-clinical model of IBS, displaying increased visceral sensitivity to colorectal distension and enhanced colonic motility and fecal output following exposure to a psychological stressor (Gibney et al., 2010;O'Malley et al., 2010a). Colonic morphology and goblet cell expression is also altered in this rat (O'Malley et al., 2010a) and it exhibits evidence of altered cytokine expression. Although plasma levels of IL-6 are not different between WKY and SD rats (unpublished observation), mucosal scrapings from WKY colons contain higher levels of IL-6 and excised WKY colons secrete more IL-6 than control Sprague Dawley (SD) colons. Moreover, these secretions stimulate calcium responses of greater amplitude in naïve submucosal neurons than the SD secretions (O'Malley et al., 2011a). These observations are comparable to studies carried out in the maternal separation (MS) model of IBS where MS secretions stimulated a larger response in submucosal neurons than control non-separated colonic secretions. Moreover, recombinant IL-6 was shown to stimulate an increase in secretory activity (O'Malley et al., 2011b). Evidence is mounting that IL-6 has neuromodulatory effects that contribute to altered GI function, however it is currently unclear whether these effects translate into functional changes. The current studies use Ussing chamber electrophysiology to investigate absorpto-secretory function in WKY rats following exposure to IL-6 and compare these effects to the SD control strain, which has normal GI function. ANIMALS Sprague Dawley and WKY rats (200-250 g) purchased from Harlan, UK were group-housed 4-6/cage and maintained on a 12/12 h dark-light cycle (08.00-20.00). All experiments were in full accordance with the European Community Council Directive (86/609/EEC) and the local University College Cork animal ethical committee. USSING CHAMBER ELECTROPHYSIOLOGY Mucosa-submucosal preparations of distal colon were mounted in Ussing chambers (exposed area of 0.12 cm 2 ) with 5 ml of Krebs solution (95% O 2 /5% CO 2 , 37˚C) in the basolateral and luminal reservoirs. 
Tissues were voltage-clamped at 0 mV using an automatic voltage clamp (EVC 4000, World Precision Instruments, Sarasota, FL, USA) and the short circuit current (ISC) required to maintain the 0 mV potential was monitored as a recording of the net active ion transport across the epithelium. Experiments were carried out simultaneously in all chambers and connected to a PC equipped with DataTrax II software (WPI). This software was used to measure the peak response, and resistance was calculated using Ohm's law. Based on previous evaluations of the pro-secretory effects of IL-6 in SD tissues (O'Malley et al., 2011b), it was determined that the peak response to IL-6 occurred within 10 min of application. Thus, this time point was used to compare the effects of IL-6 on secretion in WKY versus SD rats. Following a period of stabilization (30-60 min) and prior to addition of any reagents, transepithelial resistance (TER) was measured. Another measurement of TER was taken at the end of the experiment (60-90 min later) and the difference (∆ resistance) between the two measurements was calculated. STATISTICS The data are represented as mean values ± the standard error of the mean (SEM). Student's t-test and one-way ANOVA with Newman-Keuls post hoc test were used to compare groups. Two-way ANOVA was used to analyze strain and treatment effects as independent variables. p ≤ 0.05 was considered significant. All experiments were conducted in tissue taken from at least six different animals. IL-6 EVOKES INCREASED COLONIC SECRETION AND TRANSEPITHELIAL RESISTANCE IN SD AND WKY RATS Electrophysiological Ussing chamber studies were used to compare colonic transepithelial ion transfer and tissue resistance in the stress-sensitive WKY rats with the widely used SD comparator strain. Short circuit current (ISC) was measured and used as an indicator of net ionic movement across the tissue. In control colonic sections, not treated with IL-6, basal ISC was found to be lower in WKY (n = 9) colonic sections as compared to SDs (n = 18, p < 0.05, Figure 1A). However, TER, an indicator of colonic permeability, was not different between SD (n = 17) and WKY (n = 8, p > 0.05, Figure 1B) tissues. Previous studies in SD tissues determined that a peak increase in colonic ISC was observed at ∼10 min during a 30 min application of IL-6 (1 nM) to the serosal reservoir (O'Malley et al., 2011b). Therefore, all measurements were taken at the 10 min timepoint in both SD and WKY colons. Replicating our previous findings (O'Malley et al., 2011b), IL-6 evoked a small increase in ISC in SD controls (n = 23). Application of IL-6 to WKY tissue samples (n = 9) also induced a secretory current. However, the amplitude of the secretory response tended to be larger in WKY tissues than in SDs (p = 0.07, Figure 1C). The change in TER was calculated by comparing a measurement of TER at the beginning and end of each experiment (60-90 min). In control tissue, not exposed to IL-6, no change was observed in TER in either SD or WKY rats (Figure 1D). However, the continued presence of IL-6 stimulated a significant increase in TER in both SD (12.6 ± 5.3 Ω·cm², n = 18) and WKY (27.8 ± 11.7 Ω·cm², n = 6, p > 0.05, Figure 1D) tissues. Two-way ANOVA analysis demonstrated a clear effect of the IL-6 treatment on tissue resistance [F(1,44) = 14.2, p < 0.001], but there was no strain difference or interaction between the factors, despite a trend toward a larger effect in the WKY tissue.
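As a worked example of the Ohm's law calculation of resistance mentioned above: TER in Ω·cm² follows from the current deflection produced by a small voltage pulse, scaled by the exposed tissue area (0.12 cm²). The pulse and current values in this sketch are invented purely for illustration and are not data from the study.

```python
def transepithelial_resistance(delta_v_mV, delta_i_uA, exposed_area_cm2=0.12):
    """Ohm's law estimate of TER (Ohm * cm^2) from the current deflection
    produced by a small voltage pulse, normalised to the exposed tissue area."""
    return (delta_v_mV * 1e-3) / (delta_i_uA * 1e-6) * exposed_area_cm2

# Delta resistance over an experiment: TER at the end minus TER after stabilisation
ter_start = transepithelial_resistance(delta_v_mV=2.0, delta_i_uA=25.0)  # hypothetical values
ter_end = transepithelial_resistance(delta_v_mV=2.0, delta_i_uA=20.0)    # hypothetical values
delta_ter = ter_end - ter_start
print(f"TER start {ter_start:.1f}, end {ter_end:.1f}, delta {delta_ter:.1f} Ohm*cm^2")
```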
IL-6 POTENTIATES VERATRIDINE-STIMULATED SECRETORY CURRENTS IN SD BUT NOT WKY RATS To assess differences in the sensitivity of neuronally mediated colonic secretion between SD and WKY rats, the sodium channel activator veratridine (10 µM) was applied to the basolateral reservoir. In non-stimulated colon samples no differences were noted in the peak response to veratridine between SD (n = 19) and WKY (n = 10, p > 0.05, Figures 2A,B) rats. In paired experiments we found that exposure to IL-6 (1 nM, 30 min) potentiates the secretory effects of veratridine in SD tissues when compared to control non-stimulated samples (n = 15, p < 0.05), which is consistent with previous findings (O'Malley et al., 2011b). However, in WKY tissues, IL-6 exposure had no effect on veratridine-induced currents (n = 10, p > 0.05, Figures 2A,B). IL-6 POTENTIATES BETHANECHOL-STIMULATED SECRETORY CURRENTS To investigate strain differences in cholinergically mediated currents, the muscarinic receptor agonist bethanechol (10 µM) was added to the basolateral chamber. The agonist evoked a rapid biphasic current in control SD tissues (n = 12). The peak response in WKY rats was comparable (n = 9, p > 0.05, Figures 3A,B). The modulatory effects of IL-6 on the bethanechol response were subsequently examined in both tissues. As we have previously demonstrated (O'Malley et al., 2011b), IL-6 potentiated the evoked bethanechol response in SD tissues (n = 12, p < 0.05). IL-6 also enhanced bethanechol-evoked secretion in WKY tissues (n = 9, p = 0.05, Figures 3A,B). Two-way ANOVA analysis demonstrated a significant effect of IL-6 treatment [F(1,38) = 5.4, p < 0.05], but no strain difference or interaction between the factors was identified. Muscarinic acetylcholine receptors can be present on both epithelial cells and neurons in the gut. To determine which cell types excited by bethanechol were sensitive to the effects of IL-6, control experiments in SD tissues were carried out in the presence of the sodium channel blocker tetrodotoxin. In paired experiments (n = 5 each), ISC in IL-6-treated (98.8 ± 22.4 µA/cm²) and control (104 ± 29.7 µA/cm²) tissues following administration of bethanechol in the presence of tetrodotoxin (100 nM, 15 min) was similar (p > 0.05). These data indicate that the potentiating effect of IL-6 on the bethanechol response appears to be mediated through neuronal activation. IL-6 POTENTIATES THE ANTI-SECRETORY PHASE OF CAPSAICIN-STIMULATED SECRETORY CURRENTS IN SD BUT NOT WKY RATS Finally, the sensory nerve stimulant capsaicin was examined in SD and WKY tissues. In control tissues, addition of capsaicin (1 µM) caused a rapid biphasic response as previously described (Yarrow et al., 1991). The early secretory phase (phase I) was comparable in both SD (n = 10) and WKY samples (n = 10, p > 0.05, Figures 4A,B). In phase II, where capsaicin evokes an anti-secretory current, ISC values in non-stimulated SD and WKY tissues were also comparable (p > 0.05, Figures 4A,B). Pretreatment with IL-6 (1 nM, 30 min) did not affect ISC in phase I in either SD (n = 11) or WKY (n = 7) colons such that they remained comparable (p > 0.05, Figures 4C,D). However, IL-6 potentiated the capsaicin-evoked anti-secretory current in SD rats but not WKY rats such that a significant difference was apparent between the strains (p < 0.05, Figures 4C,D). Using two-way ANOVA, a difference between strains approached significance in the secretory phase [F(1,36) = 3.7, p = 0.06], with no effect of IL-6 and no interaction.
In the anti-secretory phase, a strain difference is also apparent [F(1,31) = 7.7, p < 0.01], but there is no effect of the treatment itself and no interaction between the factors. DISCUSSION This series of electrophysiological studies builds on previous work from our group in which we demonstrated the capacity of IL-6 to directly stimulate a secretory current and decrease membrane permeability in colons from SD rats (O'Malley et al., 2011b). These studies have investigated the effects of IL-6 on colonic secretory and permeability parameters in WKY rats, which exhibit several markers of GI dysfunction and have been used as an animal model of IBS. By comparing changes in colonic ISC and TER between the strains, we have demonstrated that IL-6-induced changes in secretory activity and colonic permeability differ in WKY rats, thereby revealing a possible immune-mediated mechanism which could contribute to the dysfunctional bowel activity described in this rat (O'Malley et al., 2010a). The WKY rat has been well characterized as a suitable preclinical model of IBS (Gunter et al., 2000;Gosselin et al., 2009;Gibney et al., 2010;O'Malley et al., 2010a). The GI dysfunction exhibited by the WKY rat includes an innate hypersensitivity to visceral pain stimuli such as that induced by colorectal distension (Gibney et al., 2010) and altered defecation patterns, particularly when exposed to stress (O'Malley et al., 2010a). Expression of receptors for the stress-related peptide corticotropin-releasing factor (CRF) is altered both in the colon (O'Malley et al., 2010b) and centrally (O'Malley et al., 2011d) in this strain. Given that amygdalar CRFR1 activation can contribute to visceral hypersensitivity in WKY rats (Johnson et al., 2012), these changes may have direct effects on the IBS-like symptom profile. Moreover, colonic toll-like receptor expression is also altered (McKernan et al., 2009) in this strain. With regard to their colonic secretory parameters, WKY rats appear to display a pro-absorptive phenotype, which is thought to be reliant on decreased epithelial cholinergic sensitivity (Hyland et al., 2008). Under resting conditions we also observed this pro-absorptive phenotype, as indicated by lower ISC in the WKY rat as compared to SD. Interestingly, this relationship is reversed in the presence of IL-6, with a larger current being evoked from the WKY colonic tissues. The mechanisms underlying this secretory event are as yet unclear; however, we have previously determined that submucosal neurons prepared from WKY colons display increased sensitivity to the neuroexcitatory effects of IL-6 (O'Malley et al., 2011a). As submucosal neurons have been attributed a primary role in regulating mucosal secretion and absorption, IL-6-mediated neural activation of colonic secretion may override the pro-absorptive phenotype regulated by epithelial cholinergic activity at rest (Hyland et al., 2008). This change in absorpto-secretory function would be consistent with the increase in mucus secretion and stress-induced fecal output evident in these animals (O'Malley et al., 2010a). To further assess the importance of neurally evoked changes in secretion following application of IL-6 to the basolateral side of the tissue, pharmacological stimulators were applied as previously described (Julio-Pieper et al., 2010). Veratridine evokes neuronally mediated secretory currents by depolarizing intrinsic neurons via increased permeability through voltage-gated Na+ channels.
This secretory response is caused by a net increase in Cl− secretion (Sheldon et al., 1990). Under control conditions the veratridine-evoked responses were of similar amplitude in both SD and WKY colonic tissues and, unlike in SD tissue (O'Malley et al., 2011b), IL-6 had no effect on currents evoked in WKY tissue. Veratridine has been shown to stimulate the release of enteric neurotransmitters such as substance P and VIP (Belai and Burnstock, 1988) and acetylcholine (Yau et al., 1986). However, further investigation will be required to determine the neurotransmitters underlying the IL-6-induced modulation of veratridine-stimulated ion secretion in SD rats. Evidently, this potentiating mechanism is not active in WKY rats. The contribution of the cholinergic system to IL-6 secretion has been demonstrated in IBS patients. Indeed, activation of secretomotor neurons may underlie neurogenic secretory diarrhea (Liebregts et al., 2007). In SD rats we provided evidence that IL-6 exposure potentiated currents induced by the muscarinic receptor agonist bethanechol that were sensitive to the sodium channel blocker tetrodotoxin. This effect was intact in WKY tissues, as IL-6 similarly enhanced the bethanechol current. Finally, currents evoked by activating transient receptor potential cation channels (TRPV1) were examined by exposing the tissue to capsaicin. Capsaicin stimulates visceral afferent neurons in the GI tract, causing the subsequent release of nerve terminal neuropeptides which, in turn, stimulate mucosal electrolyte transport and fluid secretion (Holzer et al., 1990;Vanner and MacNaughton, 1995), motility (Takaki and Nakayama, 1989), mucus secretion (Moore et al., 1996), and mucosal blood flow (Akiba et al., 1999), in addition to playing a protective role in maintaining mucosal integrity (Evangelista and Meli, 1989;Esplugues et al., 1990;Holzer et al., 1990). Indeed, TRPV1 is increased in inflammatory diseases of the GI tract (Yiangou et al., 2001) and in patients with rectal hypersensitivity (Chan et al., 2003). Application of capsaicin evoked a large biphasic response in SD tissues, which was comprised of an initial secretory phase followed by a larger, more sustained anti-secretory phase, consistent with previous studies (Yarrow et al., 1991). Interestingly, responses evoked by capsaicin in WKY tissues did not have such distinct phases. Rather than a small secretory response, there appeared to be a delay prior to the longer-lasting anti-secretory event, possibly indicating a balance between the two opposing mechanisms. Indeed, the differences between the strains came into sharper focus following addition of IL-6, which potentiated the anti-secretory phase in SD tissues only. Thus, in SD GI tissue, IL-6 exposure enhances the anti-secretory activity of afferent nerves, whereas WKY rats have lost this regulatory response to IL-6, which could underlie the overall increased secretory activity in this strain. Moreover, the extrinsic, afferent innervation of the GI tract conveys information to the CNS that gives rise to the sensations of pain and discomfort. Thus, the insensitivity of WKY rats to IL-6-evoked potentiation of the capsaicin anti-secretory response may also be important in the increased sensitivity to visceral pain (Gibney et al., 2010).
Although the mechanisms of this effect require further elucidation, it is feasible that low-grade inflammation in the WKY rat may result in constant stimulation of capsaicin-sensitive nerve terminals causing neurotransmitter depletion, or that IL-6 directly inhibits neurotransmitter release in these neurons. Alternatively, altered sensitivity to stress may contribute to these observations. CRF1 receptor antagonists alleviate visceral sensitivity in the WKY rat (Greenwood-Van Meerveld et al., 2005) but evidence exists for crosstalk between IL-6 and CRF (O'Malley et al., 2011c). Therefore, alterations in stress-induced expression of CRF receptors (O'Malley et al., 2010b;O'Malley et al., 2011d) may be linked to the increase in IL-6 sensitivity observed in the WKY rats. Tissue resistance was also measured in these animals as a marker of membrane permeability and was found to be similar between strains. Interestingly, IL-6 stimulated an increase in TER in both strains. Changes in permeability can occur as a result of alterations in the expression of tight junctions (Chen et al., 2010;Martinez et al., 2012), dysbiosis of microbiota (Fukuda et al., 2011) or through the increased presence of pro-inflammatory cytokines (Arrieta et al., 2006). Furthermore, stress can contribute to changes in permeability, as has previously been demonstrated in WKY rats (Saunders et al., 2002). Over the 90 min duration of these recordings, one possibility might be an increase in mucus secretion evoked by IL-6, which could influence membrane permeability. Indeed, we have previously observed increased goblet cell numbers in the WKY strain (O'Malley et al., 2010a). We demonstrated that acute application of IL-6 increases TER, appearing to help maintain the integrity of the epithelial cell layer in both SD and WKY rats. This is consistent with one previous study (Wang et al., 2007). On the other hand, chronic exposure to elevated IL-6 levels may result in increased gut permeability (Hiribarren et al., 1993;Natale et al., 2003). As mucosal levels of the pro-inflammatory cytokine IL-6 are elevated in WKY colons (O'Malley et al., 2011a), one might have expected that continuous exposure to IL-6 would have resulted in increased colonic permeability in this strain. However, TER at rest is comparable between SD and WKY rats. This may indicate that there are increased numbers of IL-6-containing cells in the mucosa of WKY rats but not necessarily increased levels of IL-6 secretion. At a functional level, these studies have demonstrated that IL-6-evoked secretion is enhanced in WKY colons and this is likely to be due to increased sensitivity of submucosal neurons to the pro-inflammatory cytokine. We have provided evidence that inhibition of the potentiating effect of IL-6 on capsaicin-evoked anti-secretory currents is a likely contributor to the changes in colonic secretion. These data further demonstrate the neuromodulatory effects of IL-6 in colonic function and provide mechanistic evidence of how elevations in systemic IL-6 in IBS patients could be a contributory factor in the pathophysiology of the disorder. CONTRIBUTION OF EACH AUTHOR Dervla O'Malley: study concept and design; acquisition of data; analysis and interpretation of data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; statistical analysis. Timothy G. Dinan: critical revision of the manuscript for important intellectual content; study supervision. John F.
Cryan: critical revision of the manuscript for important intellectual content; study supervision.
HURT OR HELP? UNDERSTANDING INTIMATE PARTNER VIOLENCE IN THE CONTEXT OF SOCIAL NORMS AS PRACTISED IN RURAL AREAS Intimate partner violence (IPV) poses a serious threat to the welfare of women. IPV against women has aroused intense interest amongst policymakers, practitioners and researchers. Despite this development, IPV against women remains rife, but there is still a dearth of research on the linkages between IPV and social norms. This study is a critical review of the literature on IPV and social norms as well as its impact on social work practice and policy. The authors argue that social norms can either promote or prevent IPV intervention and therefore propose an integrated approach to addressing IPV against women. INTRODUCTION Globally, violence against women has been a serious threat to their wellbeing, with gender-based violence (GBV) being one of the most common forms of such violence. IPV, which has been singled out as the most pervasive form of GBV, has become a global socio-economic and socio-cultural crisis. IPV is physical aggression, sexual coercion, emotional and psychological oppression, economic abuse and controlling behaviour directed at an individual by a current or past intimate partner (Edwards, 2015;Klugman, 2017;Ogundipe, Woollet, Ogunbanjo, Olashore & Thsitenge, 2018). This results in physical, sexual and psychological harm. Research has indicated that 30% of women aged 15 or older experience physical and/or sexual violence in their lifetime (Klugman, Hanmer, Twigg, Hasan, McCleary-Sill & Santa-Mary, 2014;Clark, Ferguson, Shreshtha, Shreshtha, Oakes, Gupta, McGhee, Cheong & Yount, 2018). To put it another way, IPV is experienced by one in three women in their lifetimes (Giddens & Sutton, 2017). More women in Africa are subjected to IPV (46%) and sexual violence (12%) than women anywhere else in the world (McCloskey, Boonzaier, Steinbrenner & Hunter, 2016). This is mainly because more women in Africa reside in rural areas and are more vulnerable to IPV, as traditional norms that condone such violence are strongly recognised and regarded with high esteem in rural communities (Chigwata, 2016;McCloskey et al., 2016). The prevalence of IPV ranks higher in sub-Saharan Africa than in other regions, with a rate of 40%, and with 36% of the total population being affected by it (McCloskey et al., 2016;Klugman, 2017). Although various studies have attempted to unravel the influence of social norms on violence against women (Linos, Slopen, Subramanian, Berkman & Kawachi, 2013;Baldry & Pagliaro, 2014;Edwards, 2015;Strauss, Gelles & Steinmetz, 2017;Clark et al., 2018;Cislaghi & Heise, 2018), there has not been much clarity on the association between IPV and social norms and how this promotes or hinders IPV prevention, particularly in rural areas. The few studies conducted (Roberto, Brossole, McPherson, Pulsifier & Brow, 2013;Hatcher, Colvin, Ndlovu & Dworkin, 2015;McCloskey et al., 2016) do not adequately focus on the influence of social norms in the perpetuation of IPV against women residing in rural areas. Furthermore, absent from the social work literature are studies that seek to elucidate the linkages between IPV and social norms and their consequences for social work practice and policy. Understanding social norms and their influence on the perpetuation of IPV is critical for social workers working with victims/survivors, particularly those residing in rural areas. Globally, much attention has been drawn to the need for better intervention strategies in ameliorating IPV.
Most countries in Africa have ratified international legislation prescribing punitive measures for the perpetrators of IPV. However, the efficacy of this legislation in terms of tackling IPV remains doubtful (Ogundipe et al., 2018). Although formal written norms (i.e. laws, policies and frameworks) prohibiting violence against women have been in existence for many centuries, IPV remains a serious problem. This is mainly because IPV is viewed as a 'private tragedy' in different societies, especially in rural areas (Baldry & Pagliaro, 2014;Strauss et al., 2017). Thus it is critical to also consider the role that social norms can play in informing intervention strategies against IPV, especially in rural areas. Therefore, the aim of this article is to examine the linkages between social norms and IPV, particularly in rural areas, and how social norms can promote or prevent intervention strategies. The article also examines the implications of these linkages for social work practice and policy. The first section of the paper focuses on conceptualising IPV and social norms. The second section discusses social norms through the ecological and intersectional theoretical perspectives and examines how these norms facilitate IPV perpetuation and hinder IPV intervention. The last section focuses on the implications of social norms and IPV for social work practice and policy, with a focus on how social norms can also facilitate IPV prevention. CONCEPTUALISING SOCIAL NORMS Social norms are informal/unwritten rules and shared preferences derived from social systems that dictate the behaviour expected, allowed or sanctioned in particular situations (World Health Organisation (WHO), 2009;Baldry & Pagliaro, 2014;Guala, 2017;Clark et al., 2018). People follow these prescribed rules because they i) perceive that other people are following and conforming to the rules, ii) perceive that other members of society expect the rules to be followed, and iii) recognise that failure to conform to the social norms results in social disapproval and punishment (Baldry & Pagliaro, 2014;Guala, 2017;Strauss et al., 2017;Clark et al., 2018). Similarly, an individual's perception of the expected behaviour also matters for adherence to the social norms of a particular society. In this regard, researchers have grouped social norms into two broad categories: descriptive norms and injunctive norms. Firstly, descriptive norms refer to perceptions about what members of social groups typically do (e.g. in certain communities it is accepted practice for husbands to beat their wives). Injunctive norms refer to perceptions about what members of social groups ought to do, that is, a consensus about prescribed or prohibited behaviour (e.g. in our community it is acceptable for men to beat their wives when they are deemed to have done wrong) (Linos et al., 2013;Lilleston, Goldmann & McCleary-Sills, 2017;Clark et al., 2018). In addition to the issue of understanding social norms, Cislaghi and Heise (2018) provide four main compliance mechanisms that people use to conform to social norms. These are, firstly, coordination: to achieve a goal in a society, there is a need for coordination; hence people comply with the rules of the society. Secondly, there is social pressure: anticipation of rewards or social punishment can force people to comply with social norms even when they do not feel like doing so. The third compliance mechanism is signalling and symbolism, which entails showing membership of the society or group that people belong to and adhering to the rules/norms specific to it.
Fourthly, there are reference points: people internalise norms that are considered normal so that they comply with them automatically. Research has indicated that in sub-Saharan Africa social norms are mostly acknowledged in rural areas, where they are regarded as mechanisms for maintaining social order and social coordination, as they reflect beliefs, attitudes, behaviours and moral judgements about what is right and wrong (Matavire, 2012; Conroy, 2014; McCloskey et al., 2016; Chigwata, 2016). For example, they ensure social coherence, consensus regarding values and beliefs and less tolerance of diversity, which ultimately results in social order (Riddell, Ford-Gilboe & Leipert, 2009). In line with this, a study conducted in Muzarabani, a rural area in Zimbabwe, reported that among the Shona tribe social norms are premised on the principle of unhu (personhood and morality), and the community defines an individual as a person who adheres to the prescribed traditional values and ethics in order to maintain society's dignity and integrity (Matavire, 2012). Personal attitudes, beliefs and moral judgements stem from a person's positive or negative evaluation of something (Linos et al., 2013). Even when personal attitudes are not congruent with social norms, social norms can exert a great influence on how an individual behaves in particular circumstances and situations (Linos et al., 2013; Baldry & Pagliaro, 2014; Lilleston et al., 2017; Ogundipe et al., 2018). Therefore, for a social norm to be perpetuated, the majority of people do not have to believe that it is right; rather, they simply need to perceive that society believes it to be right (Lilleston et al., 2017). Social norms do not operate in isolation: they are influenced by other social forces such as culture and religion. In Malawi and South Asia social norms that relate to child marriages and the sexual abuse of young girls are reinforced by entrenched beliefs in the system of patriarchy that minimises women's and girls' rights to their own bodies (Malhotra, Warner, McGonagle, Lee-Rife, Powell, Cantrell & Trasi, 2011; Mwambene & Mawodza, 2017). Similarly, in countries like Sierra Leone and Senegal, social norms that are related to female genital mutilation are deeply rooted in rural cultural and religious beliefs that view this practice as proof of female decency and fertility (Kandala & Komba, 2015; Lilleston et al., 2017). In Zimbabwe, virginity testing is highly valued in rural areas and regarded as a sign of honour to the husband, families and the community. A study conducted by Matswetu and Bhana (2018) in Shamva, a rural area in Mashonaland Central in Zimbabwe, indicated that women who get married as non-virgins are not respected and are often humiliated by their husbands, as they are regarded as incomplete. This increases their vulnerability to IPV. Additionally, a study conducted in Mashonaland Central in Zimbabwe indicated that women in rural areas are regarded as carriers of culture and are tasked with the responsibility of ensuring that young girls and women adhere to the traditional norms of kudhonza matinji (labia elongation) (Venganai, 2016). As such, women in urban areas rely on them for guidance on labia elongation. This is done to ensure that men get maximum pleasure during sexual intercourse and also to curb promiscuity. Thus, women who get married without undergoing the labia elongation process are regarded as incomplete and become vulnerable to IPV (Venganai, 2016).
SOCIAL NORMS AND IPV Among the many possible explanations of the high prevalence of IPV globally, and particularly in sub-Saharan Africa, are social norms that perpetuate and justify male dominance within families and societies. A number of social norms have been identified that singly or jointly increase the risk of women experiencing IPV. These include cultural and religious practices such as female genital mutilation (Kandala & Komba, 2015); male dominance and superiority over women within families and society (McCloskey et al., 2016); acceptance of wife-beating as a way of 'correcting a stray wife' and a sign of love (Oyediran & Feyisetan, 2017); family privacy and stigma associated with divorce or being unmarried (Matavire, 2012); women's responsibility to maintain a marriage and their reproductive role (Shamu, Abrahams, Zarowsky, Shefer & Temmerman, 2013); social norms surrounding lobola (bride-price) payment, which acts as a compromising factor in IPV tolerance (Mesatywa, 2014); and men's entitlement to sex (Mukanangana, Moyo, Zvoushe & Rusinga, 2014). These social norms have been ingrained in some women in rural areas to such an extent that there is a tolerance of abuse and thus a heightened vulnerability to IPV. Also, because their rights are violated, women's productivity diminishes, and in some instances such violation can cause premature death, which has dire consequences for both the family and the welfare of children. In situations where they become unproductive, women become over-reliant on the partner for fear of being left destitute without a source of income. Thus, for women experiencing IPV, it becomes very difficult to leave abusive relationships, as they feel they will not be able to provide for their children. Furthermore, attributes of masculinity are associated with dominance and aggression, with men holding the decision-making power in marriages (Mesatywa, 2014; Clark et al., 2018). Traditionally, lobola (bride-price), referred to as roora in Zimbabwe, has always been regarded as a noble practice that gives status to both men and women. A study conducted in Zimbabwe on Christian women's experiences of IPV indicates that lobola has become the basis of oppression, which ultimately results in IPV (Chireshe, 2015). The social norms around lobola (such as entitlement to sex as a conjugal right and submission to the husband even when he is wrong) have a silencing effect on women and give men power over them. Consequently, women are forced to accept violence in their marriages, as they feel that by virtue of the lobola payment IPV is normal and should not be questioned (Matavire, 2012; Chireshe, 2015). Male dominance has been recorded as higher in rural than in urban areas because of the nature of rural communities and their strong adherence to social norms (Riddell et al., 2009). Accordingly, women who reside in rural areas are at a higher risk of IPV than their urban counterparts. Patriarchal views and social norms are strongly upheld in rural communities, and this rural culture (the common understandings, values, ideas and practices in a rural location) reinforces adherence to these norms and evinces less tolerance for non-conforming behaviour (Riddell et al., 2009; Conroy, 2014). Consequently, the fear of social sanctions and disapproval causes people to condone and accept violent behaviour (Lilleston et al., 2017). Hence, many men and women still regard IPV perpetrated by the husband as a normal occurrence in marriage (Linos et al., 2013).
A study conducted in 17 African countries revealed that more than half of the women surveyed justified and accepted IPV scenarios in their marriages and families (Linos et al., 2013; McCloskey et al., 2017). In the same vein, social norms influence the responses of informal support systems and bystanders (neighbours, friends, community members) to IPV. These are important people, for they provide immediate informal support to victims of IPV. Their reaction to IPV cases is strongly influenced by the extent of the violation and by behavioural social scripts (patterns of behaviour learned and motivated by social norms that prescribe how a person should react in a particular situation) (Banyard, Edwards, Moschella & Seavey, 2018). In her study of a rural area in Zimbabwe, Matavire (2012) argues that social norms do not allow disclosure of family matters to the public, as this destroys the dignity and integrity of the family; hence victims/survivors consign IPV to the private realm. Thus, moral judgements and beliefs are overridden by social norms (privacy concerns), and consequently individuals comply and do not intervene or offer help to IPV victims (Baldry & Pagliaro, 2014; Clark et al., 2018). Additionally, the police, who are often the first outside formal point of intervention for victims of IPV, often avoid dealing with such cases because of intrapersonal factors (beliefs and attitudes) and contextual factors (behavioural social scripts, characteristics of the situation) that are influenced by social norms and that determine the decision to help or not (Baldry & Pagliaro, 2014). Most of the time, police leniency towards perpetrators and the blaming of victims for disobedience are so ingrained in the social norms that the police conform in a way that condones male dominance (Baldry & Pagliaro, 2014). A study conducted in the Western Cape in South Africa indicated that police officers also resort to victim blaming and to upholding the patriarchal views and beliefs of the communities that they serve (Retief & Green, 2015). This then supports the patriarchal assumption that the women who are often victims of IPV must have provoked the perpetrator and deserve the treatment they received (McCloskey et al., 2016). Therefore, help is limited, as the behavioural social scripts regard IPV as an acceptable way of dealing with conflicts in an intimate relationship (Riddell et al., 2009; Banyard et al., 2018). The above discussion makes it clear that the acceptability of IPV and its perpetuation are not reliant only on social norms: different factors also come into play. No single social variable causes IPV; the phenomenon is a consequence of different factors operating at different levels of the social ecology that tend to influence each other, and the ecology of factors contributing to IPV goes beyond any one driver. Although research on IPV prevalence and its consequences is growing, social norms have not been clearly examined jointly with other social variables (i.e. education, geographical location, class, patriarchy) that also influence the perpetuation of IPV, particularly in rural areas. Understanding how social norms intersect with other factors is fundamental to unearthing the different pathways that cause people to conform to certain practices, IPV included.
Theoretical underpinnings The scholarship on intersectional feminism examines the multiplicity and interactivity of different social variables and how they influence the realities and lived experiences of different population groups (Gopaldas, 2013; Hill-Collins & Bilge, 2016). Intersectionality gained currency when women of colour and feminists began to explain their experiences and struggles in society and in movements for change in the late 1980s and early 1990s. Crenshaw is credited with coining the term to explain how various systems of oppression converge to marginalise women of colour. According to Gopaldas (2013), the concept of intersectionality has been expanded and now moves beyond specific social identity structures (i.e. race, gender and class) to include other social variables such as age, ethnicity, religion, education and physical ability, to mention but a few, depending on the context. Thus, intersectionality has become applicable in different contexts and population groups and now integrates other marginalised groups of society (such as women residing in rural areas). Additionally, intersectionality brings to the fore the debate on homogeneity with regard to the perpetuation of IPV. There are differences within population groups that are often deemed homogeneous (in terms of their lived experiences), such as women and men. For instance, women residing in urban areas may find it difficult to report cases of IPV because of their gender and because of the attitudes and beliefs of police officers who condone the consigning of IPV cases to the realm of privacy within the family institution (Retief & Green, 2015). Yet these urban women still have a relative advantage over women residing in a rural setting, where police stations and health services are not available and women are mostly unemployed, which makes escaping an abusive relationship difficult. In the same vein, a study conducted on IPV in Zimbabwe showed that women in rural areas experience IPV more than their urban counterparts because of their low employment status, weak access to resources, their geographical location and lower levels of education, which all constrain their decision-making powers in intimate relationships (Fidan & Bui, 2015). This makes it clear that the perpetration of IPV varies according to the politics of location. The intersection of social norms with other social variables impacts on the struggles and experiences that are linked to the marginalisation of poor women, particularly those residing in rural areas. A study conducted in South Africa with Xhosa women residing in rural areas revealed that the intersection of social norms relating to lobola payment, poor family backgrounds, unemployment owing to a lack of education and geographical location limited the women's access to opportunities and resources, contributed to their vulnerability to IPV and restricted their ability to escape violent relationships (Mesatywa, 2014). Similarly, a study conducted in Zimbabwe indicated that social norms relating to patriarchy and a husband's conjugal rights following payment of lobola, limited access to knowledge and adherence to a religion that promotes forced marriage heightened women's risk of IPV and limited their chances of leaving abusive relationships (Mukanangana et al., 2014). Thus, focusing on social norms alone is inadequate as far as IPV against women, especially those residing in rural areas, is concerned.
Similarly, Bronfenbrenner's (1979) ecological model also explains the various social systems in which human beings live as a series of layers. The ecological model posits that throughout the development of humans there is a reciprocal interaction of objects, humans and their environment (Salihu, Wilson, King, Marty & Whiteman, 2015). The interactions of different factors on the four levels (individual, relationship, political and societal) of social ecology could contribute to IPV. This interaction determines the behaviour of a person, including perpetrators' and victims' reactions to IPV. Hence the environment and the norms associated with it play a significant role in the perpetuation of IPV. A recent study by Eriksson and Mazerolle (2015) on child abuse, which observed IPV by male perpetrators, indicates how the influence of the environment can explain the transmission of violence across groups and generations through children learning from the family and the society at large. The results of the study indicate that violence is also transmitted through the acquisition of beliefs, norms and attitudes that individuals conform to that are considered as appropriate behaviour in society (Eriksson & Mazerolle, 2015). This speaks to the internalisation of social norms that condone the abuse of women, which in turn reinforces the acceptability of IPV against women. The systems theory further explains the interaction of social norms and other elements in perpetuating IPV. According to systems theory, any system consists of subordinate subsystems that are interrelated to make up a whole. It focuses on social homeostasis (the system maintaining a relatively stable state of balance) which, when disturbed, can readjust itself and regain social stability (Zastrow-Kirst Ashman, 2010). This then means that through social norms the 'system' maintains its stability and, if disturbed, in readjusting itself it applies 'social sanctions and social rejection' to those who do not comply with the social norms of the system. Therefore, the subsystems (families, individuals) feel compelled to conform to rules that keep the system (society) together through adhering to social norms. Hence, male perpetrators of IPV are deemed right by society for exercising their masculine authority over women, particularly in rural areas where patriarchy is strongly recognised and regarded as a sign of power and dominance, which in turn maintains order and stability (McCloskey et al., 2016). IMPLICATIONS FOR SOCIAL WORK PRACTICE AND POLICY The existence of IPV and its consequences for the wellbeing of women, especially those residing in rural communities, cannot be ignored. Individuals are part of the ecosystem of society; hence the ability to solve social problems, IPV included, also enhances their wellbeing and functioning. IPV is deeply rooted in traditional social norms and ideologies. The role of these social norms and traditional values and beliefs cannot be ignored. It has been argued that social work practice in Africa fails to tackle social problems because it deploys skills and strategies that are deeply rooted in European standards and principles. Baffoe and Dako-Gyeke (2013) argue that social work education that also incorporates gender studies in Africa is largely embedded in the dictates and principles of the European community. Thus, it fails to have the desired impact in addressing social problems such as poverty and IPV. 
For example, social work practice in Zimbabwe dates back to the colonial era, and the fact that present-day social work still uses the dictates and principles of the Global North tends to make communities unreceptive of the social work skills and intervention strategies (Baffoe & Dako-Gyeke 2013;Dziro, 2013). This is largely due to the incongruence of the social workers' ethical principles and the social norms of the clients or communities. Yan (2008) argues that social work ethical principles (such as self-determination and respect for an individual) are incompatible with various social norms that prescribe male domination and put much emphasis on the subordination of women. Thus, social work practice in rural areas becomes a mammoth task, given the nature and extent of the influence of social norms on behaviour and limited access to resources (Chiwara & Lombard, 2017). It is important then for social workers to ensure that their skills and intervention strategies address the foundation of IPV, which is the traditional belief system that prevails in different contexts. As social workers strive to ensure sustainable intervention strategies in dealing with IPV, it is critical that they understand the social norms that can perpetuate or change attitudes relating to IPV. Additionally, social workers are encouraged to establish a close and engaging relationship with victims/survivors of IPV. In creating rapport, they will be able to identify the problems emanating from different social norms and provide the necessary professional assistance. Research conducted by Keeling and Van Wormer (2012) revealed that women affected by IPV are usually scared of social workers, because to them social workers do not consider their plight but are instead more concerned with the removal of the child(ren) from the abusive environment. These women indicated that they are in conflict with the system that is meant to help them and regard social work as a professional way of abusing them and taking away their children. Thus, instead of helping the victims/survivors of IPV, the social workers actually hurt them by taking away their children, which is traumatising and painful for them. Changing attitudes, beliefs and behaviour influenced by social norms is critical in IPV intervention. Social norms can either emphasise continuity of beliefs and attitudes, or they can actively promote change (Giddens & Sutton, 2017). Marx and Delport (2017) argue that sustainable change implies transformation in attitudes and the self, which points to behavioural changes. Such shifts in attitude and beliefs can best be informed by health behaviour change models such as the Health Belief Model (HBM). HBM is a psychological model which explains health behaviours by focusing on individuals' attitudes and beliefs (Jones, Jensen, Scherr, Brown, Christy & Weaver, 2015). Such an intervention model can assist in changing injunctive norms that support IPV (Lilleston et al., 2017). Moreover, Clark et al. (2018) and Ogundipe et al. (2018) argue that if a large number of people shift their attitudes and beliefs and change their behaviour, injunctive norms become less effective. A good example is the SASA! (KiSwahili word meaning 'now') programme in Uganda which seeks to reassess the norms around the acceptability of violence and gender inequality (McCloskey et al., 2016). Lilleston et al. (2017) further argue that shifting social norms also entails directly challenging descriptive norms and injunctive norms. 
It is critical for social workers practising within IPV-tolerant spaces, especially those who work in rural communities, to engage not only with victims, but also with different stakeholders (such as chiefs, headmen) who are responsible for ensuring that social norms are being adhered to in their communities (Ogundipe et al., 2018). Hence a comprehensive approach to social work practice within the IPV space should address the co-existing adversities and the way that they interconnect in perpetuating IPV (Etherington & Baker, 2018). Behaviour change in shifting norms that condone IPV also requires structural changes. Therefore, policy formulation on violence against women, IPV included, should incorporate an intersectional approach that seeks to address the way different groups of society experience inequality and oppression. Policies that target a single aspect of identity (e.g. gender or class) fail to respond to the multifaceted problems that shape women's experiences of IPV. Etherington and Baker (2018) concur and argue that policies that do not adequately address the different levels of inequalities and how they intersect in perpetuating IPV become ineffective. For instance, a policy that addresses gender issues in relation to IPV without taking into consideration the influence of class, literacy, ethnicity, religion and geographical location becomes ineffective. Thus, social workers who work within different IPV spaces and consult such policies face difficulties in serving the communities, as the policies fail to speak to the actual practical situations. Hence, policy analysts can use the notion of intersectionality to identify disparities in the existing policies and programmes, and thus determine how they can eliminate the unwanted consequences (Etherington & Baker, 2018). Additionally, formal laws and policies may be less desirable than the social sanctions imposed on individuals who violate social norms (Lilleston et al., 2017). Such a move has been witnessed in Zimbabwe through the enactment of the Domestic Violence Act (2007), which criminalises violence against women. CONCLUSION This article provided an overview of the linkages between IPV and social norms. A detailed conceptualisation of social norms was presented and the key elements of these norms were discussed. The discussion went on to provide linkages between IPV and social norms and considered how these facilitate IPV and hinder IPV prevention. The article also focused on the implications of these linkages on social work practice and how social norms can be adapted to also facilitate the prevention of IPV against women. Based on the discussion it is recommended that social workers working within IPV spaces, especially in rural areas, ensure that the intervention strategies that they use to deal with IPV also take into consideration the role that social norms play. Also, further research on the intervention strategies that social workers working in rural areas use and their efficacy in ameliorating IPV should be undertaken. This could also include the influence of bystanders and how social workers can actually empower them, as they are often the first point of contact for IPV victims.
Arsenite Exposure of Cultured Airway Epithelial Cells Activates κB-dependent Interleukin-8 Gene Expression in the Absence of Nuclear Factor-κB Nuclear Translocation* Airway epithelial cells respond to certain environmental stresses by mounting a proinflammatory response, which is characterized by enhanced synthesis and release of the neutrophil chemotactic and activating factor interleukin-8 (IL-8). IL-8 expression is regulated at the transcriptional level in part by the transcription factor nuclear factor (NF)-κB. We compared intracellular signaling mediating IL-8 gene expression in bronchial epithelial cells cultured in vitro and exposed to two inducers of cellular stress, sodium arsenite (AsIII) and vanadyl sulfate (VIV). Unstimulated bronchial epithelial cells expressed IL-8, and exposure to both metal compounds significantly enhanced IL-8 expression. Overexpression of a dominant negative inhibitor of NF-κB depressed both basal and metal-induced IL-8 expression. Low levels of nuclear NF-κB were constitutively present in unstimulated cultures. These levels were augmented by exposure to VIV, but not AsIII. Accordingly, VIV induced IκBα breakdown and NF-κB nuclear translocation, whereas AsIII did not. However, both AsIII and VIV enhanced κB-dependent transcription. In addition, AsIII activation of an IL-8 promoter-reporter construct was partially κB-dependent. These data suggested that AsIII enhanced IL-8 gene transcription independently of IκB breakdown and nuclear translocation of NF-κB, in part by enhancing transcription mediated by low levels of constitutive nuclear NF-κB. Nuclear factor-κB (NF-κB)1 was originally described as a constitutive nuclear transcription activator in mature B lymphocytes that bound a specific DNA sequence in the intronic enhancer of the immunoglobulin κ-light chain (Igκ) gene and mediated constitutive Igκ expression (1). However, numerous subsequent studies have shown that NF-κB is polymorphic. It is composed of homo- or heterodimers of at least five structurally related mammalian proteins that have a broad tissue distribution. Likewise, NF-κB modulates the expression of a large number of genes whose products participate in immune, inflammatory, and environmental stress responses (2, 3). In many tissues NF-κB mediates transient changes in gene expression in response to humoral and environmental stimuli. In this case, NF-κB is held inactive in the cytoplasm by IκBs, a family of inhibitor proteins that mask its nuclear translocation signal. The activation of NF-κB is mediated in part by the inactivation of IκBs through stimulus-specific posttranslational modifications of IκBs. To date, the most commonly observed mechanism of IκB inactivation involves phosphorylation of two N-terminal serine residues by IκB kinase, a large multimeric complex that receives input from a variety of signal transduction pathways (4, 5). Phosphorylation of IκBs by the IκB kinase complex targets IκBs for ubiquitination and proteolytic degradation. Upon its release from IκB, NF-κB translocates into the nucleus and binds to κB response elements (RE) in the enhancer regions of target genes. A number of pharmacological interventions that inhibit inducible κB-dependent transcription, however, do not inhibit the translocation of cytoplasmic NF-κB into the nucleus or its DNA binding activity (6-10). Thus an increase in nuclear NF-κB alone is not sufficient for the maximal activation of κB-dependent transcription.
Conversely, enhanced κB-dependent transcription has been observed in the absence of an increase in nuclear NF-κB in cells that have low levels of constitutive nuclear NF-κB (11). This indicates that the mobilization of cytoplasmic NF-κB is not invariably necessary for the activation of transcription. Thus κB-dependent transcription is dependent upon both the abundance of nuclear NF-κB and additional cooperative factors and regulatory processes that influence the transcription-activating (transactivating) potential of NF-κB. Of the five known mammalian NF-κB family members, p65 (RelA), RelB, c-Rel, p50/p105, and p52/p100, only three, p65, RelB, and c-Rel, are capable of transcriptional activation (2). The transactivation potential of the p65 subunit of NF-κB has been shown to depend upon specific p65 protein domains. NF-κB family members share a conserved 300-amino acid, N-terminal Rel homology domain that mediates dimerization, nuclear localization, and DNA binding. Phosphorylation of a cAMP-dependent protein kinase site in the Rel homology domain of p65 strongly increases κB-dependent transcription and requires IκBα degradation (6). In contrast, studies using constitutive nuclear chimeric transcription factors have suggested that the transactivation potential of p65 can also be regulated by nuclear processes that are independent of IκBα degradation and nuclear translocation of p65. Transcriptional regulation by these processes requires one or both of the two C-terminal transactivation domains of p65 (11, 12). The mitogen-activated protein (MAP) kinase signal transduction cascades (13-15) have been implicated as upstream regulatory pathways that mediate the activation of κB-dependent transcription by processes that are independent of IκBα degradation and NF-κB nuclear translocation (7, 9, 11, 12). We have recently demonstrated that two metal compounds, sodium arsenite and vanadyl sulfate, activate MAP kinases in airway epithelial cells in vitro (16). These metals also evoke a proinflammatory response as indicated by enhanced production of interleukin-8 (IL-8), an α-chemokine that is a neutrophil chemoattractant and stimulant (17, 18). IL-8 gene transcription is induced by phorbol esters and the proinflammatory cytokines tumor necrosis factor-α and interleukin-1β (IL-1β). This induction depends upon an enhancer region of the IL-8 gene located upstream of the transcription start site (base pairs −126 to −72), which includes activator protein-1, C/EBP, and NF-κB response elements. All three of these cis-acting elements are necessary for maximal transcriptional activation, although there are tissue-specific differences in this dependence. The activator protein-1 and C/EBP elements are employed in a tissue-specific fashion, whereas the NF-κB element is necessary in all tissues examined (19-22). In this study, we have investigated the role of NF-κB mobilization in κB-dependent gene transcription following treatment with AsIII and VIV. Our results suggest that in cultured airway epithelial cells both AsIII and VIV activated κB-dependent transcription; however, VIV mobilized cytoplasmic NF-κB, whereas AsIII did not. EXPERIMENTAL PROCEDURES Cell Culture and in Vitro Exposure—Primary normal human bronchial epithelial (NHBE) cells were obtained from healthy, nonsmoking adult volunteers. Epithelial specimens were obtained by cytologic brushing at bronchoscopy and subsequently expanded in culture as described previously (23). The human BEAS-2B bronchoepithelial cell line was cultured as described previously (24).
Vanadyl sulfate or sodium arsenite (both from Sigma) were diluted in BEGM (NHBE) or KGM (BEAS-2B) before addition to the cell culture. Analysis of IL-8 Expression by RT-PCR and Enzyme-linked Immunosorbent Assay—Extraction of RNA, first-strand cDNA synthesis, and DNA amplification were performed as described previously (23) using the following oligonucleotide primers: GAPDH, sense, CCATGGAGAAGGCTGGGG, and antisense, CAAAGTTGTCATGGATGACC; IL-8, sense, TCTGCAGCTCTGTGTGAAGGTGCAGTT, and antisense, AACCCTCTGCACCCAGTTTTCCTT; and c-Jun, sense, CGAGCTGGAGCGCCTGATAAT, and antisense, GCGTGTTCTGGCTGTGCAGTT. Following amplification, products were analyzed by alkaline gel electrophoresis through 2% agarose gels in 1× Tris/borate/EDTA buffer. The gel was stained using 1 μg/ml ethidium bromide and photographed under UV illumination with Polaroid type 55 P/N film (Polaroid, Cambridge, MA). The specific bands were quantified using the Kodak 1D Image Analysis Software (Eastman Kodak Company, Rochester, NY), and optical densities for IL-8 mRNA bands were normalized to GAPDH band intensities. IL-8 content in conditioned medium collected from NHBE cells treated with sodium arsenite or vanadyl sulfate was assayed using a commercial IL-8 enzyme-linked immunosorbent assay kit (R & D Systems). Separation of Cytoplasmic and Nuclear Fractions—After washing NHBE cells with ice-cold PBS, 200 μl of cold cytoplasmic extraction buffer, CEB (10 mM Tris-HCl, pH 7.9, 60 mM KCl, 1 mM EDTA, 1 mM dithiothreitol), with protease inhibitors (1 mM Pefabloc, 50 μg/ml antipain, 1 μg/ml leupeptin, 1 μg/ml pepstatin, 40 μg/ml bestatin, 3 μg/ml E-64, 100 μg/ml chymostatin; all purchased from Roche Molecular Biochemicals) was added to each well. Using a rubber policeman, cells were scraped up and transferred into a microcentrifuge tube. The cells were allowed to swell on ice for 15 min, and then Nonidet P-40 (Sigma) was added to a final concentration of 0.1% and the tube was vortexed for 10 s. Nuclei were pelleted by centrifugation at 15,000 × g for 30 s. The supernatant containing the cytoplasmic fraction was mixed with 1/4 volume of 4× loading buffer (62.5 mM Tris-HCl, pH 6.8, 10% glycerol, 2% SDS, 0.7 M β-mercaptoethanol, 0.05% bromphenol blue), denatured at 95°C for 10 min, and stored at −70°C for immunoblot analysis. Protein content of a small aliquot of the cytoplasmic fraction was determined using the DC Bradford assay (Bio-Rad). The nuclei were washed with CEB and centrifuged again at 15,000 × g for 30 s. The supernatant was aspirated, and the nuclei were incubated for 10 min on ice in nuclear extraction buffer (20 mM Tris-HCl, pH 8.0, 400 mM NaCl, 1.5 mM MgCl2, 1.5 mM EDTA, 1 mM dithiothreitol, 25% glycerol) with protease inhibitors. After brief centrifugation, the supernatants, containing the nuclear fraction, were either stored at −80°C until analysis by electrophoretic mobility shift assay or denatured and stored for immunoblot analysis as described above. Promoter-Reporter Constructs, Transfection, and Promoter-Reporter Assay—A region of the 5′ flank of the IL-8 gene (−1370 to +82) that included the transcription start site was synthesized by amplification of human genomic DNA (Promega, Madison, WI).
The amplification products were subcloned into pCR2.1 (Invitrogen, San Diego, CA), and the insert of a clone with a suitable orientation was excised with KpnI and XhoI restriction enzymes (Promega) and inserted upstream of the coding region of the firefly luciferase gene in pGL2-basic (Promega), generating the construct p1.5IL8wt-luc. The NF-κB and C/EBP response elements in p1.5IL8wt-luc were disrupted by site-directed mutagenesis using PCR and uracil-containing oligonucleotides as described (25, 26). The NF-κB response element was mutated from GTGGAATTTCC (−82 to −72) to GaatAATTTCC (27), generating p1.5IL8κB−. The C/EBP response element was mutated from GTTGCAAATC (−92 to −83) to GcTaCgAgTC (21), generating p1.5IL8C/EBP−. Mutations were confirmed by sequencing (University of North Carolina Automated DNA Sequencing Facility, Chapel Hill, NC). A κB-dependent promoter-reporter construct, pNF-κB-luc (Stratagene), was also used. It was composed of a 5× tandem repeat of the NF-κB RE of the mouse Igκ gene intronic enhancer cloned upstream of a TATA box and a firefly luciferase cDNA. A constitutively active SV40 promoter-β-galactosidase construct, pSV-β-galactosidase (Promega), was used to adjust for well-to-well variation in cell number and transfection efficiency. BEAS cells grown to 40-80% confluence in 24-well tissue culture dishes were co-transfected with 236 pg of one of the IL-8 promoter-luciferase vectors or pNF-κB-luc and 14 pg of pSV-β-galactosidase using 1.5 μg of DOTAP transfection reagent (Roche Molecular Biochemicals). 48 h after transfection, cultures were treated for 1 h with 50 μM sodium arsenite or vanadyl sulfate and cultured for an additional 7 h (arsenite) or 3 h (vanadium). Luciferase and β-galactosidase activity was determined using the Dual Light reporter gene assay system (Perkin-Elmer) and an AutoLumat LB953 luminometer (Berthold Analytical Instruments, Nashua, NH). Promoter activity was estimated as specific luciferase activity (luciferase counts/unit β-galactosidase counts). Infection with Adenovirus—NHBE cells grown to about 80% confluence were infected with Ad5IκBα (28) or a nonrecombinant control vector, Ad5CMV3, at a multiplicity of infection of 100 plaque-forming units/cell for 3-4 h. The infection mixture was aspirated, and the cells were incubated for another 24 h before stimulation with sodium arsenite or vanadyl sulfate. Immunoblot Analysis—Protein samples (50 μg) were separated by SDS-polyacrylamide gel electrophoresis on 14% Tris-glycine gels, followed by immunoblotting using specific rabbit antibodies to IκBα or p65 (both 1:1000, Santa Cruz Biotechnology, Santa Cruz, CA) for 1 h at room temperature. Antigen-antibody complexes were stained with horseradish peroxidase-conjugated goat anti-rabbit antibody (1:4000, Bio-Rad) and enhanced chemiluminescence (ECL) reagent and ECL film (both from Amersham Pharmacia Biotech). Immunoblot films were digitized, and the optical densities of specific antigen-antibody complexes were quantified as described above (see RT-PCR methods). Indirect Immunofluorescent Localization of Hemagglutinin-tagged IκBα(S32A,S36A)—BEAS-2B cultures that had been infected with Ad5IκBα (see above) or Ad5LacZ 24 h earlier were fixed for 5 min with 4% paraformaldehyde in CEB at room temperature, lysed for 2 min on ice with 0.2% Nonidet P-40 in CEB, washed once with CEB, fixed again for 20 min on ice, and finally blocked by incubation in 2% BSA/PBS on ice for 1 h.
The hemagglutinin-tagged IκBα(S32A,S36A) was localized by incubation overnight in 1 μg/ml mouse anti-hemagglutinin monoclonal antibody (Santa Cruz Biotechnology) diluted in 0.2% BSA/PBS, followed by a 45-min incubation in a 1:1000 dilution of ALEXA 488 goat anti-mouse secondary antibody (Molecular Probes, Eugene, OR) diluted in 0.2% BSA/PBS. Samples were washed with 2% BSA/PBS and photographed on a Zeiss Axiovert 10 fluorescence microscope using a standard fluorescein excitation and emission filter set. RESULTS Exposure to Sodium Arsenite or Vanadyl Sulfate Enhanced IL-8 Gene Expression in NHBE Cells—Exposure of primary human airway epithelial cells to noncytotoxic concentrations of sodium arsenite or vanadyl sulfate in vitro has been shown to enhance IL-8 expression (16, 23). These observations were confirmed and extended by estimating the concentration thresholds for AsIII- and VIV-induced IL-8 expression. Levels of IL-8 protein in supernatants of NHBE cells cultured in the absence or presence of various concentrations of AsIII and VIV for 24 h are shown in Fig. 1. NHBE cultures constitutively expressed IL-8, and this expression was augmented in a dose-dependent fashion by challenge with the metals. The threshold concentration for metal-induced IL-8 production was lower for VIV (12.5 μM) than for AsIII (25 μM). Likewise, VIV induced greater increases in IL-8 production compared with AsIII when the metals were used at the same concentrations. These data indicated that VIV was a stronger stimulant than AsIII. The same doses of iron, nickel, and copper sulfate did not evoke IL-8 expression (not shown). Thus the response to AsIII and VIV was independent of colloidal properties of metal salts and dependent upon metal species-specific interactions with cellular constituents. The AsIII- and VIV-induced IL-8 production by NHBE cultures was preceded by an increase in steady-state levels of IL-8 mRNA. As shown in Fig. 2A, both AsIII and VIV elevated IL-8 mRNA levels above the basal level within 2 h. Quantitative estimates of IL-8 mRNA abundance, using GAPDH mRNA levels to normalize between samples, showed that arsenite induced approximately a 2.4-fold increase and vanadium a 5-fold increase in steady-state IL-8 mRNA abundance (Fig. 2B). As in the case of IL-8 protein production, VIV showed greater potency than AsIII in inducing a response. The Sodium Arsenite- and Vanadyl Sulfate-induced IL-8 Expression in Airway Epithelial Cells Was NF-κB-dependent—The enhanced levels of IL-8 mRNA induced by AsIII and VIV could be mediated by enhanced IL-8 gene transcription. Because extracellular stimulus-dependent IL-8 gene transcription has been shown to be regulated in part by the transcription factor NF-κB (19-22), the role of NF-κB in the AsIII- and VIV-induced IL-8 expression was investigated. NF-κB activity was suppressed in NHBE cultures by overexpression of a dominant negative IκBα mutant, IκBα(S32A,S36A), in which serines 32 and 36 had been substituted with alanines. Overexpression of this mutant IκB can sequester NF-κB into IκBα(S32A,S36A)-NF-κB complexes that are unresponsive to numerous stimuli that mobilize NF-κB by activating IκB kinases that specifically phosphorylate serines 32 and 36. NHBE cultures were infected with Ad5IκBα, an adenoviral expression vector encoding hemagglutinin-tagged IκBα(S32A,S36A) (28), or with a nonrecombinant control vector (Ad5CMV3). Analysis of IκBα levels following infection by immunoblotting confirmed overexpression of IκBα (data not shown).
As expected, stimulation with AsIII or VIV up-regulated steady-state IL-8 mRNA levels in the control infected cultures (Fig. 3, Ad5-CMV3). In marked contrast, overexpression of the dominant negative IκBα depressed both the AsIII- and VIV-induced increases in steady-state IL-8 mRNA abundance to levels below those observed in Ad5CMV3-infected, unstimulated cultures (Fig. 3, Ad5-IκBα). Basal IL-8 mRNA levels were also suppressed, suggesting that NF-κB may regulate basal IL-8 expression in NHBE cultures. Arsenite also induced an increase in c-Jun mRNA levels, but this response was not affected by the dominant negative IκBα (Fig. 3A, c-jun), indicating that IκBα(S32A,S36A) overexpression selectively inhibited signal transduction in the NHBE cultures. Vanadyl Sulfate, but Not Sodium Arsenite, Induced IκBα Breakdown and p65 Nuclear Translocation and Increased Nuclear NF-κB DNA Binding Activity—To determine whether AsIII or VIV treatment mobilized cytoplasmic NF-κB by inducing degradation of IκBs, cytoplasmic fractions of AsIII- or VIV-stimulated cells were subjected to immunoblotting analysis using IκBα- and IκBβ-specific antibodies. Treatment with VIV induced a rapid (within 30 min) reduction in cytosolic levels of both IκBα and IκBβ protein in NHBE cultures. In contrast, arsenite exposure had no effect on IκBα or IκBβ levels (Fig. 4), indicating that VIV, but not AsIII, induced IκB degradation. To further test this inference, levels of the p65 subunit of NF-κB in cytoplasmic and nuclear fractions were estimated by immunoblot analysis. Fig. 5 (A and B) shows that basal levels of nuclear p65 were detected in unchallenged cultures and that VIV, but not AsIII, induced an increase in the ratio of nuclear to cytoplasmic p65 (n/c p65) compared with control ratios (Fig. 5C). Overexpression of IκBα(S32A,S36A) blocked the VIV-induced increase in n/c p65 but did not affect n/c p65 in controls or in AsIII-treated cultures (data not shown), suggesting that IκBα(S32A,S36A) did not alter the partitioning of NF-κB between cytoplasm and nucleus in untreated and AsIII-treated cultures but did prevent mobilization of NF-κB by VIV. As a final test of the apparent differential mobilization of cytoplasmic NF-κB by AsIII and VIV, the influence of challenge with the metals on the levels of nuclear NF-κB DNA binding activity was assessed. Electrophoretic mobility shift assays showed that the enhanced n/c p65 ratio observed in VIV-treated cultures coincided with enhanced NF-κB DNA binding activity and that AsIII treatment did not induce a comparable effect (Fig. 5C, compare lane V with lane C). The p65 subunit of NF-κB was a component of this DNA binding activity, because it could be supershifted with an anti-p65 antibody (data not shown). However, the steady-state levels of nuclear p65 in untreated and arsenite-treated cultures observed by immunoblotting (Fig. 5A) were not detected by EMSA (Fig. 5C, lanes C and As), demonstrating that the detection of p65 by EMSA depended upon factors in addition to the mere presence of p65 in nuclear extracts. Given that basal levels of NF-κB DNA binding activity were not detected, the EMSA did not rule out the possibility that arsenite mobilized some small quantity of cytoplasmic NF-κB that escaped detection. Because arsenite did not induce IκB breakdown (Fig. 4) or an increase in the n/c p65 ratio (Fig. 5, A and B), and the EMSA was not contradictory, the data were consistent with the notion that VIV, but not AsIII, mobilized cytoplasmic NF-κB.
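As an aside on the quantification underlying the nuclear/cytoplasmic comparison just described, the calculation reduces to a ratio of band optical densities expressed relative to the untreated control. The short Python sketch below illustrates that arithmetic; the band densities, sample names, and function are invented placeholders for illustration only and are not the densitometric data behind Fig. 5.

```python
# Illustrative sketch only: hypothetical band densities, not the data in Fig. 5.
# Computes nuclear/cytoplasmic (n/c) p65 ratios from densitometric values and
# expresses each ratio relative to the untreated control, as described above.

def nc_ratio(nuclear: float, cytoplasmic: float) -> float:
    """Ratio of nuclear to cytoplasmic p65 band optical density."""
    return nuclear / cytoplasmic

# Hypothetical optical densities (arbitrary units) for p65 immunoblot bands.
samples = {
    "control":  {"nuclear": 0.8, "cytoplasmic": 4.0},
    "arsenite": {"nuclear": 0.9, "cytoplasmic": 4.1},
    "vanadyl":  {"nuclear": 2.7, "cytoplasmic": 2.9},
}

control_ratio = nc_ratio(**samples["control"])
for treatment, od in samples.items():
    ratio = nc_ratio(**od)
    print(f"{treatment}: n/c p65 = {ratio:.2f}, "
          f"fold over control = {ratio / control_ratio:.2f}")
```

With values of this form, only the vanadyl-treated sample shows an appreciable increase in the n/c ratio over the control, which is the pattern the immunoblot analysis above describes.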
Both Vanadyl Sulfate and Sodium Arsenite Enhanced κB-dependent Transcription in Airway Epithelial Cells—The influence of AsIII and VIV on κB-dependent transcription in airway epithelial cell cultures was investigated by transient transfection assays using a κB-dependent promoter-reporter construct, pNF-κB-luc. Because of the limited number of primary cells available, these assays were performed using the BEAS-2B human bronchoepithelial cell line (29). Similar to the primary cell lines, the BEAS-2B cells constitutively expressed low levels of IL-8 mRNA and protein in culture that were significantly augmented by treatment with AsIII or VIV, and both basal and inducible expression were significantly reduced by IκBα(S32A,S36A) overexpression (data not shown). BEAS-2B cultures were transiently cotransfected with pNF-κB-luc and pSVβ-galactosidase. The pSVβ-galactosidase construct directed β-galactosidase expression under the control of a constitutively active viral promoter that did not respond to either AsIII or VIV treatment (data not shown). Consequently, β-galactosidase activity could be used as a normalizing factor to adjust for well-to-well variation in transfection efficiency and cell number as well as an index of cell viability. 48 h after transfection, cultures were left untreated or were treated with 50 μM metal for 1 h and then assayed for luciferase and β-galactosidase activity 7 h (AsIII) or 3 h (VIV) later. These conditions were based upon the kinetics of IL-8 protein expression, which showed that the response to VIV was rapid, with IL-8 increases in the medium evident 4 h after exposure, whereas increased IL-8 protein was not apparent until 8 h after exposure to arsenite (data not shown). Unstimulated cultures supported transcription of pNF-κB-luc as assessed by specific luciferase activity (Fig. 6A, Media −/−), and brief exposure to both AsIII and VIV enhanced transcription above its basal level (Fig. 6A). Overexpression of IκBα(S32A,S36A) inhibited the basal as well as the AsIII- and VIV-induced κB-dependent transcription (Fig. 6A). Inhibition of the VIV-induced κB-dependent activity was expected, because IκBα(S32A,S36A), a cytoplasmic inhibitor, would be expected to prevent NF-κB mobilization, and anti-p65 immunoblotting had shown that it prevented the VIV-induced increase in the n/c p65 ratio (not shown). The inhibition of basal and arsenite-induced κB-dependent transcription was unexpected, because these activities appeared to be independent of NF-κB mobilization. However, indirect immunofluorescent localization of the hemagglutinin-tagged IκBα(S32A,S36A) transgene product demonstrated that it was present in both the nucleus and cytoplasm (Fig. 6B). The presence of nuclear IκBα(S32A,S36A) and the inhibition of κB-dependent transcription were in accordance with reports that IκBα, when uncharged with NF-κB, is imported into the nucleus, where it can extract NF-κB from transcription initiation complexes and inhibit κB-dependent transcription (30-33). Thus, IκBα(S32A,S36A) overexpression suggested that airway epithelial cell cultures supported a basal level of κB-dependent transcription that was augmented by exposure to either AsIII or VIV.

(Legend to Fig. 5, continued.) Representative immunoblots are shown. B, densitometric analysis of the optical densities of the anti-p65 immunoreactive bands from at least three independent experiments is shown. The data are expressed as the mean increase in p65 levels relative to unchallenged controls ± S.E. C, a representative EMSA for NF-κB DNA binding activity in NHBE cell nuclear extracts prepared from untreated cultures (C) and cultures treated with 50 μM arsenite (As) or vanadyl sulfate (V) is shown. A radiolabeled double-stranded oligonucleotide corresponding to the κB RE of the MHC class II gene enhancer was used as probe (see Table I).
FIG. 6. Both vanadium and arsenite activated κB-dependent transcription of a 5×NF-κB-reporter construct in BEAS-2B cells. BEAS-2B cultures were transiently cotransfected with pNF-κB-luc and pSVβ-galactosidase (see "Experimental Procedures"). 24 h post-transfection, cultures were left uninfected (−/−) or were infected with Ad5CMV3 (+/−) or Ad5IκBα (−/+) at a multiplicity of infection of 100 plaque-forming units/cell for 3 h. 48 h post-transfection, cultures were challenged with 50 μM sodium arsenite or vanadyl sulfate for 1 h and harvested 7 or 3 h later, respectively. Specific luciferase activity in culture lysates was determined using β-galactosidase activity as a normalizing factor. The data are expressed as mean specific luciferase activity ± S.E., n = 5. A, both arsenite and vanadyl treatment enhanced κB-dependent transcription. The inducible as well as the basal activity was inhibited by overexpression of IκBα(S32A,S36A). B, indirect immunofluorescent localization of the hemagglutinin-tagged IκBα(S32A,S36A) transgene product showed that it was present in both the cytoplasm and nucleus of Ad5IκBα-infected BEAS-2B cells. Nuclear IκBα(S32A,S36A) could explain the inhibition of the basal and arsenite-induced activities, which were independent of NF-κB mobilization. C, hemagglutinin immunoreactivity was not detected in cultures infected with Ad5LacZ, an expression vector encoding untagged β-galactosidase. Bar, 25 μm.

Sodium Arsenite Induced κB-dependent IL-8 Promoter-Reporter Activity in Airway Epithelium—The IκBα(S32A,S36A)-mediated inhibition of IL-8 mRNA levels (Fig. 3B) suggested that arsenite may be stimulating κB-dependent IL-8 gene transcription. To investigate this possibility, the influence of arsenite on the activity of an IL-8 promoter-luciferase construct was examined. The IL-8 promoter-reporter construct was active in unchallenged cultures (Fig. 7, Media −/−), consistent with the observed basal expression of IL-8 mRNA and protein in cultures (Figs. 1-3). Moreover, basal transcriptional activity was suppressed by overexpression of IκBα(S32A,S36A) (Fig. 7, WT, compare Media −/+ to Media −/−), consistent with the observed depression in basal IL-8 mRNA levels following infection with Ad5IκBα (Fig. 3). Arsenite induced a significant increase in the transcriptional activity (Fig. 7, WT, compare 50 μM As −/− to Media −/−), whereas overexpression of IκBα(S32A,S36A) inhibited this response (Fig. 7, WT, compare 50 μM As −/+ to 50 μM As −/−). There was, however, a residual difference in the activity of the IL-8 promoter-reporter construct in unstimulated and arsenite-challenged cultures that had been infected with Ad5IκBα (Fig. 7, WT, compare Media −/+ to 50 μM As −/+). This suggested that only a portion of the arsenite-induced IL-8 promoter-reporter activity was κB-dependent. Infection with the nonrecombinant adenovirus did not affect the basal or inducible transcriptional activity (Fig. 7, WT, compare Media −/− to Media +/− and 50 μM As −/− to 50 μM As +/−). In addition, exposure to arsenite did not affect the activity of the promoterless parent luciferase vector of the IL-8 promoter-reporter construct or that of a constitutively active SV40 promoter-luciferase construct (not shown).
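The reporter normalization used throughout these assays (specific luciferase activity, i.e. luciferase counts per unit of β-galactosidase counts, with induction expressed relative to the media control) can be illustrated with a minimal Python sketch. All count values, well names, and the helper function below are hypothetical and chosen only for illustration; they are not the measurements underlying Figs. 6-8.

```python
# Minimal sketch of the reporter normalization described under
# "Experimental Procedures": specific luciferase activity is the ratio of
# luciferase counts to beta-galactosidase counts, and fold-induction is
# expressed relative to the untreated (media) control.
# All counts are hypothetical placeholders, not data from Figs. 6-8.

def specific_activity(luciferase_counts: float, beta_gal_counts: float) -> float:
    return luciferase_counts / beta_gal_counts

wells = {
    "media":    {"luc": 12_000, "bgal": 3_000},
    "arsenite": {"luc": 55_000, "bgal": 3_100},
    "vanadyl":  {"luc": 70_000, "bgal": 2_900},
}

baseline = specific_activity(wells["media"]["luc"], wells["media"]["bgal"])
for treatment, counts in wells.items():
    sa = specific_activity(counts["luc"], counts["bgal"])
    print(f"{treatment}: specific activity = {sa:.2f}, "
          f"fold over media = {sa / baseline:.2f}")
```

Because β-galactosidase expression is driven by a constitutive promoter that does not respond to the metals, dividing by the β-galactosidase signal corrects for well-to-well differences in transfection efficiency and cell number before treatments are compared.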
The specificity of the inhibition mediated by IκBα(S32A,S36A) overexpression was investigated by determining its effect on the activity of an IL-8 promoter-reporter construct in which the NF-κB response element had been inactivated by mutation. As expected, the dominant negative IκBα did not affect the κB-independent activity of the mutant IL-8 promoter (NF-κB− 50 μM As), indicating that IκBα(S32A,S36A) selectively inhibited κB-dependent transcription. These data supported the notion that the arsenite-induced increase in IL-8 expression was partially dependent upon enhanced, κB-dependent IL-8 gene transcription. The Basal and Arsenite-induced IL-8 Promoter-Reporter Activity Required the Compound C/EBP/NF-κB Response Element of the IL-8 Promoter—The κB dependence of the basal and arsenite-induced activity of the IL-8 promoter-reporter construct suggested by the suppressive effects of IκBα(S32A,S36A) overexpression (Fig. 7) was confirmed by mutational analysis of the IL-8 promoter. Several studies have indicated that inducible κB-dependent IL-8 gene transcription requires a compound C/EBP (nuclear factor-IL-6)/NF-κB response element located upstream (base pairs −94 to −72) of the transcription start site in the IL-8 gene (20-22, 34), although the C/EBP element may be dispensable in some instances (19). Consequently, the C/EBP and NF-κB elements of the compound RE in the IL-8 promoter-reporter construct were independently disrupted by site-directed mutagenesis, and the phenotype of these mutations was characterized by transient transfection of BEAS-2B cultures. The basal activity of the κB− construct was reduced about 20-fold compared with the wild type construct (Fig. 8, compare lanes C at both WT and NF-κB−). Reversion of the κB− construct to wild type restored basal activity to wild type levels (not shown), indicating that the reduction in basal activity was due solely to disruption of the NF-κB response element. Disruption of the C/EBP RE also significantly reduced the basal IL-8 promoter-reporter activity (Fig. 8, compare lanes C at both WT and C/EBP−), although to a lesser degree than mutation of the NF-κB RE. Thus, the basal activity of the IL-8 promoter-reporter construct was dependent upon both the NF-κB and C/EBP elements of the IL-8 promoter. Moreover, the full basal activity of the wild type construct was approximately 2.7 times greater than the sum of the activities of the κB− and C/EBP− constructs, suggesting that IL-8 promoter activity in unstimulated airway epithelium depends upon synergistic interactions between nuclear factors that bind to the C/EBP/NF-κB compound RE of the IL-8 promoter. Even though the overall activity of the κB− construct was greatly reduced, it retained some arsenite responsiveness (Fig. 8, compare lane C at NF-κB− to lane As at NF-κB−). Arsenite induced a 3.8 ± 1-fold increase in the activity of the wild type construct, whereas it induced a significantly smaller 2.4 ± 0.5-fold increase in the activity of the κB− construct.

(Legend to Fig. 7, continued.) Specific luciferase activity in culture lysates was determined using β-galactosidase activity as a normalizing factor. The data are expressed as the mean specific luciferase activity ± S.E., n = 5. IκBα(S32A,S36A) overexpression inhibited arsenite-induced wild type IL-8 promoter-reporter activity, whereas the activity of the NF-κB− construct was unaffected, suggesting that the IκBα transgene product specifically inhibited the function of the NF-κB RE of the IL-8 promoter.
FIG. 8. Both basal and arsenite-induced IL-8 promoter-reporter activity was dependent upon the compound C/EBP/NF-κB response element of the IL-8 gene. BEAS-2B cultures were transiently cotransfected with pSVβ-galactosidase and wild type (wt) or mutant IL-8 promoter-reporter constructs in which either the NF-κB RE (NF-κB−) or the C/EBP RE (C/EBP−) had been disrupted (see "Experimental Procedures"). 48 h post-transfection, cultures were left untreated (Media) or challenged with 50 μM sodium arsenite for 1 h (As) and harvested 7 h later. Specific luciferase activity in culture lysates was determined using β-galactosidase activity as a normalizing factor. The data are expressed as the mean specific luciferase activity ± S.E., n = 9.

The smaller induction of the κB− construct suggested that transcription factors in addition to NF-κB dominated the responsiveness of the IL-8 promoter to arsenite exposure. Mutation of the C/EBP RE suppressed the arsenite inducibility of the IL-8 promoter-reporter construct (Fig. 8, compare lane C at C/EBP− to lane As at C/EBP−). Thus the C/EBP RE had a greater influence on the arsenite-inducible activity than the NF-κB RE had. As in the case of basal activity, the arsenite-induced activity of the wild type construct was approximately 6.3 times greater than the sum of the activities of the κB− and C/EBP− constructs, suggesting synergistic interactions between transcription factors. Because the basal and arsenite-induced IL-8 promoter-reporter activity was dependent upon the compound C/EBP/NF-κB response element of the IL-8 promoter (Fig. 8), nuclear extracts of NHBE cultures were analyzed by EMSA for DNA binding activities specific for this sequence. There were detectable levels of a single DNA binding activity in the nuclei of unchallenged cultures that were enhanced following treatment with arsenite for 1 h (Fig. 9A, arrow). The increases were transient, returning to control levels after 4 h of exposure. The activity was specific for the sequence of the compound C/EBP/NF-κB RE, because competition with a 100-fold molar excess of unlabeled probe inhibited radiolabeled complex formation (Fig. 9B). Mutation of either half of the response element resulted in a significant reduction in DNA binding (Fig. 9C, compare wt with mNF-κB and mC/EBP). This basal and enhanced DNA binding activity for the C/EBP/NF-κB compound response element, and its sensitivity to mutation, correlated with the observed basal and arsenite-induced activation of the IL-8 promoter-reporter construct (Figs. 7 and 8) and its inhibition by disruption of the compound C/EBP/NF-κB response element (Fig. 8). Nuclear extracts were also examined using a radiolabeled probe corresponding to the solitary C/EBP response element of the IL-6 gene (Table I). A DNA binding activity was observed in unstimulated cultures, and this activity was enhanced following exposure to arsenite (Fig. 9D, arrow, lanes C and As). These data demonstrated the presence of a constitutive nuclear factor in airway epithelium that binds the C/EBP response element and whose activity was increased by arsenite exposure.
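The synergy argument made in this section, namely that the wild type promoter activity exceeds the sum of the κB− and C/EBP− single-mutant activities by roughly 2.7-fold under basal conditions and 6.3-fold after arsenite, reduces to a simple ratio. A minimal Python sketch is given below; the activity values are invented and chosen only to reproduce arithmetic of that form, and do not correspond to the data set underlying Fig. 8.

```python
# Minimal sketch of the synergy comparison described above: the ratio of
# wild type promoter-reporter activity to the sum of the activities of the
# single-mutant (kB- and C/EBP-) constructs. A ratio greater than 1 is
# consistent with a more-than-additive (cooperative) use of the two elements.
# Values are hypothetical placeholders, not the measurements in Fig. 8.

def synergy_ratio(wild_type: float, kb_mutant: float, cebp_mutant: float) -> float:
    return wild_type / (kb_mutant + cebp_mutant)

# Hypothetical specific luciferase activities (arbitrary units).
basal    = {"wild_type": 27.0,  "kb_mutant": 1.4, "cebp_mutant": 8.6}
arsenite = {"wild_type": 100.0, "kb_mutant": 3.3, "cebp_mutant": 12.5}

print(f"basal synergy ratio:    {synergy_ratio(**basal):.1f}")     # ~2.7
print(f"arsenite synergy ratio: {synergy_ratio(**arsenite):.1f}")  # ~6.3
```

With additive behaviour the ratio would be close to 1; values well above 1, as reported in the text, point to cooperative interactions between the factors occupying the compound C/EBP/NF-κB element.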
Thus, despite the κB dependence of arsenite-induced gene expression, there was no detectable mobilization of cytoplasmic NF-κB, suggesting that the response to arsenite was mediated by the low levels of constitutive nuclear NF-κB that were detected in the airway epithelial cell cultures. The presence of low levels of constitutive nuclear NF-κB was suggested by several pieces of evidence: (i) nuclear p65 was detected by immunoblotting of nuclear extracts of unchallenged cultures (Fig. 5); (ii) EMSA of nuclear extracts using a recognized functional compound C/EBP/NF-κB response element of the IL-8 gene revealed basal nuclear levels of a κB-dependent DNA binding activity (Fig. 9B); (iii) unstimulated cultures supported κB-dependent transcription from both 5xNF-κB-reporter (Fig. 6) and IL-8 promoter-reporter constructs (Figs. 7 and 8); and (iv) basal expression of IL-8 mRNA was κB-dependent (Fig. 3). The κB dependence of basal IL-8 mRNA expression and of the basal activities of the promoter-reporter constructs was suggested by their suppression following global inhibition of NF-κB function by overexpression of a dominant negative IκBα mutant (Figs. 3, 6, and 7). The mutant IκBα was present not only in the cytoplasm but also in the nucleus (Fig. 6B). This is in accordance with observations that IκBα is imported into the nucleus when uncharged with NF-κB (30), a likely situation when IκBα is overexpressed. Nuclear IκBα inhibits κB-dependent transcription (31-33), which was also observed here.

FIG. 9. Arsenite enhanced the levels of a nuclear DNA binding activity for the C/EBP/NF-κB RE of the IL-8 promoter in NHBE cells. A, nuclear extracts isolated from NHBE cultures were analyzed for DNA binding activities by EMSA using a radiolabeled probe corresponding to the compound C/EBP/NF-κB RE of the IL-8 promoter (see Table I). A nuclear DNA binding activity for the compound RE in unstimulated cultures (lane C, arrow) was transiently enhanced by a 1-h exposure to 50 μM sodium arsenite but returned to basal levels after 4 h of stimulation. B, competition with a 100-fold molar excess of wild type probe inhibited radiolabeled complex formation with nuclear factors isolated from unstimulated cultures (lane C) or cultures challenged with 50 μM arsenite for 1 h (lane As). C, nuclear extracts were examined by EMSA for their affinity for a radiolabeled wild type compound RE (wt) or mutant compound REs in which the NF-κB site (mNF-κB) or the C/EBP site (mC/EBP) had been disrupted (see Table I). Nuclear factors from both unstimulated cultures (lane C) and cultures challenged with 50 μM sodium arsenite (lane As) had substantially reduced affinity for the mutated compound REs (arrow). D, EMSA of nuclear extracts from untreated cultures (lane C) or cultures treated with 50 μM sodium arsenite for 1 h using a radiolabeled probe corresponding to the C/EBP response element of the IL-6 gene (see Table I) is shown. Arsenite exposure enhanced the basal activity of a nuclear factor that bound to the C/EBP RE (arrow).

The specificity of the inhibition for κB-dependent processes was suggested by a number of observations. Overexpression of IκBα(S32A,S36A) did not inhibit the As(III)-induced increase in c-Jun message (Fig. 3) or the κB-independent activity of the IL-8 promoter (Fig. 7). Additional studies indicate that overexpression of the mutant IκBα does not inhibit basal or phorbol myristate acetate-induced activator protein-1-dependent transcription but does inhibit phorbol myristate acetate-induced κB-dependent transcription (W. Reed, unpublished observations).
Thus, it is clear that IκBα(S32A,S36A) overexpression did not inhibit transcription in a nonspecific fashion. The data consistently supported the notion that there were low levels of constitutive nuclear NF-κB and basal IL-8 expression in airway epithelial cell cultures.

The origin of the low levels of constitutive nuclear NF-κB and IL-8 expression is not clear. Environmental stresses arising from artificial cell culture conditions have been shown to elicit IL-8 from cultured peripheral blood mononuclear cells, whereas freshly isolated (naïve) peripheral blood mononuclear cells do not express IL-8 (35). Thus, it is possible that stresses due to artificial culture conditions, in addition to As(III), are acting on the primary cell lines and that these stresses establish the low levels of constitutive nuclear NF-κB and IL-8 expression that are a prerequisite for the As(III)-induced κB-dependent transcription. Alternatively, the constitutive nuclear NF-κB and IL-8 expression may be a tissue characteristic of airway epithelium in vivo and in vitro. Recent clinical studies using RT-PCR have shown that IL-8 mRNA is expressed in naïve (uncultured) biopsies of airway epithelium (36). In addition, low levels of IL-8 are invariably found in the airway lining fluid of normal healthy individuals (36-40). Although there are other cell types in the airway that can produce IL-8, epithelial cells are by far the most abundant. The expression of IL-8 by airway epithelium in vivo may be related to the role the epithelium plays in host defense. The airway is in an unusual physiological situation: it is constantly exposed to respirable environmental pathogens and toxicants, protected only by a thin layer of fluid containing mucus and proteins. Many of the pathogens and toxicants to which airway epithelial cells are constantly exposed are capable of inducing translocation of NF-κB into the nucleus in vitro (41-45). Thus, the IL-8 expression detected in normal airway epithelium may be a consequence of chronic low levels of stress caused by environmental pathogens and toxicants. Alternatively, airway epithelium may constitutively express low levels of IL-8 that mediate heightened immune surveillance of the airways. Even though these possibilities cannot be distinguished at present, the primary cell lines appear to be a reasonable model of airway epithelium in vivo.

Constitutive nuclear NF-κB has been observed previously in mature B cells (46), activated monocytes and macrophages (47), neurons (48), vascular endothelial cells (49), and fibroblasts (11). These studies suggest that the proportion of NF-κB that is constitutively nuclear, as well as the subunit composition of nuclear NF-κB, varies with tissue type and with cellular differentiation and activation state. How constitutive nuclear levels of NF-κB are set remains unclear. In neurons it may be determined by an autocrine activation loop (48). B lymphocyte maturation is associated with decreased stability of IκBα as well as de novo expression of nuclear RelB-p52/p100 heterodimers, which are not efficiently inhibited by IκBα (50, 51). In certain tumor cell lines, elevated constitutive nuclear NF-κB is associated with decreased stability of IκBα (52, 53), whereas dysfunctional IκBα mutants have been observed in other tumor cells (54). Generalizing these observations leads to the suggestion that levels of constitutive nuclear NF-κB may also vary as a consequence of genetic polymorphisms in NF-κB or in the macromolecules that regulate NF-κB expression or activity.
The findings presented here may provide a framework for anticipating variation in the effects of arsenite exposure depending on tissue type, cellular differentiation or activation state, and possibly individual susceptibility.

Arsenite activated the κB-dependent transcriptional activity of 5xNF-κB-reporter and IL-8 promoter-reporter constructs in airway epithelium without increasing the low levels of constitutive nuclear NF-κB. This finding is consistent with a recent study in which enhanced κB-dependent transcription induced by Ras or Raf transformation of NIH 3T3 fibroblasts was mediated by constitutive nuclear NF-κB (11). In this case, transformation was shown to activate a chimeric transcription factor composed of the DNA-binding domain of the yeast Gal4 transcription factor and the C-terminal TA1 transactivation domain of p65. This suggested that transcriptional activation was due to functional activation of p65. Using the same strategy, the transcription-promoting activities of the TA1 and TA2 transactivation domains of p65 have also been shown to be responsive to tumor necrosis factor-α (12). Both of these studies suggested that activation does not depend upon translocation of the hybrid transcription factor into the nucleus but rather is a consequence of nuclear processes that are dependent upon activation of p38 and/or ERK1/2 MAP kinases. Arsenite has been shown to activate ERK1/2, JNK/SAPK, and p38 MAP kinases in a variety of tissues (55-60), including airway epithelial cells (16). Taken together, these studies present a plausible mechanism by which arsenite may activate κB-dependent gene transcription in the presence of low levels of constitutive nuclear NF-κB. Currently, we are addressing the role of MAP kinase activation in arsenite-induced κB-dependent transcription in airway epithelium.

The arsenite-induced activation of the IL-8 promoter-reporter construct was shown to depend upon the compound C/EBP/NF-κB response element of the IL-8 gene (Fig. 8). Previously, this compound RE was shown to be essential for the activation of IL-8 promoter-reporter constructs by phorbol esters, tumor necrosis factor-α, and IL-1β (20-22). Both p65 and the β isoform of the C/EBP family of transcription factors (C/EBPβ) (61) have been shown to bind this RE in vitro (21, 27) and to synergistically activate transcription dependent upon the C/EBP/NF-κB RE (21, 34). However, it is likely that other combinations of C/EBP and NF-κB isoforms may cooperate at this site, because numerous synergistic combinations of the six known isoforms of C/EBP with NF-κB isoforms have been suggested (61). In this study, analysis of nuclear extracts by EMSA indicated the presence of a DNA binding activity for the compound RE in unstimulated cultures whose activity was transiently enhanced by arsenite exposure (Fig. 9A). This activity correlated with the basal and arsenite-enhanced activity of the IL-8 promoter-reporter construct (Fig. 8). Additional analysis of nuclear extracts by EMSA indicated that a nuclear factor that binds a solitary C/EBP RE is present in unstimulated cultures and that its activity is enhanced by arsenite exposure (Fig. 9D). C/EBP isoforms can be phosphorylated on independent regulatory sites by cAMP-dependent protein kinase, calcium/calmodulin-dependent protein kinase, protein kinase C, and MAP kinase, and these modifications influence nuclear translocation and DNA binding activity (61, 62).
Based upon our data, we would predict that a C/EBP-like factor complexed with constitutive nuclear p65, both in functionally activated states, explains the arsenite-induced DNA binding activity for the C/EBP/NF-κB RE, as well as the dependence of arsenite-induced IL-8 promoter-reporter activity upon the compound RE. Further studies are necessary to identify the factor or factors binding to the C/EBP/NF-κB RE of the IL-8 promoter as well as their individual transactivation potentials in unstimulated and arsenite-treated airway epithelium.

In conclusion, the study presented here describes a distinct mechanism of enhanced κB-dependent gene expression in airway epithelium. In the absence of IκBα breakdown and mobilization of cytoplasmic NF-κB, exposure to arsenite increased κB-dependent transcription and gene expression, implying a potential role for basal levels of nuclear NF-κB in inducible gene transcription.
The Impact of Thyme and Oregano Essential Oils Dietary Supplementation on Broiler Health, Growth Performance, and Prevalence of Growth-Related Breast Muscle Abnormalities

Simple Summary
In recent years, there has been growing interest in the use of thyme and oregano essential oils in feed formulations to promote growth in chicken broilers. Thyme and oregano essential oils are considered promising ingredients to replace antibiotics as growth promoters. The aim of this study was to evaluate the impact of thyme and oregano essential oils on growth performance, broiler health, and the incidence of muscle abnormalities at different slaughter ages. This study showed that the addition of thyme and oregano essential oils, individually or in combination, significantly increased body weight compared to the control group. Thyme and oregano essential oils improved the feed conversion ratio, indicating higher meat production for the same feed intake (feed intake did not change according to our results). Muscle abnormalities increased with the addition of thyme and oregano essential oils to broiler diets, which could be due to the increase in the growth rate. In conclusion, the inclusion of thyme and oregano oils in broiler chicken feed resulted in an improvement in the growth performance of broiler chickens.

Abstract
The objective of this study was to investigate the effects of thyme and oregano essential oils (as growth promoters), individually and in combination, on the health, growth performance, and prevalence of muscle abnormalities in broiler chickens. Six hundred day-old Cobb 500 hybrid chickens were randomized into four dietary treatment groups with three replicates each. Chicks in the control group (C) received a basal diet, while the experimental treatment groups received basal diets containing 350 mg/kg of thyme oil (T1), 350 mg/kg of oregano oil (T2), and 350 mg/kg of thyme and oregano oil (T3). Growth performance parameters were evaluated at 14, 28, and 42 days. The broilers in treatments T1 and T2 had significantly higher body weights than the control group. The feed conversion ratio was the lowest in chicks that received oregano oil, followed by those fed thyme oil. The overall prevalence of growth-related breast muscle abnormalities (including white striping and white striping combined with wooden breast) in the groups receiving essential oils (T1, T2, and T3) was significantly higher than in the control group (C). The thyme and oregano oil diets showed no significant differences in antibody titers against Newcastle disease or interferon-γ (INF-γ) serum levels. In conclusion, thyme and oregano oils had a positive impact on the growth performance of broiler chickens but increased the incidence of growth-related breast muscle abnormalities.

Introduction
Health concerns and regulatory restrictions on the use of antibiotics have motivated researchers to evaluate several alternatives to antibiotics. It was found that the use of different combinations of additives (such as medium-chain fatty acids, short-chain fatty acids, oregano essential oil, and sweet basil essential oil) exhibited positive effects on the growth performance of broilers [1]. Extracts of medicinal herbs (aromatic herbs) have received increasing attention from both researchers and producers as potential alternatives to conventional antibiotic growth promoters in broiler rations [2].
The beneficial effects of these essential oils, as well as plant oils, are related to their suitable chemical properties and functional groups, whose mechanisms of action remain to be explained [3,4]. Thyme and oregano essential oils have been extensively studied as feed supplements in broiler rations. However, varying results have been reported on their effects on overall broiler production performance [1,5]. There is no agreement among previous studies about the effects of thyme or oregano essential oils on feed intake, body weight gain, and feed conversion in broilers when these oils were used separately [6-8]. Extracts of thyme (Thymus vulgaris) and oregano (Origanum vulgare L.) are rich in several functional compounds such as carvacrol, thymol, lutein, and zeaxanthin, which play an important role in broiler health and growth performance [8,9]. The inclusion of oregano essential oil in broiler feed exhibited a protective effect against necrotic enteritis (NE) caused by Clostridium perfringens [9,10]. Some studies reported positive effects on the performance parameters of broiler chicks [8,11,12], while other studies showed no effect on broiler performance parameters [13,14]. In contrast to these studies, others reported negative effects of supplemental thyme or oregano oils in rations on broiler growth [6,7,15]. The use of thyme with prebiotics, such as mannan-oligosaccharides, in the feed formulation showed positive effects on the growth performance of broilers [16]. A few reports showed positive effects on the meat characteristics of carcasses when essential oils were added to broiler rations [12,17]. These authors attributed the inconsistent results to differences in the doses of the essential oils used, environmental factors, the durations of the experiments, and the health status of the chicks used. Currently, poultry breeders and the meat industry are concerned about the occurrence of growth-related breast muscle abnormalities such as white striping (WS) and wooden breast (WB) [18]. In this context, several studies indicated that breast meat affected by these disorders had lower quality characteristics than normal breast meat [19-23]. Overall, the incidence rates of these abnormalities are alarming and appear to be unsustainable for the poultry industry [24]. It was found that the incidence of muscle abnormalities was higher in high-breast-yield hybrids than in standard-breast-yield hybrids [25]. Moreover, the incidence of muscle abnormalities was higher in males than in females [26]. Incidence rates varied between studies. It was found that the incidence of WS was about 12% [25], while other researchers found that the incidence of WS reached 50% [27]. Another study showed that the incidence of WS was 75% in high-breast-yield hybrids and 74% in standard-breast-yield hybrids [28]. Mudalal et al. [29] examined the effect of a natural herbal extract on the occurrence of muscle abnormalities such as WS and WB. The results showed that the herbal extract reduced the occurrence of WS and of WS combined with WB. In particular, Newcastle disease (ND) is considered one of the most serious diseases affecting broiler flocks worldwide, causing severe losses in the poultry sector [33]. Biosecurity and vaccination strategies are needed to control this disease [34]. The immunization strategy of ND vaccines and host protection can be enhanced by complementary approaches, such as the use of herbal extracts from medicinal natural products [35].
There is growing evidence that the coadministration of herbal extracts with the vaccine increases cytokine production and the antibody responses of immune cells [30]. To our knowledge, few studies have investigated the effects of thyme and oregano oils as a mixture on the health, growth performance, and prevalence of muscle abnormalities of broilers reared under commercial conditions. Therefore, the objective of this study was to examine the possible effects of thyme and oregano oils, and a combination of both oils, on the performance parameters, health status, and meat characteristics of broiler chicks as well as on the prevalence of muscle abnormalities from 1 day to 42 days of age.

Experimental Design
In this study, 600 one-day-old Cobb 500 hybrid broiler chicks were randomly divided into four groups of 150 chicks each, and each group was replicated three times. The chicks from the first treatment group received a basal ration (starter and grower) as a control group (C) (Table 1). The rations of the second treatment group (T1) were supplemented with 350 mg/kg of thyme essential oil. The rations of the third treatment group (T2) were supplemented with oregano essential oil at a concentration of 350 mg/kg. The rations of the fourth treatment group (T3) were supplemented with 350 mg/kg of thyme and oregano essential oils in equal proportions. In formulating each experimental ration, the essential oils were first mixed with the corresponding oil stock, and the mixture was then homogenized. The rations were mixed in two batches (the starters and the growers) and stored in airtight bags at room temperature for a short time before being fed to the chicks. The chicks were housed on deep litter (fresh wood shavings) in an open-sided broiler house. Commercial protocols were used to rear the experimental chicks. The broiler house temperature was manipulated and closely monitored to avoid fluctuations, starting at 32 °C on day 1 and decreasing by 2 °C every week thereafter. The chicks were exposed to 24 h of lighting for the first 4 days and then 23 h of lighting and 1 h of darkness until the termination of the experiment. Chicks had access to feed and water around the clock. Body weight and feed intake were determined on days 14, 28, and 42. Mortality was recorded daily. The feed conversion ratio was calculated as feed intake (g) per mean body weight (g) for each replicate of the treatment groups. The feed intake was calculated on a weekly basis, taking into account differences in feed weight. In addition, the weight of each broiler was recorded weekly.

Breast Weights
Seven broilers from each replicate were slaughtered at 42 days of age using a manual operation technique (n = 21/group). Breasts were weighed using a balance with a sensitivity of 0.01 g.

Assessment of Incidence of Growth-Related Breast Muscle Abnormalities
The incidence of growth-related breast muscle abnormalities was assessed at approximately 8 h postmortem. Muscle abnormalities were classified into three levels (normal, WS, and WB combined with WS) based on previously described criteria [27,36]. Breast fillets that exhibited no white striations or hardened areas were considered normal (N). Breast fillets that had white striations of varying thickness (thin to thick striations) were considered white-striped fillets (WS). Finally, breast fillets that had pale ridge-like bulges and diffuse hardened areas (namely WB) in combination with white striations were labeled as WS/WB.
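To make the feed conversion ratio formula and the incidence classification described above concrete, the following sketch tallies fillet scores and computes the FCR for one replicate. It is only an illustrative sketch: the function names and the example values are hypothetical and are not taken from the study.

```python
from collections import Counter

def feed_conversion_ratio(feed_intake_g, mean_body_weight_g):
    """FCR per replicate, as defined in the text: feed intake (g) / mean body weight (g).
    Lower values indicate more efficient conversion of feed into body weight."""
    return feed_intake_g / mean_body_weight_g

def abnormality_incidence(scores):
    """Percentage of fillets in each category, where each score is one of
    'N' (normal), 'WS' (white striping), or 'WS/WB' (white striping plus wooden breast)."""
    counts = Counter(scores)
    total = len(scores)
    return {category: 100.0 * n / total for category, n in counts.items()}

# Hypothetical example for one replicate (placeholder values, not study data):
print(feed_conversion_ratio(feed_intake_g=4200.0, mean_body_weight_g=2400.0))  # 1.75
print(abnormality_incidence(["N", "WS", "WS/WB", "N", "WS", "N", "N"]))
```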
The color traits (CIE L* = lightness, a* = redness, and b* = yellowness) of raw breast meat were measured in triplicate using a Chroma Meter CR-410 (Konica Minolta, Japan), and the skin-side surface of each fillet was considered the measuring point.

Newcastle Disease Vaccine Response
The freeze-dried live Newcastle disease (ND) vaccine (LaSota strain, SPF-origin vaccine, Biovac®, Cape Town, South Africa) was administered via drinking water when the chicks were 12 days old, and this was repeated when the chicks were 22 days old. Blood samples were collected during the 1st, 3rd, and 5th weeks from the wing vein (n = 24). Each blood sample was left to coagulate at room temperature and was then centrifuged at 3000 rpm for 5 min.

Hemagglutination Inhibition (HI)
The collected sera were subjected to the hemagglutination inhibition (HI) test, and the level of the anti-NDV antibody titer was determined. The HI tests were performed in microplates using two-fold dilutions of serum, 1% PBS-washed chicken red blood cells, and four hemagglutinating units of vaccinal LaSota NDV (Biovac®, Cape Town, South Africa), following the method of Allan and Gough [37]. Titers were expressed as log2 values of the highest dilution that caused inhibition of hemagglutination. All tested serum samples were pretreated at 56 °C for 30 min to inactivate nonspecific agglutinins.

ELISA Interferon Assay
The interferon concentration was determined by an immunoenzymatic assay (ELISA). At three time points (eight birds in each group at 7, 14, and 35 days), the serum level of interferon-γ (INF-γ) was determined using ELISA kits, following the manufacturer's instructions (Elabscience Co., Wuhan, China). Eight standards of 0, 15.6, 31.2, 62.5, 125, 250, 500, and 1000 pg/mL were added to the wells of the ELISA plate. Absorbance was measured at a wavelength of 450 nm. The interferon concentration was calculated using the standard curve.

Statistical Analysis
The effects of the thyme and oregano oils on growth performance, the feed conversion ratio, and the incidence of muscle abnormalities were assessed using an ANOVA (GLM procedure in SAS Statistical Analysis Software, version 9.1, 2002). Duncan's test was employed to separate means when statistical differences were present (p < 0.05). Pearson's correlation was used to test the relationships between pairs of continuous variables (i.e., the feed conversion ratio, carcass, and visceral organ variables).
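The two readouts described above (HI titers expressed as log2 of the highest inhibiting dilution, and IFN-γ concentrations read off the ELISA standard curve) can be illustrated with the short sketch below. It is a sketch under stated assumptions: the absorbance values are invented placeholders, and simple linear interpolation stands in for whatever curve fit the kit protocol specifies (often a four-parameter logistic).

```python
import numpy as np

# Standard concentrations (pg/mL) are those listed in the text; the absorbance
# readings are invented placeholders for illustration only.
standard_conc_pg_ml = np.array([0, 15.6, 31.2, 62.5, 125, 250, 500, 1000])
standard_a450 = np.array([0.05, 0.09, 0.14, 0.24, 0.43, 0.78, 1.40, 2.35])

def ifn_gamma_concentration(sample_a450):
    """Estimate IFN-γ concentration (pg/mL) from absorbance at 450 nm by
    interpolating along the standard curve."""
    return float(np.interp(sample_a450, standard_a450, standard_conc_pg_ml))

def hi_titer_log2(highest_inhibiting_dilution):
    """Express an HI titer as log2 of the highest serum dilution that still
    inhibits hemagglutination, e.g. a 1:64 dilution gives log2(64) = 6."""
    return float(np.log2(highest_inhibiting_dilution))

print(ifn_gamma_concentration(0.60))  # interpolated between the 125 and 250 pg/mL standards
print(hi_titer_log2(64))              # 6.0
```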
Results
The effects of thyme and oregano oils on the performance indices of broilers at different slaughter ages are shown in Table 2. Our results showed that the inclusion of thyme and/or oregano oils in feed did not exhibit any effect on feed intake. In general, there were significant differences in body weight between treatments at the different slaughter ages (14, 28, and 42 days). Birds in treatment T2 (with oregano) exhibited the highest body weights and the lowest feed conversion ratios at the different slaughter ages when compared to the other groups. There were no significant differences between treatments C and T3 in these parameters. The birds in treatment T1 had higher body weights and lower feed conversion ratios than the birds of the control group (C) at the different slaughter ages (14, 28, and 42 d).

Data are reported as means (M, n = 150/group) and standard deviations (STD); different letters in the same row indicate significant differences (p < 0.05). Treatment C: basal ration as a control group; treatment T1: basal ration supplemented with 350 mg/kg of thyme essential oil; treatment T2: basal ration supplemented with 350 mg/kg of oregano essential oil; treatment T3: basal ration supplemented with 350 mg/kg of thyme and oregano essential oils in equal proportions.

The incidences of growth-related breast muscle abnormalities (normal, WS, and WS combined with the WB condition) in all treatments are shown in Figure 1. The results showed that the control treatment had the highest percentage of normal cases (70%) compared with the other treatments. Treatments T1 and T3 had quite similar percentages of normal cases, while treatment T2 had 42.9% normal cases, which was higher than treatments T1 and T3. The incidence of WS was the lowest (5%) in the control treatment compared to the other treatments. Treatment T2 exhibited the highest percentage of WS cases (33.3%), while treatments T1 and T3 had 30% and 9.1% WS cases, respectively. For WS occurring together with the WB abnormality, treatment T3 had the most cases (59.1%) compared with the other treatments. The control treatment and treatment T2 had quite similar percentages of this combined condition.

Figure 1. Percentages of normal, white striping, and white striping plus wooden breast meat abnormalities of broilers supplemented with herb extract (HE) (n = breasts/group). The basal diet (control, C) was similar to regular broiler starter diets, while the experimental treatments of the T1, T2, and T3 birds included the same diet as in the control group, but they were supplemented with herb extracts: thyme essential oil at 350 mg/kg (T1), oregano essential oil at 350 mg/kg (T2), and equal proportions of thyme and oregano essential oils at 350 mg/kg (T3).

The effects of thyme and oregano extracts on color traits (L*, a*, and b*), pH, and breast weight are shown in Table 3. In general, there were no significant differences between treatments in the color indices (L*, a*, and b*), pH, and breast weight. The effects of muscle abnormalities (normal, WS, and WS combined with WB) on the color traits (L*, a*, and b*), pH, and breast weight are shown in Table 4. Muscle abnormalities did not affect the color traits (L*, a*, and b*). Meat affected by the WB abnormality exhibited a higher breast weight (213.22 vs. 188.97 g, p < 0.05) in comparison to normal meat, while white-striped meat exhibited intermediate values.

Table 3. The effects of the inclusion of thyme and oregano extracts on color traits (L*, a*, and b*), pH, and breast weight for raw chicken breast. Data are reported as means (M, n = 21/group) and standard deviations (STD). Different letters in the same row indicate significant differences (p < 0.05). Treatment C: basal ration as a control group; treatment T1: basal ration supplemented with 350 mg/kg of thyme essential oil; treatment T2: basal ration supplemented with 350 mg/kg of oregano essential oil; treatment T3: basal ration supplemented with 350 mg/kg of thyme and oregano essential oils in equal proportions. * The color traits (CIE L* = lightness, a* = redness, and b* = yellowness).
Table 4. The effects of muscle abnormalities (normal, white striping (WS), and white striping combined with the wooden breast condition (WS and WB)) on the color traits (L*, a*, and b*), pH, and breast weight. Data are reported as means (M, n = 21/group) and standard deviations (STD). The levels of white striping (WS) were classified as normal, moderate, or severe according to Kuttappan et al. [27]. * The color traits (CIE L* = lightness, a* = redness, and b* = yellowness).

Dietary supplementation with thyme or oregano essential oils, alone or as a mixture, had no significant (p < 0.05) positive effects on the broilers' humoral or cellular immune reactions to NDV treatments (Figure 2). No significant effects were found for the treatments on the weekly and accumulative NDV antibody titers and IFN-γ levels of chicks during the experimental period (Figure 2).
Discussion
Thyme or oregano essential oils, when used as growth promoters, have been reported to improve body weight gain and feed conversion when added to broiler rations [7,8,17]. In the present study, essential oils of thyme or oregano at a dosage of 350 mg/kg significantly increased the average body weight at 14, 28, and 42 days of age. A similar trend was observed in the feed conversion ratio. The results of the present study were in disagreement with the results of some previous studies that revealed that thyme or oregano oils did not affect body weight gain and feed efficiency [8,11,17]. It has also been suggested that dietary supplementation with oregano or thyme oils may exert positive effects on growth parameters when relatively high doses are used [38]. However, other studies concluded that incremental doses of 100 to 1000 mg/kg or 300 to 1200 mg/kg of oregano oils did not always improve production performance [6,15,39]. These contrasting observations could be explained by differences in the concentrations and chemical compositions of the oils used, the lengths of the experimental periods, the numbers of chicks used, and management factors. In the present study, the variation in these factors was minimized to some extent so that differences in the performance parameters could be attributed only to the supplemental oils. Saleh et al. [39] reported that the feed intake of chicks that received thyme essential oil (100 to 200 mg/kg) was higher than that of chicks in a control treatment. These findings were in disagreement with the results of the present study. In contrast, Wade et al. [8] reported that supplementing broiler diets with varying amounts of thyme oil had no effect on feed intake.

Regarding the effect of herbal extract addition on the incidence of growth-related breast muscle abnormalities, our results were partially in agreement with previous studies. Mudalal et al. [29] found that the incidence of WS was 19.5-39.2% and that of WS combined with WB was in the range of 67-76.5% at a slaughtering age of 41 days. Previous studies showed that the incidence of WS was 25.7-32.3% [20]. Cruz et al. [40] found that the prevalence of WS and WB abnormalities ranged from 32.3 to 89.2%. Mudalal [41] found that the total prevalence of WS in turkey breast was 61.3%. Mudalal and Zaazaa [23] showed that the incidence of muscle abnormalities was highly affected by slaughter age, being about 45% at a slaughter age of 34 days and 100% at a slaughter age of 48 days. The overall results showed that the addition of thyme and oregano extracts to broiler diets increased the incidence of these abnormalities. The overall prevalence of muscle abnormalities (WS and WS combined with WB) was higher in the treated groups (T1, T2, and T3) than in the control group (65%, 57.1%, and 68.2%, respectively, vs. 30%). These results may be attributed to an increase in the growth rate and the live weight of broilers at slaughter (Table 2). Previous studies have shown that an increase in growth rate was associated with a higher prevalence of muscle abnormalities [19,28,42,43]. The addition of thyme and oregano extracts exhibited no effects on the color traits (L*, a*, and b*), pH, and breast weight.
The incidence of muscle abnormalities (normal, WS, and WS combined with WB) had no effect on the color traits (L*, a*, and b*) and pH but affected breast weight. Zambonelli et al. [44] found that WS combined with WB did not affect the a* and b* values, while the L* values were lower than in normal meat. Another study found that WS, alone or combined with the WB abnormality, did not affect the color traits (L*, a*, and b*) [45]. Even though there was an apparent increase in pH due to the presence of muscle abnormalities, it was not significant. In this context, Tijare et al. [20] found that the WS abnormality did not affect pH values, while Soglia et al. [19] showed that meat affected by both abnormalities (WS and WB) exhibited a higher pH than normal meat. Meat affected by the WB abnormality exhibited a higher breast weight (213.2 vs. 189.0 g, p < 0.05) compared to normal meat, while white-striped meat exhibited intermediate values. Similar results were obtained by Tasoniero et al. [46], where WB exhibited significantly higher breast weight than normal meat while white-striped meat exhibited moderate values. In addition, Malila et al. [47] found that meat affected by the WB abnormality had a higher breast weight than normal meat.

Dietary supplementation with thyme or oregano essential oils, alone or in a mixture, had no significant (p < 0.05) positive effects on the humoral or cellular immune reactions of broilers to NDV in the treated groups (Figure 2). No significant effects of the treatments were detected in the weekly and cumulative NDV antibody titers and IFN-γ levels of the chicks during the experimental period. Our results were also in agreement with previous studies [30,48] that used thyme in the feed and drinking water of broilers and found no significant differences in antibody titers against NDV compared to the control group. In contrast, our results contradict previous reports in which thyme essential oil supplementation (135 mg/kg of feed) increased the humoral immune response against NDV compared to the control group [39]. Since thyme has been reported to have antibacterial and antifungal activities, and the main components of thyme are thymol and carvacrol, which are reported to have strong antioxidant properties, an increase in the immune responses of the chicks was expected [48,49]. Although the dietary treatments had no significant effects on the immune-related parameters measured in this study, no deleterious effects were observed from the addition of thyme, oregano, or their combination to the diet. This could be due to the quantity of the additives used in our study. The results also showed that broilers whose diets were supplemented with thyme, oregano, or a mixture of both showed no change in the production of the proinflammatory cytokine IFN-γ compared with the control group. No significant differences were observed in the relative expression levels of IFN-γ. This is consistent with results published by Hassan and Awad [50], who reported that thyme supplementation did not alter relative messenger RNA (mRNA) transcription levels for IFN-γ and other cytokines. Moreover, thymol inhibited the phosphorylation of NF-κB and decreased the production of IL-6, TNF-α, iNOS, and COX-2 in LPS-stimulated mouse epithelial cells [51]. These findings support the previously mentioned results and indicate that the anti-inflammatory effects of thyme and oregano make them suitable for use in animal production.
On the other hand, it was found that oregano oil combined with a Macleaya cordata oral solution improved serum immunological characteristics [52].

Conclusions
In conclusion, the addition of oregano oil was the most effective in improving the growth performance of broiler chickens and was better than thyme oil. The inclusion of thyme and oregano essential oils together had no positive impact on broiler health. While the essential oils of oregano and thyme improved the feed conversion ratio, the incidence of muscle abnormalities increased, and this may be attributed to the increase in the growth rate. Therefore, it is important to consider the impact of these muscle abnormalities on meat quality when developing any growth promotion program.
Acne-Associated Syndromes

Introduction: Acne, a chronic inflammatory disorder of the pilosebaceous unit, is characterized by comedones, pustules, papules, nodules, cysts, and scars. It affects nearly 85% of adolescents. High sebaceous gland secretion, follicular hyperproliferation, increased androgen effects, Propionibacterium acnes colonization, and inflammation are the major pathogenic factors. Systemic diseases and syndromes associated with acne are less commonly described, and these syndromes may therefore not be recognized easily. Research methods: Acne-associated syndromes illustrate the nature of these diseases and are indicative of the pathogenesis of acne. Polycystic ovary syndrome (PCOS), synovitis-acne-pustulosis-hyperostosis-osteitis (SAPHO), hyperandrogenism-insulin resistance-acanthosis nigricans (HAIR-AN), pyogenic arthritis-pyoderma gangrenosum-acne (PAPA), pyoderma gangrenosum-acne vulgaris-hidradenitis suppurativa-ankylosing spondylitis (PASS), pyoderma gangrenosum-acne conglobata-hidradenitis suppurativa (PASH), seborrhea-acne-hirsutism-androgenic alopecia (SAHA), and Apert syndromes are well-known acne-associated syndromes. Endocrine disorders (insulin resistance, obesity, hyperandrogenism, etc.) are commonly seen in these syndromes, and many unknown factors in their formation remain to be investigated. Conclusion—key results: If we are aware of the components of these syndromes, we will recognize them easily during dermatological examination. Knowledge of the clinical manifestations and molecular mechanisms of these syndromes will help us to understand acne pathogenesis. When acne pathogenesis is explained clearly, new treatment modalities will be developed.

Introduction
Acne vulgaris is a common chronic inflammatory disease of the pilosebaceous unit. Acne is typically thought of as an adolescent disease, but it is now also seen in adulthood [1]. Although there are many studies on the pathogenesis of acne, not all pathogenetic factors are well understood. Four main pathways are described in acne pathogenesis: increased sebum production, abnormal keratinization, Propionibacterium acnes colonization, and inflammation [2]. Acne is a multifactorial disease and is sometimes associated with systemic disorders. Acne may be a potential skin marker of internal disease or a component of syndromes such as PCOS, HAIR-AN, PAPA, PASH, SAPHO, SAHA, and Apert. Knowledge of the pathogenesis of these syndromes will help us to understand acne pathogenesis [3,4]. Herein, we aim to describe acne-associated syndromes and their clinical and pathogenetic features.

Polycystic ovary syndrome (PCOS)
Polycystic ovary syndrome (PCOS) is an ovarian disease characterized by hyperandrogenism, chronic anovulation, and polycystic ovaries. It is one of the most common endocrinopathies, affecting 4-12% of women of reproductive age [5]. Its etiology is unknown. The syndrome was first described by Drs. Irving Stein and Michael Leventhal in 1935, who discovered polycystic ovaries during surgery in seven patients with anovulation and named the disorder Stein-Leventhal syndrome [6]. Diagnostic criteria were developed later. The National Institutes of Health (NIH), the Rotterdam, and the Androgen Excess Society criteria are used for the diagnosis of PCOS [7,8]. Nowadays, the NIH criteria are the preferred diagnostic criteria in adolescents [9]. The NIH criteria are the presence of oligo-ovulation or anovulation and biochemical or clinical signs of hyperandrogenism [9].
Before making the diagnosis using these criteria, conditions that result in anovulation and hyperandrogenism, such as congenital adrenal hyperplasia, Cushing's syndrome, and androgen-secreting tumors, must be excluded [9]. Thyroid disease and hyperprolactinemia must also be excluded. Although the pathogenesis of PCOS is not well understood, hormonal pathways are thought to contribute to this process. The pulse frequency of gonadotropin-releasing hormone (GnRH) increases in PCOS and stimulates the anterior pituitary gland to secrete luteinizing hormone (LH) more than follicle-stimulating hormone (FSH), resulting in an increased ratio of LH to FSH. The increase in LH relative to FSH stimulates the ovarian theca cells to synthesize androstenedione. Consequently, net ovarian androgen production increases [10]. Insulin also has a role in the pathogenesis of PCOS: like LH, it stimulates the ovarian theca cells to secrete androgens, and it inhibits hepatic production of sex hormone binding globulin (SHBG). As a result, free and total androgen levels increase. Obesity is another component of this syndrome and contributes to the pathogenesis via insulin resistance [10]. Insulin resistance and hyperandrogenism are responsible for the cutaneous involvement of PCOS. Insulin resistance causes acanthosis nigricans (AN), and hyperandrogenism leads to hirsutism, acne, oily skin, seborrhea, and hair loss (androgenic alopecia). It is estimated that 72-82% of women with PCOS have cutaneous signs [11]. PCOS also has multisystemic effects and is associated with numerous conditions, including infertility, endometrial cancer, obesity, depression, sleep-disordered breathing/obstructive sleep apnea (OSA), nonalcoholic fatty liver disease (NAFLD) and nonalcoholic steatohepatitis (NASH), type 2 diabetes mellitus (T2DM), and cardiovascular diseases [9]. Patients with PCOS are usually first seen by a dermatologist. Because of the above comorbidities, dermatologists should know the diagnosis and clinical findings of PCOS very well. Cutaneous findings in women with PCOS are related to abnormalities of the pilosebaceous unit. Increased androgen levels drive abnormal development of the pilosebaceous unit, leading to hirsutism, acne, or androgenic alopecia. Androgenic alopecia is fortunately rare among women with PCOS because of its complex etiology. Acanthosis nigricans (AN) and skin tags are other skin disorders seen in PCOS [12]. Although acne, hirsutism, and AN are the most common skin manifestations, hirsutism and AN are the most sensitive for PCOS diagnosis [13]. In previous reports, the prevalence of acne in PCOS ranged from 15 to 95%. While hirsutism affects 5-15% of women in the general population, previous reports showed a hirsutism prevalence between 8.1 and 77.5% in women with PCOS. In patients with PCOS, hirsutism is also a sign of metabolic abnormalities [13]. AN is also associated with substantial metabolic dysfunction (increased insulin resistance, glucose intolerance, body mass index, and dyslipidemia). Therefore, the presence of AN and hirsutism should alert us to a patient's potential metabolic risk factors. A broad range in the prevalence of AN among women with PCOS (2.5% in the United Kingdom, 5.2% in Turkey, and 17.2% in China) has been observed [13]. Although patients with PCOS frequently present to dermatologists with cutaneous concerns, it is important to educate them about the metabolic and fertility-related implications of PCOS. Pharmacologic treatment is not always necessary for all patients with PCOS.
Mild forms of hirsutism, acne, and androgenetic alopecia may be controlled with standard nonhormonal agents and lifestyle changes (weight loss, diet, exercise, glucose control) [14].

Hyperandrogenism-insulin resistance-acanthosis nigricans syndrome (HAIR-AN syndrome)
Hyperandrogenism-insulin resistance-acanthosis nigricans syndrome (HAIR-AN syndrome) is a subphenotype of polycystic ovary syndrome. It is clinically characterized by acne, obesity, hirsutism, and acanthosis nigricans, and it usually manifests in early adolescence. Although the etiology is not well known, genetic and environmental factors and obesity are thought to contribute to HAIR-AN syndrome. The primary abnormality in patients with HAIR-AN syndrome is thought to be severe insulin resistance. In these patients, insulin levels increase and stimulate the overproduction of androgens in the ovaries [15]. Patients may also present with amenorrhea and signs of virilization. Although adrenal function is normal, the levels of insulin, testosterone, and androstenedione may be high. Adolescents with HAIR-AN syndrome usually have normal levels of luteinizing hormone (LH) and follicle-stimulating hormone (FSH), but the ratio of LH to FSH is usually more than one [16]. For HAIR-AN syndrome diagnosis and follow-up, in addition to the history and physical examination, a complete blood cell count, thyroid screen, serum prolactin, glucose and insulin measurements, serum electrolytes, and a lipid panel should be evaluated [17], because Cushing's syndrome, Hashimoto's thyroiditis, Graves' disease, and congenital adrenal hyperplasia may accompany HAIR-AN syndrome [4]. To investigate the origin of hyperandrogenism, total testosterone, 17-hydroxyprogesterone, dehydroepiandrosterone sulfate (DHEAS), luteinizing and follicle-stimulating hormone levels, and morning cortisol after a low dose of dexamethasone should be analyzed [17]. A high level of DHEAS should raise the possibility of an androgen-producing tumor of the adrenal gland. Increased 17-hydroxyprogesterone is usually seen in congenital adrenal hyperplasia. Patients with Cushing's syndrome have elevated levels of circulating androgens and abnormal secretion of cortisol; in these patients, increased basal levels of cortisol and failure of suppression after dexamethasone administration are observed. In polycystic ovarian syndrome, the LH/FSH ratio is usually >2.5 (although this may also be seen in normal patients). Although there is no underlying virilizing tumor (ovarian or adrenal) in HAIR-AN syndrome, the plasma testosterone level is high [19]. For treatment, lifestyle changes such as exercise and a lower-calorie diet rich in fiber and protein are advised. Metformin can also be prescribed. Other choices are estrogen-progestin (combined oral contraceptive) pills and antiandrogens [20].

SAHA syndrome (seborrhea-acne-hirsutism-androgenic alopecia)
In the pathogenesis of SAHA, increased androgen synthesis in the adrenals and ovaries, disturbed peripheral metabolism of androgens, or induction of the metabolism and activation of androgens in the skin may play an important role [4]. Approximately 20% of patients have all four major signs of SAHA syndrome. Seborrhea is observed in all patients, androgenetic alopecia is seen in 21% of patients, acne in 10%, and hirsutism in 6% [4]. The management of the disorder resembles that of HAIR-AN and PCOS [22].

Apert syndrome
Apert syndrome is a rare congenital acrocephalosyndactyly syndrome (acrocephalosyndactyly type I).
It was first described in 1906 by the French physician Eugène Apert and is characterized by premature fusion of the craniofacial sutures and syndactyly of the hands and feet [23]. The syndrome is a rare congenital disease inherited in an autosomal dominant fashion. It is caused by mutation of the FGFR2 gene, and approximately 98% of all patients have specific missense mutations of FGFR2 [24]. FGFR2 is responsible for the development of the embryonic skeleton, epithelial structures, and connective tissue [25]. Craniofacial deformities, hypertelorism, dental and palatal abnormalities, proptosis of the eyes, various skeletal deformities, hydrocephalus, abnormal brain development, mental retardation, blindness, and cardiovascular, urogenital, gastrointestinal, respiratory, and skin abnormalities can be seen in patients with Apert syndrome [25]. The dermatologic associations of Apert syndrome were not known when it was first described in 1906 [26]. In 1970, Solomon first reported the dermatologic manifestation of this disorder, namely severe acneiform lesions [27]. The other skin manifestations are hyperhidrosis, hypopigmentation, and hyperkeratosis of the plantar surfaces [27]. In this syndrome, the pathogenesis of acne is not well understood, but increased fibroblast growth factor receptor-2 (FGFR2) signaling is suspected to be of pathophysiological importance in acne vulgaris, because it was reported that, in skin cultures, keratinocyte-derived interleukin-1α stimulated fibroblasts to secrete FGF7, which in turn stimulated FGFR2b-mediated keratinocyte proliferation [28]. In acne pathogenesis, increased levels of interleukin-1α (IL-1α) are seen in comedones, and this important pro-inflammatory cytokine stimulates the keratinocyte proliferation, hyperkeratinization, and decreased desquamation involved in comedo formation [28]. Patients with Apert syndrome usually have oily skin. Moderate to severe acne, occurring in childhood or early adolescence and affecting the forearms, an unusual site for conventional acne, is often observed [23,28]. Comedones, papules, pustules, furunculoid cysts, and scars, as seen in acne conglobata, can be observed. The acne is very difficult to treat and often unresponsive to therapy, but it responds well to oral isotretinoin [23,26,28]. Although isotretinoin is a good treatment option, it has serious adverse effects such as teratogenicity, hepatic dysfunction, elevation of cholesterol and triglyceride levels, visual changes, pseudotumor cerebri, musculoskeletal pain, hyperostosis, mucocutaneous dryness, and dryness of the eyes. Therefore, the risk/benefit ratio of treating acne lesions with isotretinoin in children with Apert syndrome should be evaluated carefully [26].

SAPHO syndrome
Propionibacterium acnes is thought to play a pathogenic role in SAPHO (synovitis-acne-pustulosis-hyperostosis-osteitis) syndrome. Microbial determinants of P. acnes stimulate the innate immune response through TLR-2, which induces inflammatory cytokines via the NF-κB and mitogen-activated protein kinase pathways [31]. Recent reports also showed that SAPHO syndrome shares features with other autoinflammatory diseases; IL-1β, TNF-α, and IL-8 have been suggested to be important in its pathogenesis [32]. SAPHO syndrome is rare and is therefore often misdiagnosed, because it shares clinical features with infectious discitis, seronegative spondyloarthritis (SpA), and psoriatic arthritis (PsA), and because the skin and bone lesions may appear at different times [34]. Standard diagnostic criteria, like the etiology, remain controversial.
The commonly used diagnostic criteria of SAPHO syndrome are: (i) local bone pain with gradual onset; (ii) multifocal lesions, especially in the long tubular bones and spine; (iii) failure to culture an infectious microorganism; (iv) a protracted course over several years with exacerbations and improvement with anti-inflammatory drugs; and (v) neutrophilic skin eruptions, mostly palmoplantar pustulosis (PPP), nonpalmoplantar pustulosis, psoriasis vulgaris, or severe acne [30] (Figure 2). The skin manifestations are those of different neutrophilic dermatoses. PPP, including pustular psoriasis, is the most common skin involvement, representing 50-75% of all dermatologic manifestations; psoriasis vulgaris may also be seen among the dermatologic manifestations of SAPHO. One fourth of patients have acne conglobata or acne fulminans, with men clearly predominating [34]. Hidradenitis suppurativa may also be seen. PG, Sweet's syndrome, and Sneddon-Wilkinson disease are other, rarer cutaneous manifestations. Inflammatory bowel disease (IBD), especially Crohn's disease, may also accompany SAPHO syndrome [34]. Most authors agree that SAPHO can be classified within the spectrum of autoinflammatory diseases. Therefore, intra-articular or systemic corticosteroids and disease-modifying antirheumatic drugs (DMARDs) such as methotrexate, sulfasalazine, cyclosporine, and leflunomide are treatment options, but there are no randomized controlled clinical trials of these treatments. Doxycycline can also be considered as a treatment option for P. acnes eradication [33]. Infliximab (INFX), an anti-TNF-α monoclonal antibody, has been shown to be effective for the treatment of SAPHO patients, especially those unresponsive or refractory to conventional drugs. In recent case series, remarkable improvement of bone, joint, and skin inflammatory manifestations was observed with infliximab therapy. In resistant SAPHO cases, the IL-1 antagonist anakinra can also be tried [35,36].

PAPA syndrome
PAPA syndrome (pyogenic arthritis, pyoderma gangrenosum, and acne) is an autosomal dominant autoinflammatory disorder. PAPA syndrome was first described as a hereditary disease in 1997 [37]. A PSTPIP1/CD2BP1 mutation on chromosome 15q causes an increased binding affinity to pyrin and induces the assembly of inflammasomes [37]. Caspase-1, a protease, is activated and converts inactive pro-interleukin (IL)-1β to its active isoform, IL-1β. Overproduction of IL-1β induces the release of pro-inflammatory cytokines and chemokines, which are responsible for the recruitment and activation of neutrophils, leading to neutrophil-mediated inflammation [38]. PAPA syndrome usually presents with severe, self-limiting pyogenic arthritis in early childhood. Pyoderma gangrenosum (Figure 3) and nodular-cystic acne may be seen around puberty and in adulthood [37]. The pathergy test is positive in PAPA syndrome and clinically appears as pustule formation followed by ulceration [34]. There is no specific diagnostic test, but acute-phase reactants and the white blood cell count may be elevated because of systemic inflammation [34]. Arthritis usually responds well to therapy with corticosteroids. Pyoderma gangrenosum is treated with topical or systemic immunosuppressant drugs. In addition, a few reports showed that anti-TNF-α and anti-IL-1 agents are effective in the treatment [39].

PASH syndrome
The clinical triad of pyoderma gangrenosum, acne conglobata, and hidradenitis suppurativa was described as PASH syndrome by Braun-Falco et al.
in 2012. PASH syndrome clinically resembles PAPA syndrome (pyogenic arthritis, pyoderma gangrenosum, and acne), but arthritis is not observed [40]. The molecular basis of PASH syndrome is not well understood; it is accepted as an autoinflammatory disease. PAPA (pyogenic arthritis, pyoderma gangrenosum, and acne), PAPASH (pyogenic arthritis plus PASH), and PASH (pyoderma gangrenosum, acne conglobata, and hidradenitis suppurativa) syndromes share similar clinical components. The absence of pathogenic mutations in the PSTPIP1 gene may be used to distinguish this syndrome from other AIDs [41,42]. Although no causative genetic mutation has been clearly identified in PASH syndrome, mutations in the NCSTN gene, NOD (nucleotide-binding oligomerization domain) genes, immunoproteasome genes, and MEFV have been reported in some case series [41,42]. Systemic corticosteroids, traditional antineutrophilic agents (dapsone and colchicine), and metformin may be tried first, but the standard therapy options for autoinflammatory diseases are usually not sufficient to treat patients with PAPA or PASH syndrome, whereas anti-TNF and anti-IL-1 therapies are promising new treatment options for drug-resistant cases [43].
Dubai: An Urbanism Shaped for Global Tourism

The urban transformation experience of Dubai presents an interesting model of dealing with globalization and benefiting from its flows of people, capital, and information. Although the city does not have a rich urban heritage or natural attractions compared to other cities in the region, it managed to construct an urban structure that captured a relatively significant portion of global tourism for its local context. In this paper the author argues that Dubai achieved this by constructing a series of what the author calls "places of people flows": projects that have the capacity of triggering people flows to the city. These places are categorized into: 1) places of urban image, or spectacular projects that contribute to the quality of the city's urban image; 2) places of linkage that connect the city to the global domain; and 3) places of agglomeration that host the flows of people coming to the city. The paper analyzes the role of these places of people flows in transforming Dubai from a peripheral city into one of the most attractive tourism destinations in the Middle East.
Introduction

The revolution in communication, information, and transportation technologies has created what David Harvey refers to as time-space compression: it has accelerated the experience of time and shrunk the significance of distance [1]. This has triggered unprecedented flows of people, capital, and information across the globe. Nearly one billion people travel internationally every year. Billions of dollars are transferred across the globe daily. Information and knowledge move at the speed of light. These new patterns of movement are referred to as global flows. They contribute to the emergence of transnational networks, relations, and interdependencies. This paper focuses on developments that target flows of people and, more specifically, tourists. Tourism has become one of the largest economic sectors in the world today. Its export earnings were estimated at $1.4 trillion in 2013, compared to $475 billion in 2000. In the same period, the number of international tourist arrivals jumped from 674 million to 1.087 billion. The Middle East also witnessed a substantial increase in tourist arrivals during the same period: the number of tourists jumped from 24 million in 2000 to 52 million in 2013 (World Tourism Organization). It could be argued that Dubai has contributed significantly to this growth. During the last ten years, the city was able to increase its share of tourist arrivals nearly fourfold. According to the Dubai Tourism Authority, hotels in the city hosted 11 million tourists in 2013, making it the number one tourism hub in the Middle East.

Dubai in the New World Order

Dubai, more than any other city in the Middle East, managed to benefit from globalization and its flows. The city transformed itself into a major tourism hub by constructing a series of projects which primarily aim to attract global flows of people. These projects are referred to here as 'places of people flows' [2]: projects that have the capacity of attracting, facilitating, and hosting flows of people to the city. These places are crucial for any city aiming to become part of the new world system. They play a significant role in connecting the city to the global domain. As noted by Smith and Timberlake [3], "the world system is constituted, on one level, by a vast network of locales that are tied together by multitude of direct and indirect exchanges". These ties and networks are the routes of the flows of people, capital, and information. On the scale of global flows of people, these ties include modes of transportation that facilitate the movement of people and hubs that have the capacity of hosting them. The scale and rate of people flows across the globe have increased dramatically because of the revolution in modes of transportation, especially air travel. According to the International Air Transport Association, the number of passengers who travelled by air in 2012 was 2.98 billion. This huge figure indicates the intensity of human flows between cities on the domestic and international scales. Human flows from one place to another require both modes of transportation and nodes of agglomeration. Places such as airports, seaports, highways, and train stations facilitate mobility. They contribute to what Janelle [4] describes as space-time convergence, or the diminishing time needed to connect two places due to the advancement of transportation technologies.
The capacity of airports is becoming one of the major indicators of the status of a city in the global system. They perform as hubs that connect the local context to the global domain and are becoming crucial urban components for cities aiming to attract flows of people. Hubs of agglomeration of human flows are other essential components for globalizing cities. Hotels, resort areas, and tourist attractions are examples of these hubs; they determine the scale of human flows to a city, and places of tourist agglomeration such as hotels, resorts, museums, and other attractions are therefore another indicator of that scale. No doubt information technology has created what Urry [5] calls "virtual and imaginative travel" through the internet, radio, and TV, though there is no evidence that virtual and imaginative travel is replacing corporeal travel. Tourism is currently one of the largest sectors of the global economy. Cities that do not have natural or urban attractions tend to create attractions and facilities to encourage global tourism. Invented tourist attractions are emerging in many globalizing cities, in particular in the Middle East. These places tend to trigger flows of people to their urban context. They are gradually becoming commodities that generate wealth for their cities.

Dubai and Places of People Flows

During the last decade, Dubai managed to construct an urban structure that attracted global attention and triggered massive flows of people to its local context. The city became one of the top tourist destinations in the Middle East. More than any other city in the Middle East, Dubai managed to deal successfully with globalization and its flows of people, capital, and information. It could be argued that the city was built for global flows. Dubai has relied mainly on the production of a series of places which can be classified into the following three categories.

Places of urban image and fascination

Dubai was introduced to the world through its urban image. The concept of creating iconic architecture designed by celebrity architects has dominated urban development not only in Dubai but also in many other globalizing cities during the last decade. The quest for impressive urban images, or what Charles Jencks refers to as the "Bilbao effect," became a major driving force shaping urban change in cities seeking to upgrade their world city status [6]. Priority has been given to projects that have the capacity of attracting global attention and seducing global flows [7]. A major portion of real estate investment has been directed to the production of spectacular buildings and urban settings. Dubai has embraced this approach to the extreme. The city has invested extensively in developing unique projects that made it known around the world. It all started with Burj Al Arab, one of the most luxurious hotels in the world, which opened in 2000. This project introduced Dubai to the world. It also jump-started a trend of development in the city that primarily focused on constructing urban spectacles. In less than a decade, Dubai managed to attract enormous global attention. The city is now perceived as the capital of extravagance, luxury, and spectacle. Dubai's pursuit of a spectacular urban image is not actually a new phenomenon; however, the competition among cities to occupy the top ranks of the "world cities rankings" has intensified this quest.
Cities no longer rely on fancy brochures and charming postcards portraying polished places in order to promote themselves. The internet and satellite channels have exposed cities and their urban realities. It is becoming extremely important for cities to construct a presentable image to display around the world. Attracting global attention is becoming easier for cities that can afford the construction of spectacles; for others, it is becoming significantly more difficult to veil urban deterioration and backwardness. Dubai is one of the cities that managed to construct an attractive urban image. The process of development in the city focused mainly on creating a spectacle, an urban structure that makes the city known around the world. It was a process that featured a commodification of urbanism. The excessive emphasis on branding and the promotion of Dubai as an urban spectacle have overridden social and environmental considerations. As argued by Guy Debord [8], "the spectacle is the moment when the commodity has attained the total occupation of social life. Not only is the relation to the commodity visible but it is all one sees: the world one sees is its world." Saunders [9] notes that: "Spectacle is the primary manifestation of the commodification or commercialization of design: design that is intended to seduce consumers will likely be more or less spectacular, more or less a matter of flashy, stimulating, quickly experienced gratification, more or less essentially like a television ad. The stimulation that leads to 'Wow'!" Dubai has been seeking this 'WOW' effect in almost every major development during the last decade. The city has been determined to impress the world with every project it builds. Burj Al Arab, the most luxurious hotel in the world; Burj Khalifa, the tallest building on earth; and the Palm Islands, the largest man-made islands on the planet, are all examples of projects seeking the WOW effect. As noted by Davis [10], the vision of the ruler of Dubai was simply that 'everything must be 'world class', by which he means Number One in the Guinness Book of Records'. Iconic projects that could create this image were given priority. Dubai as a place lacks the historic charisma that distinguishes other famous cities like Rome and Barcelona. It has neither the political influence of New York and London nor the cultural importance of Paris. Accordingly, in order to make the city famous, the idea was simply to build spectacles. Each of these spectacles has a distinct theme or story that in most cases has no relation to the local context. New developments were the tallest, largest, most dramatic, or most luxurious in the world. Hotels such as Burj Al Arab became major tourism attractions; people even pay an entrance fee to get into its reception. The same phenomenon can be observed at Atlantis, The Palm, which attracts far more visitors than guests. The hotel is a replica of Atlantis in the Bahamas; it was built on the artificial island of Palm Jumeirah and offers underwater suites. The cost of its opening ceremony was estimated at $20 million. The Armani Hotel is another example of a spectacular destination in the city. It is the first of a new chain of Giorgio Armani hotels. The hotel is located in Burj Khalifa, the tallest building in the world, and was exclusively designed by Armani designers. Many of Dubai's spectacular hotels, Burj Al Arab for example, were not built to make quick profits.
With a cost that exceeded half a billion dollars and a minimum room rate of $2,000 per night, the place is not financially feasible. The main objective was simply to attract attention and create a spectacular image of the city that makes it recognizable across the globe. Mega shopping malls are another example of spectacular places in Dubai. The city has been investing intensely in creating mega malls that are the largest not only in the region but in the whole world. Dubai Mall, with 9,000,000 ft² of retail space designed to host 1,200 stores, is one of the largest malls in the world. It marked the largest mall opening in history, with 600 retailers. The mall is located next to Burj Khalifa, the tallest building on earth, and attracted 30 million visitors in its first year. It includes a 10,000,000-litre aquarium with 33,000 marine animals on display. Ibn Battuta Mall is another example of a spectacular mall. It is named after the medieval traveller and explorer Ibn Battuta and has six main sections, each replicating the architecture of a region he visited: Chinese, Egyptian, Persian, Tunisian, Andalusian, and Indian themed sections. Mercato Mall is another example of a themed mall in Dubai. The place replicates Italian Tuscan architecture, and the developer states with pride that Mercato Mall is the first themed mall in the Middle East. To a great extent, these projects and many other spectacles managed to make Dubai famous. The city attracted global attention more than any other place in the Gulf region. It is now competing with countries such as Egypt and Morocco in the number of tourists it attracts every year. The city now hosts more transnational corporations than any other city in the Middle East. Dubai is becoming a model for places seeking a top world city status. In almost a decade, this young city managed to construct an image that placed its name among the world's famous cities.

Places of linkage

In order for global flows of people to reach a city, there should be places that facilitate their movement between the global domain and the local context. Airports, seaports, and train stations are examples of these places. In the literature on globalization and urbanism, much emphasis is given to the capacity of cities to connect to global society both physically and digitally. Global accessibility and linkages are among the main measures that identify top world cities [11]. Dubai has recognized the importance of creating linkages with the global domain both physically and digitally. The city has invested intensively in constructing one of the most advanced information and communication infrastructures. It also developed media and internet cities in order to connect its local context to the global domain. This has contributed to the promotion of the city around the world. Through these digital networks, the city managed to market its new, spectacular urban image and attract global attention. This exposure was supported by a series of spectacular events, such as hosting a tennis match between world champions on the helicopter pad of Burj Al Arab. On the physical level, Dubai invested in constructing one of the largest airports in the world. In 2011, Dubai International Airport served 51 million passengers on 326,341 flights, making it the fourth busiest airport in the world in terms of international passengers. The airport's current capacity is 62 million passengers. Over 150 airlines operate out of Dubai International Airport.
The airport's capacity is expected to reach 90 million passengers in 2018 and will be expanded again to serve 98.5 million passengers in 2020. Once fully completed, it will be the largest airport in the world, with a passenger capacity of 120 million. This huge number of passengers, compared to the small population of the city, reflects the massive flows of people to and from the city. Dubai International Airport is currently one of the major transit hubs in the world. Its duty-free shops, with their fancy daily prizes such as Ferrari cars and Rolex watches, make it one of the most popular transit airports. Jebel Ali Port and Port Rashid are other examples of places that facilitate the movement of people to Dubai. Although these ports were mainly developed to serve flows of goods and capital, they remain major access points to the city. Port Rashid was built in 1972 by Sheikh Rashid Al Maktoum, and this modern port attracted much trade to the city. It was followed by Jebel Ali Port, which started operation in 1977, and the Dubai World Trade Centre, a thirty-nine-story building that opened in 1979. When built, Jebel Ali Port was one of the largest ports in the region. Dubai has also invested in developing an advanced highway network that connects it with neighbouring cities in the UAE and the region. The city is currently a major tourism destination for residents of Saudi Arabia, Qatar, and Oman due to its proximity and unique urban quality. All these places of linkage made Dubai a major hub of people flows in the Middle East. In nearly a decade, the city managed to increase its share of tourists fivefold, making it one of the top tourism cities in the Middle East. Dubai is one of the most globally connected cities in the world today, both physically and digitally. This contributes to its capacity to capture part of the global flows of people.

Places of agglomeration

In order for Dubai to ensure continuous flows of people to its urban context, it was important for the city to keep directing investment into the development of places that can host these flows. In nearly a decade, Dubai established a series of places that can absorb the continuously increasing number of visitors to the city. This process of urban transformation started with the construction of the Intercontinental Hotel to serve Jebel Ali Port and the World Trade Centre in the early 1980s. This was the first world-class hotel in the city. It was followed by a group of projects that primarily aimed to serve the growing number of city visitors (Figures 1-4). In 1988, the number of hotels in Dubai reached 48, and it then jumped to 223 in 1995. The number of hotel rooms was 4,764 in 1988 and reached 12,727 in 1995. The period between 2000 and 2010 witnessed unprecedented expansion in hotel capacity: the number of hotel rooms nearly tripled during that period, reaching 51,115 rooms offered by 382 hotels in 2010. Although the occupancy rate dropped from 80.5% in 2008 to 70% in 2010 due to the world economic crisis, the number of hotels in the city increased by 41 during the same period. This reflects the city's intention to keep expanding its capacity for hosting tourists. The most recent statistics published by the Government of Dubai indicate that by the end of 2012, Dubai had 399 hotels and 200 hotel apartment establishments. During that year, nearly 40% of the hotel nights in the city were spent in five-star hotels, compared to 7.5% and 8% in one- and two-star hotels, respectively.
This is attributed to the nature of the tourists who visit the city. Except for Asians and Africans, tourists tend to stay in five-star hotels more than in other categories. For example, 75% of European tourists and 63% of tourists from Gulf Cooperation Council countries preferred to stay in either five- or four-star hotels (data collected from the Dubai Statistical Yearbooks). They also tend to spend more hotel nights in these categories than in others: 80% of the hotel nights spent by European tourists in Dubai were hosted by four- and five-star hotels (Dubai Statistical Yearbook 2012). Hotel apartments also host a significant portion of tourists in Dubai. The number of apartments is estimated at 23,069 units distributed over 200 establishments. In 2012, the number of guests who stayed in these apartments reached 2.13 million, spending 11.44 million nights, compared to 7.8 million hotel guests who spent 26 million nights. These apartments usually serve visitors staying for more than four days or coming with a large family or group. They also serve corporations that continuously host foreign experts and professionals for limited periods of time. Nearly 56% of hotel apartment guests are Asians and Europeans (Dubai Statistical Yearbook 2012). The continuously growing tourism market in Dubai demands a population of foreign labour and professionals to construct, operate, and serve tourism facilities. It is worth noting that expatriates make up the majority of the city's population. Ethnic enclaves that host their agglomerations have gradually emerged and become crucial components of the urban fabric of the city. In the case of Dubai, these enclaves are not yet as established as the Chinatowns and Koreatowns of many American cities; however, some distinct urban qualities, signs, and symbols can be traced in these settings. The presence of these enclaves triggers further expatriate flows to the city. They provide a haven for newcomers and, more specifically, for cheap labour from South East Asia who neither master the local language nor are familiar with the new lifestyle. In these enclaves immigrants can find "middleman minorities" who can help them settle and find a job [12]. In Dubai, expatriates' enclaves are not limited to low-income labour. Gated communities and residential towers hosting talented professionals and executive elites are other examples of these places. These urban typologies consider the preferences and lifestyles of foreigners and tend to segregate them from locals' neighbourhoods. Such enclaves allow their residents to enjoy a lifestyle that might not be socially acceptable outside the gates. These places are crucial for the agglomeration of expatriates; they are, as described by Featherstone and Lash [13-15], a 'global creation of locality.'

Conclusion

During the last decade, Dubai managed to transform itself into one of the major tourism hubs in the Middle East. The city managed to benefit from the new global order and its flows of people, capital, and information. It has relied mainly on creating an exciting urban experience in order to attract global tourism to its local context. Dubai focused on developing three types of projects which have triggered enormous flows of people to the city. This process started with the construction of a spectacular urban image which attracted global attention. Projects such as Burj Al Arab, Burj Khalifa, and the Palm and World Islands managed to make Dubai famous around the world.
This was associated with the development of physical and digital linkages with the global domain. The information and communication technology infrastructure and networks in Dubai are among the most advanced in the world, and Dubai International Airport and Jebel Ali Seaport are among the largest air and sea ports on the globe. These places managed to link the city to the global system. Dubai has also invested in the development of places that have the capacity to host agglomerations of people flows in its urban context: the number and capacity of hotels, resorts, and hotel apartments in the city have increased dramatically during the last decade. All these projects managed to make Dubai a major tourism attraction in the Middle East. The urban experience of Dubai during the last decade presents an interesting model of how cities deal with globalization. The city managed to attract massive global flows of people to its local context by constructing a series of places which can be described as places of people flows. There is much to be learned from Dubai's experience and its approach to dealing with the new world order.