Comparative structural dynamic analysis of GTPases

GTPases regulate a multitude of essential cellular processes ranging from movement and division to differentiation and neuronal activity. These ubiquitous enzymes operate by hydrolyzing GTP to GDP, with associated conformational changes that modulate affinity for family-specific binding partners. There are three major GTPase superfamilies: Ras-like GTPases, heterotrimeric G proteins and protein-synthesizing GTPases. Although they contain similar nucleotide-binding sites, the detailed mechanisms by which these structurally and functionally diverse superfamilies operate remain unclear. Here we compare and contrast the structural dynamic mechanisms of each superfamily using extensive molecular dynamics (MD) simulations and subsequent network analysis approaches. In particular, dissection of the cross-correlations of atomic displacements in both the GTP- and GDP-bound states of Ras, transducin and elongation factor EF-Tu reveals analogous dynamic features. This includes similar dynamic communities and subdomain structures (termed lobes). For all three proteins the GTP-bound state has stronger couplings between equivalent lobes. Network analysis further identifies common and family-specific residues mediating the state-specific coupling of distal functional sites. Mutational simulations demonstrate how disrupting these couplings leads to distal dynamic effects at the nucleotide-binding site of each family. Collectively, our studies extend current understanding of GTPase allosteric mechanisms and highlight previously unappreciated similarities across functionally diverse families.

Introduction

and the C-terminal membrane anchoring lobe (lobe 2) [13,14]. Several allosteric sites were identified in lobe 2 or between lobes, including L3 (the loop between β2 and β3), L7 (the loop between α3 and β5), and α5. Importantly, α5 is the major membrane-binding site and has been related to nucleotide-modulated Ras/membrane association [15]. In addition, binding of small molecules at L7 has been reported to affect the ordering of SI and SII [16]. Intriguingly, recent studies of Gα have revealed nucleotide-associated conformational changes and bilobal substructures in the catalytic domain largely resembling those in Ras [17,18]. The allosteric role of lobe 2, which contains the major binding interface to receptors, has also been well established for Gα [18-27]. Furthermore, comparison between G proteins and translational factors via sequence and structural analysis indicates a conserved molecular mechanism of GTP hydrolysis and nucleotide exchange, and cognate mutations of key residues in the nucleotide-binding regions showed similar functional effects among these systems [2,6,7,12]. Collectively, these consistent findings from separate studies support the common allosteric mechanism hypothesis of GTPases and underscore the current lack of a detailed residue-wise comparison of structural dynamics among the different GTPase superfamilies. In this study, we compare and contrast the nucleotide-associated conformational dynamics of H-Ras (the H isoform of Ras), Gαt (the transducin α subunit) and EF-Tu (elongation factor thermo-unstable), and describe how these dynamics can be altered by single point mutations in both common and family-specific ways.
This entails the application of an updated PCA of crystallographic structures, multiple long (80 ns) MD simulations, and a recently developed network analysis approach for residue cross-correlations [18]. In particular, we identify highly conserved nucleotide-dependent correlation patterns across GTPase families: the active GTP-bound state displays stronger correlations both within lobe 1 and between lobes, exhibiting an overall "dynamical tightening" consistent with a previous study of Gα alone [18]. Detailed inspection of the residue-level correlation networks, together with mutational MD simulations, reveals several common key residues that are potentially important for mediating inter-lobe communication. Point mutations of these residues substantially disrupt the couplings around the nucleotide-binding regions in Ras, Gαt and EF-Tu. In addition, with the same network comparison analysis, we identify Gαt- and EF-Tu-specific key residues. Mutations of these residues significantly disrupt the couplings in Gαt and EF-Tu but have little or no effect in Ras. Our results are largely consistent with findings from experimental mutagenesis, with a number of dynamics-disrupting mutants shown to have altered activities in either Ras or Gα. Our new predictions are promising targets for future experimental testing.

Principal component analysis (PCA) of Ras, Gαt/i and EF-Tu crystallographic structures reveals functionally distinct conformations

Previous PCA of 41 Ras crystallographic structures revealed distinct GDP, GTP and intermediate mutant conformations [13]. Updating this analysis to include the 121 currently available crystallographic structures (S1 Table) reveals consistent results, but with two additional conformations now evident (Fig 2A). In addition to GDP (green in Fig 2A), GTP (red), and mutant forms, GEF-bound nucleotide-free (purple) and so-called 'state 1' forms (orange) are now also apparent. In the GEF-bound form, the SI region is displaced in a distinct manner, 12Å away from the nucleotide-binding site, coincident with the insertion of a GEF helix into the PL-SI cleft. The state 1 GTP-bound form was first observed via NMR, and high-resolution crystal structures were later solved [28-30]. In contrast to the canonical GTP-bound conformation (red), the state 1 form (orange) lacks interaction between the two switches and the γ-phosphate of GTP, resulting in a moderate 7Å displacement of SI away from its more closed GTP conformation. The first two PCs capture more than 75% of the total mean-square displacement of all 121 Ras structures. Residue contributions from SI and SII dominate PC1 and PC2 (Fig 2D). The height of each bar in Fig 2D displays the relative contribution of each residue to a given PC. PC1 mainly describes the opening and closing of SI: more open in GEF-bound and state 1 forms, and more closed in nucleotide-bound structures. PC1 also captures a smaller-scale displacement of L8 (the loop between β5 and α4), which resides 5Å closer to the nucleotide-binding pocket in the GEF-bound structures than in the GTP-bound structure set. PC2 depicts SII displacements and clearly separates GTP- from GDP-bound forms (red and green, respectively). As expected, the lack of a γ-phosphate in the GDP state releases SII from the nucleotide, whereas in the GTP form SII is fixed by the hydrogen bond of the backbone amide of G60 with the γ-phosphate oxygen atom.
This is also seen in the state 1 form, where the hydrogen bond is disrupted and SII is moderately displaced from the nucleotide (4Å on average relative to the canonical GTP group structures). PCA of 53 available Gαt/i structures (S2 Table), described recently [18], revealed three major conformational groups: GTP (red in Fig 2B), GDP (green) and GDI (GDP dissociation inhibitor; blue) bound forms. The first two PCs capture over 65% of the total variance of Cα atom positions in all structures. The dominant motions along PC1 and PC2 are the concerted displacements of SI, SII and SIII in the nucleotide-binding region, as well as a relatively small-scale rotation of the helical domain with respect to RasD (Fig 2E). PC1 separates GDI-bound from non-GDI-bound forms. In GDI-bound structures the GDI interacts with both the HD and the cleft between SII and SIII of the Ras-like domain, increasing the distance between SII and SIII. Similar to Ras, PC2 of Gαt/i clearly distinguishes the GTP- and GDP-bound forms, where again the unique γ-phosphate (or the equivalent atom in GTP analogs) coordinates SI and SII. In addition, SIII is displaced closer to the nucleotide, effectively closing the nucleotide-binding pocket. PCA of 23 available full-length EF-Tu structures reveals distinct GTP and GDP conformations (S3 Table). PC1 alone captures nearly 95% of the total structural variance of Cα atom positions (Fig 2C). It mainly describes the dramatic conformational transition in SI as well as the large rotation of the two β-barrel domains D2 and D3 (Fig 2F). In the GTP-bound form, the C-terminal part of SI is coordinated to the γ-phosphate and Mg2+ ion, forming a small helix near SII. Meanwhile, D2 and D3 are close to RasD and create a narrow cleft with SI, serving as the binding site for tRNA [31]. In the GDP-bound form, the C-terminal helix in SI unwinds and forms a β-hairpin, protruding towards D2 and D3 [32]. The highly conserved residue T62 (T35 in Ras) of EF-Tu moves more than 10Å away from its position in the GTP form and loses interaction with the Mg2+ ion. In addition, D3 rotates towards SI and D2 moves away from the Ras-like domain. In contrast to PC1, PC2 captures only a very small portion (3.59%) of the structural variance in EF-Tu (Fig 2F). The major conformational change along PC2 is a small-scale rotation of D2 and D3 with respect to RasD in the GTP form. PCA of Ras, Gαt/i and EF-Tu demonstrates that the binding of different nucleotides and protein partners can lead to a rearrangement of global conformations in a consistent manner. In particular, within RasD, these three families display conserved nucleotide-dependent conformational distributions with major contributions from the switch regions. In the GTP-bound form of these proteins, SI and SII are associated with the nucleotide through interactions with the γ-phosphate. Despite these similarities, critical questions about their functional dynamics remain unanswered: How does nucleotide turnover lead to allosteric regulation of distinct partner protein-binding events? To what extent are the structural dynamics of these proteins similar beyond the switch region displacements evident in accumulated crystal structures? How do distal disease-associated mutations affect the functional dynamics of each family, and are there commonalities across families? In the next section, we report MD simulations that address these questions, which are not answered by accumulated static experimental structures.
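To make the procedure concrete, the following minimal sketch reproduces the type of ensemble PCA described above in Python/NumPy. It is illustrative only: the original analysis used the Bio3D R package, and the array names here are assumptions.

```python
# Sketch of crystallographic ensemble PCA: diagonalise the covariance of
# superposed Calpha coordinates and project each structure onto PC1/PC2.
import numpy as np

def pca_of_ensemble(xyz: np.ndarray):
    """xyz: (n_structures, 3N) matrix of core-superposed Calpha coordinates."""
    centred = xyz - xyz.mean(axis=0)
    cov = centred.T @ centred / (xyz.shape[0] - 1)   # 3N x 3N covariance S
    evals, evecs = np.linalg.eigh(cov)               # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]       # sort descending
    frac = evals / evals.sum()                       # variance captured per PC
    scores = centred @ evecs[:, :2]                  # projection onto PC1/PC2
    return frac, scores

# e.g. frac[:2].sum() > 0.75 would reproduce the ">75% in two PCs" result for Ras
```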
MD simulations reveal distinct nucleotide-associated flexibility and cross-correlation near functional regions

MD simulations reveal distinct nucleotide-associated flexibility at known functional regions. Representatives of the distinct GTP- and GDP-bound conformations of Ras, Gαt and EF-Tu were selected as starting points for MD simulation. Five replicate 80-ns MD simulations of these three proteins in each state (GTP and GDP, totalling 2.4 μs; see Materials and Methods) exhibit high flexibility in the SI, SII, SIII/α3 and loop L3, L7, L8 and L9 regions (Fig 3A-3C). The Cα atom root-mean-square fluctuation (RMSF) in Gαt shows that SI is significantly more flexible in the GDP-bound state (Fig 3B). The C-terminal part of SI in Ras and EF-Tu, corresponding to the shorter SI in Gαt, is also more flexible with GDP bound (Fig 3A & 3C). Interestingly, the middle part of SI in Ras and EF-Tu shows higher fluctuations in the GTP-bound state. Moreover, SII is more flexible in the GTP-bound state in Ras. Detailed inspection reveals that SII stays away from the nucleotide throughout the GDP-bound state simulations, whereas SII sometimes moves close to and interacts with the unique γ-phosphate of GTP, leading to higher flexibility in the GTP-bound state. In contrast, the flexibility of SII in Gαt shows no significant difference between states, whereas SII in EF-Tu is less flexible with GTP bound. This is due to the relatively compact interactions between SII and the unique D2 and D3 domains in GTP-bound EF-Tu. In fact, D2 and D3 show markedly higher flexibility in the GDP state (Fig 3C). Overall, the nucleotide-dependent flexibility of RasD in Ras, Gαt and EF-Tu is quite similar except for SII. The cross-correlations of atomic displacements derived from the MD simulations also manifest conserved nucleotide-associated coupling in these three systems (Fig 3D-3F). In both Ras and Gαt, significantly stronger couplings within the catalytic lobe 1, between PL, SI and SII, are found only in the GTP-bound state (red rectangles in Fig 3D & 3E). Interestingly, a unique inter-lobe coupling between SII and SIII/α3 also characterizes the GTP-bound state in both systems (blue rectangles in Fig 3D & 3E). In EF-Tu, the intra-lobe 1 and inter-lobe couplings are similar between states (red and blue rectangles in Fig 3F). Intriguingly, many negative correlations between D2 and RasD of EF-Tu are found in the GDP-bound state, reflecting a swinging motion of D2 with respect to RasD during the simulations (lower triangle in Fig 3F).

Correlation network analysis displays similar nucleotide-associated correlation in Ras, Gαt and EF-Tu

Consensus correlation networks for each nucleotide state were constructed from the corresponding replicate MD simulations. In these initial networks, each node is a residue linked by edges whose weights represent the respective correlation values averaged across simulations (see Materials and Methods). These residue-level correlation networks underwent hierarchical clustering to identify groups of residues (termed communities) that are highly coupled to each other but loosely coupled to other residue groups. Nine communities were identified for Ras and eleven for Gαt and EF-Tu (Fig 4). The two additional family-specific communities not present in Ras correspond to two regions of HD in Gαt and to D2 and D3 in EF-Tu. In the resulting community networks, the width of an edge connecting two communities is the sum of all the underlying residue correlation values between them.
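The per-residue cross-correlations underlying both the matrices in Fig 3D-3F and the consensus networks just described can be sketched as follows. This is a minimal Python/NumPy illustration of the Pearson inner-product cross-correlation; shapes and names are assumptions, and the published consensus matrices average such per-replicate matrices over the five simulations.

```python
# Sketch of the inner-product cross-correlation of atomic displacements,
# c_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>), per Calpha pair.
import numpy as np

def cross_correlation(traj: np.ndarray) -> np.ndarray:
    """traj: (n_frames, n_residues, 3) Calpha coordinates, pre-superposed."""
    disp = traj - traj.mean(axis=0)                      # displacements dr_i(t)
    dot = np.einsum("tix,tjx->ij", disp, disp)           # sum_t dr_i . dr_j
    norm = np.sqrt(np.einsum("tix,tix->i", disp, disp))  # sqrt(sum_t |dr_i|^2)
    return dot / np.outer(norm, norm)                    # c_ij in [-1, 1]

# consensus = np.mean([cross_correlation(t) for t in replicate_trajs], axis=0)
```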
Interestingly, the Ras, Gαt and EF-Tu community networks can each be partitioned into two major groups (dashed lines in Fig 4), corresponding to the previously identified lobes of Ras and of RasD in Gαt [13,18]. The boundary between lobes is located at the loop between α2 and β4. In these proteins, lobe 1 includes the nucleotide-binding communities (PL, SI and SII) as well as the N-terminal β1-β3 and α1 structural elements. Lobe 2 includes α3-α5, L8 and the C-terminal β4-β6 strands. Comparing the GTP and GDP community networks of these three proteins reveals common nucleotide-dependent coupling features. In particular, for Ras and Gαt, comparing the relative strength of inter-community couplings in GTP and GDP networks using a nonparametric Wilcoxon test across simulation replicates reveals shared, significantly distinct coupling patterns (colored edges in Fig 4A & 4B). Within lobe 1, stronger couplings between PL, SI and SII are observed in the GTP state of both families. This indicates that the γ-phosphate of GTP leads to enhanced coupling of these proximal regions. This is consistent with our PCA results above, where PC2 clearly depicts the more closed conformation of SI and SII in the GTP-bound structures (Fig 2D & 2E). In addition, a significantly stronger inter-lobe correlation between SII and α3 is evident in the GTP state of both families, which is not available from analysis of the static experimental ensemble alone. This indicates that nucleotide turnover can lead to distinct structural dynamics not only at the immediate nucleotide-binding site in lobe 1 but also at the distal lobe 2 region. Intriguingly, similar patterns of intra- and inter-lobe dynamic correlations are observed in EF-Tu (Fig 4C). Within lobe 1, significantly stronger PL-SI and PL-SII correlations are evident in the GTP state, although SI-SII coupling becomes weaker in this state. In fact, the C-terminal β-hairpin of SI moves towards and interacts extensively with SII and D3 in the GDP-bound state, leaving the nucleotide-binding site wide open. Moreover, our results reveal that SII and SIII/α3 of EF-Tu are more tightly coupled in the GTP state, resembling the strong inter-lobe couplings in GTP-bound Ras and Gαt. It is worth noting that this conserved structural dynamic coupling is evident only from the comparative network analysis and is not accessible from PCA of crystal structures.

The common residue-wise determinants of structural dynamics in Ras, Gαt and EF-Tu

Comparative network analysis highlights the common residue-wise determinants of nucleotide-dependent structural dynamics. Besides correlations within lobe 1, inter-lobe couplings are also significantly stronger in the GTP-state networks of Ras, Gαt and EF-Tu. Inspection of the residue-wise correlations between communities reveals common major contributors to the SII-α3 couplings in the three proteins (red residues in S4 Table). In particular, M72^Ras in SII and V103^Ras in α3 act as primary contributors to inter-lobe correlations in Ras. Interestingly, the equivalent residues in the other two systems, F211^Gαt or I93^EF-Tu in SII and F255^Gαt or V126^EF-Tu in α3/SIII, also contribute to the inter-lobe couplings. We further examined the importance of these residues by MD simulations of mutant GTP-bound systems. The results indicate that each of the single mutations M72A^Ras and V103A^Ras can significantly reduce the couplings between SI and PL, indicating that these mutations disturb couplings at distal sites of known functional relevance (Fig 5A & 5D).
Moreover, the cognate mutations F211A^Gαt and F255A^Gαt in Gαt not only decouple SI and PL but also SI and SII (Fig 5B & 5E). Similarly, the analogous mutation I93A^EF-Tu decreases the correlations between PL and SI, whereas V126A^EF-Tu decouples PL and SII (Fig 5C & 5F). The simulation results indicate that single alanine mutation of residues contributing to SII-α3 couplings diminishes the couplings of the nucleotide-binding regions, and that this allosteric effect is common to all three proteins. Inter-lobe couplings that are distal from the nucleotide-binding regions are also shown to be critical for the nucleotide-dependent dynamics in Ras, Gαt and EF-Tu. By inspecting the residue-level couplings between L3 and α5, we identified common distal inter-lobe couplings in the three proteins. Mutational simulations indicate that the substitutions K188A^Gαt and D337A^Gαt significantly decouple SI from the PL and SII regions (Fig 6B & 6E). Interestingly, the mutations K188A^Gαt and D337A^Gαt have been reported to cause a 6-fold and a 2-fold increase in nucleotide exchange, respectively, but no direct structural dynamic mechanism was established [19]. We further tested mutations of the analogous residues in Ras. We considered both D47^Ras and E49^Ras as the equivalent residues to K188^Gαt (due to the longer L3 region of Ras), and R164^Ras as the equivalent residue to D337^Gαt. Both the double mutation D47A/E49A^Ras and the single mutation R164A^Ras significantly reduce the correlations between PL and SI (Fig 6A & 6D). We note that the functional consequences of mutating these residues in Ras have been highlighted in a previous study, in which the salt bridges between D47/E49^Ras in L3 and R161/R164^Ras in α5 were shown to be involved in the reorientation of Ras with respect to the plasma membrane and in enhanced activation of the MAPK pathway [15]. Moreover, substitutions of the analogous residues R75A^EF-Tu (L3) and D207A^EF-Tu (α5) also significantly reduce the couplings between PL and SI (Fig 6C & 6F). Our results indicate that the conserved interactions between L3 and α5 are important for maintaining the close coordination of the distal SI, SII and PL around the nucleotide, and that this is common to all three proteins.

Network analysis identifies family-specific residue substitutions that can also perturb structural dynamics

Comparison of the GTP-bound residue-wise networks of Ras, Gαt and EF-Tu reveals that the N-terminus of α3 strongly couples to SII only in Gαt and EF-Tu. In particular, we identified residues R201^Gαt or A86^EF-Tu (SII) and E241^Gαt or Q115^EF-Tu (α3) as underlying these strong couplings (blue residues in S4 Table). These residues are specific to Gαt and EF-Tu because the corresponding residues E62^Ras in SII and K88^Ras in α3 make no contribution in Ras (green residues in S4 Table). Mutational MD simulations indicate that the substitutions E241A^Gαt and Q115A^EF-Tu have a similarly drastic effect on the coupling of the nucleotide-binding regions (S1 Fig). In particular, the couplings between PL, SI and SII are all significantly reduced (S1B & S1C Fig). We note that the substitution equivalent to E241A^Gαt in Gαs (the α subunit of the stimulatory G protein for adenylyl cyclase) was previously reported to impair GTP binding, but the structural basis for this allosteric effect has been unknown [33,34]. Our results indicate that weakened correlations of the nucleotide-binding regions in E241A^Gαt, as a consequence of allosteric mutations in SIII/α3 and SII, likely underlie the reported impaired GTP binding.
Moreover, we identified residue E232^Gαt as a Gαt-specific primary contributor to the inter-lobe couplings in SIII, which has no counterpart in the other two proteins (S4 Table). The simulation of mutation E232A^Gαt likewise shows diminished couplings between PL, SI and SII (S2A Fig). Similar effects of mutations R201A^Gαt and D234A^Gαt are also observed (S2B & S2C Fig). Mutations of the counterpart residues E62A^Ras and K88A^Ras result in no significant change in the coupling of the nucleotide-binding loops in Ras (S1A Fig). Collectively, these findings indicate that in Gαt and EF-Tu both the N- and C-terminal α3 positions dynamically couple with SII, whereas in Ras the communication between α3 and SII is mainly through the C-terminus of α3. In addition, our results suggest that SIII plays a unique role in Gαt, not only mediating the couplings between the two lobes but also allosterically maintaining the tight correlations between SI, SII and PL.

Discussion

In this work, our updated PCA of Ras structures captures two new conformational clusters, representing the GEF-bound state and "state 1", respectively, in addition to the canonical GTP and GDP forms. By comparing the Ras PCA to PCA of Gαt/i and EF-Tu, we reveal common nucleotide-dependent collective deformations of SI and SII across G protein families. Our extensive MD simulations and network analyses reveal common nucleotide-associated conformational dynamics in Ras, Gαt and EF-Tu. Specifically, these three systems have stronger intra-lobe 1 (PL-SI and PL-SII) and inter-lobe (SII-SIII/α3) couplings in the GTP-bound state. Meanwhile, with the network comparison approach we further identify residue-wise determinants of commonalities and specificities across families. Residues M72^Ras (SII), V103^Ras (α3), D47/E49^Ras (L3) and R164^Ras (α5) are predicted to be crucial for inter-lobe communication in Ras. Mutations of these distal residues lead to decreased SI-PL coupling strength. Interestingly, the analogous residues in the other two proteins, F211^Gαt/I93^EF-Tu (SII), F255^Gαt/V126^EF-Tu (α3), K188^Gαt/R75^EF-Tu (L3) and D337^Gαt/D207^EF-Tu (α5), also mediate important inter-lobe couplings and show similar decoupling effects upon alanine mutation. Besides the key residues common to the three systems, residues mediating inter-lobe couplings only in Gαt and EF-Tu are identified. These include R201^Gαt/A86^EF-Tu and E241^Gαt/Q115^EF-Tu, whose cognates in Ras have no significant effect on the nucleotide-binding regions upon mutation. In addition, the Gαt-specific residue E232^Gαt in SIII (which is missing in Ras and EF-Tu) is identified as important to the couplings of the nucleotide-binding regions. Importantly, some of our highlighted mutants (D47A/E49A^Ras, K188A^Gαt, D337A^Gαt and E241A^Gαt) have been reported to have functional effects in in vitro experiments. Our analysis provides insight into the atomistic mechanisms of these altered protein functions. Using differential contact map analysis of crystallographic structures, Babu and colleagues recently suggested a universal activation mechanism for Gα [27]. In their model, structural contacts between α1 and α5 act as a 'hub' mediating the communication between α5 and the nucleotide. These contacts are broken upon receptor binding at α5, leading to a more flexible α1 and destabilization of nucleotide binding. According to their study, however, these critical α1/α5 contacts do not exist in Ras structures.
Thus, they concluded that, unlike in Gα, α5 in Ras does not allosterically regulate the nucleotide. It is worth noting that Babu's work is based purely on the comparison of structures, without considering protein dynamics. In fact, our study indicates that functionally important communications may not be directly observable from static structures. For example, the inter-lobe couplings between SII and SIII/α3 are not captured by PCA of the structural ensemble, but they are clearly shown in our network analysis of structural dynamics. By inspecting structural dynamics, we find that α5 in Ras actually plays an allosteric role, in which a point mutation (R164A) substantially disrupts the couplings in the nucleotide-binding regions. The potential salt bridges between D47/E49 in L3 and R161/R164 in α5 are shown in S3 Fig.

[Fig 6. K188A^Gαt in L3 (B), D337A^Gαt in α5 (E), R75A^EF-Tu in L3 (C) and D207A^EF-Tu in α5 (F) have similar effects in the nucleotide-binding region, significantly reducing the couplings between PL, SI and SII. https://doi.org/10.1371/journal.pcbi.1006364.g006]

A previous study of Ras GTPases via elastic network model normal mode analysis (ENM-NMA) revealed similar bilobal substructures and found that functionally conserved modes are localized in the catalytic lobe 1, whereas family-specific deformations are mainly found in the allosteric lobe 2 [35]. A subsequent MD study, in contrast, indicated that the conformational dynamics of Ras and Gαt are distinct, especially in the GDP state [36]. We note that in that study only a single MD trajectory was analyzed, which is insufficient to assess the significance of the observed differences. Moreover, few atomistic details were given in that work. In our study, we improve on this by building ensemble-averaged networks based on multiple MD simulations instead of a single trajectory. This increases the robustness of the networks and largely reduces statistical errors. In addition, our correlation analysis provides residue-wise predictions of potentially important positions that mediate communication between functional regions. Overall, the separation of functionally conserved and family-specific residues in conformational dynamics provides unprecedented insight for protein evolution and engineering studies.

Crystallographic structure preparation

Atomic coordinates for all available Ras, Gαt/i and EF-Tu crystal structures were obtained from the RCSB Protein Data Bank [37] via the sequence search utilities in the Bio3D package version 2.2 [38,39]. Structures with missing residues in the switch regions were not considered in this study, resulting in a total of 143 chains extracted from 121 unique structures for Ras, 53 chains from 36 unique structures for Gαt/i, and 34 chains from 23 unique structures for EF-Tu (detailed in S1-S3 Tables). Prior to analyzing the variability of the conformational ensemble, all structures were superposed iteratively to identify the most structurally invariant region. This procedure excludes residues with the largest positional differences (measured as an ellipsoid of variance determined from the Cartesian coordinates of equivalent Cα atoms) before each round of superposition, until only invariant "core" residues remain [40]. The identified "core" residues were used as the reference frame for the superposition of both crystal structures and subsequent MD trajectories.

Principal component analysis

PCA was employed to characterize inter-conformer relationships of both Ras and Gαt/i.
PCA is based on the diagonalization of the variance-covariance matrix, S, with elements S_ij built from the Cartesian coordinates, r, of the Cα atoms of the superposed structures:

S_ij = <(r_i - <r_i>)(r_j - <r_j>)>,

where i and j enumerate all 3N Cartesian coordinates (N is the number of atoms considered) and <.> denotes the average value. The eigenvectors, or principal components, of S correspond to a linear basis set of the distribution of structures, whereas each eigenvalue describes the variance of the distribution along the corresponding eigenvector. Projection of the conformational ensemble onto the subspace defined by the two largest PCs provides a low-dimensional representation of the structures, highlighting the major differences between conformers.

Molecular dynamics simulations

MD simulation protocols similar to those used in [18] were employed. Briefly, AMBER12 [41] with the corresponding ff99SB force field [42] was used for all simulations. Additional parameters for guanine nucleotides were taken from Meagher et al. [43]. The Mg2+·GDP-bound Ras crystal structure (PDB ID: 4Q21), Gαt structure (PDB ID: 1TAG) and EF-Tu structure (PDB ID: 1TUI) were used as the starting points for the GDP-bound simulations. The Mg2+·GNP-bound (PDB ID: 5P21), Mg2+·GSP-bound (PDB ID: 1TND) and Mg2+·GNP-bound (PDB ID: 1TTT) structures were used as the starting points for the GTP-bound simulations of Ras, Gαt and EF-Tu, respectively. These structures were identified as cluster representatives from the PCA of the crystallographic structures. Prior to MD simulation, the sulfur (S1γ) or nitrogen (N3β) atom of the GTP analogue was replaced with the corresponding oxygen (O1γ or O3β) of GTP. All Asp and Glu residues were deprotonated, whereas Arg and Lys residues were protonated. The protonation state of each His was determined by its local environment via the PROPKA method [44]. Each protein system was solvated in a cubic pre-equilibrated TIP3P water box, with a distance of at least 12Å from the protein surface to any side of the box. Sodium ions (Na+) were then added to neutralize the system. Each MD simulation started with a four-stage energy minimization, with each stage employing 500 steps of steepest descent followed by 1500 steps of conjugate gradient. First, the atomic positions of ligands and protein were fixed and only solvent was relaxed. Second, ligands and protein side chains were relaxed with the protein backbone fixed. Third, all ligand and protein atoms were relaxed with solvent fixed. Fourth, all atoms were free to relax with no constraint. Following energy minimization, 1 ps of MD was performed to raise the temperature of the system from 0K to 300K. Then 1 ns of simulation at constant temperature (T = 300K) and pressure (P = 1 bar) was performed to equilibrate the system. Finally, 80 ns of production MD was performed under the same conditions as the equilibration. The particle mesh Ewald summation method was used for long-range electrostatic interactions, while an 8Å cutoff was used for short-range non-bonded van der Waals interactions. A 2-fs time step was used. The center-of-mass motion was removed every 1000 steps and the non-bonded neighbor list was updated every 25 steps. We performed a total of 1,920 ns of MD simulation and analyzed results from multiple 80-ns production simulations for each of our three systems, including the wild type in two nucleotide states along with 5 mutant Ras, 8 mutant Gαt and 5 mutant EF-Tu systems (see full listing in S5 Table).
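As an illustration only (the protocol above used AMBER12 with ff99SB plus the Meagher et al. nucleotide parameters, which are not reproduced here), a roughly equivalent staged setup in OpenMM might look like the following; file names and step counts mirror the description above but are assumptions.

```python
# Sketch of a solvate/minimise/equilibrate/produce protocol in OpenMM.
# NOT the authors' setup: no GTP/GDP parameters, and OpenMM exposes a
# single L-BFGS minimiser rather than the four-stage restrained scheme.
from openmm import LangevinMiddleIntegrator, MonteCarloBarostat
from openmm.app import PDBFile, ForceField, Modeller, Simulation, PME
from openmm.unit import kelvin, bar, nanometer, picosecond, femtoseconds

pdb = PDBFile("protein.pdb")                      # hypothetical input file
ff = ForceField("amber99sb.xml", "tip3p.xml")     # ff99SB + TIP3P analogues
model = Modeller(pdb.topology, pdb.positions)
model.addSolvent(ff, padding=1.2*nanometer)       # ~12 A box padding, Na+ added

system = ff.createSystem(model.topology, nonbondedMethod=PME,
                         nonbondedCutoff=0.8*nanometer)   # PME + 8 A cutoff
system.addForce(MonteCarloBarostat(1*bar, 300*kelvin))    # constant T, P
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 2*femtoseconds)
sim = Simulation(model.topology, system, integrator)
sim.context.setPositions(model.positions)

sim.minimizeEnergy()          # stands in for the staged minimisation
sim.step(500_000)             # 1 ns equilibration at a 2 fs time step
sim.step(40_000_000)          # 80 ns production
```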
The RMSD time courses for the above systems are shown in S4 Fig.

Correlation network construction

Consensus correlation networks were built from the MD simulations to depict dynamic couplings among functional protein segments. A weighted network graph was constructed where each node represents an individual residue and the weight of the edge between nodes i and j represents their Pearson inner-product cross-correlation value c_ij [45] over the MD trajectories. The approach is similar to the dynamical network analysis method introduced by Luthey-Schulten and colleagues [46]. However, instead of using a 4.5Å contact map of non-neighboring residues to define network edges, further weighted by a single correlation matrix, we constructed consensus networks based on five replicate simulations in the same way as described before [18].

Network community

Hierarchical clustering was employed to identify residue groups, or communities, that are highly coupled to each other but loosely coupled to other residue groups. We used a betweenness clustering algorithm similar to that introduced by Girvan and Newman [47]. However, instead of partitioning according to the maximum modularity score, as is usual for unweighted networks, we selected the partition closest to the maximum score but with the smallest number of communities (i.e. the earliest high-scoring partition). This approach avoids the common situation in which many small communities are generated with equally high partition scores. The resulting networks for the different nucleotide-bound states showed largely consistent community partitions in Ras, Gαt and EF-Tu, with differences mainly localized at the nucleotide-binding PL, SI, SII and α1 regions. To facilitate comparison between states and families, the boundaries of these regions were re-defined based on known conserved functional motifs. The original residue cross-correlation matrices were then re-analyzed with this definition of communities. Only inter-community correlations were of interest; these were calculated as the sum of all underlying residue correlation values between two given communities for which the smallest atom-atom distance between the corresponding residue pairs was less than 4.5Å (for Gαt and EF-Tu) or 6Å (for Ras) in more than 75% of the total simulation frames. A larger cutoff was selected for Ras because the overall residue-level correlations are weaker in Ras. A standard nonparametric Wilcoxon test was performed to evaluate the significance of the differences in inter-community correlations between distinct states.

S4 Table. Residue-wise contributions to inter-community couplings. The numbers represent the residue-wise contributions to inter-community couplings. For example, the sum of correlations between residue M72 in SII and all residues in SIII/α3 is 1.19 (after filtering by the contact map). The first row contains the common counterpart residues (red) connecting SII and SIII/α3 in the three proteins. The second row contains family-specific functional residues: residues in Gαt and EF-Tu (blue) contribute to the dynamic correlations between SII and SIII/α3, whereas their counterparts in Ras (green) make no contribution. The third row contains the Gαt-specific residue in SIII, which has no counterpart in the other two proteins. (DOCX)
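A minimal sketch of this community analysis using networkx and scipy follows; the 0.95 "earliest high-scoring" threshold and the paired form of the Wilcoxon test are illustrative assumptions, not values taken from the paper, and edge weights enter only through the modularity score here.

```python
# Sketch: Girvan-Newman betweenness clustering on a weighted consensus
# network, keeping the earliest partition whose modularity is close to the
# maximum, then a Wilcoxon test on inter-community couplings across replicates.
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity
from scipy.stats import wilcoxon

def earliest_high_scoring_partition(G: nx.Graph, tol: float = 0.95):
    partitions = [tuple(p) for p in girvan_newman(G)]
    scores = [modularity(G, p, weight="weight") for p in partitions]
    best = max(scores)
    for part, score in zip(partitions, scores):
        if score >= tol * best:       # earliest = fewest communities
            return part
    return partitions[scores.index(best)]

# Compare one inter-community coupling between nucleotide states, with one
# summed-correlation value per replicate simulation (paired variant shown):
# stat, p = wilcoxon(gtp_couplings, gdp_couplings)
```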
Extending the Multiple Discrete Continuous (MDC) modelling framework to consider complementarity, substitution, and an unobserved budget

Introduction

Many choices can be represented as multiple discrete continuous decisions. In these, a decision maker faces a finite set of alternatives and must choose how much to "consume" of each one, potentially consuming none, one, or multiple alternatives. Examples of such situations include activities performed during a day, grocery shopping, investment allocation, etc. Traditional choice models are not well suited for these situations, as they only allow the choice of a single alternative. Continuous models, on the other hand, often underestimate the probability of zero consumption for individual alternatives, also known as the "corner solution". Joint models, where the continuous choice is conditional on the discrete one, usually lack a strong grounding in economic theory, though there are exceptions (Hausman et al., 1995).

The Karush-Kuhn-Tucker multiple discrete continuous (MDC) consumer demand models (Bhat, 2008, 2018; Chintagunta, 1993; Hanemann, 1978; Kim et al., 2002; Mehta and Ma, 2012; Phaneuf and Herriges, 1999; Song and Chintagunta, 2007; Wales and Woodland, 1983) address the issues mentioned in the previous paragraph. These models begin by explicitly formulating the consumer utility maximisation problem, assuming either a direct or an indirect utility function with associated randomness. The optimal solution is then derived through the use of Karush-Kuhn-Tucker conditions. Finally, the likelihood function of these conditions is written given the distributional assumptions on the utility function. Nowadays, one of the most popular models in this category is the Multiple Discrete Continuous Extreme Value (MDCEV) model (Bhat, 2008). It has been applied in different areas, such as transport (Jäggi et al., 2012), time use (Enam et al., 2018), social interactions (Calastri et al., 2017), alcohol purchase (Lu et al., 2017), energy consumption (Jeong et al., 2011), investment decisions (Lim and Kim, 2015), household expenditure data (Ferdous et al., 2010), price promotions (Richards et al., 2012), and tourism (Pellegrini et al., 2017).

In this paper, we propose two extensions to the MDC modelling framework. First, we propose a new non-additive functional form for the utility that includes explicit complementarity and substitution effects. Secondly, we present an MDC model formulation that does not require the definition of a budget, while still allowing for explicit complementarity and substitution. The second approach is a suitable approximation of a full MDC model for the (relatively common) situation where the expenditure on all alternatives included in the model (i.e. the inside goods) is small compared to the overall budget, which allows us to drop the budget from the model likelihood. To allow for a tractable likelihood function, we do not include a stochastic error term in the marginal utility of the outside good in either of the two proposed models.

Substitution and complementarity define relationships between the demand for pairs of products.
If the demand for one of them increases, then the demand for the other is reduced in the case of substitution and increased in the case of complementarity (Hicks and Allen, 1934). While the budget constraint naturally induces substitution between products due to income effects, this is only an indirect effect. The inclusion of complementarity and substitution is necessary for a more realistic representation of behaviour in applications as diverse as time use or grocery shopping. For example, in the first case, going to the cinema may make it more likely for individuals to also eat at a restaurant. In the second case, products such as pasta and tomato sauce are often bought together. On the other hand, the more hours an individual works, the fewer hours they may allocate to leisure activities; or purchasing more bread may lead to a reduction in the consumption of biscuits.

Concerning the budget, while determining it can be easy in some applications, it can be challenging in others. For example, in purchase decisions the budget will rarely be an individual's full income, as there is likely mental accounting and there are recurring expenses to account for, none of which are observable. Investment decisions face a similar problem, as the total budget may expand or shrink as a function of the expected performance of the investment alternatives. There are other scenarios where even the simple definition of a budget is problematic, for example when modelling the number of recreational trips during a year, or the number of activities performed by an individual during a week. The problem becomes more acute in forecasting. Any prediction from a model requires a budget, and predicting the budget, e.g. the income of individuals in the future, is another problem in itself and introduces cascading errors in the forecast values.

While other models including complementarity and substitution effects through non-additively separable utility functions have been proposed in the literature, they either require complementarity and substitution effects to add up to zero (Song and Chintagunta, 2007), or pose specific constraints on their parameters, making either estimation or model transferability difficult (Bhat et al., 2015; Mehta and Ma, 2012; Pellegrini et al., 2021a). Models with an implicit (also called infinite) budget have also been proposed by Bhat (2018) and ? for models with neither complementarity nor substitution effects. A detailed comparison between the models in this paper and those already in the literature is presented in section 5.

The remainder of this document is structured as follows. The next section introduces the formulation, derivation, likelihood function and forecasting algorithm of the model with complementarity and substitution. Section 3 presents the same for the model with complementarity, substitution and an implicit budget. Section 4 discusses the identification of both models' parameters and some constraints that theory and estimation impose on them, and compares the forecasting performance of the two models. Section 5 compares the proposed models' formulation to that of similar models in the literature. Section 6 presents applications of the proposed models to four different datasets, dealing with time use, household expenditure, supermarket scanner data, and number of trips, respectively. The paper closes with a brief summary of the proposed model formulations' capabilities and limitations.
2 An MDC model with complementarity and substitution

2.1 Model formulation

Consider the classical (consumer) utility maximisation problem, where an individual n must decide what products k to consume from a set of alternatives, by maximising his or her utility subject to a budget constraint (Eqn. 1):

max_{x_n} U(x_n)   subject to   Σ_{k=0}^{K} p_nk x_nk = B_n,   x_nk ≥ 0,   (1)

where n = 1...N indexes individuals and k = 1...K alternatives, x_n = [x_n0, x_n1, ..., x_nK] is a vector grouping the consumed amount of each alternative (product), p_nk is the price of alternative k faced by individual n, and B_n is the total budget available to individual n. x_n0 is an outside or numeraire good, i.e. a good that aggregates all consumption outside the category of interest. For example, if the researcher is interested in modelling demand for food, x_n1, ..., x_nK would represent consumption of different food categories (the inside goods), while x_n0 would represent the aggregate consumption of housing, transport, leisure, etc. It is usually assumed that p_n0 = 1, so that x_n0 becomes the total expenditure on categories other than the one of interest. To simplify the notation, we use this convention henceforth. It is assumed that the numeraire good is always consumed, so x_n0 > 0 always.

The formulation in Eqn. 1 is consistent with a two-stage budgeting approach, where the individual first allocates expenditure to broad groups (e.g. food, utilities, transport, entertainment, etc.) based on price indices representative of each group, followed by independent within-group allocations to individual products. According to Edgerton (1997), such an approach is sensible and subject to only small approximation errors when (i) the preferences for groups are weakly separable, i.e. the utility provided by each group is not affected by the level of consumption of other groups; and (ii) the group price indices being used do not vary too greatly with the utility or expenditure level. The first condition can be satisfied as long as the inside goods are reasonably separable from excluded goods. Edgerton (1997) argues that empirical and theoretical arguments support the fulfilment of the second condition.

We assume the following functional forms for the different parts of the utility function, with the total utility composed of an outside-good term, alternative-specific terms and pairwise interaction terms:

U(x_n) = u_0(x_n0) + Σ_k u_k(x_nk) + Σ_{k<l} u_kl(x_nk, x_nl).

We take the definition of u_k from Bhat (2008):

u_k(x_nk) = γ_k ψ_nk ln(1 + x_nk/γ_k).

In this formulation, ψ_nk represents alternative k's base utility, i.e. its marginal utility at zero consumption. This parameter can be interpreted as the scale of the utility of product k. The γ_k parameters, on the other hand, relate mainly to consumption satiation, by altering the curvature of alternative k's utility function. In general, a higher γ_k indicates higher consumption of alternative k, when consumed. While a common interpretation is that ψ_nk and γ_k determine, respectively, what and how much of alternative k to consume, this is not completely true. There is a level of interaction between these parameters, and in some circumstances a low value of ψ_nk can be compensated by a high value of γ_k (Bhat, 2008, 2018).

Parameters ψ_nk must always be positive, as they represent the marginal utility of alternatives at the point of zero consumption. We ensure this using the following definition:
ψ_n0 = exp(α z_n0),   ψ_nk = exp(β_k z_nk + ε_nk),   (5)

where z_n0 is a column vector of characteristics of the decision maker that are expected to correlate with that individual's marginal utility of the outside good (e.g. socio-demographics); α is a row vector of parameters representing the weights of those characteristics on the marginal utility of the outside good; z_nk are attributes of alternative k; β_k are vectors of parameters representing the weights of those attributes on the alternative's base utility; and ε_nk is a random disturbance term. We only include random disturbances in the base utility of the inside goods, as this leads to a computationally tractable likelihood function. We discuss the inclusion of a random disturbance in the marginal utility of the outside good in Section 4.1.

The final component of the utility function, u_kl(x_nk, x_nl), captures the complementarity and substitution effects between inside goods:

u_kl(x_nk, x_nl) = δ_kl (1 - exp(-x_nk))(1 - exp(-x_nl)).

This particular functional form is inspired by the translog function and by previous formulations by Vásquez Lavín and Hanemann (2008) and Bhat et al. (2015). Figure 1 presents the behaviour of this component for a set of δ_kl parameters and different values of x_nk and x_nl, which are assumed to be equal. If δ_kl > 0, there is complementarity between alternatives k and l, as this component increases the overall utility. If δ_kl < 0, there is a substitution effect between alternatives k and l, as u_kl becomes more negative as x_nk and x_nl increase. If δ_kl = 0, the consumption of the two alternatives is independent of each other. The value of u_kl is bounded to the interval [0, δ_kl), ensuring transferability of estimated models to other datasets, a point we discuss in Section 4.2.

In summary, the proposed MDC model has two main characteristics. First, it contains no stochastic error in the marginal utility of the outside good, allowing for a tractable likelihood function. Second, its non-additive utility function allows for interaction (complementarity and substitution) among alternatives.

2.2 Model derivation

To solve the optimisation problem, we begin by writing its Lagrangian (Eqn. 6) and the Karush-Kuhn-Tucker conditions of optimality (Eqns. 7 and 8). We drop the n subindex to simplify the notation. Eqn. 8 holds with equality when alternative k is consumed (i.e. x*_k > 0, with x*_k the consumption at the optimum, i.e. the observed consumption), and as an inequality when x*_k = 0. In other words, the marginal utility of any consumed product k at the optimum level of consumption equals λ scaled by the alternative's price p_k; if the product is not consumed, its marginal utility is lower. By combining Eqns. 7 and 8, we obtain Eqn. 9. Replacing ψ_0 and ψ_k by their definitions (Eqn. 5) and isolating the random component ε_k, we obtain Eqn. 10. Now, if we assume all ε_k disturbances follow identical and independent distributions, we only need to apply the Change of Variable Theorem from ε_k to x_k (only over the consumed alternatives) to obtain the likelihood function of the model. If f and F are the density and cumulative distribution functions of ε_k, respectively, we can write the likelihood function as in Eqn. 11. In this expression, |J| is the value of the determinant of the Jacobian J of the vector -W_m, where m indexes consumed alternatives. The elements of this Jacobian are defined in Eqn. 12 (i indexes rows, and j columns). No obvious compact form exists for this determinant. I_{x_k>0} and I_{x_k=0} are binary variables taking value 1 if x_k > 0 or x_k = 0, respectively, and zero otherwise.
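Schematically, the derivation just outlined can be summarised as follows. This is a sketch consistent with the definitions above; the grouping of deterministic terms into W_k and the exact sign conventions are those of the paper's Eqns. 6-12 only up to notation.

```latex
% Sketch: Lagrangian, KKT conditions and likelihood of the observed-budget model.
\begin{aligned}
&\mathcal{L} = u_0(x_0) + \sum_{k=1}^{K} u_k(x_k) + \sum_{k<l} u_{kl}(x_k,x_l)
  - \lambda\Big(x_0 + \sum_{k=1}^{K} p_k x_k - B\Big)
  &&\text{(cf. Eqn. 6)}\\
&\frac{\partial u_0}{\partial x_0} = \lambda
  &&\text{(outside good always consumed; cf. Eqn. 7)}\\
&\frac{\partial U}{\partial x_k} = \lambda p_k \ \text{if } x_k^* > 0, \qquad
 \frac{\partial U}{\partial x_k} < \lambda p_k \ \text{if } x_k^* = 0
  &&\text{(cf. Eqn. 8)}\\
&\varepsilon_k = -W_k(x^*) \ \text{if } x_k^* > 0, \qquad
 \varepsilon_k < -W_k(x^*) \ \text{if } x_k^* = 0
  &&\text{(after isolating } \varepsilon_k\text{)}\\
&P(x^*) = |J| \prod_{k:\,x_k^*>0} f(-W_k) \prod_{k:\,x_k^*=0} F(-W_k)
  &&\text{(likelihood; cf. Eqn. 11)}
\end{aligned}
```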
If no alternative is consumed, the Jacobian drops out of Eqn. 11. In the remainder of this paper, we assume all ε_k disturbances follow identical and independent Normal distributions with mean fixed to zero and a standard deviation σ, which is estimated. Assuming other distributions is possible; using the Gumbel distribution leads to a closed-form likelihood, but it has the disadvantage of generating a high rate of outliers during prediction, due to the thick tails of the distribution. The Normal distribution, on the other hand, has thinner tails and is a natural choice due to the Central Limit Theorem, while remaining computationally tractable.

2.3 Forecasting

Once the model has been estimated, forecasting requires solving the original maximisation problem proposed in Eqn. 1 several times, each time using different draws of ε_k from a Normal distribution with mean zero and standard deviation σ, and then averaging the results across these draws. This must be done separately for each observation in the sample. The optimisation problem can be solved using any algorithm, with Newton-type or gradient descent algorithms being the most common. This forecasting procedure is demanding from a computational perspective, especially if a high number of draws is used for each individual. However, because the forecasts for each individual and draw are independent of one another, calculating them in parallel can significantly reduce the overall processing time. The software implementation in Apollo (ApolloChoiceModelling.com) uses parallel computing to speed up forecasting.

3 An MDC model with complementarity, substitution and an implicit budget

In this section we introduce an extension of the model presented in section 2, such that it does not require defining a budget. The formulation and derivation of the model are very similar to those presented in the previous section, so here we only highlight the points where the two models differ.

3.1 Model formulation

Considering the classical consumer utility maximisation problem described in Eqn. 1, we now assume a different utility formulation for the outside good, while all other definitions remain as in the previous section (i.e. as in Eqns. 3, 4, and 5). We assume a linear utility function for the outside good (Eqn. 13), as this will later allow us to drop both the outside good consumption x_0 and the budget B from the final model formulation:

u_0(x_n0) = ψ_n0 x_n0.   (13)

While a linear utility function does not comply with the law of diminishing marginal utility (a common assumption in demand models), it should be considered an approximation of a function that does, valid when most of the budget is spent on the outside good and only a relatively small amount is spent on the inside goods. In such a case, changes in the total expenditure on the inside goods lead to a relatively small change in the consumed amount of the outside good, and therefore a negligible change in its marginal utility.
More formally, we can write changes in the utility of the outside good using a second-degree Taylor expansion as u_0(x_0 + Δ) ≈ u_0(x_0) + u_0'(x_0)Δ + (1/2)u_0''(x_0)Δ², where u_0' and u_0'' are the first and second derivatives of u_0, respectively, and Δ is a small change in the consumption of the outside good. If u_0 is continuous, monotonically increasing, and satisfies the law of diminishing returns, then lim_{x_0→+∞} u_0' is a constant equal to or bigger than zero, because the slope must smoothly decrease as x_0 increases, without ever becoming negative. It then follows that lim_{x_0→+∞} u_0'' = 0. Therefore, for a large value of x_0, we can assume that u_0''(x_0) is small and approximate u_0 using a linear function, making u_0' ≈ ψ_0.

Assuming a linear utility function for the outside good does not necessarily imply that all individuals have the same marginal utility for it, nor that absolutely no information on the budget can be included in the model. The proposed formulation allows for parameterisation of the ψ_0 parameter: the modeller could make ψ_0 a function of socio-demographics or other proxies of the budget. For example, ψ_0 could be explained by an individual's full income, occupation, or level of education.

3.2 Model derivation

Proceeding in the same way as in section 2.2, the first difference arises when calculating the derivative of the Lagrangian (Eqn. 6) with respect to the outside good (Eqn. 14), which combined with Eqn. 8 leads to Eqn. 15. Replacing ψ_0 and ψ_k by their definitions (Eqn. 5) and isolating the random component ε_k, we obtain Eqn. 16. Assuming all ε_k disturbances follow identical and independent distributions, and applying the Change of Variable Theorem from ε_k to x_k for the consumed alternatives, we obtain the likelihood function of the model as described in Eqn. 11, except that this time the Jacobian elements are defined as in Eqn. 17, with E_i the same as in Eqn. 12. Just as with the model with observed budget, we assume all ε_k disturbances follow identical and independent Normal distributions with mean zero and a standard deviation σ to be estimated.

3.3 Forecasting

Once the model has been estimated, forecasting requires solving the original maximisation problem proposed in Eqn. 1 several times, each time using different draws of ε_nk from a Normal(0, σ) distribution, and then averaging the results across these draws. To solve the optimisation problem we once again use the Lagrangian in Eqn. 6 and the KKT conditions in Eqns. 14 and 8, leading to Eqn. 15. Assuming an equality and isolating x_k, we obtain Eqn. 18, where the definition of E_k can be found in Eqn. 17 and depends on the value of all x_n. Eqn. 18 is a fixed-point problem, i.e. a problem of the form x = h(x). According to the Existence and Uniqueness theorem, as the right-hand side of Eqn. 18 is continuous in x_n over the closed interval [0, B_n/p_nk], at least one solution to the problem exists. However, we cannot ensure that the solution is unique. We solve Eqn. 18 through the following iterative approach: starting from x_n = [x_n1, ..., x_nK] set to zero, Eqn. 18 is applied repeatedly until the change between successive iterates is smaller than τ or the number of iterations reaches S, where S is the maximum number of iterations allowed and τ is the convergence tolerance parameter, which can be set to the desired precision. This procedure must be performed multiple times for each observation, each time with a different set of draws for the ε_k disturbances. The results for the different sets of draws are then averaged.
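A minimal sketch of this fixed-point forecasting loop follows (Python). The function h stands in for the right-hand side of Eqn. 18 and is not specified here; all names and defaults are illustrative.

```python
# Sketch: iterate x = h(x) per draw of the Normal disturbances, then average.
import numpy as np

def forecast(h, n_alts: int, sigma: float, n_draws: int = 200,
             S: int = 100, tau: float = 1e-6, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_draws):
        eps = rng.normal(0.0, sigma, n_alts)    # eps_k ~ Normal(0, sigma)
        x = np.zeros(n_alts)                    # start from zero consumption
        for _ in range(S):                      # at most S iterations
            x_new = np.maximum(h(x, eps), 0.0)  # enforce x_k >= 0
            converged = np.max(np.abs(x_new - x)) < tau
            x = x_new
            if converged:                       # stop once change < tau
                break
        draws.append(x)
    return np.mean(draws, axis=0)               # average forecast across draws
```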
As this model assumes a very large budget, in practice there is no bound on the magnitude of the forecast consumption. Therefore, we recommend only forecasting for values of the explanatory variables in a reasonable vicinity of the values observed in the estimation dataset. What counts as reasonable is difficult to quantify, but, for example, if an explanatory variable z_1 ∈ [0, 1] in the estimation dataset, forecasting for z_1 = 10 could lead to unreasonably high consumption levels. This is similar to how linear models are usually valid only in the vicinity of the values on which they were estimated.

4 Model properties

In this section, we discuss some of the most relevant properties of the models, namely the identifiability of their parameters, including the possibility of using random coefficients; some theoretical constraints on the parameters; and the performance of the model with implicit budget compared to the model with observed budget.

4.1 Identification of parameters

When estimating the proposed models, the modeller should consider the following six points regarding the identifiability of parameters.

First, observations in which no inside good is consumed should not be excluded from the sample. Even though these observations do not provide any information on the value of ψ_k, they do provide information on the value of ψ_0 in relation to the inside goods.

Second, there should be no constant (intercept) in the definition of ψ_0, i.e. z_0 should not contain an element equal to 1 for every individual. As utility does not have any meaningful units, a base must be set against which all other utilities are measured. To do this, we recommend setting the intercept of the outside good to zero. Any variable that changes across observations can be included in z_0, even if it is not centred around zero. We recommend populating z_0 with characteristics of decision makers, such as socio-demographics. In the case of the model with implicit budget (see section 3) we recommend including the individual's income in z_0. Including income in this way does not imply that the budget is equal to the income, only that the marginal utility of the outside good depends on it. We would expect a negative coefficient for income if included in ψ_0, as an increase in income usually leads to increased overall consumption, and therefore a smaller marginal utility of the outside good. In general, a negative coefficient α indicates that an increase in the corresponding explanatory variable leads to increased consumption. The opposite is true for a positive coefficient.

Third, like most other MDC models, the two formulations presented in this paper are not scale-independent. This means that the magnitude of the dependent variable influences the results of the model. For example, expressing the dependent variable in grammes or kilogrammes might lead to different forecasts and marginal rates of substitution. This is due to the non-linear nature of the utility functions used in the models. We recommend testing different scalings of the dependent variable, favouring those making the dependent variable range between zero and five, so as to match the range of maximum variability of the transformation in u_kl, which is mostly flat for values x_k > 5 (see figure 1).
Fourth, in the case of the model with implicit budget, complementarity and substitution effects can be confounded with income effects. In the model with implicit budget, all interactions between the consumption of alternatives are captured by the δ_kl parameters. The cause of interaction could be complementarity or substitution, but it could also be due to income effects. For example, a restricted budget could induce increased demand for an inexpensive product while decreasing the demand for an expensive one. This could be captured by the model as substitution between the two products. This problem will be attenuated if the budget is large in comparison with the expenditure on the inside goods.

Fifth, concerning the number of complementarity and substitution parameters (δ_kl): while the model formulation defines one parameter per pair of products, the modeller can easily impose restrictions to reduce the number of parameters to estimate. For example, if alternatives can be grouped into non-overlapping sets, the modeller could constrain all δ_kl parameters to be equal within each group and across each pair of groups. Alternatively, the modeller could perform a Principal Component Analysis on the dependent variables, identifying the most important interactions between alternatives, and then estimate only those δ_kl parameters while fixing all others to zero (as done in section 6.2). These or other strategies are recommended when the number of alternatives is large.

Finally, as recommended by Manchanda et al. (1999), the proposed models allow for complementarity, substitution, and coincidence effects, both in a deterministic and a random way. Complementarity and substitution effects are captured by the δ_kl parameters. Coincidence effects are shocks to demand influencing either one or multiple alternatives at the same time, and they can be captured by either ψ_0 (common shocks to all alternatives) or ψ_k and γ_k (independent shocks). All of these parameters allow for deterministic heterogeneity, for example defining δ_kl as a function of socio-demographic characteristics. It is also possible to incorporate random heterogeneity in ψ_k and γ_k by using simulated maximum likelihood techniques (Train, 2009), but we do not recommend including such heterogeneity in ψ_0 or δ_kl, as it could lead to violations of Eqns. 23 and 24 (see section 4.2).

To test the identifiability of the model through simulation, we created 50 datasets using the generation process of the model with observed budget, and another 50 datasets using the generation process of the model with implicit budget. We then estimated the corresponding model on each generated dataset to check whether we were able to recover the parameters used during data generation. All datasets were composed of 500 observations with four alternatives each. All models shared the specification described in Eqn. 19, but with the value of their parameters randomly drawn on each occasion from the distributions defined in Table 1. The range of parameters was informed by the models estimated in section 6 and the considerations discussed in section 4.2. All explanatory variables (z, x, y) followed a U(0,1) distribution, except for z_1 ~ Bernoulli(0.5). Prices were drawn from a U(0.1, 1) distribution, while the budget was set to 10 for the models with observed budget.

Table 1: Distributions used to draw parameters from when simulating datasets.
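As a concrete illustration of this simulation set-up, the sketch below generates the explanatory variables, prices and budget of one synthetic dataset following the description above; all names are hypothetical, and the parameter draws of Table 1 are indicated only by a placeholder comment, since the table's entries are not reproduced in the text.

```python
import numpy as np

def draw_dataset(n_obs=500, K=4, budget=10.0, seed=None):
    """Generate covariates for one synthetic dataset of the identifiability
    experiment: z, x, y ~ U(0,1) except z1 ~ Bernoulli(0.5); prices U(0.1, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.0, 1.0, size=(n_obs, 2))       # socio-demographics
    z[:, 0] = rng.integers(0, 2, size=n_obs)         # z1 ~ Bernoulli(0.5)
    x_attr = rng.uniform(0.0, 1.0, size=(n_obs, K))  # alternative attributes x
    y_attr = rng.uniform(0.0, 1.0, size=(n_obs, K))  # alternative attributes y
    prices = rng.uniform(0.1, 1.0, size=(n_obs, K))
    # True parameters (psi coefficients, gammas, deltas) would be drawn here,
    # once per dataset, from the distributions listed in Table 1.
    return dict(z=z, x=x_attr, y=y_attr, p=prices,
                budget=np.full(n_obs, budget))
```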
Constraints on estimated parameters

The derivation of the likelihood function relies on the assumption of the utility function being monotonically increasing with decreasing marginal returns of consumption. In other words, it assumes ∂U/∂x_k > 0, where U is the global utility. Failing to comply with this assumption renders the likelihood function invalid, as second-order derivatives of the Lagrangean would have to be checked to make sure the critical point is not a minimum. Furthermore, it could lead to the existence of multiple local critical points, i.e. the solution may not be unique, which is once again contrary to the assumptions made during the derivation of the likelihood function. The marginal utility of the outside good is always positive in both models proposed in this paper, but the marginal utility with respect to an inside good will only be positive when the inequality in Eqn. 20 is satisfied. Additionally, the argument of the logarithm inside W_k must be larger than zero, so as to avoid undefined operations. In the case of the model with observed budget, this translates into the inequality in Eqn. 21; in the case of the model with implicit budget, it implies that Eqn. 22 must be satisfied.

These conditions are functions of x_k, making their fulfilment dependent on the particular dataset at hand. We would instead like to derive dataset-independent conditions. This is possible by noting that the impact of x_k in both conditions is bounded by its exponential transformation to the interval 0 ≤ exp(−x_k) ≤ 1 (because x_k ≥ 0). This allows us to derive more general conditions than Eqns. 20, 21 and 22 by analysing the extreme cases x_k = 0 and x_k = ∞, as the value of the conditions for all other x_k values will fall between these. These extreme cases have the benefit of removing x_k from the conditions. Table 2 summarises the results of this analysis.

All conditions in Table 2 with zero on the right-hand side are always fulfilled because ψ_k, γ_k, p_k, Δ− and Δ+ are all equal to or bigger than zero. Eqn. 20 for x_k = ∞ will also always be true, as zero is approached from the right (i.e. from positive values). Combining the remaining conditions, the sufficient conditions for the model with observed budget can be summarised as in Eqn. 23, and the sufficient conditions for the model with implicit budget as in Eqn. 24. The conditions in Eqns. 23 and 24 are based on extreme cases, so they represent sufficient but not necessary conditions for the validity of the parameters. In other words, estimated parameters need only comply with Eqn. 20, and with Eqn. 21 or 22, but satisfying Eqn. 23 or 24 guarantees that those conditions are met.

If individuals in the dataset behave rationally and in accordance with economic theory, then the estimated parameters should naturally comply with Eqn. 23 or 24. At the time of writing, we have not experienced any issues with inconsistent parameters, nor have we had to impose parameter constraints during estimation to enforce compliance with these equations.
Suitability of a linear utility for the outside good

In the model with implicit budget, we propose a linear utility for the outside good as an approximation of the case where expenditure on the inside goods (i.e. the considered alternatives) is small compared to that on the outside (numeraire) good. In these cases, we expect only very small changes to the marginal utility of the outside good due to changes in the consumption of the inside goods. For example, consider consumption within the yoghurt product category. The expenditure on yoghurt will be small compared to the total expenditure on food, and even smaller compared to the entire disposable income of the household. By using the model with implicit budget, the modeller does not need to determine what the correct budget is, but only needs to know that total expenditure in the category of interest is small compared to the budget, whatever that may be.

If our interpretation is correct, then the forecast of the model with implicit budget should approach that of the model with observed budget when the expenditure on the outside good is large compared to that on the inside goods. We tested this assumption through simulation. We first created 30 different datasets of 500 observations each, assuming a data generation process with observed budget, i.e. using the model presented in section 2. Besides having an outside good, each dataset had four inside goods that were always available. The base utility of the outside good was set to zero, while the base utility of each inside good was composed of a single constant, each drawn from U(−2, 0), i.e. a uniform distribution between −2 and 0. Satiation parameters γ_k were drawn from U(0.5, 1.5) and δ_kl from U(−0.01, 0.01), while prices p_k followed a U(0.1, 1) and the budget was set to 10 for every observation. We measured the fit of each model on each dataset using the Root Mean Squared Error (RMSE) of the forecast aggregate demand in the whole sample. Results are exhibited in Figure 4. As Figure 4 shows, the fit of the model with implicit budget approaches that of the model with observed budget as the expenditure on the outside good increases. This indicates that the model with implicit budget is an appropriate approximation when the expenditure on the outside good is large relative to the expenditure on the inside goods.

Comparison with other MDC formulations

The MDC models presented in this paper are not the first in the literature to include complementarity, substitution or an implicit budget. In this section, we discuss other MDC models with these properties, and compare them to the models proposed in this paper. We begin with a very brief review of models without complementarity or substitution (other than income effects), which form the basis for more flexible models.

No complementarity or substitution, and an observed budget

One of the most popular models in this category is the MDCEV model by Bhat (2008). It is derived from the same consumer optimisation problem proposed in Eqn. 1, but uses a different functional form for the utility components. While there are several possible formulations, the most common one is the alpha-gamma formulation, as it allows for an efficient forecasting algorithm (Pinjari and Bhat, 2011). In this case, the utility takes the form described in Eqn. 25, where α can either tend towards zero during the estimation process, or the modeller can fix it a priori.
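For reference, the sketch below evaluates the generic alpha-gamma utility profile for inside goods as it commonly appears in the MDCEV literature; since Eqn. 25 itself is not reproduced here, this should be read as the standard form from Bhat (2008) rather than a verbatim transcription of the paper's notation.

```python
import numpy as np

def mdcev_inside_utility(x, psi, gamma, alpha):
    """Generic alpha-gamma utility of inside goods (cf. Bhat, 2008):
    u_k(x_k) = (gamma_k / alpha) * psi_k * (((x_k / gamma_k) + 1)**alpha - 1),
    tending to gamma_k * psi_k * log(x_k / gamma_k + 1) as alpha -> 0."""
    x, psi, gamma = (np.asarray(a, dtype=float) for a in (x, psi, gamma))
    if abs(alpha) < 1e-10:   # alpha -> 0 limit (logarithmic satiation)
        return float(np.sum(gamma * psi * np.log(x / gamma + 1.0)))
    return float(np.sum((gamma / alpha) * psi
                        * (((x / gamma) + 1.0) ** alpha - 1.0)))
```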
Parameter interpretation in the MDCEV model is essentially the same as in the models described in this paper, except for two differences. First, the outside good's marginal utility contains no covariates, but only a stochastic error term, i.e. ψ_0 = exp(ε_0). Second, α measures satiation across the whole choice set in MDCEV, and not the influence of covariates on the outside good's marginal utility as in the models proposed in this paper. And while it is possible to introduce explanatory variables into the base utility of the outside good in MDCEV models (either directly, or by including them with the same coefficient in all inside goods' base utilities), this is not commonly done in practice.

By setting u_kl = 0, the MDCEV model does not allow for pure complementarity or substitution effects, though product substitution can still take place due to income effects. Also, the form of u_0 requires the value of x_0, and therefore the budget, to be observed. Von Haefen and Phaneuf (2005) also present a similar model to MDCEV, but without an error term in the marginal utility of the outside good. Kim et al. (2002) use a similar utility function to the MDCEV model, but assume that the random disturbances follow a multivariate normal distribution; while more flexible, this distribution makes the model much more computationally demanding. Other models in this category include Habib and Miller (2008) and Habib and Miller (2009), who present models similar to that of Von Haefen and Phaneuf (2005).

Introducing complementarity and substitution through new functional forms

Vásquez Lavín and Hanemann (2008) propose a model formulation allowing for complementarity and substitution using a non-additively separable utility function and an observed budget. This formulation was later refined by Bhat et al. (2015), who called it the NASUF model. Beginning from the consumer optimisation problem set in Eqn. 1, the utility components are defined as described in Eqn. 26. The definition of u_kl makes the NASUF utility function non-additive, effectively introducing complementarity and substitution effects. A positive value of θ_kl is indicative of complementarity, while a negative one represents substitution, and θ_kl = 0 implies no complementarity or substitution. Yet, this formulation has three main drawbacks.

The first drawback is that the utility function is valid only for some values of θ_kl. Just as in the case of the models proposed in this paper, and as discussed in section 4.2, the derivation of the likelihood function assumes ∂U/∂x_k > 0. For this to be true, the inequality in Eqn. 27 must be satisfied. While it is possible to bound the value of parameters during estimation, the problem with the condition in Eqn. 27 is that it depends on the value of x_k. As the logarithm is not a bounded function, whether or not this condition is satisfied will depend on the level of consumption x of each individual, making it impossible to assess the correctness of a model without associating it with a particular dataset. This hinders model transferability from one dataset to another, and jeopardises forecasting, as only scenarios that fulfil the condition above should be permissible forecasts. If all individuals in the dataset behave in accordance with economic theory, then the parameters should automatically fulfil Eqn. 27. Yet, this does not prevent the estimation algorithm from trying parameter values violating Eqn. 27 during the parameter search. Furthermore, calculating the likelihood of the model requires calculating the logarithm of the expression in Eqn. 27, leading to an error if the expression is less than or equal to zero.

The second issue with the solution proposed by Bhat et al.
(2015) is that the stochasticity is introduced midway through the derivation of the model, in the Karush-Kuhn-Tucker conditions, and not in the initial formulation of the model. While this is merely a formal issue, it does imply that the origin of the randomness is not clear, and it is not possible to easily associate it with unobserved variables or measurement errors, as would be the case in more traditional econometric models.

The third issue is that the γ parameters have a role both in satiation and in the interaction term (i.e. complementarity and substitution) of the utility, making their interpretation difficult. Pellegrini et al. (2019) refine the model proposed in Bhat et al. (2015) by proposing a different interaction term in the utility function; while this new formulation leads to an improved fit and provides a clear interpretation of the γ parameters, it retains at least the first issue associated with the formulation of Bhat et al. (2015). Pellegrini et al. (2021a) further expand the NASUF model by allowing for two budget constraints in an application where both time and monetary constraints are considered jointly.

A similar formulation was proposed by Lee and Allenby (2009), but using a quadratic function to incorporate satiation, complementarity, and substitution. This model only considers inside goods, defining the global utility as a quadratic function of the consumption levels x_l (we assume only one product per category to simplify the analysis). Note that θ_kk is not restricted to zero in this case, as it is in the models proposed in this paper. The validity of the formulation rests on a condition which depends on the value of x_k, leading to the same issue already discussed in the context of the NASUF model.

Finally, Lee et al. (2010) propose a model allowing for asymmetric complementarity and substitution among categories of products. However, the formulation of the model does not satisfy the principle of weak complementarity (Mäler, 1974), i.e. that an individual's utility is not influenced by the attributes of non-consumed goods or, in other words, that goods provide utility only through their use. This is a reasonable assumption in cases where non-use values are believed to be absent or small (see von Haefen (2004) for a more detailed discussion).

Introducing complementarity and substitution through the indirect utility function

While in this paper we derived MDC models from the direct utility function of consumers, it is also possible to make assumptions on the indirect utility instead, and then calculate the optimal consumption using Roy's identity, as described in section 3.1 of Chintagunta and Nair (2011). Song and Chintagunta (2007) propose an MDC model following the indirect utility approach, considering not only a set of alternatives, but grouping them into categories, and assuming that at most one alternative inside each category is consumed. Furthermore, this model imposes a symmetry constraint on its complementarity and substitution parameters, as described in Eqn. 28:

Σ_{l=0}^{M} θ_kl = 0 ∀k (28)

where θ_kl represents the complementarity and substitution parameters (originally called β in Song and Chintagunta (2007)). Eqn. 28 forces the amount of complementarity and substitution of each product with all other products to add up to zero. But there are no theoretical reasons for this to necessarily be the case in any given application. This requirement prevents, for example, a product from having complementarity with only one other product while not having substitution with any other product. Mehta and Ma (2012) propose a model with a similar formulation to that of Song and Chintagunta (2007), but without the symmetry constraint. However, it requires the matrix of complementarity and substitution parameters (whose elements are θ_kl) to be positive semi-definite. Additionally, the likelihood function does not have a closed functional form, requiring multi-dimensional integration; and the number of parameters increases geometrically with the number of alternatives.
Introducing complementarity and substitution through correlation in utility functions

An alternative way to introduce complementarity and substitution into an MDC model is by introducing correlation across the utilities of alternatives. This can be done in two ways: (i) by directly correlating the random error term ε in the utility function of each alternative across multiple alternatives, or (ii) by adding new random error terms common to the utilities of multiple alternatives. Pinjari and Bhat (2010) use the first approach, using extreme value distributions to nest alternatives together into mutually exclusive subsets, allowing for perfect substitutes but not for complementarity. This approach was generalised by Pinjari (2011) by allowing for overlapping, non-exclusive nests, but still without allowing for complementarity. Bhat et al. (2013) make ε follow a multivariate normal distribution across alternatives, allowing for flexible correlation patterns. Calastri et al. (2020a) follow the second approach, using random intercepts and coefficients (β in our notation) correlated across alternatives.

As Pellegrini et al. (2021a) discuss, the main limitation of introducing complementarity and substitution through correlation in the utility functions of different alternatives is that of confounding effects. Indeed, using this approach it is impossible to discriminate correlation due to common heterogeneity in preferences from correlation due to complementarity and substitution. For example, two utilities could be positively correlated because they share unobserved attributes, and not because the alternatives are complementary.

Two-stage approaches to unobserved budgets

The necessity to observe the budget can lead to two separate issues. The first arises during estimation, in the case when the budget is not observed. This forces the modeller to assume some value for the budget before even estimating an MDC model. A common solution to this problem in past work has been to use the total expenditure as the budget. This is a strong assumption, as it implies that the total expenditure will not change as a function of prices or other attributes of the products. For example, it implies that consumers will spend the same amount regardless of the level of discount offered. The second problem due to the necessity of an observed budget in MDC models manifests during forecasting. Forecasting for any future scenario requires exogenously defining a budget. Any errors in the forecasting of the budget will cascade down to the MDC model, as shown in section 6.2.

In the literature, these problems have been addressed mostly through two-stage procedures where, in the first stage, a model is used to estimate (and predict) the budget, and in the second stage, a traditional MDC model with observed budget is used to allocate the budget to the different alternatives. Pinjari et al.
(2016) propose a two-stage approach. In the first stage, they use either a stochastic frontier or a log-linear regression to estimate the expected budget, and in the second stage they use the expected budget in an MDCEV model. They compare the performance of both approaches against arbitrarily determined budgets. When using the stochastic frontier method, they assume the budget to be an unobservable characteristic of decision makers, defined as the maximum amount they are willing to spend. This implies that the expected budget under this approach tends to be bigger than the total expenditure. The log-linear regression, on the other hand, attempts to predict total expenditure, so it leads to expected budgets that are of the same magnitude as the total expenditure. While both approaches offer similar performance, and both outperform the arbitrarily determined budget, the stochastic frontier approach leads to bigger expected budgets, therefore allowing for more variability in the forecast, as the total expenditure has room to grow if the attributes of the alternatives improve. This approach is also used by Pellegrini et al. (2021b).

Dumont et al. (2013) propose a different two-step approach to estimate the budget. In the first step, they estimate a Structural Equation Model (SEM) where the budget is a latent variable, whose structural equation has socio-demographics as explanatory variables. The budget can have several indicators, such as average expenditure in the category during the last three months, expected expenditure in the future, and ownership of goods from the same category. Income is also considered a latent variable, with at least the stated income as an indicator. More formally, the latent budget B_n and latent income I_n are linked through structural and measurement equations in which Z_n are the socio-demographics of individual n, y_nj is indicator j of the budget, S_n is the stated income, η_n, ξ_n, ε_nj and ε_ns are standard normal error terms, and ζ_z, ζ_I, λ_j, σ_j, λ_s and σ_s are parameters to be estimated. As expected, the authors report lower log-likelihoods when using the SEM approximation to the budget than when using maximum expenditure, but they also note an improvement in the significance levels of the MDC parameters. They do not report changes in forecast performance, making it difficult to evaluate the performance of the proposed approach.

Other MDC models with implicit budget

Other models in the literature have also used linear utility functions for the outside good, in the same way as the models proposed in this paper. This functional form leads to a likelihood function that does not depend on the budget, effectively allowing for unobserved budgets. In the context of the MDCEV model and its derivations, Bhat (2018) was the first to propose using a linear utility function for the outside good. This functional form, however, was not motivated by the need to drop the budget from the model formulation, but was used to allow for more separability between the parameters that determine the discrete choice (i.e. what to choose) and those that determine the continuous choice (i.e. how much to choose). Therefore, this property of the model is hardly explored in that paper. More recently, Saxena et al.
(2022) discussed the consequences of using a linear utility for the outside good in models with additively separable utility functions. Such a configuration leads to models that do not consider complementarity, substitution, or income effects, therefore making the demand for one product independent from that for another, unlike the model proposed in this paper (though it does allow for parameterising ψ_0). Similarly to our own advice, they recommend using a linear utility function for the outside good only when the total expenditure on the inside goods is no more than 35% of the budget (or, more strictly, less than 5%). If the expenditure on inside goods is higher than those values, they find bias in the model estimates and poor forecasting performance. While we did not find evidence of biased parameters in the proposed model (see Figure 3), we did find evidence of poor forecast performance (see Figure 4). The absence of parameter bias in the proposed model could be due to its inclusion of complementarity and substitution effects, and to the fact that the error term follows a Normal distribution instead of a Gumbel distribution.

Model application and comparison

In this section we apply the proposed models to four different datasets. The first dataset records time use, where all participants face the same budget (24 hours a day), and all alternatives (in this case, activities) have the same price (one unit of time). This dataset allows us to measure how much fit is lost when using the model with implicit budget when the budget is in fact known, as well as to compare the proposed models against a model without complementarity or substitution. The second dataset deals with household expenditure, where budgets vary between different households, but consumption is aggregated into categories, so prices are still unitary (one unit of money). This dataset helps us illustrate how the fit of the model with observed budget degrades when the budget is misspecified, a case particularly relevant in forecasting. The third dataset contains scanner data from a supermarket, where both budgets and prices vary from one observation to the next. This dataset allows us to compare the price sensitivity of the models with observed and implicit budget. The last dataset reports the number of trips performed by travellers for different purposes. This dataset is a case where the very definition of a budget is problematic, as there is no evident limit on the number of trips during a day.

Fixed budget and fixed prices: time use dataset

The first dataset records the time use of 447 individuals across 2,826 days in total. Details about the data collection can be found in Calastri et al. (2020b), and applications to time use analysis using this data can be found in Calastri et al. (2019) and Palma et al. (2021). Only out-of-home activities are registered in the dataset, which we aggregate into six categories plus the outside good, as described in Table 3. We estimated three different models using the Time Use data. First, we estimated a traditional MDCEV model (Bhat, 2008), which has an observed budget and no complementarity. We also estimated the first model proposed in this paper (eMDC1), with an observed budget, complementarity and substitution. Finally, we estimated the second model proposed in this paper (eMDC2), with an implicit budget, complementarity and substitution. In the case of time use, the budget is observed (24 hours a day for everyone) and remains unchanged in forecasting scenarios, giving a clear advantage to the MDCEV and eMDC1 models.
Nevertheless, we are interested in exploring the consistency of results across the models with observed budget, as well as the loss of fit in the eMDC2 model (which uses an implicit budget) with respect to the others. We estimated the models using 70% of the sample, and forecast for the remaining 30%. Table 4 presents the estimated parameters, likelihood and root mean squared error (RMSE) of the forecast consumption at the aggregate sample level for each model.

The parameter estimates point towards consistent effects across models: while parameters change in magnitude across models, their signs remain unchanged. Parameter interpretation is equivalent across models, except for α. In the MDCEV model, α measures satiation across all alternatives. Instead, in the proposed eMDC models, α represents the impact of the associated explanatory variable (z_0) on the marginal utility of the outside good (ψ_0). In the proposed models, α > 0 (α < 0) implies a positive (negative) effect of z_0 on ψ_0, and therefore an increased (decreased) consumption of the outside good and a decreased (increased) consumption of the inside goods when z_0 grows. In this particular application, the negative sign of α_female indicates that, after controlling for other variables, women on average perform more out-of-home activities than men.

Concerning the β parameters, all of them are negative because all "inside" activities are less common than the "outside" activity (staying at home, see Table 3). These parameters become more negative as the engagement with their corresponding activity decreases, except for leisure and work in eMDC1, probably due to the effect of interactions. As expected, working full time increases the chance to engage in work activities, while the weekend decreases it but increases the chance of engaging in leisure activities; and being 30 years old or younger increases the probability of engaging in school activities. The γ parameters follow a similar trend, with higher values associated with activities performed for longer periods of time. The only exception is school, which has a large γ parameter despite being consumed for shorter periods than leisure, probably to compensate for its small ψ_school.

Only the eMDC models provide information on complementarity and substitution, through their δ parameters, which are fairly consistent across eMDC1 and eMDC2. As expected, there is substitution between work and school, because few people work and study concurrently. On the other hand, we observe complementarity between shopping, private business and leisure, probably because all of these activities are often performed in the city centre, and are therefore easier to chain into a single trip. As Table 3 shows, correlations between time consumptions are negative for all pairs of activities, because of the fixed budget and the competing nature of the activities. Yet we do observe that correlations with a magnitude smaller than 0.05 tend to be associated with complementarity effects. In section 6.3, we again compare correlations and complementarity/substitution parameters, but in a dataset where the budget constraint is less binding, finding a much stronger connection between them.
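The comparison between raw consumption correlations and the estimated δ parameters can be automated; the following is a minimal sketch, assuming consumption is stored as an observations-by-alternatives array and the estimated δ values as a symmetric matrix (both hypothetical containers).

```python
import numpy as np

def compare_corr_and_delta(consumption, delta, weak=0.05):
    """Cross-tabulate pairwise consumption correlations against the estimated
    complementarity/substitution parameters delta_kl."""
    corr = np.corrcoef(consumption, rowvar=False)  # K x K correlation matrix
    K = corr.shape[0]
    for k in range(K):
        for l in range(k + 1, K):
            label = "complementarity" if delta[k, l] > 0 else "substitution"
            flag = " (weak correlation)" if abs(corr[k, l]) < weak else ""
            print(f"goods {k}-{l}: corr={corr[k, l]:+.3f}, "
                  f"delta={delta[k, l]:+.4f} -> {label}{flag}")
```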
Concerning fit, the eMDC1 model achieves the lowest RMSE of the three models, followed by eMDC2 and MDCEV. We expected eMDC1 to achieve the best fit, as it uses all the available information, including the total consumption or budget, and it includes complementarity and substitution effects. On the other hand, it was hard to predict which of the other two models would achieve the second-best fit, as the MDCEV model omits complementarity and substitution, while the eMDC2 model does not use information about the budget. In this particular case, the eMDC2 model fit better than MDCEV, but this is probably a dataset-dependent result, and may change in other study scenarios. The log-likelihood is not comparable across models, as they have different formulations, making the RMSE a better indicator of fit. In summary, when the budget is known, and will be known in future scenarios where forecasting is relevant, we recommend using the eMDC model with observed budget.

Variable budget and fixed prices: household expenditure dataset

Parameter estimates of models eMDC1-100 and eMDC2 on the household expenditure dataset are presented in Table 6, with the numeric suffix of eMDC1 indicating the budget assumed during estimation. Parameter estimates of eMDC1-80 and eMDC1-120 followed similar trends, and are available from the authors. The α, β and γ parameters follow a similar trend in models eMDC1-100 and eMDC2. Results indicate that having a female or older household head increases the marginal utility of the outside good (i.e. decreases expenditure on the inside goods), while a more educated household head has the opposite effect. These effects can be explained by the low female participation in the labour market (Contreras and Plaza, 2010), higher levels of education among younger individuals (Organisation for Economic Co-operation and Development, 2009), and a strong correlation between level of education and income among the Chilean population (Bilbao, 2013). Among the β parameters, we observe that a higher number of adults, children, elders, workers and students per household increases the chance of spending money on alcohol, clothing, health, transport and education, all of which are reasonable effects. Furthermore, the estimates of the γ parameters indicate that larger households tend to spend more on food, transport, communications, leisure, education and others, but not necessarily on alcohol, clothing, homeware, health, and restaurants, as these categories are more discretionary.

The complementarity and substitution parameters δ are particularly different between the model with observed and implicit budget (eMDC1-100 and eMDC2, respectively). While the model with observed budget captures substitution between multiple pairs of categories, the model without it is dominated by complementarity. This is because, when the budget is not controlled for, all categories of consumption seem to increase or decrease in tandem, as a higher (lower) income implies a higher (lower) expenditure across all categories. In other words, the income effect is confounded with complementarity in the model with implicit budget, as discussed in section 4.1.

Our main objective with this dataset was to analyse how errors in the definition of the budget lead to different forecast errors in models with observed budget. To do this, we first estimated the models using 70% of the full sample (training dataset), and then forecast demand on the remaining 30% of observations (validation dataset) multiple times, assuming a different value of the budget on each occasion. We repeated this for each of the eMDC1 models we estimated. Different budgets lead to different forecasts in the eMDC1 models, but not in the eMDC2 model.
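The budget-sensitivity exercise reduces to a simple loop over assumed budgets; below is a minimal sketch, where forecast_eMDC1 is a hypothetical routine returning the validation-sample demand forecast under a given budget.

```python
import numpy as np

def budget_sweep(forecast_eMDC1, observed_demand, candidate_budgets):
    """Forecast validation-sample demand under a range of assumed budgets and
    report the aggregate RMSE for each (cf. Figure 5)."""
    results = {}
    for b in candidate_budgets:
        predicted = forecast_eMDC1(b)  # aggregate demand assuming budget b
        rmse = float(np.sqrt(np.mean((predicted - observed_demand) ** 2)))
        results[b] = rmse
    return results                     # assumed budget -> forecast RMSE
```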
Figure 5 presents the results of this exercise. We used the root mean squared error (RMSE) of the aggregate predictions in the validation sample as an indicator of forecast error. As Figure 5 shows, the forecast performance of the model with implicit budget (eMDC2) does not change as a function of the budget. Instead, the eMDC1 models achieve a better forecast performance when the forecast budget is close to the estimation budget, but their error grows quadratically with the budget misspecification. How the estimation budget is defined in eMDC1 models does not seem to be very important; for example, the estimation budget could be defined as the total income of the household, or just as the total expenditure on the inside goods plus one. However, once a budget has been used during estimation, it is very important to accurately and consistently predict the budget for any forecasting scenario, otherwise the forecast error can grow considerably. These results reveal that, in contexts where the forecasting of the budget implies even mild uncertainty, the proposed model with implicit budget can ensure a bounded level of error in the forecast.

Variable budget and variable prices: supermarket scanner dataset

The third application deals with scanner data from a chain of supermarkets (Venkatesan, 2014). After dropping all records of transactions from households with missing socio-demographic characteristics, and limiting the analysis to only four product categories, the dataset contains 4,002 purchase baskets from 656 households. All the considered product categories are fresh fruits: oranges, peaches, pears, and pineapples. Each fruit can be purchased in packs of different weights but, to simplify the analysis, we calculated the average price per kg of each product and expressed the amount purchased in kg. Table 7 summarises consumption in the dataset.

Our objective with this dataset was to compare the models with observed and implicit budget in terms of their sensitivity to changes in price. We estimated two models on the supermarket dataset: eMDC1 is the model with observed budget, which we set to the observed consumption plus one; the second model (eMDC2) assumes an implicit budget. The parameter estimates and log-likelihood at convergence of these models are shown in Table 8. Non-significant parameters were not removed from the model formulation. To compare their sensitivity to price, we changed the price of oranges between 70% and 130% of its original value, and calculated both models' aggregated forecast demand on the training dataset. Figure 6 plots the demand forecast by each model for different prices.

As can be seen in Figure 6, both models predict a similar demand for the product whose price changes (oranges), but offer different predictions for the other products, whose prices remain constant. This is because the income effect is only present in the model with observed budget, pushing for a much more dramatic reassignment of consumption when the price changes. On the other hand, the model with implicit budget assumes a large unobserved budget, inducing smaller reassignment effects caused only by the δ parameters. Assuming a larger budget in eMDC1 would decrease the sensitivity of the forecast demand among the products whose price does not change, making it more similar to the forecast of the eMDC2 model (not reported). Based on the available data, we cannot determine which of the two predictions is more accurate, as we are forecasting for unobserved prices.
The complementarity and substitution (δ_kl) parameters are significantly different across models. While eMDC1 captures only complementarity, eMDC2 captures both complementarity and substitution. This is because the δ parameters in eMDC2 are not only capturing the complementarity and substitution effects, but are also confounded with the income effect. This is apparent as the signs of the δ parameters in eMDC2 mirror those of the correlations of demand in the dataset (see Table 7). This also explains why the δ parameters in eMDC2 have higher t-ratios, as they are used to capture any interaction between the demands for different products, be it due to complementarity, substitution, or income effects. Larger budgets (as compared to the expenditure on inside goods) will reduce the size of income effects, making the model with implicit budget more suitable for such scenarios.

No evident budget: number of trips dataset

The last application deals with the number of trips generated by a household, split across different purposes: work, study, personal business, leisure and return home. Data comes from the 2012 Origin-Destination survey of Santiago, Chile (Observatorio Social, 2014). The database contains observations for a single day from 10,927 households. Table 9 summarises the average number of trips per purpose by households' number of vehicles and income.

Our objective with this dataset is to compare the out-of-sample forecast performance of the proposed models with explicit and implicit budget (eMDC1 and eMDC2, respectively) when the definition of the budget is arbitrary. In theory, the budget in our dataset should be the maximum number of trips a household could generate during a day, but this value is very difficult to determine. Defining the budget as any lower (but more reasonable) value would be an arbitrary decision. A common approach in situations without an evident budget is to use the observed total consumption as the budget (Bhat and Sen, 2006). We follow this approach when estimating eMDC1, assuming the budget to be equal to the observed total number of trips plus one, so that the "outside good" is always consumed. However, this strategy poses a problem when predicting out of sample, as the budget needs to be predicted using an auxiliary model. To reproduce this situation, we estimate our models using only 70% of the whole sample, and predict for the remaining 30%. In the case of eMDC1, we predict the budget using a linear regression estimated on the training data. In the case of eMDC2, we have no need to make assumptions on the budget nor to use an auxiliary model for out-of-sample prediction, as the budget is not needed for estimation or forecasting.
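The auxiliary budget regression used for eMDC1's out-of-sample forecasts can be sketched with ordinary least squares on a socio-demographic design matrix; the variable names below are hypothetical, and the study's exact regression specification is not reproduced here.

```python
import numpy as np

def fit_budget_regression(Z_train, budget_train):
    """Fit budget ~ socio-demographics by OLS on the training sample."""
    X = np.column_stack([np.ones(len(Z_train)), Z_train])  # add an intercept
    coef, *_ = np.linalg.lstsq(X, budget_train, rcond=None)
    return coef

def predict_budget(coef, Z_new):
    """Predict budgets for validation observations, to be fed into eMDC1."""
    X = np.column_stack([np.ones(len(Z_new)), Z_new])
    return X @ coef
```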
In both eMDC1 and eMDC2, we use a linear function with the same socio-demographics to explain the base utility of the outside good (ψ_0). The base utility of each inside good and its satiation are described by a single constant each. The linear regression used to predict the budget uses the same socio-demographics as explanatory variables as the discrete-continuous models. Table 10 presents the coefficients of each model estimated with the training dataset (70% of the whole sample), and their forecast performance when predicting on the validation dataset (the remaining 30% of the sample). Table 11 presents the complementarity/substitution (δ) parameters of both eMDC1 and eMDC2. Establishing parallels between the parameters of both models is difficult. In the model with observed budget (eMDC1), the effect of socio-demographics has two components: their effect on the budget prediction, and their effect on the multiple discrete-continuous model itself. The model with implicit budget (eMDC2), on the other hand, does not have this complexity.

In summary, this paper proposed two new MDC models: one introducing complementarity and substitution effects under an observed budget, and another that, additionally to these effects, does not require the analyst to define a budget. The inclusion of explicit complementarity and substitution effects enriches the interpretability and realism of the model (Manchanda et al., 1999), while its functional form avoids issues present in previous formulations proposed in the literature (see section 1). The second model, with its implicit budget, is particularly useful when forecasting, as it avoids cascading errors due to inaccurate budget predictions (see section 6.2).

The model with implicit budget is based on the hypothesis that total expenditure on the alternatives under consideration is small compared to the overall budget. This hypothesis allows us to approximate the utility of the numeraire good by a linear function, hence removing the necessity to define a budget. This approximation comes at the cost of reduced fit, as compared to the model with observed budget. However, simulations show that the fit of both models converges when the hypothesis above is fulfilled (see section 4.3). Such an assumption is realistic in most daily consumption decisions, but should always be justified when using the model. In general, if the budget can be determined with a great degree of confidence in forecasting scenarios, then we recommend using the model with observed budget. But if there is significant uncertainty in the budget prediction, the model with implicit budget can be a useful alternative, as it makes the prediction error independent from the budget estimation.

A computational implementation of the proposed model is available for R, as an extension of the Apollo package (Hess and Palma, 2019). To download this extension and see examples, visit ApolloChoiceModelling.com.
The models proposed in this paper contribute to the literature on Kuhn-Tucker system demand models to study multiple-discrete choices. There are still several avenues for improvement and further investigation. New functional forms for the complementarity and substitution term in the direct utility function could be explored, with special emphasis on those leading to a compact form of the Jacobian in the likelihood function. More generally, including a random component in the marginal utility of the outside good would be a useful development, especially if it leads to a closed-form likelihood function. Alternative formulations based on indirect utility functions could be less restrictive, as they avoid assumptions on the shape of decision makers' direct utility functions. The model formulation could also be modified to incorporate multiple constraints, for example a monetary and a time budget, or a storage capacity. Of particular interest would be an approach that mixes constraints with an explicit and an implicit budget. Finally, an empirical comparison of alternative formulations for the complementarity and substitution component of the utility, as well as for the utility of the outside good, is of much interest, especially given the recent developments in Bhat (2018) and Pellegrini et al. (2021a).

Figures 2 and 3 summarise the true and estimated parameters for the models with observed and implicit budget, respectively. In the graphs, the horizontal axis indicates the true value of the parameter, while the vertical axis indicates the estimated value, so a perfect recovery of a parameter is represented by a dot along the identity line (in blue). The graphs also contain the 95% confidence interval for each estimated parameter. Both figures offer a similar perspective: while all parameters are recovered correctly, α and β parameters are recovered more precisely, and γ and δ parameters (especially the latter) are harder to recover.

Figure 2: Recovery of parameters for the model with observed budget.
Figure 3: Recovery of parameters for the model with implicit budget.
Figure 4: Compared fit of models with observed and implicit budget, on data generated assuming a generation process with observed budget.
Figure 5: Comparison of forecast precision of the models with implicit and observed budget, when the budget is wrongly specified in the latter.
Figure 6: Relative aggregated sample demand forecast by the traditional and extended MDCEV models for variations in the price of oranges. The black line indicates unity (i.e. the original demand).
Table 2: Constraints on proposed model parameters for extreme levels of consumption x_k.
Table 3: Main descriptive statistics of the time use database. (* outside good; † when engaged)
Table 4: Comparison of the proposed extended MDC models and a traditional MDCEV model on a time use dataset.
Table 6: Comparison of the models with observed and implicit budget on the expenditure dataset.
Table 7: Main descriptive statistics of the supermarket scanner data.
Table 8: Parameter estimates of the models with observed and implicit budget on the supermarket scanner dataset.
Table 9: Main descriptive statistics of the number of trips database.
Table 10: Parameter estimates and forecast performance for models on the number of trips dataset. (* Robust t-ratio. † Calculated based on out-of-sample prediction.)
Examining the Transmission of Visible Light through Electrospun Nanofibrous PCL Scaffolds for Corneal Tissue Engineering

The transparency of nanofibrous scaffolds is of the highest interest for potential applications like corneal wound dressings in corneal tissue engineering. In this study, we provide a detailed analysis of light transmission through electrospun polycaprolactone (PCL) scaffolds. PCL scaffolds were produced via electrospinning, with fiber diameters in the range from (35 ± 13) nm to (167 ± 35) nm. Light transmission measurements were conducted using UV-vis spectroscopy in the range of visible light and analyzed with respect to the influence of scaffold thickness, fiber diameter, and surrounding medium. Contour plots were compiled for straightforward access to light transmission values for arbitrary scaffold thicknesses. Depending on the fiber diameter, transmission values between 15% and 75% were observed for scaffold thicknesses of 10 µm. Light transmission improved with decreasing fiber diameter, as well as with matching refractive indices of the fiber material and the medium. For corneal tissue engineering, scaffolds should be designed as thin as possible and fabricated from polymers with a refractive index matching that of the human cornea. Concerning fiber diameter, smaller fiber diameters should be favored for maximizing graft transparency. Finally, a novel, semi-empirical formulation of light transmission through nanofibrous scaffolds is presented.

Introduction

In the field of tissue engineering, electrospun scaffolds are commonly used [1]; however, their optical properties are in general of minor importance in most applications. In the case of tissue engineering for ophthalmic applications, the transparency of the graft is of the highest interest. The cornea is the window of the eye, and its transparency is essential for human beings. Recently, electrospun scaffolds have been discussed for use in ophthalmic applications such as wound dressings after corneal surgery [2][3][4][5][6] or as artificial DMEK (Descemet Membrane Endothelial Keratoplasty) grafts [7,8] for treating patients with corneal endothelial cell pathologies. In both cases, the transparency of the scaffold is of major importance for patients' immediate benefit after surgery. The transparency of a healthy cornea, which is the reference material in this case, is 85-99% in the visible spectrum [9]; hence, a similar transparency is sought for artificial grafts. For corneal tissue engineering, in addition to xenogeneic tissue like decellularized corneas [10][11][12], different materials and approaches have been investigated, including nanofibers [5,7,13], hydrogels [4,14,15], and composites thereof [16][17][18]. The transparency of the investigated materials was usually determined by light transmission measurements of individual samples with discrete scaffold thicknesses, and no general transmission study was conducted [6,7,13,17]. The comparison of individual scaffolds with different specifications, such as material or fiber diameter, always presents the problem of insufficient accuracy in scaffold thickness. Beyond the field of biomaterials, the optical properties of nanofiber scaffolds have mostly been investigated for optoelectronic and energy-related developments to enhance device efficiency [19][20][21]. PCL is a well-studied material in the field of tissue engineering, in particular in the field of corneal tissue engineering [22].
Although PCL is known for its opacity, it seems worthwhile to study it further due to its remarkable properties: it is biodegradable, easy to blend with other polymers, and mechanically strong. So far, only a few studies have been conducted on the transparency of PCL nanofiber scaffolds. For example, Park et al. [23] measured light transmission through electrospun PCL scaffolds using two different fiber diameters and wavelengths. Using an integrating sphere, Park et al. were able to measure the directly transmitted and reflected fractions of the incident beam. Their observations indicated that scattering by the nanofibrous structure is the dominant factor, compared to light absorption by the material. However, only two discrete wavelengths were investigated, and the influence of a surrounding medium was neglected.

From a physical point of view, the transmission of an electromagnetic wave through a medium can be defined as

T = I/I_0 (1)

and describes the transparency of a material. The incremental decrease in light intensity dI within an infinitesimal distance dx is proportional to the incident beam I,

dI = −µ I dx (2)

which can be simply integrated to

I = I_0 exp(−µ x) (3)

for I = I_0 at x = 0. The parameter µ, known as the extinction coefficient, describes the absorption and scattering of the electromagnetic wave within the volume and can be written as

µ_total = µ_absorption + µ_scattering (4)

Additionally, the incident electromagnetic wave can be reflected at the interface between two optically adjacent phases, characterized by their refractive indices n_i. Assuming normal incidence and neglecting the polarization of the light, the Fresnel equations [24,25] yield the reflectance R = ((n_1 − n_2)/(n_1 + n_2))², reducing the light transmission at each interface to

T_reflection = 1 − R = 4 n_1 n_2 / (n_1 + n_2)² (5)

where n_1 and n_2 are the refractive indices of the surrounding medium and the material, according to Figure 1a. When an electromagnetic wave passes through a volume, reflection occurs at the n_1/n_2 as well as at the n_2/n_1 interface. Thus, combining Equations (3)-(5) and neglecting multibeam interference, the overall light transmission through a homogeneous volume of thickness d can be written as

T = I/I_0 = T²_reflection exp[−(µ_absorption + µ_scattering) d] (6)

In the case of a nanofibrous scaffold of thickness D consisting of nanofibers with a fiber diameter d, as displayed in Figure 1b, µ_absorption describes light absorption within each fiber, and µ_scattering describes light scattering at the individual fibers. The scattering coefficient µ_scattering depends on the scattering cross section of the scatterers, i.e., the nanofibers. The scattering cross section of thin fibers was first described by Rayleigh in 1881 [26], and a detailed derivation can be found in [27]. The wavelength-dependent total scattering cross section per unit length of a single isolated fiber of random orientation, with its fiber axis in the y-z-plane and an incident beam perpendicular to the fiber axis and therefore normal to the y-z-plane, is given by

σ_scattering(λ) = (n_1³ π³ / λ³) (π r²)² (m² − 1)² [1 + 2/(m² + 1)²] (7)

where r is the fiber radius, λ is the wavelength, and m is the ratio between the refractive indices n_1 (fiber) and n_2 (medium). Derived from the dielectric needle approximation, Equation (7) has been used extensively to describe the natural transparency of the mammalian cornea [28][29][30][31][32]. For a porous scaffold, which is the case for electrospun scaffolds, a reduction in light transmission occurs for every interaction with individual fibers.
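Equations (1)-(6) combine into a simple transmission estimate for a homogeneous slab; the sketch below is a direct transcription of Eq. (6), and the example values in the final comment are illustrative assumptions (the refractive indices quoted later for PCL and PBS, with an arbitrary scattering coefficient), not measurements.

```python
import numpy as np

def slab_transmission(d_um, mu_abs, mu_sca, n_medium, n_material):
    """Light transmission through a homogeneous slab of thickness d (in µm),
    per Eq. (6): two Fresnel interface losses plus exponential extinction."""
    R = ((n_medium - n_material) / (n_medium + n_material)) ** 2  # reflectance
    T_reflection = 1.0 - R                                        # Eq. (5)
    return T_reflection ** 2 * np.exp(-(mu_abs + mu_sca) * d_um)  # Eq. (6)

# Illustrative call, e.g. a 10 µm slab of PCL (n = 1.46) in PBS (n = 1.33)
# with an assumed scattering coefficient of 0.1 per µm:
# slab_transmission(10.0, 1e-4, 0.1, n_medium=1.33, n_material=1.46)
```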
The total light transmission through a nanofibrous scaffold should therefore be describable through the scaffold's thickness, the diameter of the nanofibers, and the refractive indices of the fiber material and the surrounding medium.

Figure 1. Schematic of an incident beam with intensity I_0 in the x direction passing through (a) a homogeneous volume of thickness d with the optical interfaces at the n_2/n_1 and n_1/n_2 transitions and (b) a planar scaffold in the y-z-plane of thickness D, consisting of single nanofibers with fiber diameter d. Propagation of the incident beam is in the x direction.

For an application-oriented field of research, such as corneal tissue engineering, a general equation describing the parameters influencing nanofibrous scaffold transparency is essential. Therefore, in this study, electrospun PCL nanofibrous scaffolds with different fiber diameters were investigated regarding their optical properties.
Using UV-vis spectroscopy measurements, light transmission through the scaffolds was analyzed with regard to scaffold thickness, fiber diameter, and surrounding medium. Using statistical modelling, power laws were derived for an appropriate description of the data within the experimental error. Finally, design principles were formulated from the experimental findings to promote further research in the field of corneal tissue engineering.

Materials and Methods

Polycaprolactone (PCL) nanofiber scaffolds were produced via electrospinning. The method is well described in the literature (e.g., [33]; a theoretical description can be found in [34]). In brief, a polymer melt or polymer solution is extruded through a needle. The polymer solution is stretched by the electrical forces in the electric field, which is set between the needle and a grounded collector. By varying the polymer concentration, different fiber diameters can be fabricated. The spinning solution was prepared from PCL (M_W = 80,000 g mol⁻¹, Sigma Aldrich, Saint Louis, MO, USA) dissolved in a 7:3 mixture of formic acid and acetic acid (both Carl Roth GmbH + Co. KG, Karlsruhe, Germany). Fiber diameters were evaluated using SEM images (CrossBeam, Carl Zeiss Microscopy GmbH, Oberkochen, Germany) and the ImageJ software. In preliminary experiments, a working window was identified for each solution, focusing on a homogeneous fiber morphology and a sufficient fiber yield. The electrospinning parameters as well as the resulting mean fiber diameters ± standard deviation are given in Table 1.

Table 1. Parameters for the electrospinning of PCL scaffolds from spinning solutions with varying concentrations from 5 g/100 mL to 16 g/100 mL, and resulting fiber diameters.

With increasing spinning time, the scaffold thickness could be adjusted. Due to the similar flow rates, increasing spinning concentrations, and thus fiber diameters, led to a reduced spinning time for the desired scaffold thicknesses. Scaffolds were fabricated with desired thicknesses from 1 µm to 50 µm. Within this range, application-oriented conclusions towards predicting light transmission through the nanofibrous scaffolds could be drawn.

Scaffolds were fixed in tissue carrier rings (9 mm inner diameter, Minucells and Minutissue, Bad Abbach, Germany), and the thickness of each scaffold was measured using a digital contact sensor (GT series, Keyence, Itasca, IL, USA). To this end, the scaffolds, fixed in the tissue carrier rings, were sandwiched between a cylindrical base (8 mm in diameter) and a circular glass platelet (4.5 mm in diameter). Subsequently, the net thickness was measured over a scaffold area of approximately 16 mm². For the measurement of light transmission through the scaffolds, a UV-vis spectrometer (Specord 210 plus, Analytik Jena GmbH, Jena, Germany) was used. To this end, the scaffolds were placed in a cuvette of proprietary design (Figure 2), ensuring that the scaffolds were kept in place perpendicular to the incident, monochromatic beam. The cuvette was filled with either ethanol (EtOH) (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) or phosphate-buffered saline (PBS) (VWR International GmbH, Darmstadt, Germany) to investigate the influence of different surrounding media. Light transmission measurements were conducted from 380 nm to 780 nm with an increment of one nanometer. Prior to every measurement, a calibration scan was performed to normalize the measured intensity to the experimental set-up, such that I_0(λ) = 100%.
For each scaffold type, at least 50 scaffolds were measured, resulting in over 250,000 individual wavelength-transmission data points. From the individual wavelength-transmission data, discrete thickness-transmission data for defined wavelengths were plotted, as shown in Figure 3. Starting from 380 nm, with an increment of 10 nm, fit lines were plotted using an exponential decay function of the form

T(D) = (100% − T_background)·e^(−m·D) + T_background  (8)

where T_background accounts for the diffuse light transmission of thick scaffolds >50 µm, for which the measured light transmission is usually in the range of a few percent. Utilizing Equation (8), an optimal description of the data in the thickness range of interest was reached. Data fitting was performed as a two-stage process using Origin 2019 (OriginLab Corporation, Northampton, MA, USA). After the first fitting, data points with an individual residual higher than 1.5 times the externally studentized residual of the fit function were removed from the dataset, and fitting was then repeated with the processed dataset. Usually, outliers originated from false thickness measurements, due to the thickness measurement in contact mode or to an inhomogeneous thickness distribution of the scaffolds. Finally, fit lines were combined in a contour plot, with the wavelength on the x-axis, the scaffold thickness on the y-axis, and the light transmission as colored grading. Between the fit lines, a linear interpolation was presumed. With this approach, errors in the determination of the scaffold thickness or light transmission could be eliminated by averaging a large amount of data. From the contour plot, contour lines of arbitrary thickness can be extracted for an exact comparison of different experimental groups. To give an estimation of the experimental error, contour lines are presented with error bars indicating the 95% confidence interval of the discrete fit lines.

Figure 3. From the individual transmission values, transmission-versus-scaffold-thickness plots were generated for discrete wavelengths. Using the fit lines, contour plots were generated for individual scaffolds and enclosing media. Using the contour lines, a comparison between different scaffolds and environmental parameters at various scaffold thicknesses could be made.

Further evaluations of the experimental data were conducted at a wavelength of 589 nm due to the availability of refractive indices, as the D-line of the sodium spectrum is usually used for determining the optical properties of materials.
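The two-stage fit is straightforward to reproduce. Below is a minimal Python sketch using SciPy, assuming the decay form of Equation (8) as written above; the outlier rule uses a fixed multiple of the residual standard deviation as a simplified stand-in for Origin's externally studentized residuals, and the demonstration data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(D, m, T_bg):
    # Exponential decay of transmission with scaffold thickness D (Equation (8));
    # T_bg captures the diffuse background transmission of very thick scaffolds.
    return (100.0 - T_bg) * np.exp(-m * D) + T_bg

def two_stage_fit(D, T, cutoff=1.5):
    """First fit, drop points whose residuals exceed `cutoff` times the residual
    spread, then refit (a simplification of the studentized-residual criterion)."""
    popt, _ = curve_fit(decay, D, T, p0=(0.05, 5.0))
    resid = T - decay(D, *popt)
    keep = np.abs(resid) <= cutoff * np.std(resid)
    popt, _ = curve_fit(decay, D[keep], T[keep], p0=popt)
    return popt, keep

# Synthetic demonstration data (thickness in um, transmission in percent)
rng = np.random.default_rng(0)
D = rng.uniform(1, 50, 200)
T = decay(D, 0.054, 3.0) + rng.normal(0, 2, D.size)
popt, keep = two_stage_fit(D, T)
print("m = %.3f 1/um, T_background = %.1f%%" % tuple(popt))
```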
The absorption coefficient of PCL was presumed to be 0.0001 µm⁻¹, and the considered refractive indices of the used materials were 1.36 for ethanol, 1.33 for PBS, and 1.46 for PCL [23,35,36]. For a simplified and easy-to-use formulation of transmission through nanofibrous scaffolds at distinctive wavelengths, a semi-empirical approach using regression analysis was adopted in Statistica 10 (StatSoft Inc., Tulsa, OK, USA). In total, approximately 250,000 individual experimental data points were modeled, and a semi-empirical model, depending on the scaffold properties and the surrounding medium, was formulated.

Results and Discussion

The individual transmission measurements of scaffolds with arbitrary thickness were the basis of the following results. Evaluating the transmission as a function of scaffold thickness for discrete wavelengths opened the possibility to analyze light transmission through electrospun scaffolds and to compare scaffolds of arbitrary thickness with regard to their transparency. Figure 4 shows schematically the fit functions of all six sample groups for a discrete wavelength of 589 nm. As expected from Equation (3), light transmission decreased exponentially with increasing scaffold thickness.
Sufficiently high transmission values were only obtained below 5 µm, whereby scaffolds with thinner fiber diameters showed a higher light transmission in general. As displayed, the parameter m from Equation (8), and thus the light attenuation, increased with increasing fiber diameter from 0.054 to 0.089. The highest light transmission could therefore be attributed to scaffolds consisting of fibers with a diameter of 35 nm (Figure 4, top left). The parameters of the scaffolds with fiber diameters of 103 nm and 136 nm slightly diverged from the overall trend. This could be due to insufficient data points in the relevant thickness range, resulting in poor data fitting. Moreover, the broad fiber diameter distribution meant that the median fiber diameters of the samples from 103 nm to 136 nm were not significantly distinguishable. Nevertheless, it is clear from Figure 4 that with increasing fiber diameter, the coefficient m increased, and transmission of visible light through the scaffolds decreased.

Individual Transmission Measurements and Resulting Contour Plots

The empirical description of light transmission, as shown in Figure 4, was evaluated for discrete wavelengths from 380 nm to 780 nm with an increment of 10 nm. From the combination of fit lines, contour plots were generated, as displayed in Figure 5. The fit lines became vertical lines in Figure 5, with transmission as color grading from red (0% light transmission) to green (100% light transmission). Light transmission >85% characterized a scaffold transparency comparable to that of the human cornea [9]. Again, it became clear that light transmission values above 85% were only accessible for thin scaffolds. With increasing scaffold thickness, light transmission was reduced to insufficient values for all types of scaffolds. The concept presented in this study, using discrete wavelengths and resulting contour plots, as shown in Figures 4 and 5, may serve as a tool to decide on the maximum scaffold thickness for a desired light transmission, or vice versa. This represents a novel approach for the characterization of scaffolds for corneal tissue engineering. Formerly, for a meaningful comparison, scaffolds of similar thickness had to be produced. Now, for the first time, light transmission through nanofibrous scaffolds can be compared, not only for existing scaffolds but also for scaffolds of arbitrary thickness. Based on the plots in Figure 5, further evaluations of the influence of fiber diameter and enclosing medium on light transmission through electrospun scaffolds were performed; a sketch of how such contour plots can be assembled is given below.
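The following Python sketch assembles such a contour plot from per-wavelength fits of Equation (8) and draws the 85% corneal-transparency contour. Only a few fitted m values are reported above, so the wavelength trend of m used here is a hypothetical placeholder, not measured data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Per-wavelength fit parameters m(lambda), T_bg(lambda) would come from the
# two-stage fits; here they are mocked with a smooth, hypothetical trend.
wavelengths = np.arange(380, 781, 10)                  # nm
m_fit = 0.089 * (589.0 / wavelengths) ** 1.5           # 1/um, hypothetical trend
T_bg = np.full_like(m_fit, 3.0)                        # percent, hypothetical

thickness = np.linspace(0.5, 50, 200)                  # um
W, D = np.meshgrid(wavelengths, thickness)
T = (100.0 - T_bg) * np.exp(-m_fit * D) + T_bg         # Equation (8) on the grid

fig, ax = plt.subplots()
mesh = ax.pcolormesh(W, D, T, cmap="RdYlGn", vmin=0, vmax=100, shading="auto")
fig.colorbar(mesh, label="light transmission (%)")
ax.contour(W, D, T, levels=[85.0], colors="black")     # corneal threshold T = 85%
ax.set_xlabel("wavelength (nm)")
ax.set_ylabel("scaffold thickness (um)")
plt.show()

# A "contour line" for a scaffold of arbitrary thickness, e.g. 10 um,
# is simply a horizontal cut through the interpolated surface:
T_10um = (100.0 - T_bg) * np.exp(-m_fit * 10.0) + T_bg
```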
Influence of Fiber Diameter and Surrounding Medium

As shown in the previous section, light transmission depends on the scaffold properties. Beside scaffold thickness, fiber diameter is the structuring element. With decreasing fiber diameter, the structure of the scaffolds changes, as the number of fibers per unit volume increases. In Figure 6a, exemplary 10 µm scaffolds from the contour plots of Figure 5 are displayed. The overall light transmission increased with decreasing fiber diameter. For better clarity, scaffolds with 103 nm and 136 nm fiber diameter were left out, as, due to the broad fiber diameter distribution of electrospun nanofibers, the light transmission values were not significantly different for the scaffolds with 103 nm, 113 nm and 136 nm fibers, as already mentioned. The highest light transmission was observed for scaffolds consisting of fibers with a diameter of 35 nm. Transmission values up to 66% (at 589 nm) were measured. With increasing fiber diameter, light transmission decreased to 43% (at 589 nm). For all scaffolds, a wavelength-dependent light transmission was observed. This could arise from the decreasing ratio of fiber diameter to wavelength with increasing wavelength. Similar to pure Rayleigh scattering, where the scattered intensity is proportional to λ⁻⁴, or the thin needle approximation shown in Equation (7), the influence of scattering is reduced for increasing wavelengths [27].

Electrospun scaffolds usually show a whitish appearance. The big difference in refractive indices between air and polymer leads to strong isotropic reflections and scattering of all wavelengths; hence, the scaffolds appear white. With a decreasing difference in refractive index, reflectance as well as scattering could be minimized, and scaffold transparency improved. In Figure 6b, the transmission data for two different scaffold types are shown. Just like in Figure 6a, light transmission data were taken from the contour plots as a horizontal line for a scaffold thickness of 10 µm for scaffolds with 35 nm and 167 nm fiber diameter. Again, light transmission was enhanced with a reduced fiber diameter. Changing the surrounding medium from EtOH to PBS led to a reduction in light transmission by 5 to 10 percentage points. The difference in refractive index increased from 0.10 (PCL/EtOH) to 0.13 (PCL/PBS), resulting in an increased light attenuation.
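The quoted index differences translate directly into interface reflectance. The following is a minimal Python sketch, assuming normal-incidence Fresnel reflectance of a single interface without multibeam interference (the same assumption the semi-empirical model below uses for R), with the refractive indices quoted above.

```python
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectance of a single n2/n1 interface,
    neglecting multibeam interference."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_PCL, media = 1.46, {"EtOH": 1.36, "PBS": 1.33}
for name, n2 in media.items():
    R = fresnel_reflectance(n_PCL, n2)
    print(f"{name}: dn = {abs(n_PCL - n2):.2f}, R = {R:.5f}")
```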
In addition to the UV-vis measurements, differences in light transmission can be observed with optical imaging. For this purpose, scaffolds with a thickness close to 10 µm were moistened in PBS and placed onto a reference. The resulting images are displayed in Figure 7. The transparency of the scaffolds, as already indicated in Figure 6, could be classified as insufficient for corneal grafts, though, as shown in Figure 7, the transparency of the scaffold with a mean fiber diameter of 35 nm (B) was closer to that of the reference (A) than the transparency of the scaffold with a fiber diameter of 167 nm (C).

Figure 6. Examples of extracted contour lines from Figure 5. Transmission values were taken for scaffolds with a thickness of 10 µm. Light transmission increases with a decreasing fiber diameter (a) as well as with a decreasing ratio of the refractive indices (b).

Summarizing the above, it can be concluded that reducing the fiber diameter and matching the refractive indices yield improved light transmission through nanofibrous scaffolds.

Semi-Empirical Description of Light Transmission

Following the theoretical considerations in the Materials and Methods section, light transmission through the nanofibrous scaffolds depends on scaffold properties, such as fiber diameter and scaffold thickness, and on material characteristics, such as the refractive index. Thus, a semi-empirical description of the experimental transmission data was derived to describe light transmission through nanofibrous scaffolds using regression analysis. With the formulation of scaling laws, precise predictions of the influence of eligible parameters can be made within the experimentally accessed range. Neglecting wavelength-dependent variances in the refractive indices, and in consistency with the Lambert-Beer law (Equation (3)), the following approach was chosen:

ln(−ln T/D) = α₀ + α₁·ln R + α₂·ln d + α₃·ln(1/λ)  (9)

where R stands for the reflectance from Equation (5) at a wavelength of 589 nm; the inverse wavelength term captures the decrease in attenuation with increasing wavelength.
The resulting α-values were α₀ = 1.48, α₁ = 0.55, α₂ = 0.60, and α₃ = 1.68 (d, D and λ in µm). A further simplification, based on the consideration of physically reasonable dimensions, led to an improved model with only three adjustable parameters. Now, the resulting α-values were α₀ = 1.41, α₁ = 0.55 and α₂ = 0.57. Taking into consideration the experimental error due to variances in fiber diameter as well as scaffold thickness, α₁ and α₂ were set as α₁,₂ = 0.5. Subsequently, the model from Equation (10) could be written as

−ln T/D = α·√(R·d)/λ^(3/2)  (11)

In this semi-empirical model, α is a dimensionless parameter and was set to α = 2.75. Consequently, the formulation presented in Equation (11) could be written as

T = exp(−α·D·√(R·d)/λ^(3/2))  (12)

and allowed the prediction of light transmission through nanofibrous scaffolds within typical experimental errors. Accounting for the differences in refractive indices, R was derived from the Fresnel equations for vertical incidence, neglecting multibeam interference [24,25]. The predicted transmission data versus the observed transmission data for all six sample groups, measured in two different media within the range of 380 nm to 780 nm, are shown in Figure 8. The data are described with R² = 0.91, suggesting an acceptable accuracy of the model within the experimental data.

Figure 8. Predicted versus observed transmission of all individual data points. Predicted transmission was calculated using Equation (12). For reasons of clarity, only every 50th data point is shown. The red area corresponds to the error range based on Equations (13) and (21). Transparency of a healthy cornea is indicated at T = 85%.

An estimation of the experimental error was performed utilizing the relative error T_relative, considering that the dominant experimental uncertainty is attributed to the scaffold thickness D. T_relative equals approximately −µ·∆D (13), giving easy access to the expectable accuracy of the predicted transmission data, as µ is defined as µ(n₁, n₂, λ, d). In order to estimate the error of the scaffold thickness, the following simple approach was adopted: assuming that a scaffold with total thickness D can be separated into N layers of thickness Dᵢ, the total thickness can be written as

D = Σᵢ Dᵢ  (14)

yielding the error of the total thickness, utilizing error propagation,

∆D = (Σᵢ (∆Dᵢ)²)^(1/2)  (15)

On the other hand,

D = Dᵢ·N  (16)

while all sublayers with thickness Dᵢ can be assumed to have the same thickness Dₑ,

Dᵢ = Dₑ  (17)

and therefore the same error,

∆Dᵢ = ∆Dₑ  (18)

From Equation (15), it follows that

∆D = √N·∆Dₑ  (19)

and, utilizing Equation (16), the error can now be estimated with

∆D = √(D/Dₑ)·∆Dₑ = √(k·D)  (20)

where k is an adjustable parameter. Considering typical values for the scaffold thickness, k yields values of approximately 1 µm. Finally, the experimental error in the measurement of the scaffold thickness can be estimated with

∆D ≈ √(D·1 µm)  (21)

With the semi-empirical formulation of light transmission through nanofibrous scaffolds, a novel concept is presented for the design of nanofibrous scaffolds, focusing on the optical properties.
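Read this way, Equations (12), (13) and (21) combine into a compact prediction-plus-uncertainty routine. The Python sketch below assumes the forms written above (µ = α·√(R·d)/λ^(3/2) and ∆D ≈ √(k·D) with k ≈ 1 µm), so it illustrates the structure of the model rather than reproducing the authors' exact implementation.

```python
import numpy as np

ALPHA = 2.75  # dimensionless regression parameter alpha

def fresnel_reflectance(n1, n2):
    # Normal-incidence Fresnel reflectance, neglecting multibeam interference.
    return ((n1 - n2) / (n1 + n2)) ** 2

def transmission(D, d, lam, n_fiber, n_medium, alpha=ALPHA):
    """Semi-empirical transmission (Equation (12)); D, d, lam in micrometres.
    Assumes mu = alpha * sqrt(R * d) / lam**1.5 as in Equation (11)."""
    R = fresnel_reflectance(n_fiber, n_medium)
    mu = alpha * np.sqrt(R * d) / lam ** 1.5      # attenuation coefficient, 1/um
    return np.exp(-mu * D), mu

def thickness_error(D, k=1.0):
    # Equation (21): uncertainty of the measured scaffold thickness, k ~ 1 um.
    return np.sqrt(k * D)

# 10 um scaffold, 35 nm fibers, PCL (n = 1.46) in PBS (n = 1.33), 589 nm
T, mu = transmission(10.0, 0.035, 0.589, 1.46, 1.33)
print(f"T = {100 * T:.1f}%, relative error ~ {mu * thickness_error(10.0):.2f}")
```

For these inputs the sketch gives µ of roughly 0.05 µm⁻¹, of the same order as the fitted m values (0.054 to 0.089) quoted for Figure 4, which is at least consistent with the assumed form.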
Formulation of the Design Principles

Tissue engineering in the context of ophthalmology mostly deals with the full or lamellar replacement of the cornea. The main part of the cornea, the stroma, consists of highly aligned collagen fibrils [9], which act as scatterers, besides other parts of the stroma like the keratocytes. The collagen fibrils have a diameter of around 25 nm and are thus even smaller in diameter than the smallest fibers in this study. Considering them as a blueprint, mimicking the corneal structure would mean the following:
• Reducing the fiber diameter d;
• Reducing the scaffold thickness D;
• Selecting a material with a refractive index similar to that of the human cornea.

The first two points refer to structural properties, while the latter is purely based on the chosen materials, whereby fiber diameter and matching refractive indices are closely connected, as depicted in Figure 6. Especially with decreasing fiber diameter, light transmission is mainly influenced by the scattering cross section, which again strongly depends on the refractive index of the used material. The ideal material would therefore have a refractive index as close as possible to the refractive index of the human cornea, with n_cornea = 1.376 [9]. As most of the polymers commonly used for tissue engineering possess a refractive index of approximately 1.50, it becomes evident that for corneal tissue engineering, the use of pure polymers will result in insufficient light transmission. Therefore, we suggest blending these polymers with foremost hygroscopic polymers, such as peptides or polysaccharides, or even using hygroscopic polymers themselves as fibers for the scaffold. The key to improved light transmission lies in the incorporation of water (n = 1.33) into the polymeric fiber matrix. With a sufficient amount of water uptake, the resulting refractive index of the blend fibers can be approximated using the Gladstone-Dale equation [37], which holds for ∆nᵢ < 0.2. Thus, n_total can be calculated as

n_total = Σᵢ nᵢ·vᵢ  (22)

where nᵢ represents the refractive indices, and vᵢ the volume fractions, of the individual components. The sum of the volume fractions satisfies Σvᵢ = 1. With the estimation of a hypothetical refractive index for varying blend compositions, the transmission can be predicted. With this approach, a preselection of suitable polymers and polymer blends can be achieved, and basic design principles can be formulated. In Figure 9, a hypothetical example using this approach is shown. The blending of polymer A with polymer B, with refractive indices of n_A = 1.45 and n_B = 1.55, at different ratios requires a defined amount of water uptake to minimize the difference in refractive indices between the ternary blend and the natural cornea. As a visual result, the ternary contour plot of ∆n is shown in Figure 9a. From this, using Equation (11), the transmission could be calculated. In Figure 9b, the transmission for a scaffold in equilibrium swelling state with 10 µm thickness, 100 nm fiber diameter, and an experimental wavelength of 589 nm is shown.
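Equation (22) makes such a preselection easy to script. The sketch below is a minimal Python illustration using the hypothetical blend from Figure 9 (polymer A, n = 1.45; polymer B, n = 1.55; water, n = 1.33); the chosen volume fractions are example values, not measured data.

```python
import numpy as np

def gladstone_dale(indices, fractions):
    """Effective refractive index of a blend via the Gladstone-Dale relation
    (Equation (22)): n_total = sum(n_i * v_i), valid for small index contrasts."""
    indices = np.asarray(indices, dtype=float)
    fractions = np.asarray(fractions, dtype=float)
    assert abs(fractions.sum() - 1.0) < 1e-9, "volume fractions must sum to 1"
    return float(np.dot(indices, fractions))

# Hypothetical ternary blend after equilibrium swelling:
# 25% polymer A, 5% polymer B, 70% water (by volume)
n_blend = gladstone_dale([1.45, 1.55, 1.33], [0.25, 0.05, 0.70])
print(f"n_blend = {n_blend:.3f}, dn vs cornea = {abs(n_blend - 1.376):.3f}")
```

With a water content of around 70%, the blend index lands within a few thousandths of the corneal value of 1.376, which is exactly the region Figure 9 marks as suitable.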
Figure 9. Example for a mixture of two polymers and different water uptake after swelling. From the refractive indices of the single materials, the overall refractive index as well as the difference with respect to the refractive index of the cornea can be calculated (a). Using Equation (11), the resulting light transmission through such hypothetical scaffolds can be estimated (b). The scaffold was defined to be 10 µm thick, consisting of fibers with a fiber diameter, after swelling, of 100 nm. Transmission is shown at 589 nm. The asterisk (*) indicates corneal transparency corresponding to T_cornea > 85%.

The resulting proportions of polymer A, polymer B, and water refer to the steady state, where the swelling has reached its equilibrium value. While the ratio of polymer A to B can be adjusted as desired, water uptake is mainly dependent on the hygroscopic behavior of the blend. In the case of Figure 9, a very low amount of the components A and B, in the ranges from 0.2 to 0.5 and 0 to 0.3, respectively, should be used, while a high water uptake is required, leading to a final water content of 0.7 to 0.8 (70-80%). Such blends will show high light transmission values over 85%, qualifying them for corneal grafts.

Table 2 provides a brief overview of various eligible blend polymers.¹ For most polymers, swelling, and thus water uptake, are highly dependent on the degree of crosslinking, the crosslinking agent, and the molar mass. It must be pointed out that hygroscopic polymers require chemical or physical crosslinking; otherwise, the fibers would lose their mechanical strength due to the water uptake or, in the worst case, would dissolve. In the case of polymer blends, water uptake is related to the blend polymer and the relative amounts of the matrix and blend polymer. The approach presented in this study can be used in all areas of biomaterials, like bioprinting or tissue engineering, where transparency of the graft is of interest.

¹ Depending on crosslinking, crosslinking agent, and/or blend polymer and content.

Conclusions

The transmission of light in the visible spectrum from 380 nm to 780 nm is an important characteristic of future transplants in corneal tissue engineering. If the patient is to experience a direct improvement after surgery, transparent grafts must be produced. With the emerging interest in electrospun scaffolds for corneal tissue engineering, transparency has to be considered equally important to biocompatibility and mechanical strength.
In the literature, graft transparency is only examined as a side aspect of graft evaluation, and in most publications only exemplary grafts are shown. In this study, a detailed analysis of light transmission through nanofibrous PCL scaffolds was performed. By varying fiber diameter and surrounding medium, material and structural properties could be separated. For enhanced transparency of nanofibrous scaffolds, thin fibers and matching refractive indices should be used. Moreover, a novel, simple model describing the light transmission of nanofibrous scaffolds is provided, together with its experimental validation by a large amount of data. Finally, from the general conclusions, design principles were formulated to promote further research in the field of corneal tissue engineering.
A Virtue Reliabilist Error-Theory of Defeat

Knowledge defeat occurs when a subject knows that p, gains a defeater for her belief, and thereby loses her knowledge without necessarily losing her belief. It's far from obvious that externalists can accommodate putative cases of knowledge defeat, since a belief that satisfies the externalist conditions for knowledge can satisfy those conditions even if the subject later gains a defeater for her belief. I'll argue that virtue reliabilists can accommodate defeat intuitions via a new kind of error theory. I argue that in cases where the subject holds dogmatically onto her belief in the face of an apparent defeater, her belief never qualified as knowledge, since the belief was not gained via an exercise of her epistemic virtues. In cases where the subject suspends her judgment upon receiving the putative defeater, her original belief might have qualified as knowledge, but crucially, in such cases knowledge is lost due to loss of belief, rather than due to the epistemic force of the defeater. Therefore, knowledge defeat isn't a genuine phenomenon, even though there are no cases where a subject knows what she originally believed after receiving the putative defeater.

Introduction

Knowledge defeat is said to occur when a subject knows that p, then gains a putative defeater for her belief, and thereby loses her knowledge that p without necessarily losing her belief that p or any relevant evidence. Accommodating the phenomenon of knowledge defeat isn't easy for externalist theories of knowledge.1 Indeed, if subjects were to rebase their beliefs on some evidence, then their beliefs might become unsafe, since they might have formed a different belief, which would have been false. But whether Yen and Ciri rebase their beliefs is a contingent matter, and therefore their knowledge need not be defeated by the putative defeater, contra the defeatist intuition (Lasonen-Aarnio, 2010).7 Therefore, it isn't easy to see how externalist theories of knowledge could accommodate putative cases of knowledge defeat. An option that used to be popular was to add a no-defeaters clause to the externalist theory of knowledge.8 Those who have been unwilling to add a seemingly ad hoc no-defeaters clause to their accounts of justification or knowledge have aimed to accommodate intuitions of knowledge defeat via error-theories. Lasonen-Aarnio (2010) has argued that knowledge can sometimes be retained in putative cases of knowledge defeat. Our negative assessment of subjects who retain their knowledge in such cases is explained by the fact that they are manifesting bad dispositions, dispositions that would in general be manifested in cases of ignorance, rather than in cases of knowledge. Baker-Hytch and Benton (2015, p. 57) have argued that if knowledge is the norm of belief, then the apparent irrationality of subjects who retain their beliefs in the face of misleading evidence can be explained by the fact that such subjects violate a guidance norm that is generated by the knowledge norm of belief.9 While I am very sympathetic to both Lasonen-Aarnio's proposal and to Baker-Hytch and Benton's account, I wish to sketch a new kind of error-theory that falls directly out of virtue reliabilism. The error-theory I propose differs significantly from the earlier ones.
According to it, in cases where the subject dogmatically clings onto her belief she never knew to begin with, or did not acquire a putative defeater in the first place, while in cases where the subject suspends judgment she might have known, but doesn't any more, since she lacks the relevant belief. It's an error theory for two reasons. Firstly, in some cases of putative knowledge defeat we mistakenly think that the subject had knowledge to begin with. Secondly, according to the view, putative defeaters cannot on their own defeat knowledge. The defeatist intuition is explained by the fact that there are no cases where a subject knows that p at t1 and retains her knowledge of p after having received a putative defeater for p at t2. Some readers might think that the error-theory provided is too radical. These readers are invited to see this paper as offering an argument against virtue reliabilism, since the error-theory I present falls directly out of the main tenets of virtue reliabilism. My sole aim here is to examine what consequences virtue reliabilism has for defeat.

7 Note also that if we were to think that what one knows is always part of one's evidence, then if Yen and Ciri don't lose any evidence upon receiving the misleading defeater, their evidence at t2 will still conclusively support their original beliefs, because knowledge is factive.
8 Goldman has aimed to deal with putative defeat cases by adding a no-defeaters clause to his theory of justification, and hence to his theory of knowledge. For different ways in which a no-defeaters clause can be added to process reliabilist theories, see Goldman (1979; 1986, pp. 111-112) and Lyons (2009, 2016). For critique of these proposals, see Beddor (2015).
9 See Brown (2018, ch. 5) for discussion of these strategies to explain away knowledge defeat.

In what follows, I'll focus on cases like Red light and Feint that involve so-called doxastic or mental-state defeaters. All doxastic defeaters are beliefs. For instance, in 'Red light' the defeater that Yen has is her belief that [the wall is illuminated by red light that would have made the wall look red no matter its actual colour]. I'll set aside cases that feature propositional or normative defeaters (Lackey, 1999). A propositional defeater for the belief that p is a true proposition such that if S were to believe it, then S wouldn't know that p. A normative defeater is a propositional defeater that the subject should have believed. There are two reasons why I limit the scope of inquiry to the potential epistemic force of doxastic defeaters. Firstly, cases featuring doxastic defeaters are the most plausible cases of knowledge defeat. If it turned out that doxastic defeaters are void of epistemic power, as I hope to show, then there is reason to think that propositional and normative defeaters are void of epistemic power too. Secondly, I think that cases of propositional and normative defeat are highly contentious. In my mind it's better not to use such cases when evaluating a theory. Henceforth all talk of defeaters refers to doxastic defeaters. This essay is structured as follows. In the next section I lay out some key ideas of virtue reliabilism. In the third section I spell out under what conditions a belief can function as a defeater. In the fourth section I examine whether knowledge defeat is a genuine phenomenon, under the assumption that knowledge is always the product of one's cognitive abilities.
In the fifth section I briefly compare my account to other virtue-theoretic solutions.

Virtue and Coherence of Character

The central thesis of virtue reliabilism is that knowledge requires that one's cognitive success must be attributable to one's cognitive character. Some virtue epistemologists see this central thesis as giving both necessary and sufficient conditions for knowledge.10 Others think that it provides only a necessary condition for knowledge.11 The central thesis can be interpreted in various ways. A cognitive success can be understood either as the acquisition or maintaining of a true belief, or as the acquisition or maintaining of knowledge. The former views belong to the classical tradition of analyzing knowledge in terms of true belief plus some other conditions.12 The latter views belong to the knowledge first movement, championed by Williamson (2000).13 Another aspect in which the central thesis is ambiguous is on the question when a cognitive success is attributable to the subject's cognitive abilities. According to Greco (2010), the truth of a subject's belief is attributable to her cognitive character just in case the fact that she believes out of cognitive character is part of the most salient causal explanation why she acquired a true, rather than a false, belief. In other words, one's cognitive character has to be an important part of the best causal explanation for one's cognitive success in order for one to know. According to Sosa (2007, 2009), one's cognitive success is attributable to one's cognitive character just in case one's cognitive success is a manifestation of the cognitive abilities that make up one's cognitive character. Thus Sosa seeks to understand the attribution relation in terms of a more general metaphysical relation, namely, as the manifestation of a disposition. Many have preferred Sosa's account to Greco's, probably because Sosa is able to side-step some counterexamples that Greco's account seems susceptible to.14 Here we need not be concerned with these issues. The argument I offer doesn't depend on how one understands cognitive success, nor on how we flesh out the attribution clause. In fact, virtue-theoretic views that don't invoke the attribution relation, but merely require that one's belief has to be the product of one's cognitive abilities, also fall under the scope of the views I wish to discuss.15 What all of these views share is the idea that knowledge requires the use of cognitive abilities. In order to make use of a cognitive ability one must possess that ability. But under what kind of conditions does one possess a certain ability? In a broadly Aristotelian spirit, virtue epistemologists think that a reliable doxastic disposition can count as a cognitive virtue only if it's a proper part of one's virtuous epistemic character (Greco, 1999, p. 287; 2010, p. 150; Palermos, 2014, p. 1940; Pritchard, 2012, p. 262).16,17 This is what differentiates virtue reliabilism from process reliabilism. Virtue reliabilists require that the reliable processes be properly grounded in the subject in order to be knowledge-conducive. According to them, not all reliable doxastic dispositions count as cognitive abilities.
If Alvin has a brain lesion that causes him to believe that he has a brain lesion, his belief is the product of an extremely reliable doxastic disposition, but it's not a product of his cognitive abilities, because the brain lesion isn't a part of Alvin's cognitive character (Breyer & Greco, 2008, p. 174; Greco, 2010, p. 151; Palermos, 2014, p. 1938).18

14 For instance, Turri (2011), Littlejohn (2014), and Kelp (2017) invoke the notion of manifestation of a disposition in their understanding of the attribution clause. Lackey (2007, 2009) argues on the basis of testimonial cases of knowledge that knowing doesn't require that one's cognitive success be attributable to one's cognitive character. It would seem that Sosa (2007, pp. 95-96; 2011, p. 87) has the means to deal with Lackey's objections, but it's not clear whether Greco does. In fact, Greco (2012) has changed his view in light of Lackey's apt criticism. I think that Greco's new proposal is better suited to deal with Lackey's objections. For discussion of different ways to understand credit, see Hirvelä and Lasonen-Aarnio (forthcoming).
15 See Hirvelä (2018, 2019a) and Beddor and Pavese (2020) for a virtue-theoretic view that doesn't invoke the attribution relation. The virtue-theoretic condition that Pritchard (2012) endorses does demand that the agent's cognitive success be of credit to her, and hence is logically stronger.
16 Knowledge first virtue reliabilists think that the relevant cognitive abilities are abilities to know, whereas those virtue reliabilists who have reductive ambitions understand such abilities as abilities to gain or maintain true beliefs. In what follows we can remain neutral on this score.
17 Is the notion of character essential in virtue epistemology? Perhaps not. Sylvan (2017), drawing on the work of Thomson (1997) and Hurka (2006), develops an intriguing virtue responsibilist view that takes act-attaching virtue properties to be fundamental, rather than character-attaching virtue properties. This kind of virtue theory is outside the scope of my argument.

But under what conditions is a reliable doxastic disposition a proper part of one's cognitive character? At least three conditions have been proposed by virtue reliabilists: (1) that the disposition is stable, (2) that it's not strange, and (3) that it's integrated into the subject's cognitive character (Greco, 2010, p. 150). The central idea behind these conditions is to ensure that in order for a doxastic disposition to be a part of one's cognitive character it has to be the agent's disposition. Beliefs that are products of such cognitive abilities are in a sense owned by the subject, in that she is responsible for those beliefs and can be properly blamed or credited for having them. I'll focus on condition (3), since it seems to be the most central one, and is more widely endorsed than conditions (1) and (2).19 What suffices for cognitive integration varies from case to case. In some extreme cases, like the brain lesion case, reflective endorsement of the truth-conduciveness of the disposition might be required (Pritchard, 2010). If, for instance, Alvin went to see a doctor who told him that he suffers from an extremely rare brain lesion that causes one to believe that one suffers from a brain lesion, the doxastic disposition generated by the brain lesion could become a part of Alvin's cognitive character. But this kind of reflective endorsement is almost never required in more mundane cases.
Doxastic dispositions that are innate, or otherwise naturally developed, are integrated into our cognitive system via subconscious mechanisms, in virtue of constantly confirming each other's outputs. Consider for example the following description of Edgar's afternoon: Edgar sees a beautiful pint of ale and can smell the overwhelming aroma of the hops. He can feel the cold glass in his hand and, sipping the beer, finds delightful notes of pine, citrus and tropical fruits. Pricking his ears, he can even hear the dense head slowly dissolving, and thinks: "I'm drinking ale today". All of these experiences confirm to Edgar that there's a pint of ale on the table. In Edgar's case all of his sensory modalities partake in confirming a single proposition. Of course this isn't always the case. For many sensible qualities it applies that they can be sensed directly only via some particular sense modality. No one can hear the redness of the wall. Its redness can only be seen. However, many of our experiences are multi-modal in that multiple sense modalities are responsible for our phenomenological state. And it's not just the case that our sense modalities confirm the outputs of each other. Rather, in many cases our sense modalities affect the outputs and operation of our other sense modalities.20 A minimal externalist condition for cognitive integration is that the cognitive abilities act in concert with each other. Greco (2010, p. 152) writes that "cognitive integration is a function of cooperation and interaction, or cooperative interaction, with other aspects of the cognitive system." Palermos (2014, pp. 1941-1942) holds that "the only necessary and sufficient condition for a process to count as knowledge-conducive is that it cooperatively interacts with the rest of the agent's cognitive character. [The] process of cognitive integration gives rise to a coherentist effect both on the level of processes (how the beliefs are generated) and on the level of content (how the beliefs themselves combine)." Pritchard (2010, pp. 147-148) holds that a doxastic disposition D is integrated into the subject's cognitive character only if beliefs gained via D have cohered with the beliefs formed via the subject's other cognitive abilities, and that if they had not, then the subject would have responded accordingly. Sosa also argues that knowledge never arises purely from one faculty, but from the interplay of cognitive faculties.21 He writes:

Note that no human blessed with reason has merely animal knowledge of the sort attainable by beasts. For even when perceptual belief derives as directly as it ever does from sensory stimuli, it is still relevant that one has not perceived the signs of contrary testimony. A reason-endowed being automatically monitors his background information and his sensory input for contrary evidence and automatically opts for the most coherent hypothesis even when he responds most directly to sensory stimuli. […] The beliefs of a rational animal hence would seem never to issue from unaided introspection, memory, or perception. For reason is always at least a silent partner on the watch for other relevant data, a silent partner whose very silence is a contributing cause of the belief outcome. (Sosa, 1991, p. 240)

This kind of minimal integration doesn't require any perspective on the truth-conduciveness of the dispositions.
The only thing that is required is that the doxastic dispositions that make up one's virtuous cognitive character are not acting in conflict with each other. Hence we are able to lay down the following condition for minimal cognitive integration:

INTEGRATION: Subject S's doxastic disposition D is integrated with her cognitive character only if D would act in concert with the set of doxastic dispositions D* that together with D make up S's cognitive character if D were triggered while both D and D* are in appropriate conditions.

Given INTEGRATION, a reliable doxastic disposition can qualify as a cognitive ability just in case it acts, or would act, in concert with one's cognitive character while in appropriate conditions.22 According to virtue reliabilism, only beliefs gained via cognitive abilities can have positive epistemic statuses like justification or knowledge. Knowledge and justification require a kind of coherence of one's cognitive faculties. For a subject to be eligible for such normative statuses she must keep her cognitive home in order.23 One might object that INTEGRATION is too strong. Even though I know by testimony that the Müller-Lyer lines are equally long, I still see them as of different lengths. When in the grips of the Müller-Lyer illusion my eyesight doesn't seem to act in concert with the other doxastic dispositions that make up my cognitive character. But here it's important to note that I don't form the belief that the lines are of different lengths on the basis of my perceptual experience when I know that they are of the same length. The fact that I don't form the belief is evidence that my eyesight is acting in concert with my cognitive character, since the knowledge that I've gained through my other cognitive faculties prevents me from forming a belief that corresponds to the experiential state generated by my eyesight. It would be good if we could say more about what it takes for two doxastic dispositions to act in concert with each other. Sadly, virtue reliabilists have been largely silent on this issue.

21 Thanks to Kurt Sylvan for pointing me towards relevant passages in Sosa's work.
22 Note that it is not enough that it would be merely probable that the disposition acts in concert with one's cognitive character. Virtue reliabilists have at least two reasons why they should not opt for a weaker reading of 'would' in INTEGRATION. First, INTEGRATION demands that the relevant dispositions act in concert with each other when triggered while in appropriate conditions. Many virtue reliabilists understand appropriate conditions in terms of normal conditions, or conditions that are otherwise suitable for the exercise of the ability in question (Beddor & Pavese, 2020; Greco, 2010; Sosa, 2010). Therefore, INTEGRATION is already effectively weakened in that it requires only that the dispositions would normally act in concert with each other. Second, if a doxastic disposition D could be a part of S's cognitive character even though it would be merely probable that it acts in concert with S's cognitive character while in appropriate conditions, then the performances that D would issue which were in tension with S's cognitive character would be attributable to S, since they would be manifestations of her cognitive abilities. But within the literature on attributability, even outside virtue epistemology, it is commonplace to think that an act is attributable to an agent "just in case it expresses the agent's deep self" (Shoemaker, 2015, p. 59). But how could a performance that is out of character express, or reveal, the agent's cognitive character? I contend that it could not. I would like to thank an anonymous reviewer at Erkenntnis for raising this issue.
What seems clear, however, is that two doxastic dispositions can act in concert with each other just in case the dispositions are appropriately sensitive to each other's outputs. It's clear that in cases where the dispositions generate beliefs that are logically inconsistent, the dispositions are not sensitive to each other's outputs. But while logical inconsistency of the outputs suffices to show that the doxastic dispositions are not properly integrated with the subject's cognitive character, it cannot be a necessary condition. If the doxastic disposition D generates in me the belief that p and another doxastic disposition D* generates in me the belief [I don't know that p], then D and D* are acting in tension with each other, even though p and [I don't know that p] are not logically inconsistent. A tempting way to explain the tension between D and D* is to appeal to the fact that the outputs that they generated cannot amount to knowledge simultaneously. We could then claim that two doxastic dispositions are acting in concert with each other only if it's possible that the outputs amount to knowledge on the condition that both outputs are true. This constraint on cognitive integration is supported by the idea that knowledge is the norm of belief.24 The purpose of our cognitive abilities is to provide a unified picture of the world that amounts to knowledge. If our doxastic dispositions are acting against each other in such a way that achieving this aim is impossible, then at least some of those doxastic dispositions are not integrated with our cognitive character.

23 I argue elsewhere (2020) that if knowledge requires employing cognitive abilities that are integrated to our cognitive character, then modal conditions for knowledge which are relativized to such abilities are not hostage to the possible truth of the extended mind thesis.
24 The knowledge norm of belief has been endorsed by Williamson (2000) and Sosa (2011) among many others.

However, saying that two doxastic dispositions can act in concert with each other just in case the outputs they yield could have amounted to knowledge simultaneously threatens to make virtue reliabilism a circular theory of knowledge. While this would probably suit knowledge first virtue reliabilists like Kelp and Miracchi, it's doubtful whether those who aim to provide a reductive virtue-theoretic analysis of knowledge should understand cognitive integration in this way. But here it's important to note that we need not commit ourselves to the idea that cognitive integration should ultimately be understood in terms of knowledge. Rather, we can only note that when it's in principle impossible that the two outputs could have amounted to knowledge if they were true, then the doxastic dispositions that produced the outputs are not acting in concert. True, we will use our pre-theoretic understanding of knowledge when determining whether a doxastic disposition is integrated to the subject's cognitive character, as does Williamson (2000) when he uses our pre-theoretic understanding of knowledge to determine whether a belief is safe. But this need not make virtue reliabilism a circular theory of knowledge.
Virtue reliabilists are still free to unpack the notion of cognitive integration without appealing to knowledge. All we require here is that the way in which virtue reliabilists end up unpacking cognitive integration entails that two doxastic dispositions that are acting in a 'knowledge-inconsistent way' are not acting in concert with each other. Finally, it's worth keeping in mind that virtue reliabilists relativize cognitive abilities to normal or appropriate conditions and environments (Greco, 2010; Sosa, 2010). This means that cognitive abilities can be lost when moving to environments that are not suitable for the use of those abilities. The fact that one's doxastic dispositions don't act in concert in some such conditions and environments doesn't mean that those doxastic dispositions wouldn't qualify as cognitive virtues in more suitable environments and conditions, where the doxastic dispositions in question are in the market of being cognitive abilities. This helps to alleviate the pressure to think that INTEGRATION is too strong a condition. In the next section we examine under what kind of conditions a belief can serve as a defeater.

Defeat and Justification

I'll assume that only those beliefs that have a positive epistemic status can serve as defeaters. I think that this positive epistemic status is justification. Irrational and unjustified beliefs cannot serve to defeat knowledge or justified beliefs. I take this to be the mainstream position among epistemologists,25 but it'll be useful to go through the rationale for this position, since it plays a pivotal role in the next section. Often some of our beliefs confer justification on our other beliefs. The fact that I know that the drink is laced with hemlock justifies me in believing that the drink is poisonous. In this case my knowledge entails the truth of the latter belief. But if I believed out of sheer paranoia that the drink is laced with hemlock, I wouldn't be justified in believing that the drink is poisonous. While the contents of my beliefs in the above cases stand in exactly the same logical relations, my belief that the drink is poisonous isn't justified in the latter case, since there is no justification to be transmitted from my belief that the drink is laced with hemlock. Similarly, if I were to believe out of wishful thinking that England is going to lose the game, I wouldn't thereby be justified in believing that Italy is going to win the game. If justified beliefs could be built on paranoia and wishful thinking, living a good epistemic life would be all too easy. Given that irrational and unjustified beliefs cannot confer positive epistemic statuses on our other beliefs, it would be prima facie bizarre if they could render our justified beliefs unjustified. How could they have only this kind of negative epistemic import? Moreover, if irrational and unjustified beliefs can defeat justified beliefs, then they can also serve to restore the justificatory status of beliefs (Casullo, 2018).26 This is because a putative defeater d can be defeated by yet another putative defeater d', rendering the original belief justified once again (Pollock, 1987). One shouldn't be able to restore the justificatory status of a defeated belief by irrationally believing that the putative defeater doesn't defeat one's original belief. Otherwise irrational and unjustified beliefs can confer justification on our beliefs. Therefore, only justified beliefs can serve as defeaters.
Virtue reliabilists think that a subject S's belief is justified if, and only if, it's an exercise of S's cognitive abilities (Greco, 2002, p. 311; Kelp, 2017, p. 238; Miracchi, 2015, p. 48; Sosa, 1991, p. 189). Given that beliefs need to be justified in order to serve as defeaters, a defeater-belief must be a product of one's cognitive abilities.

A Virtue Reliabilist Error-Theory of Defeat

Defeat of the Virtues?

So far I've shown that virtue reliabilists are committed to the idea that knowledge arises from exercises of cognitive abilities and that a doxastic disposition can qualify as a cognitive ability only if it's suitably integrated with the cognitive character of the subject. I've also explained that in order for a putative defeater to have potential normative import, it must be the case that the defeater enjoys a positive epistemic standing. I assume that the defeater has to be justified in order to have potential normative import. On virtue reliabilism, justified beliefs are exercises of cognitive abilities. Therefore, the defeater belief has to be an exercise of a cognitive ability in order to have potential normative import. Given this, what must virtue reliabilists say about the phenomenon of knowledge defeat? Consider a paradigmatic case of knowledge defeat like Red light:

At t1 Yen comes to know that the wall in front of her is red via perception in optimal conditions. At t2 Yen's trusted friend Triss tells her that the wall is illuminated by red light that would have made the wall look red even if it had been of some other colour.

In order for Red light to be a potential case of knowledge defeat it must be the case that Yen's belief that the wall is red is a product of her cognitive abilities. Otherwise her belief could not have qualified as knowledge at t1. It must also be the case that her belief that [the wall is illuminated by red light that would have made the wall look red whatever its actual colour is] is a product of her cognitive abilities, since otherwise the defeater belief wouldn't be justified, and hence wouldn't have any defeating force. Now suppose that Yen dogmatically clings to her belief that the wall is red after forming a justified belief in the defeater. Given that the defeater supports that her original belief doesn't qualify as knowledge, what should virtue reliabilists say about this case? I think that virtue reliabilists are committed to claiming that the case, when described in this way, is metaphysically impossible. It cannot be the case that both Yen's original belief and her defeater belief are products of her cognitive abilities. Why? Because the doxastic dispositions that generate these beliefs are clearly acting in tension, rather than in concert, with each other. After all, the way in which Yen believes that the wall is red can only constitute knowledge if her defeater belief isn't knowledge, and vice versa. This is because if Yen knows by visual perception alone that the wall is red at t1, it cannot be the case that the wall is bathed in red light at t1, because then the colour that Yen would have seen would be that which the red light cast on the wall, and not the redness of the wall. The truth of Yen's perceptual belief would lack an appropriate causal connection to what makes it true. Similarly, Yen cannot know via testimony that the wall was bathed in red light at t1 if she knew by visual perception alone that the wall is red at t1.
After all, if the wall was bathed in red light at t1, the truth of Yen's perceptual belief would have lacked an appropriate causal connection to what makes it true, and hence she couldn't have known by visual perception that the wall is red. So while the contents of the doxastic outputs are not logically inconsistent, the ways in which the beliefs are formed are epistemically inconsistent in that both beliefs could not have constituted knowledge simultaneously. Recall that INTEGRATION requires that a subject's doxastic disposition would act in concert with the other doxastic dispositions that make up the subject's cognitive character if it were triggered. In cases where the subject dogmatically clings onto her belief after forming a justified belief in the putative defeater, this counterfactual is false. Importantly, the counterfactual was already false at the moment when the subject formed her original belief, and hence the subject's original belief was not a product of her cognitive abilities, and cannot qualify as knowledge. Therefore, if the defeater belief is justified, and the subject holds onto her original belief after receiving the putative defeater, her original belief never amounted to knowledge to begin with. Since the subject never acquires knowledge in this first variant of the case, there is no knowledge defeat. Alternatively, it could be the case that Yen's defeater belief isn't a product of her cognitive abilities, in which case the belief would be unjustified. But if it's true that unjustified beliefs cannot serve as defeaters, Yen doesn't have a defeater for her belief that the wall is red. Moreover, since Yen's defeater belief isn't a product of her cognitive abilities, the doxastic dispositions that help to constitute her cognitive character are not acting in tension with each other if she holds onto her original belief. Therefore, Yen can know that the wall is red. And since Yen can continue to know in this second variant of the case that the wall is red after t2, there is no knowledge defeat in this variant either. But suppose that instead of dogmatically holding onto her belief, Yen suspends judgment after having formed a justified belief in the defeater. In this third variant Yen's original belief might have amounted to knowledge, since the cognitive abilities that are responsible for her perceptual belief are acting in concert with her cognitive character. In this version Yen is acting in the same way as the subject who cannot fail to see the Müller-Lyer lines as being of different lengths but nevertheless doesn't believe that they are of different lengths after having learned, perhaps by testimony, that the Müller-Lyer lines constitute a known illusion. But while Yen's original belief and her defeater belief might be products of her cognitive abilities in this variant of the case, there is no knowledge defeat in this case either. It's true that she doesn't know that the wall is red after having received the putative defeater, but this is because she doesn't believe that the wall is red after having received it. It's not the defeater that robs her of knowledge; it's her lack of belief. To summarize the three variants: in variant 1 (dogmatic retention, justified defeater), the original belief was never knowledge, so there is nothing to defeat; in variant 2 (unjustified defeater belief), no defeater is acquired and knowledge persists; in variant 3 (suspension of judgment), knowledge may have been had, but it is lost with the belief rather than through defeat. But while knowledge defeat doesn't occur in any of the three variants, it's not impossible for Yen to lose her knowledge while holding onto her original belief.
If Yen were to rebase her belief that the wall is red at t2, and the doxastic disposition responsible for the rebasing was not a cognitive ability, she would fail to know that the wall is red at t2, even though she would still believe that the wall is red. But here it's important to recall that whether Yen rebases her belief at t2 is a contingent matter. And since it's a contingent matter, Yen doesn't necessarily lose her knowledge after having acquired the putative defeater. It could also be the case that Yen's cognitive character changes between t1 and t2 in such a way that the doxastic disposition that generated the belief that the wall is red no longer counts as a cognitive ability at t2. 27 In this variant of the case Yen could have known at t1 that the wall is red, but doesn't know it at t2, since the doxastic disposition in charge of retaining the belief doesn't qualify as a cognitive ability at t2. But again, it's a contingent matter whether Yen's cognitive character changes between t1 and t2, and hence the fact that knowledge is lost in this variant of the case doesn't suffice to show that Yen's knowledge is defeated. Recall that knowledge defeat occurs just in case a subject knows that p, gains a putative defeater for her belief that p, and thereby loses her knowledge that p without necessarily losing her belief that p. The defeatists claim that acquiring the putative defeater suffices on its own to defeat one's knowledge. But knowledge defeat doesn't occur in any of the five variants of Red light that we just considered. In the first version Yen never knew, in the second one she never gained a defeater, and in the third one she lost knowledge only because she lost the relevant belief. In the last two variants Yen does lose her knowledge without losing the corresponding belief. But this is only because either (1) she starts believing that the wall is red via a method of belief-formation that isn't a cognitive ability, or (2) her cognitive character changes in such a way that the way in which she formed her belief originally no longer counts as a cognitive ability. Yen doesn't lose her knowledge in any of these variants solely in virtue of having acquired a putative defeater for her belief. So knowledge defeat, strictly speaking, is an illusory phenomenon. There are no cases where acquiring a putative defeater for a belief that qualifies as knowledge suffices on its own to defeat the belief's epistemic standing. But while knowledge defeat turns out to be an illusory phenomenon on the sketched account, it's nevertheless true that there are no cases where a subject knows that p after having acquired a putative defeater for her belief that p. Thus virtue reliabilists are able to explain intuitions of knowledge defeat without granting that knowledge defeat is a genuine phenomenon. Virtue reliabilism provides an error-theory of our defeat intuitions. It's an error-theory in two senses. First, it claims that in some putative cases of knowledge defeat, knowledge was never had to begin with. Second, the potential loss of knowledge isn't explained in terms of the putative defeater's normative force, but rather via the way in which the subject reacted to her epistemic situation. 28 Finally, it's worth noting that this error-theory can explain why suspending judgment is nevertheless epistemically speaking good, even though putative defeaters lack normative force.
Suspending judgment is epistemically optimal, because only in those cases where the subject suspends her judgment is it possible that both her original belief and her defeater belief were justified (variant 3 above). 29 Here's an objection I've heard against the proposed theory (voiced by Maria Lasonen-Aarnio, among others). Intuitively Yen knows in Red light that the wall is red at t1 even if she would dogmatically cling onto her belief if she later gained a justified belief that is a putative defeater for her original belief (variant 1 above). Yen's dogmatism is a vice of her epistemic character that doesn't stain her belief, the objection goes. I grant the objector that intuitively Yen knows at t1 that the wall is red. While it might be unintuitive that Yen's dogmatism would preclude her from knowing that the wall is red, virtue reliabilists are committed to this claim. They hold that knowledge and justification can only arise from the exercise of cognitive abilities that are integrated into one's cognitive character. 30 Virtue reliabilists can explain the intuition that Yen knows that the wall is red at t1. In all but variant 1 Yen does know that the wall is red at t1. It's easy to mix up the variants, since information regarding Yen's dogmatic character is revealed only later. Furthermore, variant 1 is, perhaps, the most unnatural way of fleshing out the case. Most people would withdraw their belief if they were presented with a putative defeater. It's plausible that we implicitly assume that Yen is non-dogmatic when originally evaluating whether Yen knows at t1. First impressions are hard to shake, especially when it comes to intuitions. That said, those who think that the objection is successful are invited to see this paper as offering an argument against virtue reliabilism. My aim was to examine what virtue reliabilists ought to say about defeat given some of their core commitments. Other Virtue Theoretic Proposals I will briefly consider some alternative solutions to the problem of knowledge defeat, put forth by virtue reliabilists. In order to deal with defeaters, Greco (2010) adds a subjective justification condition to his analysis of knowledge. According to Greco, subject S's belief that p is subjectively justified "if and only if S's believing that p is properly motivated; if and only if S's believing that p results from intellectual dispositions that S manifests when S is motivated to believe the truth" (2010, p. 167). 29 Neta (2002, pp. 675-676) has provided a contextualist theory of knowledge that yields an account of defeat that bears some similarity to the account proposed here, in that according to it acquiring new evidence cannot on its own defeat knowledge. I'd like to thank an anonymous reviewer at Erkenntnis for pointing this out. 30 Hurka (2006), Sylvan (2017) and Lasonen-Aarnio (forthcoming-a) have criticized character-attaching virtue theories, like virtue reliabilism, for requiring that virtuous acts must arise from virtuous character. Footnote 28 (continued): "... once, but then, because usually trustworthy S lied to me, I stopped knowing it." I would like to thank an anonymous reviewer at Erkenntnis for alerting me to Azzouni's work. Greco's argument as to why knowledge entails subjective justification is motivated by his take on Aristotle's virtue ethics. According to him, virtuous action requires not only that the action arises from a virtuous character trait, but also that the action is properly motivated by one's virtuous character (Greco, 2010, p. 43).
I won't take issue with the difficult question under what conditions a subject is properly motivated to believe the truth, nor with Greco's motivation for adding a subjective justification component to his analysis of knowledge. For the sake of the argument, I'll also grant that in putative cases of knowledge defeat Greco's subjective justification condition isn't satisfied and that knowledge is hence lost in such cases. I only wish to note that the kind of virtue reliabilism that Greco endorses already has the necessary tools to explain our intuitions of knowledge defeat. Adding a subjective justification condition to the analysis isn't necessary and achieves nothing on this score. To me, adding this condition seems like an extra cost. Pritchard (2018) has argued that his anti-luck virtue epistemology can account for the phenomenon of knowledge defeat. He claims that a subject who comes to know that [that's a barn] in an area with no barn facades around loses her knowledge if she sees a sign that says that she is in barn-façade county. He writes that "the safety of her cognitive success is now in spite of her manifestation of relevant cognitive agency, rather than being to any significant degree because of it" (2018, p. 3075). Assuming that her belief that [that's a barn] is a product of her cognitive abilities, I fail to see why the subject's safe cognitive success wouldn't be to a significant degree because of the exercise of her cognitive abilities. After all, the fact that the subject trusts her perception seems to explain precisely why she continues to have a safe belief. Pritchard needs to tell us more about why the subject's safe cognitive success isn't attributable to her cognitive agency in cases like this, if his explanation of knowledge defeat is to succeed. Moreover, if I am correct, Pritchard already has the necessary tools to accommodate our intuitions of knowledge defeat. As far as I know, Sosa has not addressed the problem of knowledge defeat in print. However, given his distinction between animal and reflective knowledge, he could perhaps adopt the following view. 31 In putative cases of knowledge defeat one's original belief retains its aptness, and hence it amounts to animal knowledge. However, once the defeater is introduced, the subject can no longer aptly take her belief to be apt, which is what reflective knowledge would require (Sosa, 2011, ch. 1). Reflective knowledge requires that one competently assesses the risk of forming a false belief to be low enough, and arguably, one cannot have competently assessed the risk to be low enough if one has a defeater for one's belief. Therefore, defeaters would destroy reflective knowledge, but leave animal knowledge intact. This error-theoretic account of knowledge defeat rests on Sosa's distinction between animal and reflective knowledge. Since the error-theory that I gave is derivable from the core tenets of virtue reliabilism, it's simpler than the possible account that Sosa's more complicated framework could yield. Moreover, if I am correct, Sosa has the resources to accommodate our intuitions of knowledge defeat without resorting to his distinction between animal and reflective knowledge. To wrap up, the error-theory that I have presented is preferable to extant virtue reliabilist accounts of defeat since it is simpler than those accounts and stems from the core ideas of virtue reliabilism. Virtue reliabilists need not add bells and whistles to explain defeatist intuitions.
Conclusions I argued that virtue reliabilism is able to explain our defeat intuitions via a new kind of error-theory that falls directly out of the core tenets of virtue reliabilism. According to the error-theory, in paradigmatic cases of knowledge defeat where the subject holds onto her belief, the subject never knew to begin with. In cases where the subject suspends her judgment upon receiving the defeater she might have originally known, but doesn't anymore, since she lacks the relevant belief. In neither case is knowledge lost solely in virtue of the fact that the subject acquired a defeater, and hence knowledge defeat is an illusory phenomenon. Nevertheless, the defeatists are right in claiming that there are no cases where a subject retains her knowledge of p after having acquired a defeater for p.
10,402
2021-09-18T00:00:00.000
[ "Philosophy" ]
Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training Aspect-based sentiment analysis aims to identify the sentiment polarity of a specific aspect in product reviews. We notice that about 30% of reviews do not contain obvious opinion words, but still convey clear human-aware sentiment orientation, which is known as implicit sentiment. However, recent neural network-based approaches have paid little attention to the implicit sentiment entailed in reviews. To overcome this issue, we adopt Supervised Contrastive Pre-training on large-scale sentiment-annotated corpora retrieved from in-domain language resources. By aligning the representation of implicit sentiment expressions to those with the same sentiment label, the pre-training process leads to better capture of both implicit and explicit sentiment orientation towards aspects in reviews. Experimental results show that our method achieves state-of-the-art performance on SemEval-2014 benchmarks, and comprehensive analysis validates its effectiveness on learning implicit sentiment. Introduction Aspect-based sentiment analysis (ABSA) is a fine-grained variant aiming to identify the sentiment polarity of one or more mentioned aspects in product reviews. Recent studies tackle the task by either employing attention mechanisms (Wang et al., 2016b; Ma et al., 2017) or incorporating syntax-aware graph structures (He et al., 2018; Tang et al., 2020; Sun et al., 2019). Both methodologies aim to capture the corresponding sentiment expression towards a particular aspect, which is usually an opinion word that explicitly expresses sentiment polarity. For instance, given the review on a restaurant "Great food but the service is dreadful", current models attempt to find "great" for aspect "food" to determine the positive sentiment polarity towards it. Table 1: Examples of reviews containing implicit sentiment, with aspects marked in bold: "The waiter poured water on my hand and walked away"; "The bartender continued to pour champagne from his reserve"; "10 hours of battery life ..."; "The battery life is probably an hour". In the first two examples, "pour" expresses opposite emotions in different contexts. In the last two, people determine the sentiment orientation towards "battery" by referring to a common lifetime. However, implicit sentiment expressions widely exist in the recognition of aspect-based sentiment. Implicit sentiment expressions are sentiment expressions that contain no polarity markers but still convey clear human-aware sentiment polarity in context (Russo et al., 2015). As illustrated in Table 1, the comment "The waiter poured water on my hand and walked away" towards aspect "waiter" contains no opinion words, but can be clearly interpreted as negative. According to Table 2 (as seen in Section 4), 27.47% and 30.09% of reviews contain implicit sentiment in the Restaurant and Laptop datasets, respectively. However, most previous methods generally pay little attention to modeling implicit sentiment expressions. This motivates us to better solve the task of ABSA by capturing implicit sentiment in an advanced way. To equip current models with the ability to capture implicit sentiment, the inadequacy of ABSA datasets is the main challenge. With only a few thousand labeled examples, models could hardly recognize comprehensive patterns of sentiment expressions, and are unable to capture enough commonsense knowledge, which is required in sentiment identification.
This reveals that external sentiment knowledge should be introduced to solve the problem. Therefore, we adopt Supervised ContrAstive Pre-Training (SCAPT) on external large-scale sentiment-annotated corpora to learn sentiment knowledge. Supervised contrastive learning gives an aligned representation of sentiment expressions with the same sentiment label. In embedding space, explicit and implicit sentiment expressions with the same sentiment orientation are pulled together, and those with different sentiment labels are pushed apart. Considering that the sentiment annotations of the retrieved corpora are noisy, supervised contrastive learning enhances the noise immunity of the pre-training process. In addition, SCAPT contains review reconstruction and masked aspect prediction objectives. The former requires the representation to encode review context besides sentiment polarity, and the latter strengthens the model's ability to capture the sentiment target. Overall, the pre-training process captures both implicit and explicit sentiment orientation towards aspects in reviews. Experimental evaluations conducted on the SemEval-2014 (Pontiki et al., 2014) and MAMS datasets show that the proposed SCAPT outperforms baseline models by a large margin. The results on partitioned datasets demonstrate its effectiveness on both implicit and explicit sentiment expressions. Moreover, the ablation study verifies that SCAPT efficiently learns implicit sentiment expressions from the external noisy corpora. Codes and datasets are publicly available 1. The contributions of this work include: • We reveal that ABSA was only marginally tackled by previous studies since they paid little attention to implicit sentiment. • We propose Supervised Contrastive Pre-training to learn sentiment knowledge from large-scale sentiment-annotated corpora. • Experimental results show that our proposed model achieves state-of-the-art performance, and is effective at learning implicit sentiment. Implicit Sentiment As sentiment that can only be inferred within the context of reviews, implicit sentiment has been addressed by many studies on sentiment analysis. Toprak et al. (2010) and Russo et al. (2015) proposed similar terminologies (implicit polarity or polar facts), and provided corpora containing implicit sentiment. Deng and Wiebe (2014) detected implicit sentiment via inference over explicit sentiment expressions and so-called goodFor/badFor events. Choi and Wiebe (2014) used the +/-EffectWordNet lexicon to identify implicit sentiment, by assuming that sentiment expressions are often related to states and events which have positive/negative/null effects on entities. To investigate the ubiquity of implicit sentiment in ABSA, we split the SemEval-2014 Restaurant and Laptop benchmarks into an Explicit Sentiment Expression (ESE) slice and an Implicit Sentiment Expression (ISE) slice, based on the presence of opinion words. Fan et al. (2019) have annotated opinion words for target aspects on the SemEval benchmarks. We notice that the provided datasets do not keep the original order and have some differences in their texts. Thus, we first match the annotations to the original datasets, and then manually pick the reviews including opinion words towards the aspect from the remaining part. As the results in Table 2 (as seen in Section 4) show, 27.47% and 30.09% of reviews fall into the ISE slices of Restaurant and Laptop, respectively, revealing that implicit sentiment exists widely in ABSA and is worth exploring.
Methodology In this section, we introduce the pre-training and fine-tuning scheme of our models. In pre-training, we introduce Supervised ContrAstive Pre-Training (SCAPT) for ABSA, which learns the polarity of sentiment expressions by leveraging a retrieved review corpus. In fine-tuning, aspect-aware fine-tuning is adopted to enhance the ability of models on aspect-based sentiment identification. Supervised Contrastive Pre-training Three objectives are included in SCAPT: supervised contrastive learning, masked aspect prediction, and review reconstruction. The details of SCAPT's procedure are shown in Figure 1. Figure 1: An overview of SCAPT on ABSA. SCAPT consists of three objectives, in which supervised contrastive learning aligns the representations with the same sentiment label. Transformer Encoder Backbone The pre-training scheme is built on a Transformer encoder (Vaswani et al., 2017). We denote the retrieved review corpus used in SCAPT as $D = \{x_1, x_2, \ldots, x_n\}$, comprising $n$ sentences, where the $i$-th sentence $x_i$ is labeled with $y_i$. For each input sentence $x_i$, following Devlin et al. (2019), we format the input as $I_i = \mathrm{[CLS]} + x_i + \mathrm{[SEP]}$ to feed into the model. The output vector of the [CLS] token encodes the sentence representation $\tilde{h}_i$. Supervised Contrastive Learning Inspired by Khosla et al. (2020), we adopt a supervised contrastive learning objective in SCAPT to align the representations of explicit and implicit sentiment expressions with the same emotion. Supervised contrastive learning encourages the model to capture the entailed sentiment orientation in context and incorporate it in the sentiment representation. Specifically, for $(x_i, y_i)$ within a batch $B$, we first extract the sentiment representation $s_i = W_s \tilde{h}_i$ from the sentence representation $\tilde{h}_i$ of $x_i$. $W_s$ could be seen as a trainable sentiment perceptron for sentences. The supervised contrastive loss on the batch $B$ is defined as
$$P^{sup}_B(i, c) = \frac{\exp(\mathrm{sim}(s_i, s_c)/\tau)}{\sum_{k \in B,\, k \neq i} \exp(\mathrm{sim}(s_i, s_k)/\tau)}, \qquad L^{sup}_B = -\sum_{i \in B} \frac{1}{C_i} \sum_{\substack{c \in B,\, c \neq i \\ y_c = y_i}} \log P^{sup}_B(i, c).$$
Here, $P^{sup}_B(i, c)$ indicates the likelihood that $s_c$ is most similar to $s_i$, and $\tau$ is the temperature of the softmax. We simply use $\mathrm{sim}(s_i, s_c) = s_i \cdot s_c$ as the similarity metric. The supervised contrastive loss $L^{sup}_B$ is calculated for every sentence $s_i$ in $B$, where $C_i = |\{c \mid y_c = y_i,\, c \neq i\}|$ is the number of samples in the same category $y_i$ in $B$. Notably, we do not directly use the sentence representation in the supervised contrastive pre-training process. Instead, we use the sentiment representation to make full use of document-level labeled corpora in mining the inherent sentiment perception. Review Reconstruction Motivated by the power of the denoising auto-encoder (Vincent et al., 2008) and its success in pre-training models (Lewis et al., 2020), we further propose a review reconstruction task to enhance the sentence representation on context semantic modeling. If the model were pre-trained solely on the supervised contrastive learning task, which only focuses on sentiment regularization, the essential semantic information would not be completely preserved in the sentence representations. Thus, we additionally employ review reconstruction in SCAPT to capture comprehensive context information in the sentence representations. Generally, this objective reconstructs the whole sentence $x_i$ from the sentence representation $\tilde{h}_i$. After encoding $x_i$ into the sentence representation $\tilde{h}_i$, the latter is fed to a Transformer decoder for autoregressive generation, $\hat{x}_i = \mathrm{Decoder}(\tilde{h}_i)$, where $\hat{x}_i$ is the recovered sentence and $\tilde{h}_i$ acts as a beginning-of-sentence input embedding in the decoding process to control the whole generation.
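To make the objective concrete, here is a minimal PyTorch sketch of the supervised contrastive loss defined above. It is an illustrative reimplementation from the formulas, not the authors' released code; the dot-product similarity and the temperature default of τ = 0.07 follow the descriptions in this paper, while the function and variable names are our own.

```python
import torch

def supervised_contrastive_loss(s, labels, tau=0.07):
    """Supervised contrastive loss over sentiment representations.

    s: [B, d] batch of sentiment representations s_i = W_s @ h_i
    labels: [B] integer sentiment labels y_i
    """
    B = s.size(0)
    sim = (s @ s.t()) / tau                                     # pairwise dot products
    off_diag = ~torch.eye(B, dtype=torch.bool, device=s.device)
    sim = sim.masked_fill(~off_diag, float("-inf"))             # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log P_B^sup(i, .)
    log_prob = log_prob.masked_fill(~off_diag, 0.0)             # avoid -inf * 0 below

    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & off_diag).float()
    C = pos.sum(dim=1).clamp(min=1)                             # C_i, guarded for lone labels
    return -((log_prob * pos).sum(dim=1) / C).sum()             # sum over anchors i

# Usage sketch: s = sentiment_perceptron(h_cls); loss = supervised_contrastive_loss(s, y)
```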
We use the original sentence $x_i$ without masking as the gold reference of the review reconstruction objective, giving the loss $L^{RR}_i = -\sum_{t=1}^{|x_i|} \log P(w_t \mid w_{<t}, \tilde{h}_i)$. Masked Aspect Prediction In masked aspect prediction, the model learns to predict the masked aspect from a corrupted version of each review. The masking strategy for input reviews consists of the following two steps (a code sketch of the scheme is given below): 1. Aspect Span Masking. Since all inputs are from our retrieved corpora, we ensure that each review contains at least one aspect. For each input, the tokens of aspect spans are replaced with [MASK] with 80% probability, or replaced with a random token with 10% probability, and otherwise kept unchanged. Aspect span masking provides a better capture of aspect words. 2. Random Masking. After aspect span masking, if the proportion of masked tokens is less than 15%, we randomly mask extra tokens from the remaining ones to reach that proportion. We denote the input token of [MASK] as $w_{\mathrm{MASK}}$. For each masked input token at the $k$-th position, its contextualized hidden representation $h_{ik}$ is fed into a softmax layer to predict the original word: $P_{map}(k) = \mathrm{softmax}(W_o h_{ik})$. Here, $h_{ik}$ is the output of the Transformer encoder at the $k$-th position, $W_o$ is a trainable parameter matrix, and $P_{map}(k)$ indicates the predicted probability of the original word at the $k$-th position. The masked aspect prediction loss is an accumulation of the log-likelihood of the predictions at each masked position: $L^{MAP}_i = -\sum_{k \in M_i} \log P_{map}(k)[w_{ik}]$, where $M_i$ is the set of masked positions and $w_{ik}$ the original token at position $k$. Different from MLM (Devlin et al., 2019) or sentiment masking (Tian et al., 2020), masked aspect prediction focuses more on modeling aspect-related context information in aspect-based representations, which complements the other pre-training objectives and purposefully benefits our fine-tuning scheme. Joint Training The three losses mentioned above are combined and jointly trained in SCAPT. For the overall pre-training loss $L^{pre}_B$ on batch $B$, the review reconstruction loss and masked aspect prediction loss are counted on each example $b \in B$, and $\alpha$ and $\beta$ are coefficients to balance the objectives: $L^{pre}_B = L^{sup}_B + \sum_{b \in B} (\alpha L^{RR}_b + \beta L^{MAP}_b)$. Aspect-Aware Fine-tuning Our proposed models are fine-tuned on ABSA benchmarks by aspect-aware fine-tuning, to fully leverage their ability of sentiment identification. They also learn to capture aspect-related sentiment information during fine-tuning. Specifically, we are given a sentence $x^{ab} = \{w_1, \ldots, w_a, \ldots, w_n\}$ in an ABSA dataset $D^{ab}$, where $w_a$ is one of the aspects occurring in $x^{ab}$. In fine-tuning, models predict the aspect-level sentiment orientation $y^{ab}_a$ according to the aspect-based representation $\tilde{h}^{ab}_a$ and the sentiment representation $s^{ab}$. Aspect-based Representation Research on pre-trained contextualized word representations (Ethayarajh, 2019) has demonstrated that they can capture context information related to the word. Thus, instead of using laborious methods to embed the aspect information, we extract the aspect-based representation $\tilde{h}^{ab}_a$ by collecting the final hidden states that correspond to $w_a$. In fine-tuning, $\tilde{h}^{ab}_a$ focuses on aspect-related words in context, which we believe enhances the perception of aspect-specific opinion words and gives the model a good view of explicit sentiment. Specifically, letting $I_a$ be the set of token indices in aspect $w_a$, we average the hidden states $h_i$ for all $i \in I_a$ to acquire the aspect-based representation: $\tilde{h}^{ab}_a = \frac{1}{|I_a|} \sum_{i \in I_a} h_i$. Notably, when processing multiple aspects $w_{a_1}, w_{a_2}, \ldots$ in a sentence $x^{ab}$, we extract the aspect-based representations $\tilde{h}^{ab}_{a_1}, \tilde{h}^{ab}_{a_2}, \ldots$ in a single run, while previous methods embed the aspect and encode the whole input for each aspect one by one.
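The two-step corruption scheme described above lends itself to a short sketch. The function below is a hypothetical implementation over string tokens, assuming aspect spans are given as index pairs; the 80/10/10 replacement scheme and the 15% budget follow the text, while the interface is our own invention.

```python
import random

def mask_review(tokens, aspect_spans, vocab, mask_token="[MASK]", budget=0.15):
    """Corrupt a tokenized review for masked aspect prediction.

    tokens: list of token strings
    aspect_spans: list of (start, end) index pairs covering aspect terms
    vocab: tokens to draw random replacements from
    Returns (corrupted_tokens, sorted list of positions to predict).
    """
    out, masked = list(tokens), set()
    # Step 1: aspect span masking with the 80/10/10 scheme.
    for start, end in aspect_spans:
        for k in range(start, end):
            r = random.random()
            if r < 0.8:
                out[k] = mask_token
            elif r < 0.9:
                out[k] = random.choice(vocab)
            # else: keep the original token
            masked.add(k)                      # predicted regardless of replacement
    # Step 2: random masking up to the 15% budget.
    target = int(budget * len(tokens))
    rest = [k for k in range(len(tokens)) if k not in masked]
    random.shuffle(rest)
    while len(masked) < target and rest:
        k = rest.pop()
        out[k] = mask_token
        masked.add(k)
    return out, sorted(masked)
```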
Representation Combination For sentiment classification, the aspect-based representation and the sentiment representation are considered jointly to predict the aspect-level sentiment polarity. In this way, the fine-tuned model builds a perception of both word-occurrence-related explicit sentiment and semantic-related implicit sentiment. We use the same sentiment perceptron $W_s$ as in pre-training to extract the sentiment representation $s^{ab}$ from the sentence representation. Then the sentiment representation $s^{ab}$ and the aspect-based representation $\tilde{h}^{ab}_a$ are concatenated for predicting the aspect-level sentiment polarity: $\hat{y}^{ab}_a = \mathrm{softmax}(W_a [s^{ab}; \tilde{h}^{ab}_a])$, where $\hat{y}^{ab}_a$ is the prediction for aspect $w_a$ and $W_a$ is a trainable parameter matrix. Lastly, our fine-tuning objective is the cross-entropy loss of the prediction task, $L^{ab} = -\sum_{x^{ab} \in D^{ab}} \log \hat{y}^{ab}_a$. Experimental Settings ABSA Datasets Our experiments are mainly conducted on two benchmarks, the Laptop and Restaurant reviews from SemEval-2014 task 4 (Pontiki et al., 2014). We use the ESE and ISE slices of their test parts to evaluate model performance on explicit and implicit sentiment, respectively. The process to build these slices is detailed in Section 2. Furthermore, we also use a more challenging dataset, Multi-Aspect Multi-Sentiment (MAMS), which shares the same domain as SemEval-2014 Restaurant. All these datasets involve three sentiment categories: positive, neutral, and negative. The details of these ABSA datasets can be found in Table 2. Retrieved External Corpora We retrieve large-scale sentiment-annotated corpora from document-level labeled data for pre-training. Specifically, we first extract five-star-rated/one-star-rated reviews from the Yelp 2 and Amazon Review (He and McAuley, 2016) datasets, and label them as positive/negative. Such a procedure can mitigate the noise in the 5-way rated document-level sentiment language source. Then we preserve reviews within the topic of restaurant/laptop to make sure that the pre-training corpora and ABSA datasets are in the same domain. Later, we split these document-level reviews into sentences and preserve sentences containing the same aspect terms as those mentioned in the ABSA training sets. The sentiment label of each sentence is determined by the label of its original review. After the retrieving process, we finally acquire about 1.56/0.51 million sentence-level reviews from Yelp/Amazon that are noisily labeled as positive/negative. After manually checking a small portion of both corpora, we confirm that both implicit and explicit sentiment expressions are present. We pre-train our models on the retrieved corpus that shares the same domain with the downstream ABSA task. Models with SCAPT We apply SCAPT to a Transformer encoder and BERT, and these models are fine-tuned by aspect-aware fine-tuning; the resulting models are called TransEncAsp+SCAPT and BERTAsp+SCAPT, respectively. We use a 300-dimensional randomly initialized Transformer encoder with 6 layers and 6 heads and BERT-base-uncased as the bases. The pre-training for the Transformer encoder and BERT takes 80 and 8 epochs, respectively. We adopt Adam (Kingma and Ba, 2015) with warm-up to optimize our models, with learning rate 1e−3 for the Transformer encoder and 5e−5 for BERT. The pre-trained models are fine-tuned by aspect-aware fine-tuning with a 5e−5 learning rate. The hyper-parameters are set as α = β = 1 for combining objectives in SCAPT, and τ = 0.07 in supervised contrastive learning.
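As a minimal sketch of the representation combination described above (a hypothetical module with illustrative names; the paper's released code may differ), the fine-tuning head concatenates the sentiment representation with the mean-pooled aspect representation and feeds the result to a linear classifier trained with cross-entropy:

```python
import torch
import torch.nn as nn

class AspectAwareHead(nn.Module):
    """Combine sentiment and aspect-based representations for
    three-way (positive/neutral/negative) aspect-level prediction."""

    def __init__(self, hidden, n_classes=3):
        super().__init__()
        self.W_s = nn.Linear(hidden, hidden, bias=False)   # shared sentiment perceptron
        self.W_a = nn.Linear(2 * hidden, n_classes)        # classifier over [s; h_asp]

    def forward(self, h_cls, h_tokens, aspect_mask):
        # h_cls: [B, d]; h_tokens: [B, T, d]; aspect_mask: [B, T] booleans
        s = self.W_s(h_cls)                                # sentiment representation
        m = aspect_mask.unsqueeze(-1).float()
        h_asp = (h_tokens * m).sum(1) / m.sum(1).clamp(min=1)  # mean over aspect tokens
        return self.W_a(torch.cat([s, h_asp], dim=-1))     # logits for cross-entropy

# loss = nn.CrossEntropyLoss()(head(h_cls, h_tokens, mask), gold_labels)
```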
Baselines We compare the proposed models with baselines from different perspectives to comprehensively evaluate the performance of our approach, including BERT-based approaches such as that of Rietzler et al. (2020) and R-GAT+BERT. To better analyze the effect of SCAPT and aspect-aware fine-tuning, we further propose the following variants as baselines: Table 3: Overall performance of different methods on Restaurant and Laptop. We rerun the code of the baselines and report their accuracy on the ESE and ISE slices of the two datasets. For the baselines for which the accuracy or F1-score is missing, we also report the accuracy and F1-score of our rerun version, and these results are marked with *. • TransEncAsp: Directly apply aspect-aware fine-tuning on a randomly initialized Transformer encoder without pre-training. • BERTAsp: Directly apply aspect-aware fine-tuning on BERT without SCAPT pre-training. • BERTAsp+CEPT: Merely replace the supervised contrastive learning loss with cross-entropy loss in SCAPT. Other settings are the same as BERTAsp+SCAPT. Results and Analysis This section mainly demonstrates the experimental results. Our model achieves state-of-the-art results on three ABSA benchmarks, and we illustrate the representation-alignment effect of supervised contrastive learning and the effectiveness of the other parts from several perspectives. Moreover, we reveal that our model is capable of identifying implicit sentiment, and attribute its effectiveness to supervised contrastive learning in SCAPT. Main Results The performance of the baselines and our proposed models is shown in Table 3. Models are evaluated with Accuracy and Macro-F1. According to the results, several observations can be noted. Our model achieves SOTA performance. BERTAsp+SCAPT outperforms the current SOTA model by 1.97%/3.80% on Restaurant/Laptop. TransEncAsp+SCAPT performs better than most baselines without pre-trained knowledge. Moreover, BERTAsp+SCAPT also achieves the best performance on the ESE/ISE slices of the two datasets, revealing the effectiveness of the proposed pre-training scheme. After pre-training with SCAPT, models improve significantly on ABSA tasks. Compared with BERTAsp, which is directly fine-tuned on the ABSA datasets, BERTAsp+SCAPT achieves a 3.31%/4.23% performance gain on Restaurant/Laptop, which is convincing proof that acquiring in-domain knowledge with proper adaptive pre-training is still necessary for knowledge-enhanced models, and SCAPT is an effective approach to adopt. Moreover, TransEncAsp+SCAPT is 6.29%/11.34% better than TransEncAsp, illustrating that incorporating sentiment knowledge with SCAPT greatly potentiates ABSA models. SCAPT is good at learning implicit sentiment. This can be verified from several perspectives. First, compared with its performance on ESE, BERTAsp+SCAPT appears to be much better on ISE. Compared with other works, BERTAsp+SCAPT is around 0-2% better on the ESE slices, but surpasses the previous SOTA model by 4.49%/4.60% on the ISE slices. Therefore, the strong performance of BERTAsp+SCAPT is mainly attributable to its awareness of implicit sentiment. Second, TransEncAsp+SCAPT behaves much better than BERTAsp on the ISE slices. Despite being exposed only to a million-scale pre-training corpus, TransEncAsp+SCAPT is generally worse than BERTAsp on the whole task, but exceeds BERTAsp by 4.88%/4.43% on the ISE slices. This demonstrates that SCAPT is data-efficient at learning implicit sentiment. Last, after pre-training with SCAPT, models attain a remarkable performance gain on ISE that is much more significant than on ESE. BERTAsp+SCAPT is 2% better than BERTAsp on ESE, but outperforms the latter by 8.61%/9.20% on ISE.
As for the Transformer encoder based models, the performance gain on ISE after SCAPT exceeds 20%. We conclude that what the models have learned in SCAPT is dominantly the perception of implicit sentiment. Aspect-aware fine-tuning serves as a complement to SCAPT. We find that models with aspect-aware fine-tuning perform better on the ESE slices of the datasets. Specifically, BERTAsp performs worse on ISE but better on ESE compared with BERT-SPC, and is therefore evaluated to be better on the two datasets. The better performance of BERTAsp on the ESE slices may be mainly due to its use of the aspect-based representation, which attends to aspect-related context that may contain sentiment orientation. This characteristic of aspect-aware fine-tuning makes it suitable for enhancing the recognition of explicit sentiment in models pre-trained with SCAPT. Table 4 shows the performance of the baselines and our models on the MAMS dataset. Though it is challenging to distinguish the sentiment polarities of multiple aspects in a single sentence, the results show that TransEncAsp+SCAPT outperforms baselines that lack external sentiment knowledge, and BERTAsp+SCAPT achieves state-of-the-art performance in the multi-aspect scenario. The efficiency of our models can be attributed to both SCAPT and aspect-aware fine-tuning, since they enhance the learning of implicit and explicit sentiment, respectively. Besides, BERTAsp outperforms BERT-SPC by a much larger margin in MAMS than in Restaurant/Laptop. We suppose the superior performance of BERTAsp is credited to its modeling of contextual information in the aspect-based representation, which is more important in multi-aspect ABSA. Implicit Sentiment Learning in SCAPT We conclude that the key aspects of learning implicit sentiment in SCAPT are exposure to sentiment knowledge and the use of supervised contrastive learning. The results in Table 3 show that implicit sentiment is more challenging to learn than explicit sentiment, and previous methods based on attention or syntax modeling do not tackle the issue perfectly. The knowledge-enhanced baselines perform slightly better, with a 5% performance gain on ISE. By pre-training on large-scale sentiment-annotated corpora, our models achieve a remarkable performance improvement in implicit sentiment learning, with 19.59%/29.62% relative gains for TransEncAsp. These results prove that in-domain sentiment knowledge is absolutely necessary for implicit sentiment learning, which is provided by our retrieved corpora. Furthermore, the models pre-trained with the supervised contrastive learning objective surpass cross-entropy classification on the ISE slices. Compared with BERTAsp+CEPT, BERTAsp+SCAPT is 4.49%/1.73% better on ISE, which leads to its better performance on the whole task. The deployment of the supervised contrastive learning objective enhances the noise immunity of the pre-training process; thus the pre-trained models are more effective at learning implicit sentiment. Ablation Study on SCAPT As illustrated in Table 5, we validate the effectiveness of each part by an ablation study. First, removing the supervised contrastive learning loss (-SCL) leads to a 2.38% performance drop on Restaurant, which is more significant than removing the other two objectives (-MAP-RR). This verifies that supervised contrastive learning plays the primary role in SCAPT. Besides, we observe that removing the masked aspect prediction and review reconstruction objectives also brings about a performance drop. This demonstrates that these mechanisms are also indispensable in SCAPT.
Hidden Sentiment Representations To better understand the behavior of our proposed methods, we further perform a visualization of the sentiment representations using t-SNE (Van der Maaten and Hinton, 2008). As seen in Figure 3, models with sentiment pre-training have a strong embedding ability for sentiment expressions, while many misclassifications can be found for BERTAsp. The visualization also shows that BERTAsp+SCAPT tightly clusters the representations of both implicit and explicit sentiment expressions. Aspect Robustness We analyze the robustness of our proposed models on aspect robustness test sets. Aspect robustness in ABSA was first emphasized and tested by Xing et al. (2020) by applying several perturbations to reviews from Restaurant and Laptop. TextFlint (Wang et al., 2021) extended these transformations by introducing transformations from various linguistic perspectives. The test sets are designed to probe whether models can distinguish the sentiment of the target aspect from non-target aspects and unrelated information. Table 6 lists the performance of the tested models, in which the robustness of our proposed models is convincingly demonstrated. Compared to the obvious performance drops of the baseline models, BERTAsp+SCAPT performs significantly better, with only a 9.05%/6.63% decline on Restaurant and Laptop. The results show that models pre-trained with SCAPT are more robust to aspect-level perturbations, which we attribute to better modeling of sentiment and context information with the enhancement of in-domain sentiment knowledge. Related Work Neural Network Methods for ABSA Early neural network methods (Wang et al., 2016b; Ma et al., 2017) in ABSA employed various attention mechanisms to identify aspect-related context. Memory Networks (Tang et al., 2016; Chen et al., 2017; Wang et al., 2018) were further proposed to identify the corresponding sentiment expression for aspects. Recent efforts (He et al., 2018; Tang et al., 2020) used syntax information from dependency trees to enhance attention-based models. Many works (Sun et al., 2019) make use of graph neural networks to incorporate tree-structured syntactic information and capture aspect-related information in text. Another line of work in ABSA concentrated on utilizing external corpora and pre-trained knowledge to enhance the semantic awareness of models (Rietzler et al., 2020; Dai et al., 2021). Contrastive Representation Learning Our work adopts a contrastive method in representation learning to acquire discriminating instance representations. Recent work on contrastive representation learning of instances is usually based on estimating representation similarities on similar and dissimilar pairs, which are usually composed in a self-supervised manner. Specifically, Khosla et al. (2020) illustrated a supervised contrastive method that builds positive pairs between instances with the same class label and pulls their representations together. In this work, our models learn to capture implicit sentiment from informative but noisy language resources in supervised contrastive pre-training. Conclusion In this paper, we introduce Supervised ContrAstive Pre-Training (SCAPT) for ABSA. Noticing that implicit sentiment is not well handled by current neural network based ABSA models, we argue that more sentiment knowledge is required to solve this issue. We therefore retrieve large-scale in-domain annotated corpora, and propose SCAPT to learn sentiment knowledge from the corpora.
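For reference, the kind of t-SNE projection reported here can be produced with scikit-learn along the following lines. This is a generic sketch rather than the authors' plotting code, and it assumes the sentiment representations and gold labels have already been collected into arrays:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_sentiment_tsne(reps, labels, out_path="tsne.png"):
    """Project sentiment representations (n_samples x d) to 2-D and
    colour the points by gold sentiment label."""
    xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(reps)
    labels = np.asarray(labels)
    for lab, name in enumerate(["negative", "neutral", "positive"]):
        pts = xy[labels == lab]
        plt.scatter(pts[:, 0], pts[:, 1], s=4, label=name)
    plt.legend()
    plt.savefig(out_path, dpi=200)
```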
Experimental results show that our proposed models with SCAPT achieve SOTA performance. Moreover, SCAPT is proven to be effective in implicit sentiment learning. We hope to inspire future research on learning and modeling implicit sentiment with knowledge-enhanced methods.
5,805.8
2021-11-03T00:00:00.000
[ "Computer Science" ]
Facile Synthesis, Characterization, and Photocatalytic Performance of BiOF/BiFeO3 Hybrid Heterojunction for Benzylamine Coupling under Simulated Light Irradiation: Under simulated light irradiation, the aerobic oxidation of benzylamine to N-benzylidenebenzylamine was carried out as a model reaction to investigate the photocatalytic activity of a hydrothermally prepared composite based on BiOF and BiFeO3 materials. The prepared photocatalysts were characterized using several spectroscopic techniques, such as powder X-ray diffraction (PXRD), diffuse reflectance spectroscopy (DRS), scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and Fourier transform infrared spectroscopy (FTIR). Band gap analysis showed that the composite exhibits a band gap that lies in the UV region (3.5 eV). In comparison, pristine BiOF and BiFeO3 exhibited band gaps of 3.8 eV and 2.15 eV, respectively. N-benzylidenebenzylamine was selectively obtained with a high conversion yield of ~80% under atmospheric conditions, and the product was confirmed using 1H-NMR, 13C-NMR, and FTIR spectroscopic techniques. Various control experiments were conducted to further confirm the enhanced photocatalytic performance of the reported composite. Introduction Since its discovery in 1960, bismuth ferrite has garnered considerable interest due to its rarity as one of the few multiferroics with the simultaneous presence of ferroelectric and antiferromagnetic order characteristics in a perovskite structure [1,2]. An inorganic semiconductor, bismuth ferrite (BiFeO3, BFO) is a promising multifunctional material with a wide range of intriguing applications, including spintronics [3,4], sensors [5,6], photocatalysis [7-9], optical devices [10,11], and data storage [12]. Due to BFO's narrow band gap energy and the potential of its internal polarization to suppress the electron-hole recombination rate, it has unique advantages as a heterogeneous photocatalyst [13,14]. Compared to the commonly used TiO2 photocatalyst [15], which absorbs in the UV region due to its broad band gap, BFO's narrow band gap enables the greatest and most efficient exploitation of visible light from solar radiation energy [16,17]. In photocatalytic applications, the behavior of BFO in the presence of visible light is of particular interest due to the number of distinctive properties it holds, including low cost [18], nontoxicity [19,20], chemical stability [18], a discrete crystalline structure [18], and special electro-optical properties [21]. BFO perovskites can only be stabilized over a limited range of temperatures, and, until now, the problem of obtaining single-phase pure BFO crystallites has remained challenging. Nonetheless, a range of wet chemical techniques have been employed to create pure single-phase BiFeO3 [8]. A hydrothermal procedure can create pure single-phase BiFeO3 at low temperatures, but pressure is necessary [22]. These procedures include the microemulsion technique [23], the ferrioxalate precursor method [24], the solution combustion method [25], the citrate method [26], and sol-gel techniques based on ethylene glycol [27]. It has been proven in many previous studies that metal-oxide-based photocatalysts perform better when heterojunctions are formed; thus, attention to the assembly of various heterojunction structures has grown in the past few years.
For example, heterojunctions such as BiOF/BiOI [28], MoS2/BiVO4 [29], BiOI/TiO2 [30], NiS/CdS [31], and SnO2/TiO2 [32] were used as photocatalysts for the removal of diclofenac potassium, the photodegradation of methylene blue, the photodegradation of methyl orange, photocatalytic H2 production, and the decolorization of rhodamine-B, respectively. In this study, the coprecipitation method was used to generate pure single-phase BFO crystallites at a low temperature of about 70 °C. This method was able to create BFO powders that were well crystallized, had controlled morphology, and had a narrow particle-size distribution. Notably, all previous studies on BFO materials have focused on magnetic and dielectric properties. Nevertheless, our present study focuses on the photocatalytic aspect of this material and a newly prepared heterojunction based on BiFeO3 and BiOF. The photocatalytic activity has been investigated using the photocatalytic benzylamine coupling reaction as a model reaction. Materials and General Procedures Bismuth nitrate pentahydrate (Bi(NO3)3·5H2O), iron nitrate nonahydrate (Fe(NO3)3·9H2O), ammonium fluoride (NH4F), deuterated chloroform (CDCl3), acetonitrile (CH3CN, ACN), glacial acetic acid (CH3COOH), and benzylamine (C7H9N) were purchased from Sigma-Aldrich (St. Louis, MO, USA) and used as received. FTIR spectra were obtained using ATR-IR spectroscopy (Agilent Technologies Cary 600 Series FTIR Spectrometer, Santa Clara, CA, USA) across a range of 4000 to 500 cm−1 (512-scan average); the background was first collected using potassium bromide, and the oily product was sandwiched between two KBr disks. Powder X-ray diffraction (PXRD) measurements were made using a benchtop Rigaku MiniFlex X-ray diffractometer (Neu-Isenburg, Germany) with a CuKα radiation tube (λ = 1.542 Å) operating at 40 kV across a range of 3-50° (2θ) at a rate of 2° min−1. The sample surface morphology and elemental composition were analyzed using a Quattro ESEM scanning electron microscope (SEM) (Waltham, MA, USA) equipped with an energy-dispersive X-ray (EDX) detector, operated at high vacuum with a 30 kV accelerating voltage. Diffuse reflectance spectra (DRS) of BiFeO3, BiOF, and the composite were measured using a Shimadzu UV-3600 spectrophotometer (Kyoto, Japan) over a wavelength range of 200 nm to 800 nm after a baseline measurement using barium sulfate. 1H-NMR and 13C-NMR spectra were acquired using a Varian 400 MHz spectrometer (Palo Alto, CA, USA) with d-chloroform as the solvent to validate the identity of the product. Nitrogen sorption experiments at 77 K were used to assess the surface area and porosity of the photocatalysts. Preparation of BiFeO3 Bismuth ferrite was synthesized using the coprecipitation method. Bismuth nitrate and iron nitrate were dissolved separately in 2-methoxyethanol and left under stirring until completely dissolved. In detail, 6.18 × 10−3 moles each of bismuth nitrate and iron nitrate, equivalent to 3 g and 2.5 g, respectively, were used. A total of 30 mL of 2-methoxyethanol was used to dissolve the bismuth nitrate, and an additional 5 mL of ethylene glycol was added to enhance its solubility. For the iron nitrate, 8 mL of 2-methoxyethanol was needed. The two solutions were mixed, and the pH was set to 5 using acetic acid and ammonium hydroxide. The solution was then left under continuous stirring and heating at 70 °C for around 7 h.
The product was then collected and washed with DI water and ethanol several times. It was subsequently dried for 2 h at 80 °C, then calcined at 600 °C for 4 h (Scheme 1). Scheme 1. Schematic syntheses of BiFeO3, BiOF, and the BiOF/BiFeO3 composite. Preparation of BiOF According to a previously reported procedure, BiOF was synthesized using a simple coprecipitation method [28]. Bismuth nitrate and ammonium fluoride were utilized in quantities of 0.002 mol each, or 970 mg and 74.08 mg, respectively. Bismuth nitrate was dissolved in 20 mL of 4 M acetic acid with constant stirring and heating at 80 °C. Then, ammonium fluoride was dissolved in 2 mL of deionized (DI) water and added to the bismuth nitrate solution. The mixture was left under continuous stirring and heating at 80 °C for 7 h. The product was then collected and washed with DI water and ethanol several times. It was subsequently dried for 2 h at 80 °C and calcined for 4 h at 400 °C (Scheme 1). Preparation of BiFeO3/BiOF Composite The BiFeO3/BiOF composite was synthesized using a simple hydrothermal method. A 1:1 molar ratio of BiFeO3 to BiOF was used; the solids were added to a 25 mL Teflon-lined autoclave with 15 mL of DI water. The mixture was sonicated for 1 h before heating at 140 °C for 72 h (Scheme 1). Photocatalytic Activity In a 25 mL round-bottom flask connected to a condenser, the photocatalytic reaction was conducted under aerobic conditions. An optimum of 5.0 mg (0.16, 0.02, and 0.009 mmol for BiOF, BiFeO3, and the BiOF/BiFeO3 composite, respectively) of each sample was added to 51 µL (0.50 mmol) of benzylamine in 2 mL of acetonitrile (ACN). The reaction was then irradiated by a 400 W halogen lamp for 24 h. The photocatalysts were washed with ACN and separated from the product by syringe filtration. Product purification was completed by simple column chromatography using silica gel and a mixture of ethyl acetate and hexane (1:2) to remove any possible impurities or byproducts. To isolate the product (N-benzylidenebenzylamine) that was dissolved in ACN, the solvent was evaporated using a rotary evaporator. Finally, 1H-NMR, 13C-NMR, and FTIR techniques were used to confirm the identity of the product. All control experiments followed the same procedure with the required changes in some variables, as outlined in Table 1.
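As a quick arithmetic check on the BiFeO3 preparation above, the quoted reagent masses follow from the stated 6.18 × 10−3 mol and the hydrate molar masses. The short Python sketch below reproduces them; the molar masses are standard literature values, not taken from the paper:

```python
# Verify the reagent masses quoted for the BiFeO3 synthesis.
MW_BI_NITRATE = 485.07   # g/mol, Bi(NO3)3·5H2O (literature value)
MW_FE_NITRATE = 404.00   # g/mol, Fe(NO3)3·9H2O (literature value)
n_mol = 6.18e-3          # mol of each reagent

print(f"Bi(NO3)3·5H2O: {n_mol * MW_BI_NITRATE:.2f} g")  # ~3.00 g, matching the 3 g used
print(f"Fe(NO3)3·9H2O: {n_mol * MW_FE_NITRATE:.2f} g")  # ~2.50 g, matching the 2.5 g used
```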
The product was then collected and washed with DI water and ethanol several times. It was subsequently dried for 2 h at 80 °C and calcined for 4 h at 400 °C (Scheme 1). Preparation of BiFeO3/BiOF Composite BiFeO3/BiOF composite was synthesized using the simple hydrothermal method. A molar ratio of 1:1 was used from each of BiFeO3 and BiOF, which was added to a 25 mL Teflon line autoclave with 15 mL of DI water. The mixture was sonicated for 1 h before heating at 140 °C for 72 h (Scheme 1). Photocatalytic Activity In a 25 mL round-bottom flask connected to a condenser, the photocatalytic reaction was conducted under aerobic conditions. An optimum of 5.0 mg (0.16, 0.02, 0.009 mmol for BiOF, BiFeO3, and BiOF/BiFeO3 composite, respectively) of each sample was added to 51 µL (0.50 mmol) of benzyl amine in 2 mL of acetonitrile (ACN). The reaction was then irradiated by a 400 W halogen lamp for 24 h. The photocatalysts were washed with ACN and separated from the product by syringe filtration. Product purification was completed by simple column chromatography using silica gel and a mixture of ethyl acetate and hexane (1:2) to remove any possible impurities or byproducts. To isolate the product (N-benzylidenebenzylamine) that was dissolved in ACN, the solvent was evaporated using a rotary evaporator. Finally, 1 H-NMR, 13 C-NMR, and FTIR techniques were used to confirm the identity of the product. All control experiments followed the same procedure with required changes in some variables, as outlined in Table 1. Characterization of the Photocatalysts The purity and phase structure of prepared BiFeO3 and BiOF were confirmed by the PXRD diffraction pattern and compared to the standard PXRD database (JCPDS file No. 01-073-0548 and 73-1595 for BiFeO3 and BiOF, respectively), as shown in Figures S1 and S2 in the Supporting Information. All diffraction peaks were matched using Match!3 Characterization of the Photocatalysts The purity and phase structure of prepared BiFeO 3 and BiOF were confirmed by the PXRD diffraction pattern and compared to the standard PXRD database (JCPDS file No. 01-073-0548 and 73-1595 for BiFeO 3 and BiOF, respectively), as shown in Figures S1 and S2 in the Supporting Information. All diffraction peaks were matched using Match!3 software (version 3.0) to the pure trigonal and tetragonal phases of BiFeO 3 The band gaps of BiFeO3, BiOF, and BiOF/BiFeO3 composite were measured using the Tauc method [33], with which conduction band positions and band potential measurements could be estimated as previously reported [34,35]. When plotting (αhν) vs. photon energy (hν), as shown in Figure 2, the BiFeO3 exhibited a band of 2.15 eV lying in the visible region, and BiOF showed a larger band gap of 3.8 eV in the UV region. The BiOF/BiFeO3 composite exhibited a band gap of 3.5 eV lying in the UV light region compared to pure BiOF, indicating that the synthesized composite forms a heterojunction interface. (122), which appeared at 2θ of 19.34°, 29.2°, 31.93°, 37.21°, 39.62°, 45.93°, 51.5°, and 55.56°, respectively. Moreover, upon comparing the PXRD pattern of the prepared photocatalyst BiOF/BiFeO3 composite with both mixed metal oxides (Figure 1), the obtained patterns showed a combination of similar peaks from both mixed materials, which confirmed the preparation of the composite. 
The band gaps of BiFeO3, BiOF, and the BiOF/BiFeO3 composite were measured using the Tauc method [33], with which conduction band positions and band potentials could be estimated as previously reported [34,35]. When plotting (αhν) vs. photon energy (hν), as shown in Figure 2, BiFeO3 exhibited a band gap of 2.15 eV lying in the visible region, and BiOF showed a larger band gap of 3.8 eV in the UV region. The BiOF/BiFeO3 composite exhibited a band gap of 3.5 eV, lying in the UV region but lower than that of pure BiOF, indicating that the synthesized composite forms a heterojunction interface.
SEM images of the pure BiFeO3, BiOF, and composite BiOF/BiFeO3 photocatalysts are shown in Figure 3. BiOF showed small agglomerated particles on an irregular surface, whereas BiFeO3 possessed a spherical particle shape. As for the BiOF/BiFeO3 composite, there is no specific morphology adopted from either material; rather, the composite has an irregular surface with agglomerated particles. The EDX data (Figures S3-S5, Supporting Information) were used to confirm the elemental composition of the synthesized photocatalyst. Atomic percentages confirmed the presence of Bi, Fe, F, and O in the photocatalyst, with higher Bi and O contents compared to both BiFeO3 and BiOF (Tables S1-S3, Supporting Information).
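To illustrate the Tauc extrapolation used above, the following sketch fits a straight line to the rising edge of a Tauc plot and reports its x-intercept as the band gap. The data here are synthetic placeholders constructed to extrapolate to ~2.15 eV (mimicking the BiFeO3 value), not the paper's measured spectra:

```python
# Minimal sketch of the Tauc method: fit the linear edge of (alpha*h*nu) vs.
# photon energy and take the x-intercept as the optical band gap.
import numpy as np

def tauc_band_gap(hv_ev, tauc_y, fit_lo, fit_hi):
    """Linear fit over [fit_lo, fit_hi] eV; returns the x-intercept in eV."""
    mask = (hv_ev >= fit_lo) & (hv_ev <= fit_hi)
    slope, intercept = np.polyfit(hv_ev[mask], tauc_y[mask], 1)
    return -intercept / slope

# Synthetic edge that extrapolates to ~2.15 eV (placeholder, not measured data):
hv = np.linspace(2.0, 3.0, 101)
y = np.clip(5.0 * (hv - 2.15), 0.0, None)

print(f"Estimated band gap: {tauc_band_gap(hv, y, 2.3, 2.9):.2f} eV")  # -> 2.15 eV
```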
Photocatalytic Activities
Based on their band gap values, the photocatalytic performance of the reported photocatalysts was tested using the photocatalytic oxidative coupling reaction of benzylamine as a model photocatalytic reaction. The photocatalytic experiments were carried out using 51 µL of benzylamine placed in a 25 mL round-bottom flask irradiated from the side (10 cm from the radiation source) using light from a 400 W halogen lamp with a wavelength range of 340-850 nm. The round-bottom flask was attached to a condenser open at the top to the atmosphere. A pale-yellow oily product of N-benzylidenebenzylamine was formed and confirmed using 1H-NMR and 13C-NMR (Figure S6, Supporting Information). The 13C-NMR peak shifts were as follows: δ 162, 139.23, 136.09, 130.78, 128.61, 128.49, 128.27, 127.98, 126.99, 65.06 ppm. Because of the newly formed double bond that appears after the coupling of another benzylamine, the 1H signal at 8.4 ppm was the most deshielded. The 1H-NMR spectrum of the starting material benzylamine was compared to that of the product, in which the signal at 3.9 ppm (-CH2 protons) was shifted downfield to 4.9 ppm in N-benzylidenebenzylamine, further confirming the coupling step. In agreement with this finding, the 13C-NMR data showed the highest shift at δC = 162 ppm, which belongs to the characteristic peak of the N=C bond of the product. Having the expected shifts along with the integration of each peak, the conversion yields using the three photocatalysts were calculated to be 18, 39, and 80% for BiFeO3, BiOF, and the BiOF/BiFeO3 composite, respectively, with no starting material or any other byproduct detected (Table 1). Due to rapid charge/hole recombination at lower band gaps, BiFeO3 showed low conversion, while the composite had the highest conversion because recombination was suppressed. The enhanced product yield under UV light irradiation for the prepared BiOF/BiFeO3 composite heterojunction, in comparison to pure BiOF or BFO, is mainly explained by the fact that the heterojunction formed by adding BiOF can absorb more photons and efficiently transfer excited electrons from the BiOF conduction band to the conduction band of the BFO particles; moreover, a transfer from the valence band of BiOF to the valence band of BFO suppresses recombination and allows greater electron transfer to benzylamine. To provide more evidence regarding the completion of the coupling reaction, FTIR spectra were recorded for the starting material and compared to the product as well. Upon coupling, the characteristic bands of the -NH stretch in benzylamine (3200 and 3500 cm−1) disappeared, and a new band at 1643 cm−1 appeared, which corresponded to the C=N of the product. Moreover, the bands of the benzene ring were obvious and appeared between 1650 and 1800 cm−1 (Figure S7, Supporting Information).
Figure 4. FTIR spectra of the prepared photocatalysts with corresponding band assignments.
Different control experiments were carried out to investigate other reaction conditions. As illustrated in Table 2, using the BiOF/BiFeO3 composite photocatalyst showed the highest conversion. The percent yields were calculated as previously reported [33].
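Since the yields above are derived from peak integrations, one simple way to express the calculation is as a ratio of the 2H benzylic CH2 integrals of the product (~4.9 ppm) and the unreacted amine (~3.9 ppm). This is our own illustrative simplification, not the exact workup of ref. [33], and it ignores the 2:1 amine-to-product stoichiometry:

```python
# Illustrative conversion estimate from 1H-NMR integrals (both CH2 signals
# integrate for two protons, so they can be compared directly). The numbers
# below are hypothetical, not read from Figure S6.
def conversion_percent(integral_product_ch2, integral_amine_ch2):
    total = integral_product_ch2 + integral_amine_ch2
    return 100.0 * integral_product_ch2 / total

print(f"{conversion_percent(0.80, 0.20):.0f}% conversion")  # e.g., 80%
```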
The highest yields of the N-benzylidenebenzylamine product were obtained in an open-air and acetonitrile environment, while only 0, 7, and 13% were obtained using BiFeO3, BiOF, and the BiOF/BiFeO3 composite, respectively, under solvent-free conditions (Entries II-IV), which confirms that our photocatalysts work best in a solvent environment. Many other control experiments showed no conversion in the absence of either light (Entries V-VII) or photocatalyst (Entry VIII), which strongly supports the high photocatalytic activity of the reported photocatalysts under visible or UV irradiation. A plausible mechanism of the oxidative coupling of benzylamine is shown in Scheme 2. The process proceeds through an O2-mediated pathway in which, upon irradiation, a photoexcited electron of the photocatalyst first rises to the conduction band (CB). It then reacts with atmospheric molecular oxygen to produce an O2− radical, which was previously confirmed using electron paramagnetic resonance (EPR) measurements [36][37][38]. An aminium radical cation intermediate is then formed by oxidation of benzylamine (Ph-CH2-NH2) by the photoinduced hole in the valence band (VB). Hydrogen-atom abstraction by the O2− radical then occurs at this radical cation, forming Ph-CH=NH and H2O2 [36]. H2O2 then dissociates to ·OH radicals, which further support the oxidation of benzylamine to the imine; the imine finally reacts with another free benzylamine to produce the N-benzylidenebenzylamine product, liberating ammonia.
Scheme 2. Proposed mechanism of benzylamine coupling over the reported photocatalysts.
Conclusions
In this study, benzylamine was photocatalytically coupled to N-benzylidenebenzylamine using the newly synthesized BiOF/BiFeO3 composite photocatalyst, which showed a high conversion yield of 80% compared to the other materials. The composite's construction was proven by multiple characterization methods, including XRD, DRS, SEM, EDX, and FT-IR. According to the measured band gaps, the BiOF/BiFeO3 composite exhibited a lower band gap compared to BiOF (3.5 eV vs. 3.8 eV) and can be successfully employed as a photocatalyst in the UV region. The reported synthesis procedure provides an easy, eco-friendly, and low-cost route for developing various active materials for photocatalytic applications.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/photochem3010012/s1, Figure S1: Matched X-ray spectroscopy of the BiOF using Match!3 software (version 3.0).
Figure S2: Matched X-ray spectroscopy of the BiFeO3 using Match!3 software (version 3.0); Figure S3: EDX elemental analysis of BiOF; Figure S4: EDX elemental analysis of BiFeO3; Figure S5: EDX elemental analysis of the BiOF/BiFeO3 composite; Table S1: Weight and atomic percentages of BiOF; Table S2: Weight and atomic percentages of BiFeO3; Table S3: Weight and atomic percentages of the BiOF/BiFeO3 composite; Figure S6: Spectra of the N-benzylidenebenzylamine product using 1H-NMR (A) and 13C-NMR (B).
Data Availability Statement: The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
5,441.8
2023-03-21T00:00:00.000
[ "Chemistry", "Materials Science", "Environmental Science" ]
Influence of water soaking on swelling and microcharacteristics of coal Improving the coal seam permeability is an important measure for increasing coal bed methane (CBM) production and preventing gas disasters. Hydraulic technologies are effective ways of improving coal seam permeability. However, hydraulic technologies can also cause water to enter the coal seam, allowing the coal seam to soak for a long time. In this study, to obtain the influence of water soaking on the microscopic characteristics of coal, X-ray diffraction (XRD), scanning electron microscopy (SEM), free swelling ratio tests, and low-temperature nitrogen adsorption tests (LT-NATs) were conducted. The mineral compositions of raw coal samples, the variation regularities of micromorphologies, and the pore characteristics of the samples with different soaking times were obtained. The results showed that the coal samples contained about 8.5% clay minerals, of which 71% were illite/smectite mixed-layer. Expansions of different sizes were observed in the areas where the surface of the soaked coal samples contained clay minerals, and the swelling was also observed macroscopically. The swelling not only led to an increase in the coal sample volume but also to a decrease in pore volume. This change was magnified with the increase in soaking time (within 30 days). The cumulative pore volume of the samples soaked for 30 days was 0.00681 cm3/g, a reduction of 29.9% compared to the unsoaked samples. Moreover, the pore volumes show a logarithmic dependence on soaking time. This study provides evidence that coal containing clay minerals swells markedly when soaked in water, and that the hydration swelling of clay minerals has a great influence on the swelling of coal. This swelling leads to a decrease in pore volume and in the efficiency of CBM transport, thus affecting the effect of hydraulic measures.
...creating difficulties in extracting CBM in these coal seams. 5,7,8 Increasing the permeability of the coal seam is an effective means to improve the recovery of CBM. [9][10][11] Hydraulic technologies, including hydraulic fracturing 12,13 and hydraulic slotting, [14][15][16] are often used to prevent gas disasters and enhance CBM recovery. These methods use high-pressure fluids to create fractures in the coal seams, thereby increasing permeability to cause more gas to desorb, create more space for gas diffusion, and improve CBM production. Water is widely used in underground hydraulic technologies in Chinese coal mines due to its low cost and ready availability. 17 These hydraulic methods not only promote the development of fractures but also cause water to enter the fractures. Water present in coal seams is of profound importance in relation to CBM production. 18,19 Its effects are manifold. The presence of water in coal reduces gas sorption capacity and gas diffusivity, and the sorption of water vapor by coal leads to several percent swelling. [19][20][21] Generally, these effects are caused by changes in the external morphology and internal structure. 22 Many studies have been performed to identify the variation of coal in response to absorption of water or water vapor. [23][24][25] Zhang et al 25 compared dry and soaked coal samples using X-ray microcomputed tomography, and the results showed that the cleats in the coal matrix closed upon water absorption, while the cleats in the mineral phase were not affected.
Yang et al 17,26 compared the influence of water and a viscoelastic surfactant fracturing fluid on coal samples, and the results showed that the permeability of the coal samples saturated with the viscoelastic surfactant fracturing fluid was higher than that of the samples saturated with water. Zhang et al 27 measured nanoscale rock mechanical properties via nanoindentation tests for dry and wet heterogeneous coal. The indentation moduli measured by nanoindentation decreased by 60%-66%, but a 16.6% increase was measured in the dynamic bulk measurement. Liu et al 19 report dilatometry experiments conducted on 1 and 4 mm sized cubic high-volatile bituminous coal samples; the results show that the volumetric swelling strains attained at equilibrium exhibit a near-linear dependence on relative humidity, reaching 1.37%-1.43% at around 95% relative humidity. Hydraulic technologies, especially hydraulic fracturing, require a high pressure to be maintained for a period of time to expand the fractures in coal seams. 13 This also presses water into deeper areas. Meanwhile, after the water flowback, due to fluid leak-off and water blocking damage, some water still exists in the coal seam, causing the coal seam to soak for a long time. 22,23 This water soaking may have an influence on the coal, thereby affecting the permeability and gas seepage. However, few studies have focused on the influence of water on the swelling and microcharacteristics of coal after long soaking times, and the variation of coal samples at different soaking times is not clear. In this study, the coal samples were taken from the No. 8 coal seam in the Songzao mining area. Coal samples with different soaking times were analyzed by X-ray diffraction (XRD), scanning electron microscopy (SEM), low-temperature nitrogen adsorption tests (LT-NATs), and free swelling ratio tests. The mineral composition of the coal samples was obtained by XRD. SEM was used to study the variations in the surface morphologies of the coal samples with different soaking times. Through the LT-NATs, the variations in the pores of coal samples with different soaking times were obtained. Through the free swelling ratio tests, the swelling characteristics of the coal samples were obtained. These results demonstrate the analysis of the swelling and microscopic characteristics of coal when exposed to water and provide support for further research on the swelling mechanism of coal.
| Materials
A high-rank anthracite coal block was obtained from the No. 8 coal seam of the Songzao mining area in Chongqing City, China (Figure 1: location of the Songzao mining area in Chongqing City, China). The major coal-bearing rock series is the Permian Longtan Formation. The average seam thickness of the No. 8 coal seam is 2.5 m, as shown in Figure 2. The maximum gas content and pressure measured in the coal seam were 18.17 m3/t and 2.56 MPa, respectively. 28 The proximate analysis of the coal samples is summarized in Table 1, measured by Chinese Standards DL/T 1030-2006 and GB/T 212-2008.
| X-ray diffraction
The mineral compositions of the coal samples were identified by XRD (X'Pert3 Powder, The Netherlands). The coal samples cut from the coal block were pulverized to below 320 mesh, after which they were treated by low-temperature ashing to remove organics to ensure the integrity of the mineral compositions in the coal samples. 29 The ashed coal samples were dried for 24 hours at 100°C. The coal samples were divided into three parts, and XRD was conducted. Each part was >3 g.
The mineral composition and content were measured by the Chinese Standard SY/T 5163-2010.
| Scanning electron microscopy
The surface topographies of the coal samples were observed using a field emission scanning electron microscope (FESEM; FEI Quanta FEG-250, USA). Coal samples with fresh cross-sections and areas of 0.5-1 cm2 were selected for this study. The samples were soaked in deionized water, with three coal samples for each soaking time of 0, 1, 5, 15, and 30 days (the 0-day samples were not soaked). The samples were dried for 48 hours by vacuum freeze-drying (vacuum freeze-drying has the advantage of retaining the sample's original physical properties and chemical composition). 30 The surface attachments of the coal samples were blown away. The samples were sputter coated with gold for 60 s, after which they were observed by FESEM.
| Free swelling ratio test
The free swelling ratio is the ratio of the sample's volume increase after soaking in water to the original volume without a confining pressure. 31 The coal samples were pulverized to 60-80 mesh and dried for 48 hours by vacuum freeze-drying. A 40-g sample was divided into two equal parts. One part was soaked and stirred in 100 mL deionized water in a measuring cylinder, followed by static settling for 96 hours. The other part was untreated.
| Low-temperature nitrogen adsorption
The porosities of the coal samples were determined by LT-NATs (Micromeritics ASAP2020 analyzer, USA). The coal samples that were separated from the fresh coal block were pulverized to 60-80 mesh and soaked in deionized water. Next, 5 g samples were taken after soaking times of 0, 1, 5, 15, and 30 days. The samples were dried for 48 hours by vacuum freeze-drying. Prior to the tests, the coal samples were degassed under a vacuum for at least 12 hours, after which the LT-NATs were conducted. 32
Note (Table 1): FC ad is the fixed carbon, M ad is the moisture content, A ad is the ash content, V ad is the volatile matter content, and f is the firmness coefficient.
| Mineral compositions of coal samples
XRD spectra of the ashed coal samples are presented in Figure 3. According to the Chinese Standard SY/T 5163-2010, the XRD spectra were analyzed. The compositions and contents of minerals and clay minerals are shown in Tables 2 and 3, respectively. The main mineral components of the coal sample were clay minerals, calcite, dolomite, quartz, and anhydrite. Quartz, calcite, dolomite, and hematite have no chemical reaction with water and are hardly soluble in water, so these minerals have little influence on coal after water soaking. Illite, smectite, and kaolinite are all aluminosilicate clay minerals. The clay mineral content of the ashed coal samples was 47%, of which 71% was an illite/smectite mixed-layer, and the ratio of illite to smectite was 20%. Table 1 shows that the coal ash accounted for 18% of the coal sample. Thus, the clay minerals and the illite/smectite mixed-layer accounted for about 8.5% and 6.0% of the whole coal, respectively. Smectite is a category of clay minerals with a three-layer crystalline structure that exhibits hydration swelling. 33 When exposed to water, the adsorbed exchangeable cations dissociate to form a diffusive double layer, which produces electronegativity. 34 The crystal layers repel each other, and the spacing increases, causing expansion. Although the content of smectite was relatively low, the hydration swelling ratio of the smectite was large, greater than 30 times its original volume. 35
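The whole-coal percentages quoted above follow from simple scaling of the XRD results (which refer to the ashed samples) by the ash fraction of the raw coal. A short check:

```python
# Arithmetic behind the "about 8.5% and 6.0%" whole-coal figures quoted above.
ash_fraction = 0.18          # A_ad, ash content of the raw coal (Table 1)
clay_in_ash = 0.47           # clay minerals as a fraction of the ashed sample
mixed_layer_in_clay = 0.71   # illite/smectite mixed-layer fraction of the clay

clay_in_coal = ash_fraction * clay_in_ash                 # 0.0846 -> "about 8.5%"
mixed_layer_in_coal = clay_in_coal * mixed_layer_in_clay  # 0.0601 -> "about 6.0%"

print(f"Clay minerals in whole coal:         {clay_in_coal:.1%}")
print(f"Illite/smectite mixed-layer in coal: {mixed_layer_in_coal:.1%}")
```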
After the use of hydraulic methods, the coal mass within the affected range becomes saturated with water. This may cause swelling of the smectite and influence the effect of the hydraulic methods. Kaolinite and illite have strong water absorption ability, similar to smectite, but they have only weak expansibility, so their effect on the coal is small.
Figure 4 shows the FESEM results under 1500× magnification after soaking times of 0, 1, 5, 15, and 30 days. As shown in Figure 4A, the surface of the unsoaked coal sample was relatively flat, with a small number of coal particles attached. The structure was compact and exhibited good continuity and a regular shape. There were a few pores and fractures on the surface; the naturally existing fractures tend to be irregular in shape and size. After 1 day of soaking, the surface of the samples became eroded, flaky particles were exfoliated, and small pores had formed at the exfoliated places, as shown in Figure 4B. The exfoliated particles attached to surfaces or filled the pores and fractures. As shown in Figure 4C-E, with the increase in soaking time, the edges of the flaky structure became blurred, the surface structure was gradually destroyed and became loose and fragmented, and peeled particles of various sizes and shapes gradually increased.
| Influence of water soaking on surface topography
After soaking, considerable clay swelling occurred. As shown in Figure 5A, part of the coal sample swelled significantly and bulged over the surface after soaking (1500×). A portion of the image is further magnified 6000× in Figure 5B. The expansion was composed of spherical particles of different sizes, which were closely packed and irregular. There were a large number of reticulated fractures around the expansion. The fractures connected with each other, and their widths were generally <5 μm. These fractures divided the surface of the coal sample into layered fragments of different sizes, and there were also a large number of smaller, irregularly accumulated, round particles on the fragments. This multifissure structure resulted in more water entering the coal mass. In addition to the large volume expansion, there was also some relatively small expansion. As shown in Figure 6A, a number of expansions were present (1500×). Compared with Figure 5, these expansions were much smaller, 5-10 μm in diameter and surrounded by dark areas. The dark areas indicated the presence of hydrated clay minerals: after water soaking, interlayer water existed in the clay minerals, resulting in a decrease in their total atomic number (the backscattered electron quantity decreases with decreasing atomic number), and the electronic signal decreased accordingly, forming dark areas. It was observed in the dark areas that there are semicircular fractures with widths between 0.1 and 0.5 μm around the expansions (Figure 6B,C), and there are also cases where clay minerals swelled but no fractures occurred (Figure 6D). The results indicate that water soaking erodes the surface of the coal and causes the clay minerals to swell.
| Influence of water soaking on sample volume
Figure 7 shows the volumes of the unsoaked and soaked coal samples. The volume of the unsoaked coal sample was 23 mL (Figure 7A). After soaking for 96 hours, the coal samples were divided into two parts: one part was precipitated, with a volume of 32 mL, and the other part was suspended, with a volume of 2 mL, corresponding to a total volume of 34 mL (Figure 7B).
The free swelling ratio was calculated as follows:
δ_ef = (V − V_0)/V_0 × 100%,   (1)
where δ_ef is the free swelling ratio, V_0 is the volume of the unsoaked coal sample, and V is the volume of the soaked coal sample. The calculated free swelling ratio of the coal sample treated with water was 47.8%. To determine the influence of clay minerals on the swelling of the coal samples, a KCl solution with a concentration of 0.5% was prepared, which has a good effect on inhibiting clay hydration swelling. 36 The coal samples were treated with this solution, and the results are shown in Figure 8. The coal samples were again divided into two parts: the precipitated part was 27 mL, and the suspended part was 2 mL; the total volume was 29 mL. The calculated free swelling ratio of the coal samples treated with 0.5% KCl solution was 26.1%. After being treated with the KCl solution, the free swelling ratio of the coal sample decreased significantly. This indicates that clay minerals play an important role in the swelling of coal when exposed to water. The swelling effect of clay minerals in coal must be taken into account, and corresponding measures should be taken.
| Influence of water soaking on pore structure
Figure 9 shows the low-temperature nitrogen adsorption isotherms of the test samples processed with deionized water after soaking times of 0, 1, 5, 15, and 30 days. With the increase in the relative pressure, the adsorbed nitrogen volumes of all the samples increased. In the low relative pressure ranges, the growth rate was maintained at a low level. After the relative pressure reached 0.80, the adsorbed volume increased rapidly until it reached a maximum value. The isotherms of the test samples were type II adsorption isotherms, which means that the samples mainly contained both macropores (≥50 nm) and mesopores (2-50 nm). 37 Comparing the adsorption isotherms after different soaking times, the coal samples with longer soaking times had lower adsorption capacities. This indicates that the pore volume decreased as the soaking time increased. Figures 9A and 9B show the fitting curves of the quantity adsorbed at the inflection points p/p0 = 0.035 and p/p0 = 0.8, respectively. The quantity adsorbed by the coal samples showed a logarithmic dependence on soaking time. With the increase of soaking time, the quantity adsorbed in the low relative pressure stage (p/p0 = 0.035) did not exhibit significant variation, while in the high relative pressure stage (p/p0 = 0.8), the quantity adsorbed gradually decreased until it reached relative stability after soaking for 30 days. This indicates that water soaking significantly reduces the pore volume of the coal sample; the volume of micropores did not change much with the increase of soaking time, while the volume of macropores and mesopores changed greatly and stabilized after soaking for 30 days. According to the International Union of Pure and Applied Chemistry (IUPAC) classification, the hysteresis loops between the adsorption and desorption isotherms of the samples after soaking times of 0 and 30 days belong to type H3, as shown in Figure 10, corresponding to slit-shaped pores. The pore shapes of the samples did not change after soaking. A nonlocal density functional theory (NLDFT) model was used to analyze the pore size distribution (PSD) using the SAIEUS software. The NLDFT model is suitable for PSD analysis of micropores, mesopores, and macropores with high accuracy. 38
The calculation of the PSD was based on the integral adsorption equation:
N(p/p_0) = ∫ N(p/p_0, D) f(D) dD,   (2)
where N(p/p_0) is the adsorption isotherm data, D is the pore width, N(p/p_0, D) is the kernel of theoretical isotherms, and f(D) is the PSD. Figures 11 and 12 show the PSDs and pore volumes of the samples after soaking times of 0 and 30 days with pore widths from 1 to 100 nm. Due to the limitations of the LT-NATs, the micropores only contained pores of 1-2 nm in this study. Mesopores and macropores accounted for a large proportion of the pore volume, which is consistent with the characteristics of type II adsorption isotherms. The incremental and cumulative pore volumes of the soaked samples decreased significantly compared with those of the unsoaked samples. The cumulative pore volume of the soaked samples was 0.00681 cm3/g, a reduction of 29.9% compared to the unsoaked samples, whose cumulative pore volume was 0.00972 cm3/g. The decrease of pore volume was driven mainly by a decrease in pores with widths between 2-6 nm, 35-45 nm, and 65-75 nm. This is mainly attributed to the hydration swelling of smectite after water soaking. After the coal samples are soaked in water, the water enters the coal along the pores and fractures, and the smectite near the pores and fractures hydrates and gradually swells. Due to the constraints of the coal structure, the expansions invade the pores and fractures, resulting in a decrease of pore volume. This may seem to contradict the SEM results, which show new fractures on the surface of the coal sample due to expansion. This is mainly because only a small amount of clay mineral is exposed at the surface of the coal samples; after water soaking, the pore volume reduced by internal expansion is larger than the pore volume generated by surface expansion, and the peeled particles can fill the pores and fractures. It should be noted that, compared with bigger mesopores and macropores, the pores with widths between 2 and 6 nm have a greater impact on CBM transport. 39 As shown in Figure 13, after water soaking, the decrease in pore volume of these smaller mesopores may be accompanied by the closure or narrowing of the pore throats, resulting in an increase in closed pores or dead-end pores. 32 These results differ from the findings of some scholars. Zhai 40 and Song 41 claimed that water soaking dissolves the organic and inorganic substances in the coal, thus increasing the pore volume and pore size. This difference is probably due to differences in the composition of the coal samples. Since the minerals in the coal samples collected from Songzao have a low solubility in water, the pore volume increase by dissolution is small. Meanwhile, the smectite in the coal samples hydrates and swells after water soaking, resulting in a decrease in pore volume.
| Discussion
Coal is a typical porous medium. 42 The pore structure plays an important role in CBM adsorption and transport in coal. 43 The smectite in the coal samples swells, causing changes in the pore structure. Nanopores (<100 nm) are an important channel for CBM transport. After a coal seam is invaded by water, due to the water blocking damage, if the formation driving force cannot overcome the capillary pressure, the pores in the invaded zone will be blocked by water and the CBM in the fractures cannot be extracted (Figure 13). 22
The capillary pressure is given by
P_c = 2γ cos θ / r,   (3)
where P_c is the capillary pressure, γ is the interfacial tension between the water and the air, θ is the contact angle between the water and coal, and r is the radius of the capillary. Due to water soaking, the pore volume and size decreased. According to Equation 3, the interfacial tension and the contact angle are constant, so with a decrease in the pore width, the capillary pressure will increase correspondingly. Therefore, the decrease in pore size caused by water soaking will aggravate the water blocking damage, resulting in a decrease in the permeability of coal. The nanopores have greater adsorption capacity, and the pores smaller than 100 nm account for more than 80% of the total specific surface area, which indicates that most of the CBM is adsorbed in the nanopores. 44,45 In general, 100 nm is the threshold between diffusion and seepage, and CBM transport in pores below 100 nm is dominated by diffusion. 46 As a key flow property for CBM extraction, diffusion is considered the first step of CBM transport in coal. 39,43 Knudsen diffusion dominates the overall mass transport in nanopores. The Knudsen diffusion coefficient is given by the following equation: 39
D_K = (d_p/3) (8ṘT / (πM_A))^(1/2),   (4)
where D_K is the Knudsen diffusion coefficient, d_p is the pore diameter, Ṙ is the ideal gas constant, T is the temperature, and M_A is the molecular mass of the gas. After water soaking, the pore size and pore volume decreased; according to Equation 4, the Knudsen diffusion coefficient decreases correspondingly. After the implementation of hydraulic measures, the CBM will desorb, and most of the CBM diffuses into the fractures through the nanopores. However, as the soaking time increases, the pore volume and pore size decrease, and the CBM diffusion rate will decrease accordingly, greatly reducing the efficiency of CBM transport. Therefore, it is necessary to focus on the coal swelling caused by water soaking to reduce its impact on hydraulic technologies.
| CONCLUSION
This study aimed to obtain the influence of water soaking on the swelling and microscopic characteristics of coal. Based on the experiments conducted, it was found that the coal samples obtained from the No. 8 coal seam of the Songzao mining area contained small amounts of clay minerals dominated by illite/smectite mixed-layer, whose content was about 6%. With the increase in soaking time, the surface of the coal gradually eroded, resulting in a large number of flaky particles of different sizes. In the areas where the surfaces of the soaked coal samples contained clay minerals, expansions composed of spherical particles of different sizes were observed, and fractures formed on the surface. Through the free swelling ratio test, the soaked coal samples also showed significant swelling, and the hydration swelling of the clay was the main reason for the expansion of the coal sample. Meanwhile, the pore volumes of the soaked coal samples decreased with the increase of soaking time until soaking for 30 days, showing a logarithmic dependence on soaking time. The pore volume of the coal sample soaked for 30 days was 0.00681 cm3/g, a reduction of 29.9% compared to the unsoaked samples. The reduction in pore volume was mainly due to a reduction in the number of macropores and mesopores. We concluded that coal samples containing clay minerals, especially smectite, which is prone to hydration, exhibit significant swelling after water soaking.
The swelling effects resulted in the increase in the coal sample volume and the decrease in the pore volume, and this change was magnified with the increase in the soaking time (within 30 days). This decrease of pore volume will lead to a decrease in pore width, an increase in closed pores, and a decrease in pore connectivity, which will exacerbate the water blocking damage and affect CBM transport and the efficiency of hydraulic measures.
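As a numerical companion to Equations (1), (3), and (4) above, the sketch below reproduces the reported free swelling ratios and evaluates the capillary pressure and Knudsen diffusivity for illustrative inputs; the interfacial tension, contact angle, pore sizes, and gas properties used here are assumptions for demonstration, not measured values from this study:

```python
# Companion calculations for Equations (1), (3), and (4).
import math

R_GAS = 8.314  # J/(mol K), ideal gas constant

def free_swelling_ratio(v_soaked_ml, v_unsoaked_ml):
    """Equation (1): percent volume increase relative to the unsoaked volume."""
    return 100.0 * (v_soaked_ml - v_unsoaked_ml) / v_unsoaked_ml

def capillary_pressure(gamma_n_per_m, contact_angle_deg, radius_m):
    """Equation (3): P_c = 2*gamma*cos(theta)/r, in Pa for SI inputs."""
    return 2.0 * gamma_n_per_m * math.cos(math.radians(contact_angle_deg)) / radius_m

def knudsen_diffusivity(pore_diameter_m, temperature_k, molar_mass_kg):
    """Equation (4): D_K = (d_p/3)*sqrt(8*R*T/(pi*M)), in m^2/s for SI inputs."""
    return (pore_diameter_m / 3.0) * math.sqrt(
        8.0 * R_GAS * temperature_k / (math.pi * molar_mass_kg))

# Reproduces the reported ratios: 47.8% (water) and 26.1% (0.5% KCl solution).
print(f"{free_swelling_ratio(34, 23):.1f}%  {free_swelling_ratio(29, 23):.1f}%")

# Assumed illustrative inputs: water/air tension 0.072 N/m, 60 deg contact
# angle, 3 nm pore radius; methane (0.016 kg/mol) at 298 K in a 6 nm pore.
print(f"P_c ~ {capillary_pressure(0.072, 60.0, 3e-9):.2e} Pa")
print(f"D_K ~ {knudsen_diffusivity(6e-9, 298.0, 0.016):.2e} m^2/s")
```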
5,706.4
2019-10-20T00:00:00.000
[ "Environmental Science", "Engineering" ]
Resiliency of healthcare expenditure to income shock: Evidence from dynamic heterogeneous panels Using World Bank data over the period 1960-2019, this study aims at estimating the resiliency of health expenditures against gross domestic product (GDP). Long-run and short-run elasticities are calculated using panel time series methods that are exclusively designed for dynamic heterogeneous panels: the Mean Group, Pooled Mean Group, and Dynamic Fixed Effects estimators. These methods permit better estimations of elasticity given the considerable heterogeneity across the 177 countries included in this study. Along with a standard elasticity estimation, this study estimates country-specific long-run and short-run elasticities along with error correction components. The study finds that the long-run elasticity of income is very close to unity, but short-run coefficients are insignificant for most nations. In addition, most countries revert to long-run equilibrium reasonably quickly if there is a shock, as the error correction coefficients are negative and, in many cases, very close to one. For most developed countries, the short-run elasticities are lower than those of developing countries, indicating that many developing countries may face a larger decrease in health expenditure with the forecasted decline in income due to an impending economic recession. Therefore, although this study is not directly intended to capture post-COVID-19 effects, its estimates may project the potential responses in health expenditure across countries due to potential income shocks.
1. Introduction
This paper aims at estimating the income elasticity of healthcare expenditure using data from the past 60 years for 177 countries across the world to recognize how healthcare expenditures respond to economic fluctuations. With this estimate, we can gain insight into the stability and resilience properties of healthcare spending. For instance, if healthcare expenditures are elastic (i.e., healthcare is a luxury good) with respect to changes in GDP, then with an economic downfall, healthcare spending will fall more than proportionately, destabilizing countries' healthcare spending. Conversely, if healthcare expenditures are inelastic (i.e., healthcare is a necessary good), health expenditure will not fluctuate significantly with changes in income. With this estimate, we can assess the resiliency of healthcare expenditures both in absorbing an instantaneous shock from an economic impact and in estimating the time needed to revert to the long-run equilibrium once spending deviates from that equilibrium. Estimation of such relationships is widely available; however, most previous studies attempted to understand this relationship using fixed effects, ignoring an essential feature of heterogeneity across panels. Most importantly, many earlier studies fail to provide common long-run coefficients for all countries with heterogeneous short-term coefficients, as they tend to assume homogeneity. Using the Pooled Mean Group (PMG) estimation method-a suitable estimation method for common long-run estimates as well as heterogeneous short-run estimates-this study provides both aggregate and country-specific long-term and short-term estimates. In doing so, we use panel data on current health expenditure (CHE) per capita, GDP per capita, out-of-pocket (OOP) expenditure (% of CHE), and life expectancy (LE) at birth for 177 countries from 1960 to 2019.
We rely mostly on PMG estimates, as this estimation method provides an aggregate-level long-run elasticity, which is believed to be relatively stable, with country-specific fluctuating short-run coefficients. We use Mean Group (MG) and PMG estimators designed explicitly for estimating long-run relationships from dynamic heterogeneous panels. Although PMG is exclusively designed for estimation in the case of a heterogeneous panel, to our knowledge these tools are rarely applied to explore the relationship between GDP and healthcare expenditure. However, we also use the more conventional Dynamic Fixed Effects (DFE) estimator for comparison. Since with these estimates we can understand the common overall long-term trajectory of health expenditure along with short-term fluctuations and resiliency, these results can provide insights into the potential outcomes of an income shock. Therefore, while this study does not directly address COVID-19 issues, its key findings will have strong implications for what to expect with regard to fluctuations in healthcare expenditure across countries under COVID-19-related GDP shocks. Hence, the current study will have strong policy implications for health expenditure and its relationship to income shocks. The paper is organized as follows. Section 2 covers related literature; Section 3 includes data and estimation methods. Section 4 provides findings and discussions, and Section 5 provides conclusions and policy recommendations.
2. Related literature
The income elasticity of healthcare expenditure, as evident in the literature, has often surpassed unity, indicating healthcare as a luxury good (1-7). Much of this work has been grounded on cross-country data and, more recently, on panel data, followed by unit root tests and cointegration analysis, especially for developed countries. The most notable work on this issue is that of Newhouse, who used 1 year of cross-sectional data from 13 developed countries and estimated an elasticity exceeding one (2). Newhouse observed that over 90% of the variation between countries in per capita healthcare expenditure could be explained by variations in per capita GDP, with an income elasticity ranging from 1.15 to 1.31. Later, Newhouse argued that there is a substantial role for organizational factors of healthcare delivery and financing in determining healthcare expenditures (8). Parkin et al. (9) show that different conversion factors (exchange rates and healthcare purchasing power parities, PPP) lead to different results with respect to the estimated income elasticity of healthcare expenditure, and the use of healthcare PPP reduces the income elasticity below unity (0.9). In contrast, Gerdtham et al. (10) suggest that the value of estimated income elasticity is invariant with respect to the use of GDP or healthcare spending, although the use of exchange rate adjustment leads to a trivial fall in estimated elasticity. Hitiris and Posnett (11) re-examine the results of previous work covering 20 OECD countries and find GDP to be a determinant of healthcare expenditure, with an estimated income elasticity at or around unity, and propose that OECD countries should not be regarded as a single, homogeneous group. Though this study acknowledges heterogeneity, no attempt was made to estimate the parameters considering heterogeneity. Moore et al.
(12) find that income is the most dominant determinant of healthcare spending, explaining above 90% of the variance in expenditures across 20 OECD countries, and observe that the long-run income elasticity of medical care exceeds unity, in accordance with Culyer (13). Using panel data, other studies find elastic healthcare spending (14-16). Using country-specific time series data, multiple studies find that the income elasticity of healthcare spending is greater than unity (17-20). Blomqvist and Carter (21) claim that when comprehensive data are used, health expenditure cannot be considered a luxury product. Getzen (22) posits that the debate arises primarily from misspecifying the levels of analysis-between vs. within estimates. The study finds that individual income elasticities are usually near zero with social security, while national healthcare expenditure elasticities are usually greater than unity. Hence, he summarizes that "healthcare is an individual necessity and a national luxury". Another group of researchers, Clemente et al. (23), show a long-term relationship between total healthcare expenditure and gross domestic product (GDP) using the cointegration approach and state that potential nonstationarity of data and cross-section heterogeneity may serve as the reasons behind the healthcare expenditure elasticity being more than unity. Correspondingly, Jewell et al. (24) indicate that before studying the relationship between healthcare expenditure and income, it is critical to specify whether these variables are stationary. In empirical tests, disregarding the above issues will lead to pointless results and spurious regression (25,26). In different circumstances, a number of studies of the income elasticity of healthcare spending produce estimates of less than unity (27-30). Matteo (28) provides a comparison between parametric and non-parametric estimation techniques. He shows that locally weighted scatterplot smoothing allows for variability in the income elasticity of health, albeit it is ill-suited to multivariate cases. However, this limitation can be partially addressed by combining non-parametric estimators with parametric specifications (31). Later, Panel Smooth Threshold Regression was developed to indicate changes in parameters among countries and also changes in parameters over time (32-35). Using this approach, Mehrara et al. (36) estimate the relationship between healthcare expenditure and income for 16 OECD countries and reveal that income elasticity is much more than unity (2.59) and that the estimates have been invariant over time and across countries. Convergence of healthcare expenditure, examined by applying economic growth models in developed countries, has been studied previously (37-44). However, Barros (38) finds that the characteristics of health systems (e.g., availability of gatekeepers, public reimbursement) have no significant effects on either the growth or level of health expenditure. Nghiem and Connelly (45) also reveal no evidence that the growth of health spending per capita in OECD countries converges over time. While the income elasticity of healthcare spending remains inconclusive for developed countries, it is rarely explored for less developed countries. By using panel data, some studies of developing countries indicate that healthcare is a necessity rather than a luxury, and healthcare expenditure in general does not grow faster than GDP after taking other factors into consideration (46-50).
Furthermore, Farag et al. (47) find that healthcare spending is least responsive to changes in income in low-income countries and most responsive in middle-income countries, with high-income countries falling midway. In a recent study, Stepovic (51) confirms that there have always been differences between low- and high-income countries in the speed of recovery. Abdullah et al. (52) conduct a study of 36 Asian countries and find that the long-run income elasticity of healthcare expenditure is less than unity. The findings collide with Hassan et al. (53) but are in line with some other studies (36,54-56). In another study using the panel data method, Baltagi et al. (57) state that the size of income elasticity depends on the geo-political position of different countries in the global income distribution, with poorer countries showing higher elasticity. Obradović and Lojanica (58) conduct a study on South-Eastern European Health Network countries which shows that, in the long run, the income elasticity of healthcare expenditure is greater than unity, and they state that healthcare can be considered a luxury good. Additionally, the study reveals that the elasticity of healthcare expenditure relative to income is less than unity in the short run, which means that healthcare is a necessary product over the short term. Using dynamic panel data, a reciprocal relationship has been found between health expenditure and economic growth in the short run and one-way causality from economic growth to public health expenditure in the long run (59). Rana et al. (60) examine the common correlated effects on income elasticity and health expenditure using the mean group (MG) method. Their findings show that about 43% of the variation in global health expenditure growth can be explained by economic growth. Income shocks affect the health expenditure of high-income countries more than lower-income countries. Moreover, the income elasticity of health expenditure is less than one for all income levels. Similar to prior studies, Murthy and Okunade (61) present empirical evidence that in the U.S. health care is a necessity, along with an income elasticity estimate of around 0.92. To elucidate the context of Asian countries, Mehmood et al. (62) estimate the presence of a long-run relationship between income per capita, health expenditures, and health literacy using the pooled mean group (PMG) estimation method for a sample of 26 Asian countries (1990-2012). Alhassan et al. (63) use Pesaran's autoregressive distributed lag model on annual time-series data from Nigeria to test the hypothesized claim about the sustained relationship between economic growth and public health expenditure. The empirical findings support a long-run relationship between public health expenditure and economic growth over the entire study span. Iheoma (64) employs the panel autoregressive distributed lag model to express the theoretical relationship between public health expenditure per capita, economic uncertainty, and population growth rate. Using the mean group (MG) and the pooled mean group (PMG) estimators, the study reveals that in low-income countries, economic uncertainty is negatively associated with health spending in the short run. In lower-middle-income countries, economic uncertainty increases health spending in the short run but reduces it in the long run as uncertainty persists.
Fedeli (65) upholds the view that an increase in GDP accelerates healthcare expenditure in both the long and the short run, although at a decreasing rate in the short run. The common drawbacks of most of the previous studies are reliance on relatively small, homogeneous samples and the use of relatively weak or less suitable econometric modeling with the available data sets. Moreover, the majority of the previous studies either estimated single long-run and short-run elasticities or separate estimates for each country. None of them utilizes the strength of the PMG approach, which capitalizes on the strength of panel regression by providing a common long-run coefficient with varying short-term coefficients across countries. The current study overcomes those limitations. A major contribution of the present paper is that it applies panel estimation methods for healthcare expenditure and GDP taking heterogeneity among countries into consideration, thus providing rigorous and robust elasticity estimates of healthcare spending and analyzing observed heterogeneity across countries' healthcare expenditure systems.
3. Data and estimation method
In our empirical estimations, we use values of CHE per capita, GDP per capita, OOP (% of CHE), and life expectancy at birth taken from the World Development Indicators. Healthcare expenditure and GDP data cover the period 1960-2019 for 177 countries. CHE per capita is measured in current US dollars and includes healthcare goods and services consumed during each year. GDP per capita is gross domestic product divided by midyear population, and the data are in current U.S. dollars. OOP (% of CHE) is the share of out-of-pocket payments in total current health expenditures, whereas out-of-pocket payments are spending on health directly out-of-pocket by households. Life expectancy at birth indicates the number of years a newborn infant would live if prevailing patterns of mortality at the time of its birth were to stay the same throughout its life. To leverage the strength of panel data, we use MG and PMG dynamic panel estimators, which are applied to account for heterogeneity among countries in panel data sets. For reference, we also apply the DFE estimation method. There has been growing interest in dynamic panel data models where the number of time series observations, T, is relatively large and of the same order of magnitude as N, the number of groups. Pesaran et al. (66) report that the usual practice is either to estimate N separate regressions and then compute the mean of the estimated coefficients, which constitutes an MG estimator, or to pool the data assuming that slope coefficients and error variances are identical, as with the DFE method. They indicate an intermediate procedure, the PMG estimator, which constrains long-run coefficients to be identical but allows short-run coefficients and error variances to vary across groups. (Strictly, the current study estimates buoyancy rather than elasticity; but since the term "buoyancy" is not common in the economics literature, we use elasticity.) Both cases are relevant when the regressors are non-stationary and follow unit root processes, and for both cases they derive the asymptotic distribution of PMG estimators as T tends to infinity. Subsequently, we employ a traditional DFE estimator along with dynamic panel MG and PMG estimation. DFE estimates the time series for each group pooled, and only the intercepts are allowed to vary across groups.
However, there are no grounds to assume that the rate of convergence to the steady state is identical across countries, as the DFE method assumes. The MG estimator relies on estimating N time series regressions and averaging the coefficients (25). This method generates consistent estimates of parameter averages, yet it does not allow for the possibility that certain parameters may be analogous across groups. In contrast, the PMG estimator is an intermediate estimator since it uses a combination of pooling and averaging of the coefficients. A PMG estimator allows the intercepts, short-run coefficients, and error variances to differ across groups (as would an MG estimator) but constrains the long-run coefficients to be equal (as would a DFE estimator). MG estimators provide consistent estimates of the mean of long-run coefficients, though these will be inefficient if slope homogeneity holds. Under long-run slope homogeneity, the pooled estimators are consistent and efficient. Even so, the long-run slope homogeneity imposed by PMG can be easily tested using the Hausman test (67). There is no reason to believe that in such a large panel there would not be substantial heterogeneity, and therefore, any estimation tool that takes this issue into consideration should be used for estimation. Both MG and PMG estimators are intended to deal with panel data characterized by a large number of groups N and a large number of time periods T, as is the case for the data used in this paper. However, PMG estimators appear to be more relevant in our case because it is very likely that many countries will follow a similar long-run trend, keeping the avenue open for heterogeneity in short-run estimates. Given data in time periods, t = 1, 2, ..., T, and groups, i = 1, 2, ..., N, Pesaran et al. (66) estimated an ARDL (p, q, q, ..., q) model,
y_it = Σ_{j=1..p} λ_ij y_{i,t-j} + Σ_{j=0..q} δ′_ij X_{i,t-j} + µ_i + ε_it,   (1)
where X_it (k × 1) is the vector of explanatory variables (regressors) for group i; µ_i represents the fixed effects; the coefficients of the lagged dependent variables, λ_ij, are scalars; δ′_ij are (k × 1) coefficient vectors; and T must be large enough that the model can be estimated for each group separately. Similarly, time trends and other types of fixed regressors can be included in Equation (1). If the variables in Equation (1) are, for example, I(1) and cointegrated, then the error term is an I(0) process for all i. A prime feature of cointegrated variables is their responsiveness to any aberration from long-run equilibrium. This feature entails an error correction model in which the short-run dynamics of the variables in the system are influenced by the aberration from equilibrium. Thus, it is convenient to work with the reparameterization of Equation (1) into the error correction equation,
Δy_it = φ_i (y_{i,t-1} − β′_i X_it) + Σ_{j=1..p-1} λ*_ij Δy_{i,t-j} + Σ_{j=0..q-1} δ*′_ij ΔX_{i,t-j} + µ_i + ε_it,   (2)
for i = 1, 2, ..., N and t = 1, 2, ..., T. The parameter φ_i is the error-correcting speed of the adjustment term. If φ_i = 0, there would be no evidence of a long-run relationship. This parameter is expected to be significantly negative under the prior assumption that the variables return to long-run equilibrium. Notably, the vector β′_i comprises the long-run relationships between the variables. If the time series observations are stacked for each group, Equation (2) can be written as
Δy_i = φ_i ξ_i(θ) + W_i κ_i + ε_i,   (3)
where, for i = 1, 2, ..., N, y_i = (y_i1, y_i2, ..., y_iT)′ is a T×1 vector of observations on the dependent variable of the i-th group; ξ_i(θ) = y_{i,−1} − X_i θ is the error-correction component under the long-run homogeneity restriction β_i = θ; and W_i collects the remaining (short-run) regressors with group-specific coefficients κ_i. Assuming normally distributed disturbances, the concentrated log-likelihood can be written as
ℓ(ϕ) = −(T/2) Σ_{i=1..N} ln(2π σ_i²) − (1/2) Σ_{i=1..N} σ_i^{−2} (Δy_i − φ_i ξ_i(θ))′ H_i (Δy_i − φ_i ξ_i(θ)),   (4)
where H_i = I_T − W_i (W_i′ W_i)^{−1} W_i′ and ϕ = (θ′, φ_1, ..., φ_N, σ_1², ..., σ_N²)′. The maximum likelihood (ML) estimates of the long-run coefficients, θ, and the group-specific error-correction coefficients, φ_i, can be computed by optimizing Equation (4) with respect to ϕ.
These ML estimators are referred to as PMG estimators in order to highlight both the pooling implied by the homogeneity restrictions on the long-run coefficients and the averaging across groups used to derive the means of the estimated error-correction coefficients and other short-run parameters of the model. We specify our equation for the income elasticity of healthcare spending as
he_it = θ_0i + θ_1i y_it + θ_2i le_it + θ_3i oop_it + ε_it,   (5)
where the number of countries is i = 1, 2, ..., N; the number of periods is t = 1, 2, ..., T; he_it is the log of CHE per capita; y_it is the log of GDP per capita; le_it is the log of LE at birth; and oop_it is the log of OOP (% of CHE). The choice of right-side variables, especially the control variables, is determined by the variables used in various studies as well as availability. Now, if the variables are I(1) and cointegrated, then the error term is I(0) for all i. The ARDL (1,1,1) dynamic panel specification of Equation (5) is, therefore,
he_it = µ_i + λ_i he_{i,t-1} + δ_10i y_it + δ_11i y_{i,t-1} + δ_20i le_it + δ_21i le_{i,t-1} + δ_30i oop_it + δ_31i oop_{i,t-1} + ε_it,   (6)
and the error correction equation is
Δhe_it = φ_i (he_{i,t-1} − θ_0i − θ_1i y_it − θ_2i le_it − θ_3i oop_it) + δ*_1i Δy_it + δ*_2i Δle_it + δ*_3i Δoop_it + ε_it.   (7)
The error-correction speed of adjustment parameter, φ_i, and the long-run coefficients, θ_1i, θ_2i, and θ_3i, are of our prime concern. A non-zero mean of the cointegrating relationship is allowed by the insertion of θ_0i. We expect φ_i to be negative when the variables revert to long-run equilibrium. In accordance with our estimations, we hinge on the PMG estimation method, by and large, for the analysis and interpretation of the parameters. However, since PMG does not give country-specific long-run estimates, we rely on MG estimates for that purpose. A similar approach is used by Anderson and Shimul (68). The main justifications for such estimates are manifold. First, long-run responses of healthcare expenditures to income and other variables are likely to be similar across countries, although short-run adjustments in healthcare spending, depending on patterns of investment in health, are unlikely to be homogeneous across countries. Again, the PMG estimator allows us to investigate long-run homogeneity without imposing parameter homogeneity in the short run. Second, as econometric theory suggests, imposing homogeneity causes an upward bias in the coefficient of the lagged dependent variable, which makes the MG estimator inefficient since it may be sensitive to extreme values or outliers. Third, if the focus of the analysis is on average (across countries) income elasticities, then PMG estimates are probably preferable to MG estimates on the grounds of their better precision. What is more, PMG is less sensitive to the lag order used in estimation, irrespective of the sizes of T and N, in contrast to the MG and DFE estimators. Fourth, one advantage of PMG over the traditional DFE model is that it can allow the short-run dynamic specification to differ from country to country, as the PMG model is less restrictive. Since we are considering a wide range of countries with different time periods, heterogeneity across countries is quite expected. However, we use a homogeneity test (69) to understand whether the data exhibit heterogeneity across countries. We prefer to estimate only the income elasticity of health spending rather than any causal relationship. In such cases, we can ignore endogeneity if it prevails, analogous to Anderson and Shimul (68), Pesaran et al. (25), and Pesaran et al. (66). More specifically, we use the PMG method to estimate short-run elasticities and error corrections across countries. An error correction close to one indicates that a country can recover healthcare spending from GDP shocks straight away. We rely on MG estimators for long-run elasticity estimates.
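To make the estimation strategy concrete, here is a rough numpy sketch (not the authors' code) of the country-by-country error-correction regression behind Equation (7) and the MG-style averaging; a full PMG estimator would additionally pool the long-run coefficients θ across countries via the ML problem in Equation (4), for which purpose-built routines (e.g., Stata's xtpmg) are normally used:

```python
# Rough sketch of MG estimation via per-country ECM regressions (Equation (7)).
import numpy as np

def country_ecm(he, X):
    """OLS of d(he_t) on [1, he_{t-1}, X_t, d(X_t)] for one country.

    he : (T,) log health expenditure; X : (T, k) log regressors.
    Returns (phi, theta): phi is the error-correction coefficient on he_{t-1};
    theta = -b_X / phi are the implied long-run coefficients.
    """
    d_he = np.diff(he)
    Z = np.column_stack([
        np.ones(len(d_he)),   # intercept
        he[:-1],              # lagged level of the dependent variable
        X[1:],                # current levels of the regressors
        np.diff(X, axis=0),   # short-run dynamics
    ])
    b, *_ = np.linalg.lstsq(Z, d_he, rcond=None)
    phi = b[1]
    theta = -b[2:2 + X.shape[1]] / phi
    return phi, theta

def mean_group(panels):
    """MG estimator: average the country-level estimates."""
    estimates = [country_ecm(he, X) for he, X in panels]
    phi_bar = np.mean([phi for phi, _ in estimates])
    theta_bar = np.mean([theta for _, theta in estimates], axis=0)
    return phi_bar, theta_bar
```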
While the PMG method constrains long-run coefficients to be equal across groups, the MG estimator is a simple arithmetic average of the coefficients, calculated separately for each group.

Findings and discussions

We report the estimates of elasticities in this section using the dynamic heterogeneous panel estimators PMG and MG, along with DFE estimates. As mentioned earlier, for long-run overall estimates we rely on PMG estimation. However, we report MG estimates for long-run country-specific estimates. In addition, we record DFE estimates for comparison. Moreover, the homogeneity test (69) suggests that the data used here are heterogeneous, as the Delta statistic is statistically significant (p = 0.000) (see Supplementary Table A1). Table 1 reports the long-run estimates of the PMG, MG, and DFE estimators for GDP per capita, LE at birth, and OOP (% of CHE). The preferred PMG long-run elasticity is 1.051, which is significantly different from zero but not much different from unity. An elasticity of more than one indicates that current healthcare expenditure per capita changes more than proportionately with changes in the GDP of the country. This finding is analogous to Bhat and Jain (14) and Fedeli (65). PMG estimates show that LE at birth and OOP also have a significant long-run relationship with CHE per capita. Short-run PMG estimates of the elasticities are not significantly different from zero. The short-run error correction parameter is −0.295, indicating a 30% correction (in the opposite direction) in the first year following a country's GDP shock; at this rate, roughly 90% of the disequilibrium is removed after three years, and essentially all of it by the fourth year. Consequently, the speed of adjustment to a GDP shock is relatively moderate. MG estimates indicate a long-run elasticity of 0.201, which is not significantly different from zero and is far lower than the PMG estimate. MG estimates also show that LE at birth has a significant long-run relationship with CHE per capita. The short-run error correction is −0.755, indicating a relatively strong 76% correction (in the opposite direction) in the first year following a country's GDP shock, so that by the second year essentially all the disequilibrium is removed. Hence, the speed of adjustment to a GDP shock is relatively fast in this case. The short-run MG estimates of the elasticities are not significantly different from zero, the same as the PMG estimates. Recall that PMG long-run estimates are identical across countries, whereas MG estimates separate coefficients for each country's time series and then averages them. Hence, the MG elasticity of 0.201 is the average of the country time series estimates. On the other hand, PMG allows short-run estimates to vary across countries but constrains long-run estimates to be identical. Hence, the PMG estimate of 1.051 is the common long-run elasticity for all 177 countries (standard errors in parentheses in Table 1; *p < 0.10, **p < 0.05, ***p < 0.01). For comparison, DFE estimates are also presented in the last two columns of Table 1: the long-run elasticity is 0.954 and the short-run error correction is −0.240. The DFE long-run elasticity is bracketed by the PMG and MG estimates, but the DFE short-run error correction is smaller than the other two estimates. PMG estimates are preferred given the nature of our dynamic heterogeneous panel data, whereas the DFE estimator simply pools the cross-section data.
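The arithmetic behind the MG column is easy to reproduce. The sketch below fits a separate ARDL(1,1) regression by OLS for each country, backs out the implied long-run coefficient, and averages across countries. It is a minimal illustration under assumed inputs (a pandas data frame with hypothetical columns "country", "he" and "y", sorted by time within country), not the routine used for the tables, and the pooled ML step that defines PMG is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def mg_long_run(df, dep="he", reg="y", group="country"):
    """Mean Group estimator sketch: fit an ARDL(1,1) per country by OLS,
    compute the implied long-run coefficient, then average across countries.
    Assumes stable dynamics (lambda < 1) in every group."""
    long_runs = []
    for _, g in df.groupby(group):
        d = pd.DataFrame({
            "y_lag": g[dep].shift(1),   # lagged dependent variable
            "x": g[reg],                # contemporaneous regressor
            "x_lag": g[reg].shift(1),   # lagged regressor
        }).assign(dep=g[dep]).dropna()
        X = sm.add_constant(d[["y_lag", "x", "x_lag"]])
        b = sm.OLS(d["dep"], X).fit().params
        # long-run coefficient: (delta_0 + delta_1) / (1 - lambda)
        long_runs.append((b["x"] + b["x_lag"]) / (1.0 - b["y_lag"]))
    return np.mean(long_runs), np.std(long_runs, ddof=1)
```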
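The speed-of-adjustment numbers quoted above can be turned into recovery horizons directly. The snippet below traces the share of a unit disequilibrium that remains after each year under two readings of φ: the geometric one, in which a fraction |φ| of the remaining gap closes every year, and the simpler linear one used for the year counts in the text. The φ values are the PMG, MG, and DFE estimates from Table 1.

```python
# Trace how a unit deviation from long-run equilibrium decays,
# given an error-correction coefficient phi (expected negative).
def adjustment_path(phi, years=5):
    remaining_geometric = [(1 + phi) ** t for t in range(years + 1)]
    remaining_linear = [max(0.0, 1 + phi * t) for t in range(years + 1)]
    return remaining_geometric, remaining_linear

for label, phi in [("PMG", -0.295), ("MG", -0.755), ("DFE", -0.240)]:
    geo, lin = adjustment_path(phi)
    print(label, [round(g, 2) for g in geo], [round(l, 2) for l in lin])
# PMG geometric: 1.0, 0.70, 0.50, 0.35, 0.25, 0.17 (linear hits zero after ~3.4 years)
```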
Heterogeneity and country-specific estimates of elasticities

Along with the common long-run and short-run estimates, we provide country-specific estimates that account for heterogeneity. It should be noted that only countries with at least one of the three estimates (long-run, short-run, and short-run error correction) significantly different from zero are presented in Table 2. Even though PMG is the preferred method for aggregate-level results, we rely on MG estimation for long-run elasticities when we intend to understand differences in long-term elasticities across countries; in addition, we rely on the PMG estimation for short-run elasticities. Column (1) presents long-run elasticities, column (2) shows short-run elasticities, and column (3) gives short-run error correction estimates. Long-run MG elasticity estimates are reported in column (1), as the long-run PMG estimator constrains estimates to be identical for each country. MG estimates are significantly different from zero for 78 countries. Though Table 1 reports the average MG long-run coefficient as 0.201, a wide range of estimates (−8.291 to 4.174) is evident here, indicating substantial heterogeneity in the data. Of the 78 countries, 50 have estimates above the average (0.915) and 47 have estimated coefficients of more than one, indicating more than proportionate changes in CHE per capita with shocks to the GDP of the country. Interestingly, countries with high GDP per capita (i.e., developed countries) tend to have estimated coefficients of more than one, including Australia, Austria, Canada, Estonia, Finland, the United Kingdom, Israel, Italy, Lithuania, Latvia, Malaysia, Poland, Portugal, the Slovak Republic, and Tanzania (see also Fogel (20)). Iheoma (64) also uses the PMG, which restricts the long-run estimates to be equal across countries while the short-run relationship captures country-specific heterogeneity; conversely, the MG estimator allows for heterogeneity in both the short- and long-run relationships between economic uncertainty and health expenditure per capita. Short-run PMG elasticity estimates are reported in column (2). PMG estimates are significantly different from zero for 58 countries; the remaining 119 countries do not respond to changes in GDP in the short run. Countries with relatively lower GDP per capita have estimated short-run elasticities of more than one, including Bangladesh, Brazil, Honduras, Jordan, Pakistan, Paraguay, Serbia, Uganda, and the Republic of Yemen; Tanzania is an exception in this case. Our study found that developed countries have a short-run elasticity of less than one, which endorses the findings of (71). In column (3), the short-run error correction estimates of the 108 countries that are significantly different from zero are reported. This column also shows heterogeneity across countries in the error correction process. Though Table 1 reports the average short-run error correction coefficient as −0.295, a wide range of estimates is evident here, varying from −0.996 to 0.489 and showing considerable heterogeneity. Of the 50 countries with a slow error correction process (estimates of about 0.30 or less in absolute value), Spain, France, Gambia, and Oman have the slowest error correction process (estimates of about 0.1 or less). Ninety countries have moderate error correction processes (estimates of more than 0.50 and less than 0.75), and 18 countries have fast error correction processes (estimates of at least 0.75).
Of these, Angola, Gabon, Nigeria, the Philippines, Sierra Leone, Chad, Vietnam, and Zimbabwe have the fastest error correction processes (estimated coefficients of at least 1). Hence, column (3) reveals the extensive cross-country heterogeneity in the adjustment to GDP shocks that dynamic panel estimators uncover. Figure 1 depicts the elasticity estimates across the globe. It is clear that the long-run elasticities of most countries, except for a few African nations, are quite large. This is not quite true for the short-run estimates, where there is large variation. Even though the short-run estimates are not statistically different from zero for many countries, the error correction coefficients are significant. This indicates that the elasticities of most countries are driven by long-run rather than short-run behavior, and that countries revert to long-run equilibrium quickly once there is an income shock. Table 3 reports descriptive properties of the estimates and shows heterogeneity in the estimates across countries. There is substantial variation in the long run, as shown by the large coefficient of variation (CV) of 16.053. Long-run estimates range from a minimum of −20.64 to a maximum of 14.59, whereas the mean and median estimates are 0.201 and 0.796, respectively. The 25th and 75th percentiles are likewise very widely divergent, indicating substantial variation in the long-run experience of the countries. Short-run estimates (mean 0.105 and median 0.141) also show wide variation in the experience of countries, as indicated by the large CV (10.562). The error correction process (mean −0.295 and median 0.141) shows limited variation, as indicated by a small CV. It is also found that there is a positive correlation between the short-run estimates and the error correction estimates (correlation coefficient 0.26 with a p-value < 0.01), indicating that countries with significant and faster error correction processes have larger short-run estimates. We also tested for cross-sectional dependency (CD) across panels using Pesaran's (72) CD test for weak cross-sectional dependence. The CD statistic is −0.702, which is statistically insignificant (p = 0.483). Thus, cross-sectional dependency is not an issue for this study. We tested stationarity, and GDP per capita (log of GDP per capita) was found non-stationary in level form (see Supplementary Table A5). However, as PMG can accommodate non-stationary variables, our estimates are suitable for heterogeneous non-stationary panels (73) as well.
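The dispersion measures reported in Table 3 are straightforward to recompute from the vector of country-level estimates; the sketch below shows the computation on a handful of made-up values (the actual estimates are those of Table 2).

```python
import numpy as np

def describe(estimates):
    """Descriptive statistics of country-level estimates, including
    the coefficient of variation (CV = sd / mean) used in Table 3."""
    e = np.asarray(estimates, dtype=float)
    return {
        "mean": e.mean(),
        "median": np.median(e),
        "sd": e.std(ddof=1),
        "cv": e.std(ddof=1) / e.mean(),
        "p25": np.percentile(e, 25),
        "p75": np.percentile(e, 75),
        "min": e.min(),
        "max": e.max(),
    }

# hypothetical long-run MG estimates for a handful of countries
print(describe([-8.291, 0.15, 0.80, 1.20, 4.174]))
```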
One limitation of this study is worth noting. With long time series, the chance of infrequent shocks that leave a permanent effect on a variable is high. Since this study includes panel data with long time series, structural breaks are not unlikely. However, the current study did not include structural breaks in the estimation, partly because the structural break issue is more important for time series data and the option of using structural breaks with PMG is limited, if not irrelevant. Another limitation of the study is that it is mostly an empirical exercise without an explicit theoretical model. However, similar approaches are not uncommon in this type of study in the literature, such as Dogan et al. (74), Fedeli (65), Mehmood et al. (62), and Iheoma (64). In addition, no explicit political propositions are considered in the regression, nor did we include a regional analysis. However, in our analysis, we attempted to understand the differences across countries.

Conclusion and policy recommendations

Our analysis aimed at estimating both the short-run and long-run responsiveness of healthcare spending to changes in a country's GDP, using statistical tools suitable for non-stationary dynamic heterogeneous panels. For this purpose, our analysis includes estimates of short-run, error correction, and long-run responses using the MG, PMG, and DFE estimators. These estimators have not been widely used in the health economics literature to date and are well suited to the heterogeneous experience of 177 countries over 60 years. Using MG and PMG estimates, we determine the heterogeneity in the elasticities of healthcare spending. Based on our preferred PMG estimation method, healthcare spending is responsive when a country's GDP changes, with an estimated elasticity in excess of unity: 1.051. Positive GDP shocks result in more than proportional changes in healthcare spending, whereas negative shocks end in larger reductions. Though healthcare spending is highly sensitive to changes in a country's GDP in the long run, the error correction process is relatively prolonged. The error correction term is only −0.295, indicating that healthcare spending recovers only about 30% of the disequilibrium in the year following a country's GDP shock. Furthermore, long-run MG elasticity estimates reveal that, even though the overall estimated elasticity is very low (0.201), the error correction process is rapid, with a value of −0.755, indicating that by the second year all the disequilibrium is removed. The majority (90 of the 108) of countries have moderate error correction processes, with estimates of more than 0.50 and less than 0.75. Of them, eight have the fastest error correction process, with estimated coefficients close to 1. From the country-specific estimation, it was revealed that developed countries have estimated long-run elasticities of more than one and less developed countries have estimated short-run elasticities of more than one, indicating that developed countries' healthcare spending is responsive to GDP shocks in the long run whereas least developed countries are responsive in the short run. These findings have enormous implications for developing countries, as most countries of the world are now facing COVID-19 and its aftermath. Since many developing countries' estimated short-run elasticity is relatively large, developing countries may witness more fluctuation in their healthcare expenditure due to GDP shocks, which are expected to occur in the near future. Such shocks to healthcare expenditure may have a negative effect on the population. Therefore, countries should pay close attention to keeping their healthcare expenditure stable. This study provides some guidance on how countries will revert to their long-run trend of healthcare expenditure based on their historical trend. It is worth mentioning that even though the current study provides robust estimates, some of its limitations, such as the reliance on a mostly empirical analysis without strong theoretical justification, may temper the findings. Future research can address those limitations to improve the estimates.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Concentration and Poincaré type inequalities for a degenerate pure jump Markov process

We study Talagrand concentration and Poincaré type inequalities for unbounded pure jump Markov processes. In particular we focus on processes with degenerate jumps that depend on the past of the whole system, based on the model introduced by Galves and Löcherbach in [G-L] in order to describe the activity of a biological neural network. As a result we obtain exponential rates of convergence to equilibrium.

Introduction

Our objective is to obtain Poincaré type inequalities for the semigroup P_t and the associated invariant measure of unbounded jump processes inspired by the model introduced in [19] by Galves and Löcherbach, in order to describe the interactions between brain neurons. As a result we obtain exponentially fast rates of convergence to equilibrium. There are three interesting features about this particular jump process. The first is that it is characterized by degenerate jumps, since every neuron jumps to zero after it spikes, and thus loses its memory. The second is that the probability of any neuron to spike depends on its current position, and so on the past of the whole neural system. Thirdly, the intensity function that describes the jump behaviour of any of the unbounded neurons at any time is an unbounded function. For P_t the associated semigroup and µ the invariant measure, we show a Poincaré type inequality in which, besides the energy term, the right-hand side contains a local term for the compact set D := {x ∈ R^N_+ : x_i ≤ m, 1 ≤ i ≤ N}, for some m. Accordingly, for every function defined outside the compact set {x ∈ R^N_+ : x_i ≤ m + 1, 1 ≤ i ≤ N} we obtain the stronger

µ(Var_{P_t}(f)) ≤ c(t) µ(Γ(f, f)).

Consequently, we derive concentration properties, and in addition we show further Talagrand type concentration inequalities. Before we describe the model we present the neuroscience framework of the problem.

1.1. the neuroscience framework. We consider a group of finitely many interacting neurons, say N in number. Every one of these neurons i, 1 ≤ i ≤ N, is described by the evolution of its membrane potential X^i_t ∈ R_+ at time t ∈ R_+. In this way, an N-dimensional random process X_t = (X^1_t, ..., X^N_t) is defined that represents the membrane potential of the N neurons in the network. The membrane potential X^i_t of a neuron i does not describe only the neuron itself, but also the interactions between the different neurons in the network, through the spiking activity of the neuron. What is called a spike, or alternatively an action potential, is a high-amplitude and brief depolarisation of the membrane potential that occurs from time to time, and constitutes the only perturbation of the membrane potential that can be propagated from one neuron to another through chemical synapses. The frequency with which a neuron spikes is expressed through the intensity function φ : R_+ → R_+. When a neuron has membrane potential x, then its intensity is φ(x). Neurons lose their memory every time they spike, in the sense that after a neuron i spikes its membrane potential is set to zero, which can be understood as the resting potential. The membrane potential of the rest of the neurons j ≠ i is then increased by a quantity W_{i→j} ≥ 0 called the synaptic weight, which represents the influence of the spiking neuron i on j. It should be noted that the membrane potential of any of the N neurons between two consecutive jumps remains constant.
From our discussion up to this point it should be clear that the whole dynamic of the interacting neural system is driven exclusively by the jump times. Thus, from a purely probabilistic point of view, this activity can be described by a simple point process. One should however bear in mind that, since the spiking neuron jumps to zero, these point processes are non-Markovian. For examples of Hawkes processes describing neural systems one can look at [11], [17], [18], [19], [23] and [25]. An alternative viewpoint, instead of focusing exclusively on the jump times, is to also model the evolution of the membrane potential that occurs between jumps, when this evolution is already determined. In the case of a deterministic drift between the jumps, for example, as examined in [24], the membrane potential is attracted towards an equilibrium potential exponentially fast. In that case, the process is a Piecewise Deterministic Markov Process, introduced by Davis in [13] and [14]. PDMP processes are frequently used in probability to model chemical and biological phenomena (see for instance [12] and [30], as well as [4] for an overview). In the current paper we adopt a similar framework, but in our case we do not consider a drift between the jumps, but rather a pure jump Markov process, which for convenience we will abbreviate as PJMP. Although here we work with a finite number of neurons, so that we can take advantage of the Markovian nature of the membrane potential, Hawkes processes in general allow the study of infinite neural systems, as in [19] or [25]. In contrast to [24], a Lyapunov-type inequality allows us to get rid of the compact state-space assumption. Due to the deterministic and degenerate nature of the jumps, the process does not have a density continuous with respect to the Lebesgue measure. We refer the reader to [29] for a study of the density of the invariant measure. Here, we make use of the lack of drift between the jumps to work with discrete probabilities instead of densities.

1.2. the model. Consider an intensity function φ : R_+ → R_+ which satisfies the following conditions: there exist strictly positive constants δ and c such that

(1.1) φ(x) ≥ δ for every x ∈ R_+,
(1.2) φ(x) ≥ cx for every x ∈ R_+.

The intensity function characterizes the Markov process X_t = (X^1_t, . . . , X^N_t). If we define

(1.3) Δ_i(x) := (x_1 + W_{i→1}, . . . , x_{i−1} + W_{i→(i−1)}, 0, x_{i+1} + W_{i→(i+1)}, . . . , x_N + W_{i→N}),

the configuration after neuron i spikes, then the generator L of the process X is expressed through the intensity function by

(1.4) Lf(x) = Σ_{i=1}^{N} φ(x_i) (f(Δ_i(x)) − f(x)),

for every x ∈ R^N_+ and any test function f : R^N_+ → R. Furthermore, for every i = 1, . . . , N and t ≥ 0, the Markov process X solves a stochastic differential equation driven by a family (N^i(ds, dz))_{1≤i≤N} of i.i.d. Poisson random measures on R_+ × R_+ with intensity measure ds dz, for some N > 1 fixed.

1.3. Poincaré type inequalities. We have defined a PJMP that describes our neural system, with dynamics similar to the model introduced in [19]. We aim at studying Poincaré type inequalities both for the semigroup P_t and for the invariant measure µ of the process. We start with a description of the analytical framework and the definition of the Poincaré inequality in a general discrete setting. For more details one can consult [3], [10], [16], [32] and [35]. Throughout the paper we will conveniently write f dv for the expectation ∫ f dv of the function f with respect to the measure v. Consider a Markov semigroup P_t f(x) = E_x(f(X_t)) and the infinitesimal generator Lf := lim_{t→0+} (P_t f − f)/t of a Markov process (X_t)_{t≥0}. We will frequently use the relationship d/dt P_t = L P_t = P_t L (see for instance [22]).
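The generator (1.4) translates directly into a simulation recipe: at state x the total jump rate is Σ_i φ(x_i); one waits an exponential time with that rate, picks the spiking neuron i with probability proportional to φ(x_i), and applies Δ_i. A minimal sketch follows; the choice φ(x) = δ + cx and the constant synaptic weights are hypothetical, made only so the code runs.

```python
import numpy as np

def simulate_pjmp(x0, W, phi, t_max, rng):
    """Simulate the pure jump Markov process with generator
    Lf(x) = sum_i phi(x_i) (f(Delta_i(x)) - f(x)).
    W[i, j] is the synaptic weight from neuron i to neuron j.
    Returns the state at time t_max."""
    x, t = np.array(x0, dtype=float), 0.0
    while True:
        rates = phi(x)                      # per-neuron spiking intensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # waiting time to the next spike
        if t > t_max:
            return x
        i = rng.choice(len(x), p=rates / total)  # spiking neuron
        x += W[i]                           # every j != i receives W_{i -> j}
        x[i] = 0.0                          # the spiking neuron resets to zero

rng = np.random.default_rng(0)
N = 5
W = np.full((N, N), 0.2); np.fill_diagonal(W, 0.0)   # hypothetical weights
phi = lambda x: 1.0 + 0.5 * x                        # phi(x) = delta + c*x
print(simulate_pjmp(np.ones(N), W, phi, t_max=10.0, rng=rng))
```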
Furthermore, we will say that a measure µ is invariant for the semigroup (P_s)_{s≥0} if µ satisfies µP_s = µ for every s ≥ 0. From the definition of the generator we obtain that µ(Lf) = 0. Define the "carré du champ" operator Γ(·,·) by

Γ(f, g) := ½ (L(fg) − f Lg − g Lf).

In the special case of the PJMP, where the infinitesimal generator L has the form (1.4), a simple calculation shows that the carré du champ has the expression

Γ(f, f)(x) = ½ Σ_{i=1}^{N} φ(x_i) (f(Δ_i(x)) − f(x))².

We recall the definition of the variance of a function f with respect to a probability measure m: Var_m(f) := m(f²) − (m(f))². Having defined all the necessary ingredients, we can present the definition of the classical Poincaré inequality. A probability measure m satisfies the Poincaré inequality if

Var_m(f) ≤ C m(Γ(f, f))

for some strictly positive constant C independent of the function f. In the case where instead of a single measure we have a family of measures, as for the semigroup {P_t, t ≥ 0}, the constant C may depend on the time t, i.e. C = C(t), as is the case for the examples studied in [35], [3] and [10]. The aforementioned papers used the so-called semigroup method, which will also be followed in the current work. The nature of this method usually leads to an inequality for the semigroup P_t which involves a time constant C(t). In both [35] and [3], in order to retrieve the carré du champ, the translation property was used. Taking advantage of this, for example in [35], the inequality was obtained with constant C(t) = t for a path space of Poisson point processes. Although this property does not hold for the degenerate PJMP examined here, we can still show that a Poincaré inequality, which also involves the invariant measure, holds for the semigroup {P_t, t ≥ 0}, but with a time constant C(t) of order higher than one. In a recent paper [26] the same degenerate PJMP as in (1.1)-(1.4) was considered but for bounded neurons, with membrane potential taking values in a compact set D,

(1.6) D := {x ∈ R^N_+ : x_i ≤ m, 1 ≤ i ≤ N},

for some positive constant m. The Poincaré type inequality obtained in the compact case involves a constant α(t), a second order polynomial of the time t, and a positive constant β. In the more general non-compact case examined in the current paper we will prove an alternative weighted Poincaré type inequality, which is formulated by taking the expectation with respect to the invariant measure µ of the typical Poincaré inequality for the semigroup (P_t)_{t≥0}, where on the right-hand side we have also added two local terms for the compact set D as in (1.6), one of which has a weight that depends on the intensity function φ. Consequently, the stronger

µ(Var_{P_t}(f)) ≤ c(t) µ(Γ(f, f))

holds for every function f with domain outside the compact set {x ∈ R^N_+ : x_i ≤ m + 1, 1 ≤ i ≤ N}. The reasons why in the unbounded case we focus on this particular Poincaré type inequality, rather than the classical one about P_t presented above, relate to the special features that characterise the behaviour of the PJMP examined in the current paper. Some of them are similar to the compact case, like the memoryless behaviour of the neuron that spikes and, as already mentioned, the lack of the translation property. In the non-compact case, however, we also have to deal with the hindrance of controlling the intensity function φ, which is unbounded.
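Because L and Γ act through the finitely many jump maps Δ_i, the defining identity 2Γ(f, f) = L(f²) − 2f Lf can be verified numerically at any configuration; the sketch below does so for the same hypothetical φ and weights as before.

```python
import numpy as np

def delta(x, i, W):
    """Post-spike map Delta_i: neuron i resets to 0, neuron j gains W[i, j]."""
    y = x + W[i]
    y[i] = 0.0
    return y

def generator(f, x, W, phi):
    # Lf(x) = sum_i phi(x_i) (f(Delta_i(x)) - f(x))
    return sum(phi(x[i]) * (f(delta(x, i, W)) - f(x)) for i in range(len(x)))

def carre_du_champ(f, x, W, phi):
    # Gamma(f, f)(x) = (1/2) sum_i phi(x_i) (f(Delta_i(x)) - f(x))^2
    return 0.5 * sum(phi(x[i]) * (f(delta(x, i, W)) - f(x)) ** 2
                     for i in range(len(x)))

N = 3
W = np.full((N, N), 0.2); np.fill_diagonal(W, 0.0)   # hypothetical weights
phi = lambda u: 1.0 + 0.5 * u
f = lambda x: np.sum(x) ** 2
x = np.array([0.5, 1.0, 2.0])
lhs = generator(lambda z: f(z) ** 2, x, W, phi) - 2 * f(x) * generator(f, x, W, phi)
print(np.isclose(lhs, 2 * carre_du_champ(f, x, W, phi)))   # True
```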
In order to handle the intensity function we will use the Lyapunov method presented in [9] and [5], which has the advantage of reducing the problem from the unbounded case to the compact case, where the variables take values within the compact set D defined in (1.6), a set containing the set {Σ_{i=1}^N x_i ≤ m} involved in the Lyapunov method. Since the jump behaviour depends on the current position of a neuron, this has the benefit of bounding the values of φ and thus controlling the spiking behaviour of the neurons. The Lyapunov method, however, as we will see later in more detail in the proof of Proposition 2.6, requires the control of a Lyapunov function V, more specifically of −LV/V. As will be explained in more detail later, this is a problem that, although it can be solved relatively easily in the case of diffusions by choosing appropriate exponential densities, is more difficult in the case of jump processes and requires the use of invariant measures. The inequality for the semigroup family {P_t, t ≥ 0}, which refers to the general case where neurons take values in the whole of R_+, follows.

Theorem 1.1. Assume the PJMP as described in (1.1)-(1.4). Then, for every t ≥ t_1, for some t_1 > 0, a weighted Poincaré type inequality holds, with constants δ_1(t), a third order polynomial of t, and δ_2(t), a second order polynomial of t, neither depending on the function f, where the set D is as in (1.6).

As a direct corollary of the theorem we obtain Corollary 1.2, which holds for every t ≥ t_1, for some t_1 > 0. We conclude this section with the Poincaré inequality for the invariant measure µ, presented in Theorem 1.3.

Concentration and other Talagrand type inequalities. Concentration inequalities play a vital role in the examination of a system's convergence to equilibrium. Talagrand (see [33] and [34]) associated the log-Sobolev and Poincaré inequalities for exponential distributions with concentration properties of the form

(1.7) µ(|f − µ(f)| ≥ r) ≤ C e^{−c r^p},

for some p ≥ 1 (see also [8]). In particular, when the log-Sobolev inequality holds, then (1.7) is true for p = 2, while in the case of the weaker Poincaré inequality the exponent is p = 1. Furthermore, the modified log-Sobolev inequality that interpolates between the two, investigated for example in [7], [20] and [31], gives convergence to equilibrium of speed 1 < p < 2. The problem of concentration properties for measures that satisfy a Poincaré inequality, or, as in our case, the Poincaré type inequality, is closely related to the exponential integrability of the measure, that is, µ(e^{λf}) < +∞ for some appropriate class of functions f. This problem is itself connected to bounding the carré du champ of the exponential of a function,

(1.8) Γ(e^{f/2}, e^{f/2}) ≤ Ψ(f) e^{f},

for some Ψ(f) uniformly bounded. In the case of diffusion processes, where the carré du champ is defined through a derivation, (1.8) is satisfied when ||∇f||_∞ < 1 (see section 3 for more details). For a detailed discussion on the subject one can look at [28]. In our case we consider F(x) = Σ_{i=1}^N x_i. Then we can obtain exponential integrability and a bound of the type (1.8). This, together with the Poincaré type inequality already obtained, yields concentration properties for a different class of functions than the ones assumed in Corollary 1.5, as presented in the next theorem. Consequently we obtain the following convergence to equilibrium property: for every function f satisfying the required conditions, P_t f concentrates around µ(f) exponentially fast, where µ is the invariant measure of the semigroup P_t.
Furthermore, for the case of unbounded neurons, we can obtain Talagrand inequalities in the spirit of the ones proven for the modified log-Sobolev inequality in [7]. A few words about the structure of the paper. The proofs of the Poincaré inequality for the semigroup P_t and of that for the invariant measure µ are presented in sections 2.1 and 2.2 respectively. For both inequalities a Lyapunov inequality will be used to control the behaviour of the neurons outside a compact set. This is proven at the beginning of section 2. In the final section 3 the concentration inequalities are proven. At first, in Proposition 3.1, we present the main tool that connects the Poincaré type inequality with the concentration properties. Then the required conditions are verified for the PJMP.

2. proof of the Poincaré inequalities

In both the inequalities involving the semigroup and the invariant measure, the use of a Lyapunov function will be a crucial tool in order to control the intensity function outside a compact set. At first we will work towards deriving the required Lyapunov inequality. That will be the subject of the next lemma. We recall that, under the framework of [24], the generator of our process is given, for any function f, by (1.4). We assume that W_{i→j} ≥ 0 for all i, j; we can then consider the state space to be R^N_+. We put W_i := Σ_{j≠i} W_{i→j}.

Lemma 2.1. Assume that, for all x ∈ R_+, φ(x) ≥ cx and δ ≤ φ(x) for some constants c and δ > 0. Then, for an appropriate Lyapunov function V ≥ 1, there exist positive constants ϑ, b and m so that the Lyapunov inequality

LV ≤ −ϑ V + b 1_D

holds, with D as in (1.6).

Proof. For the Lyapunov function V as stated, a direct computation of LV gives the claim, with m = (b + α(c∧δ))/((1−α)(c∧δ)). Since α can be chosen arbitrarily close to 1, if we want to impose α(c∧δ) > 1 we need to assume that c > 1 and δ > 1.

In the following subsection we show the weighted Poincaré inequality for the semigroup P_t, while in subsection 2.2 we show the inequality for the invariant measure.

2.1. Poincaré inequality for the semigroup. In this section we prove the main results of the paper for systems of neurons that take values in R_+, presented in Theorem 1.1. As mentioned in the introduction, the approach used will be to reduce the problem from the unbounded case to the compact case examined in [26]. To do this we will follow closely the Lyapunov approach developed in [9] and [5] to prove super-Poincaré inequalities. We start by showing that the chain returns to the compact set D with a strictly positive probability bounded from below. For a neuron i, 1 ≤ i ≤ N, and a time s, we define p_s(x) to be the probability that the process starting from the initial configuration x has no jump during time s, and p^i_s(x) the probability that the process has exactly one jump, of neuron i, and no jumps of the other neurons during time s. Then p^i_s(x), as a function of the time s, is continuous, strictly increasing on (0, t_0) and strictly decreasing on (t_0, +∞), while p^i_0(x) = 0. For any configuration y ∈ D we define the set of configurations D_y containing all configurations x such that, for some t > 0, π_t(x, y) := P_x(X_t = y) > 0.

Lemma 2.2. Assume the PJMP as described in (1.1)-(1.4). Then, for every y ∈ D and x ∈ D_y, there exists θ > 0 such that π_t(x, y) ≥ 1/θ for every t sufficiently large.

Proof. We want to show that for every configuration y ∈ D that belongs to the domain of the invariant measure, one has that π_t(x, y) ≥ 1/θ for some positive θ. The proof will be divided into three parts. A) At first, for y ∈ D, we restrict ourselves to every x ∈ D ∩ D_y.
Since µ(y) > 0 and lim_{t→∞} π_t(x, y) = µ(y), we readily obtain that for every couple x, y ∈ D there exist θ_1 > 0 and t_{x,y} > 0 such that for every t > t_{x,y} we have π_t(x, y) > 1/θ_1. But since D is compact, the configurations in D are finite in number, and so max_{x,y∈D} t_{x,y} < ∞. We thus conclude that there exists a θ_1 > 0 such that π_t(x, y) > 1/θ_1 for every t > max_{x,y∈D} t_{x,y}. In the next two steps we extend the last result to x ∈ D^c. B) We will show that there exist θ_2 > 0 and 0 < t_2 < 1/δ such that for every x ∈ D^c ∩ D_y there exists a z ∈ D ∩ D_y with π_{t_2}(x, z) ≥ 1/θ_2. We enumerate the N neurons with numbers from 1 to N in decreasing order, and denote by x̄_i = Δ_i(Δ_{i−1}(. . . Δ_1(x)) . . .) the configuration starting from x after the 1st, then the 2nd, and up to the i-th neuron has spiked, in that order. Then, for every s_i > 0, we consider p^i_{s_i}(x̄_{i−1}), the probability that the process starting from x̄_{i−1} has exactly one jump, of the neuron i, in time s_i and no jumps of the other neurons. From (2.1) we can compute lower bounds for these probabilities, and we obtain π_{t_2}(x, z) ≥ (Ne)^{−N}; the result is thus proven for θ_2 = (Ne)^N, z = x̄_N and t_2 ≤ Σ_{i=1}^N s_i ≤ 1/δ. C) Having shown (A) and (B) we can now complete the proof of the lemma for x ∈ D^c. For this it is sufficient, for every y ∈ D and x ∈ D^c ∩ D_y, to write π_t(x, y) ≥ π_{t_3}(x, x̄_N) π_{t_2}(x̄_N, y), and the assertion follows for t ≥ 1/δ + t_2. Consequently, the lemma follows for t ≥ max{t_1, t_2 + 1/δ}.

Taking the last result into account, we can obtain the first technical bound needed in the proof of the local Poincaré inequality, taking advantage of the bounds shown for times bigger than t_1.

Lemma 2.3. Assume z ∈ D^c. For the PJMP as described in (1.1)-(1.4), the square of the semigroup started from z can be bounded through π_t and the invariant measure, for every t ≥ t_1.

Proof. Since t ≥ t_1, we can use Lemma 2.2 to bound π_u(w, y) ≤ θ π_t(x, y) for every w and y ∈ D. We then use the Cauchy-Schwarz inequality twice, to pass the square inside the two sums, and the claim follows.

We can now prove the key semigroup bound, Lemma 2.4. Proof. Consider the semigroup P_t f(x) = E_x f(X_t). Since d/ds P_s = L P_s = P_s L, we can differentiate P_s((P_{t−s} f)²) with respect to s. We want to bound the carré du champ of the semigroup appearing on the right-hand side, Γ(P_{t−s}f, P_{t−s}f), by the semigroup of the carré du champ, P_{t−s} Γ(f, f), so that the energy term of the Poincaré inequality is formed. If the process is such that the translation property E_{x+y} f(z) = E_x f(z + y) holds, as in [35] and [3], then one can obtain the desired bound directly. In our case, where the degeneracy of the process does not allow the translation property to hold, we use a bound based on Dynkin's formula. Applying Dynkin's formula, we bound the resulting second term using the bound shown in Lemma 2.3. By the definition of the carré du champ we then get the required estimate, and combining it with (2.3) completes the proof.

From the last lemma we obtain the following local Poincaré inequality, Corollary 2.5. Proof. Since for µ the invariant measure of P_t one has µ(x) = Σ_y µ(y) P_t(y, x), we can express the µ-expectations through the semigroup. If we now use Lemma 2.4 to bound the semigroup, we obtain the claim, where B_t(x) denotes a function of the semigroup P_t of some function with initial configuration x. This leads to Proposition 2.6. Proof. At first, we can write the variance as a sum of two terms. We can bound the first term on the right-hand side by (2.5). For the second term we can use the Lyapunov inequality, which gives a bound involving −LV/V. If we choose D large enough to contain the set B, i.e.
B ∩ D^c = ∅, the last bound simplifies accordingly. The need to bound the quantity −LV/V, which appears from the use of the Lyapunov inequality, is the actual reason why we need to make use of the invariant measure µ and obtain the type of Poincaré inequality shown in our final result, rather than a Poincaré type inequality based exclusively on the P_t measure, as obtained in the previous section for the compact case. If we had not taken the expectation with respect to the invariant measure, we would have needed to bound the analogous P_t-expectation of (−LV/V) f² instead. This, in the case of diffusions, can be bounded by the carré du champ Γ(f, f) of the function by making an appropriate selection of an exponentially decreasing density (see for instance [5], [6] and [9]). In the case of jump processes however, and in particular of PJMPs as in the current paper, where densities cannot be specified, a similar bound cannot be obtained. However, when it comes to the analogous expression involving the invariant measure, there is a powerful result that we can use, which has been presented in [9] (see Lemma 2.12). According to this, when the expectation is taken with respect to the invariant measure, the desired bound holds, as seen in the following lemma.

Lemma 2.7 ([9]: Lemma 2.12). For every U ≥ 1 such that −LU/U is bounded from below, the following bound holds:

µ((−LU/U) f²) ≤ d_1 µ(Γ(f, f)),

where µ is the invariant measure of the process and d_1 is some positive constant.

Since V ≥ 1 and, for x ∈ D^c, we have from the Lyapunov inequality that −LV/V ≥ ϑ, we get the corresponding bound for some positive constant d_1. Since for the infinitesimal generator µ(Lf) = 0 for every function f, we can rewrite the remaining term accordingly. Gathering everything together we finally obtain the desired inequality, which proves the proposition with constant δ(t) = a_1(t) + d_1/(2ϑ). The last proposition, together with the Lyapunov inequality from Lemma 2.1 and the local Poincaré inequality of Corollary 2.5, proves Theorem 1.1.

2.2. proof of the Poincaré inequalities for the invariant measure. In the next proposition we see how the Lyapunov inequality is sufficient to prove a Poincaré inequality for the invariant measure µ, presented in Theorem 1.3, using methods developed in [5], [6] and [9].

Proof. At first assume µ(f 1_D) = 0. We can write the variance as the sum of a term supported on D and a term supported on D^c. For the second term, if we work as in Proposition 2.6 with the use of the Lyapunov inequality, we have the required bound. For the first term, we will use the approach applied in [32] in order to prove Poincaré inequalities for finite Markov chains. Since we have assumed ∫ f 1_D dµ = 0, we can expand the local variance as a double sum over pairs of configurations in D. If we consider J_{xy} = (J_1, . . . , J_{|J_{xy}|}) to be the shortest sequence of spikes that leads from the configuration x to the configuration y without leaving D, then we can denote x̄_0 = x and, for every k = 0, . . . , |J_{xy}|, x̄_k = Δ_{J_k}(Δ_{J_{k−1}}(. . . Δ_{J_1}(x)) . . .), the configuration after the k-th neuron of the sequence has spiked. Since D is finite, the length of the sequence is always uniformly bounded for any couple x, y ∈ D. We can then write f(y) − f(x) as a telescopic sum along this sequence. Since φ ≥ δ, each term of the sum can be bounded by the corresponding jump rate, and forming the carré du champ leads to the local Poincaré inequality. Gathering everything together gives the result.
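The shortest spike sequence J_xy used in the proof above is, computationally, nothing more than a breadth-first search over the jump maps Δ_i restricted to D. The sketch below illustrates this on integer-valued synaptic weights, a hypothetical discretization chosen so that configurations are hashable; the process itself lives on R^N_+.

```python
from collections import deque

def shortest_spike_path(x, y, W, m):
    """Breadth-first search for the shortest spike sequence J_xy leading
    from configuration x to configuration y through the maps Delta_i,
    without leaving D = {all coordinates <= m}. States are tuples."""
    N = len(x)
    def delta(s, i):
        return tuple(0 if j == i else s[j] + W[i][j] for j in range(N))
    queue, seen = deque([(x, [])]), {x}
    while queue:
        s, path = queue.popleft()
        if s == y:
            return path                      # sequence of spiking neurons
        for i in range(N):
            t = delta(s, i)
            if max(t) <= m and t not in seen:
                seen.add(t)
                queue.append((t, path + [i]))
    return None                              # y not reachable inside D

W = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]        # hypothetical integer weights
print(shortest_spike_path((0, 0, 0), (1, 0, 2), W, m=3))   # -> [0, 1]
```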
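The Lyapunov condition of Lemma 2.1, and the resulting lower bound −LV/V ≥ ϑ off the compact set used above, can also be probed numerically. The candidate V(x) = exp(α Σ_i x_i) below is only a guess for illustration, since the paper's exact choice of V is not reproduced here, and φ and the weights are again hypothetical.

```python
import itertools
import numpy as np

# Candidate V(x) = exp(alpha * sum_i x_i): an illustrative guess only.
alpha, N, m = 0.9, 3, 5.0
W = np.full((N, N), 0.2); np.fill_diagonal(W, 0.0)   # hypothetical weights
phi = lambda u: 1.1 + 1.2 * u        # phi >= delta and phi >= c*x with delta, c > 1
V = lambda x: np.exp(alpha * x.sum())

def LV_over_V(x):
    """(LV/V)(x) = sum_i phi(x_i) (V(Delta_i(x))/V(x) - 1)."""
    total = 0.0
    for i in range(N):
        y = x + W[i]; y[i] = 0.0         # Delta_i(x)
        total += phi(x[i]) * (V(y) / V(x) - 1.0)
    return total

# Outside D = {x : x_i <= m for all i} the ratio should be uniformly negative,
# which is the content of LV <= -theta*V + b*1_D.
for s in itertools.product([0.0, 1.0, 5.0, 10.0], repeat=N):
    x = np.array(s)
    if x.max() > m:
        assert LV_over_V(x) < 0.0
```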
3. proof of the Talagrand inequality for the invariant measure

Now we can prove the concentration properties. At first we present the general proposition that connects the Poincaré inequality of Theorem 1.3 with concentration of measure properties. The concentration properties will be based on the following proposition, which follows closely the approach in [27] (see also [28], [8], [1] and [2]). We will also use elements from [7], since one of the main conditions, (3.1), refers to the bounded function F_r = min{F, r}. The proposition provides exponential integrability, µ(e^{λF_r}) < +∞ for every λ below some λ_0 > 0, together with the resulting concentration bound.

Proof. From the Poincaré inequality applied to f = e^{λF_r/2}, and bounding the carré du champ through condition (3.1), we get µ(e^{λF_r}) ≤ C_0 λ² C_3 µ(e^{λF_r}) + (µ(e^{λF_r/2}))². For λ small enough, iterating this gives a convergent product, and we notice that µ(e^{λF_r/2^n})^{2^n} → e^{λµ(F_r)} as n → ∞. Since {P_t F_r < r} = {P_t F < r}, we can apply Chebyshev's inequality: µ({P_t F > r}) ≤ e^{−λr} µ(e^{λ P_t F_r}) ≤ e^{−λr} µ(P_t e^{λF_r}) = e^{−λr} µ(e^{λF_r}), because of Jensen's inequality and the invariance property µP_t = µ. Substituting F − µ(F) for F, the result follows.

To complete the proofs of the concentration Theorems 1.4 and 1.6 and of Corollary 1.5, we need to verify (3.1). We start with Theorem 1.6: we have to show condition (3.1) for F(x) = Σ_{i=1}^N x_i. This will be the subject of the next lemma.

Lemma 3.2. Assume the PJMP as described in (1.1)-(1.4). Then

µ(Γ(e^{λF_r/2}, e^{λF_r/2})) ≤ C_3 λ² µ(e^{λF_r}),

where F_r = min(F(x), r) for r > 0.

Proof. From the definition of the carré du champ, µ(Γ(e^{λF_r/2}, e^{λF_r/2})) is a sum of terms µ(M_i), one for each neuron i. To bound µ(M_i) we will distinguish four cases: a) Consider the set A := {x : F(x) ≥ r and F(Δ_i(x)) ≥ r}. Then, for x ∈ A, F_r(Δ_i(x)) = F_r(x) = r, and so µ(M_i 1_A) = 0. b) Consider the set B := {x : F(x) ≥ r and F(Δ_i(x)) ≤ r}. Then, for x ∈ B, e^{λF_r} 1_B = e^{λr} 1_B, so that µ(e^{λr} 1_B) = µ(e^{λF_r} 1_B). c) Consider the set C := {F(Δ_i(x)) ≤ F(x) < r}. Since F_r ≤ r, we know that µ(e^{λF_r}) ≤ e^{λr} < ∞, and so we can bound the corresponding term. d) Consider the set D := {F(x) < r and F(x) < F(Δ_i(x))}. Then, for x ∈ D, F(Δ_i(x)) − F(x) = W_i − x_i > 0, which means that x_i is bounded by W_i, so we can compute the corresponding bound directly. Gathering all four cases together, we finally obtain µ(Γ(e^{λF_r/2}, e^{λF_r/2})) ≤ C λ² µ(e^{λF_r}) for a constant C. In the remainder of the section we prove the main concentration properties of the paper, presented in Theorem 1.4 and Corollary 1.5. What remains is to present conditions under which (3.1) of Proposition 3.1 holds.
Co-administration of either curcumin or resveratrol with cisplatin treatment decreases hepatotoxicity in rats via anti-inflammatory and oxidative stress-apoptotic pathways

Background Cisplatin (CIS) is a broad-spectrum anticancer drug with cytotoxic effects on both malignant and normal cells. We aimed to evaluate the hepatotoxicity caused by CIS in rats and its amelioration by the co-administration of either curcumin or resveratrol. Materials and Methods Forty adult male rats were divided into four equal groups: (control group) rats were given a saline solution (0.9%) once intraperitoneally, daily for the next 28 days; (cisplatin group) rats were given a daily oral dose of saline solution (0.9%) for 28 days after receiving a single dose of cisplatin (3.3 mg/kg) intraperitoneally for three successive days; (CIS plus curcumin/resveratrol groups) rats received the same dose of cisplatin (3.3 mg/kg) daily for three successive days, followed by oral administration of either curcumin or resveratrol solution at a dose of 20 mg/kg or 10 mg/kg, respectively, daily for 28 days. Different laboratory tests (ALT, AST, ALP, bilirubin, oxidative stress markers) and light microscopic investigations were done. Results Administration of CIS resulted in hepatotoxicity in the form of increased liver enzymes and oxidative stress markers, and degenerative and apoptotic changes; the co-administration of CIS with either curcumin or resveratrol improved hepatotoxicity through improved microscopic structural changes, reduction in liver enzyme activity, decreased oxidative stress markers, and improved degenerative and apoptotic changes in liver tissues. Conclusion Co-administration of either curcumin or resveratrol with cisplatin treatment could ameliorate hepatotoxicity caused by cisplatin in rats via anti-inflammatory and oxidative stress-apoptotic pathways.

INTRODUCTION

Cisplatin (CIS) is the first platinum-based medication licensed by the FDA to treat tumors, and it works effectively in curing a variety of solid tumors, such as breast, ovarian, bladder, and colon cancer (Zhou et al., 2023). It forms DNA adducts that prevent DNA replication and gene transcription in cancer cells, thereby exhibiting its anti-tumor properties (Abo-Elmaaty et al., 2020). Cisplatin has also been used as the medication of choice for the treatment of cancer because its mode of action involves damaging DNA in cancer cells by crosslinking purine bases, which induces apoptosis (Alkhalaf, Mohamed & El-Toukhy, 2023). However, a major concern with antitumor chemotherapy is its lack of selectivity, since cytostatic medications interact with both tumor cells and rapidly proliferating normal, healthy cells in the same way (El-Gizawy et al., 2020). Therefore, patients treated with chemotherapeutic drugs such as cisplatin are susceptible to serious hazards such as nephrotoxicity, cardiotoxicity, neurotoxicity, and hepatotoxicity, which significantly impair their clinical outcomes (Tang et al., 2023). Furthermore, hepatotoxicity has emerged as a significant adverse effect of CIS-based chemotherapy that limits its dosage (Alkhalaf, Mohamed & El-Toukhy, 2023).
The precise mechanism causing this damage is still unknown. However, several studies have suggested that CIS-induced hepatotoxicity arises from the accumulation of CIS metabolites in the liver, leading to the generation of reactive oxygen species (ROS) through an oxidative stress-dependent mechanism (Akcakavak, Kazak & Yilmaz Deveci, 2023; Louisa et al., 2023). Moreover, oxidative stress-related cell death triggers an inflammatory response and is crucial to the pathophysiology of CIS-induced hepatotoxicity (Aboraya et al., 2022). Herbal-based compounds are a subset of modern pharmacotherapy that cause fewer side effects in patients (Chupradit et al., 2022). Hence, we need to find natural or chemical bioactive compounds with antioxidant and anti-inflammatory effects to overcome CIS-induced hepatotoxicity. Several scientific studies have demonstrated that the use of antioxidants is highly significant for our bodies because of their tendency to scavenge and stabilize free radicals, thereby preventing cellular damage that may be caused by oxygen species and free radicals (Al-Baqami & Hamza, 2021). There is growing interest in plant-derived compounds as a result of the need to develop alternative sources of novel medications with advantageous health features. Naturally occurring products have been employed for the treatment and prevention of a number of chronic diseases, including cancer. Among the most varied classes of secondary metabolites found in plants, polyphenolic compounds are particularly well-known for their ability to serve as multifunctional agents in promoting health, including as potential anticancer and hepatoprotective agents (Micale et al., 2021). Curcumin is the primary phenolic curcuminoid present in turmeric (Curcuma longa), with a broad pharmacologic action that includes antimicrobial, anticarcinogenic, anti-inflammatory, antioxidant, immunomodulatory, and antimutagenic properties (Ali et al., 2020). Prior studies have shown that the application of curcumin as a hepatoprotective substance may enhance cisplatin's anticancer action and lessen the adverse hepatic toxicity caused by chemotherapeutic drugs (Louisa et al., 2023; de Porras et al., 2023; Palipoch et al., 2014). Resveratrol (RSV) is a naturally occurring polyphenol present in grapes and plums, which exists as a monomer or as an oligomer with two to four monomer units (Ibrahim et al., 2021). Resveratrol is known to possess various health-promoting effects, with anti-inflammatory, anti-oxidative, anti-neoplastic, and antimicrobial characteristics (Inchingolo et al., 2022). Several studies have investigated the efficacy of co-administration of RSV with cisplatin to augment its anticancer effect in the treatment of many carcinogenic diseases, such as ovarian, breast and uterine cancers, and to decrease the side effects on various body organs such as the kidney, liver, testis, and heart (Wang et al., 2009; Abd-Elhafiz & Issa, 2021; Aly & Eid, 2020). Although curcumin and resveratrol have low water solubility, poor absorption rates, limited bioavailability and enhanced oxidation upon exposure to heat and light, they have been the most investigated polyphenolic compounds of the last two decades because of their many powerful medicinal effects, particularly their strong antioxidant and anti-inflammatory effects (Intagliata et al., 2019).
Also, many studies have revealed that the use of either resveratrol or curcumin alone has protective effects in alleviating cisplatin-induced damage in the rat liver or kidney (El-Gizawy et al., 2020; Kara & Kilitci, 2022). However, no research has studied the combined effects of either curcumin or RSV with cisplatin on the liver of rats in a single study. Combination treatment and molecular hybridization are useful methods for enhancing the activity of polyphenolic compounds (curcumin and resveratrol) (Hosseini-Zare et al., 2021). Moreover, there were many deficiencies in the biochemical and histopathological parameters used to assess the protective effects of these polyphenolic compounds in previous studies, including structural, anti-inflammatory and antioxidative measures, as well as in the use of image-analyzing programs in the assessment of anti-fibrotic and anti-apoptotic effects. Hence, we aimed to examine the potential protective effects of co-administration of either curcumin or RSV with cisplatin on the liver of rats by evaluating laboratory, histopathological, image-analyzing and immunohistochemical changes.

Ethical approval

The ethical committee of the Damietta Faculty of Medicine, Al-Azhar University, Egypt, approved the experimental protocol (DFM-IRB 0001267-23-08-014), and all animals received appropriate care following the rules of the National Institutes of Health policy on the use of laboratory animals.

Chemicals and dosage

Cisplatin was purchased from Sigma-Aldrich at a concentration of 1 mg per 1 ml and was administered intraperitoneally as a single daily injection at a dose of 3.3 mg/kg for 3 days (Alrashed & El-Kordy, 2019). Curcumin was purchased from the local market (authenticated and identified by a specialist at the Botany Department, Faculty of Agriculture, Al-Azhar University) in the form of dry rhizome, which was mechanically ground and then extracted using boiling water overnight. The extraction was carried out three times, and the extracts were pooled, concentrated under low pressure, and freeze-dried (Ahmed, El-Deib & Ahmed, 2010). Following its suspension in 0.05% gum acacia solution, 20 mg/kg of curcumin was given orally (1 ml of the solution contained 2.5 mg of curcumin; a worked volume calculation is given after this section) (Diab et al., 2014). Resveratrol was purchased from Sigma-Aldrich and prepared freshly in 0.9% normal saline, given at 10 mg/kg daily via oral gavage (Ibrahim et al., 2021).

Animals

We used 10 animals per group to avoid any interference of the results with possible mortality and to allow sound statistical analysis of the biochemical results. Forty adult male albino rats (11 weeks of age), each weighing between 115 and 145 g, were obtained from the Serum and Vaccine Institute at the Agricultural Research Center, Cairo, Egypt. They were housed in the animal house of the Damietta Faculty of Medicine, Al-Azhar University, Egypt, randomly assigned into groups, and kept in labelled, hygienic, well-ventilated steel cages (5 per cage) at room temperature under carefully regulated light/dark cycles (12/12 h). They also had unlimited access to tap water and regular rodent chow for a week prior to the experiment, to become acclimatized, and throughout the experiment.
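As a quick check on the dosing figures above, the gavage volume implied by the stated curcumin concentration can be computed directly; the 130 g body weight is a hypothetical value within the reported range.

```python
def oral_dose_volume(weight_g, dose_mg_per_kg, conc_mg_per_ml):
    """Volume (ml) of solution to gavage for a given body weight and dose."""
    dose_mg = dose_mg_per_kg * weight_g / 1000.0
    return dose_mg / conc_mg_per_ml

# A 130 g rat on 20 mg/kg curcumin at 2.5 mg/ml needs about 1.04 ml.
print(round(oral_dose_volume(130, 20, 2.5), 2))
```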
Experimental design

Rats were acclimatized in a room with a constant temperature of 22 ± 1 °C and humidity of 45-65%, and were randomly separated into four groups (10 rats per group) arranged in labeled cages (5 per cage) as follows: Group I (control): rats were given a saline solution (0.9%) once intraperitoneally, daily for the next 28 days; Group II (cisplatin only): rats were given a daily oral dose of saline solution (0.9%) for 28 days after receiving a single dose of cisplatin (3.3 mg/kg) intraperitoneally daily for three successive days; Group III (cisplatin plus curcumin): rats were given a single dose of cisplatin (3.3 mg/kg) intraperitoneally daily for three successive days, then received a daily oral dose of curcumin solution (20 mg/kg) for 28 consecutive days; Group IV (cisplatin plus resveratrol): rats were given a single dose of cisplatin (3.3 mg/kg) intraperitoneally daily for three successive days, then received a daily oral dose of resveratrol solution (10 mg/kg) for 28 consecutive days.

Sampling

After 4 weeks from the first dose of cisplatin treatment, 4% isoflurane (SEDICO Pharmaceuticals, Cairo, Egypt) in 100% oxygen was used to anesthetize the rats. Each rat's retro-orbital plexus was punctured to extract blood samples, which were then placed in sterile, dry centrifuge tubes and allowed to coagulate for 30 min at room temperature (RT) in a slanted position. The serum was then extracted by centrifuging the samples at 1,200 × g for 20 min, and it was stored at −20 °C until needed for additional biochemical study. Following the collection of blood, the animals of all groups were sacrificed by cervical decapitation, and each rat's liver was removed and cleaned with physiological saline. One specimen of liver tissue was processed for determination of oxidative stress and lipid peroxidation parameters, and another specimen was fixed in 10% neutral buffered formalin for histological and immunohistochemical study. Outcomes were assessed blindly by an investigator unaware of whether rats were treated or controls. All expected or unexpected adverse events were recorded.

Biochemical estimation

The serum activities of the liver enzymes alanine aminotransferase (ALT) (LOT: 32307166), aspartate aminotransferase (AST) (LOT: 10107023) and alkaline phosphatase (ALP) (LOT: 32060283), and the serum levels of total and direct bilirubin (LOT: 202188), were measured with kits provided by the Biodiagnostic Company (Cairo, Egypt). Indirect bilirubin levels were estimated as the difference between the total and direct bilirubin levels.

Analysis of lipid peroxidation and antioxidant status in the liver tissue homogenate

A specimen of liver tissue was homogenized in Tris-HCl buffer (pH 7.4). After centrifuging the homogenate for 10 min at 4 °C at 3,000 rpm, the supernatant was kept at −20 °C until determination of the oxidative stress parameters glutathione (GSH), superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx); lipid peroxidation was also quantified in terms of malondialdehyde (MDA) production in liver tissue homogenates, with the aid of commercially available kits provided by the Biodiagnostic Company (Cairo, Egypt), using spectrophotometry (Mispa Viva, Swiss) (Ruiz-Larrea et al., 1994).
Histopathological assessment

Another liver specimen was fixed in 10% formaldehyde and further processed for histological analysis by a histologist blinded to the study groups, using a Raywild light microscope with a built-in camera (15 megapixels) and an image-analyzing system, after staining with hematoxylin and eosin (H&E) for structural changes, Masson trichrome stain for fibrotic changes, and caspase-3 for apoptotic changes (Suvarna, Layton & Bancroft, 2018).

Statistical analysis

The experimental findings were presented as means ± standard deviation (SD). All data were analyzed using the Statistical Package for the Social Sciences for Windows, version 20.0 (SPSS Inc., Chicago, IL, USA). Multivariate statistical analysis and ANOVA were used to compare the groups, and P ≤ 0.05 was chosen as the significance threshold.

Biochemical parameters

The cisplatin-only group had a significant increase in the mean blood levels of ALT, AST, ALP, total bilirubin, direct bilirubin, and indirect bilirubin (P < 0.05) compared with the control group, while the groups given either curcumin or resveratrol together with cisplatin treatment showed a significant reduction in these parameters (P < 0.05) compared with the cisplatin-only group (Table 1). The cisplatin-only group had a significant reduction in the mean tissue levels of SOD, GPx and CAT (P < 0.05) and a significant elevation in the mean tissue level of MDA (P < 0.05) in comparison to the control group, while the groups given either curcumin or resveratrol with cisplatin treatment revealed a significant increase in the mean tissue levels of SOD, GPx and CAT (P < 0.05) and a significant decrease in the mean tissue level of MDA (P < 0.05) in comparison to the cisplatin-only group (Table 2). The cisplatin-only group had a significant elevation in the mean serum inflammatory marker levels (IL-1β, IL-6, and TNF-α) (P < 0.05) and a significant reduction in the level of IL-10 in comparison to the control group, while the groups given either curcumin or resveratrol with cisplatin treatment revealed a significant reduction in these parameters (P < 0.05) and a significant increase in the level of IL-10 in comparison to the cisplatin-only group (Table 3). The morphometric image analysis of the liver sections stained with Masson trichrome for detection of collagen deposition (fibrosis) revealed a significant elevation (P < 0.05) in the percentage area of collagen deposition in the liver of rats exposed to cisplatin only compared with the control group, while there was a significant decrease in collagen deposition in the groups given either curcumin or resveratrol with cisplatin treatment when compared with the cisplatin-only group (Table 4). The morphometric image analysis of the liver sections stained with caspase-3 immune stain (apoptosis) revealed a significant elevation (P < 0.05) in the percentage area of caspase-3 immune stain expression in the liver of rats exposed to cisplatin only compared with the control group, while there was a significant decrease in caspase-3 immune stain expression in the groups given either curcumin or resveratrol with cisplatin treatment when compared with the cisplatin-only group (Table 4). (In the tables, * marks significant differences between the CIS and control groups, and # marks significant differences between the CIS plus CUR- or RES-treated groups and the CIS group.)
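The group comparison described under Statistical analysis corresponds to a one-way layout; as a minimal illustration (with made-up ALT values, and scipy standing in for the SPSS routine actually used), the test can be run as follows.

```python
from scipy import stats

# hypothetical serum ALT values (U/L) for the four groups of 10 rats each
control = [42, 45, 39, 44, 41, 43, 40, 46, 44, 42]
cis     = [98, 105, 110, 95, 102, 99, 108, 101, 97, 104]
cis_cur = [61, 58, 65, 60, 63, 59, 62, 64, 57, 60]
cis_res = [66, 70, 64, 68, 71, 65, 69, 67, 72, 66]

f_stat, p_value = stats.f_oneway(control, cis, cis_cur, cis_res)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")  # p <= 0.05 -> group means differ
```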
Histopathological assessments

In the liver sections stained with H&E, the control group appeared with a normal central vein, normal portal tract, hepatocytes arranged in cords, and normal blood sinusoids in between (Figs. 1A and 1E); the cisplatin group showed a dilated, congested central vein and portal tract, in addition to infiltration of the central vein and periportal region with inflammatory cells and increased pyknotic cells (Figs. 1B and 1F). The groups given either curcumin or resveratrol with cisplatin treatment revealed amelioration of the structural lesions caused by cisplatin alone in the liver tissue, with more or less restoration of the diameters of the central and portal veins (Figs. 1C-1H). In the liver sections stained with Masson trichrome, the control group revealed minimal collagen deposition in both the central vein and portal tract regions (Figs. 2A and 2E). The cisplatin-treated group revealed marked collagen deposition (marked fibrosis) around the central vein and portal vein in both regions, in addition to marked dilatation and congestion of both the central vein and the portal tract (Figs. 2B and 2F). The groups given either curcumin or resveratrol with cisplatin treatment revealed amelioration of the collagen deposition caused by cisplatin alone, with more or less restoration of the diameter of the central vein and portal tract, which appeared less congested than in the cisplatin-treated group (Figs. 2C-2H). In the liver sections stained with caspase-3 immune stain, the control group revealed minimal expression of the immune stain in both the central vein and portal tract regions, which appeared normal in structure (Figs. 3A and 3E). The cisplatin-treated group revealed marked expression of the caspase-3 immune stain (marked apoptosis) in both the central vein and portal tract regions, in addition to excessive dilatation of the central vein and portal vein (Figs. 3B and 3F). The groups given either curcumin or resveratrol with cisplatin treatment revealed amelioration of the expression of the immune stain induced by cisplatin alone in the liver tissue; both the central vein and the portal vein were also restored to normal diameter compared with the cisplatin-treated group (Figs. 3C-3H).

DISCUSSION

Cisplatin (CIS) is one of the most potent cytotoxic anticancer drugs, with hepatotoxic effects suggested to be caused by increased ROS production and cellular damage (Aboraya et al., 2022). Moreover, it induces oxidative stress, which can lead to inflammation and the synthesis of cytokines including TNF-α and IL-6. It has been shown that elevated ROS and pro-inflammatory cytokines cause hepatocyte apoptosis (Ingawale, Mandlik & Naik, 2014). In this study, we assessed the effect of co-administration of either curcumin or resveratrol with cisplatin treatment in decreasing CIS-induced hepatotoxicity in rats via anti-inflammatory and oxidative stress-apoptotic pathways.
Administration of cisplatin to rats at a dose of 3.3 mg/kg b.wt. resulted in structural histopathological changes in the liver in the form of infiltration of the central vein and periportal region with inflammatory cells, a dilated, congested central vein and portal tract, and increased pyknotic cells. This is similar to the findings of previous studies, which documented that liver sections of rats exposed to CIS at different doses (1, 3.3, 5, 7.5, 15 mg/kg b.wt.) displayed signs of liver damage, including sinusoidal dilatation, vascular congestion, inflammatory cell infiltration of the liver's stroma and portal triad, and focal sites of degeneration (Aboraya et al., 2022; Ogbe, Agbese & Abu, 2020; Bademci et al., 2021; Qu et al., 2019; El-Sayyad et al., 2009; Pace et al., 2003).

In this study, the structural changes in the liver due to cisplatin exposure correlated with the alteration in liver functions, as evidenced by the notable elevation in the mean blood levels of liver enzymes (ALT, AST, ALP) in the cisplatin-exposed group in comparison to the control group. This agrees with the liver function assays of other studies investigating the hepatotoxic effects of cisplatin exposure in rats (Alkhalaf, Mohamed & El-Toukhy, 2023; Akcakavak, Kazak & Yilmaz Deveci, 2023; Ogbe, Agbese & Abu, 2020).

The ability of CIS to increase serum ALT, AST and ALP activity is thought to be a byproduct of CIS-induced liver injury and the subsequent leakage of these enzymes from hepatocytes, and may indicate liver degeneration and fibrosis (Gressner, Weiskirchen & Gressner, 2007).

Moreover, we assayed bilirubin levels in our study, as bilirubin is a well-known marker of tissue damage from toxic chemicals; the serum levels of T. bilirubin, D. bilirubin and I. bilirubin were elevated in the cisplatin-exposed group in comparison to the control group. In accordance with our results, a recent study (Aboraya et al., 2022) found that rats treated with CIS demonstrated hyperbilirubinemia as a result of higher levels of total and indirect bilirubin; the authors attributed the elevated indirect bilirubin levels to the hemolytic anemia seen in the hematological picture. The hyperbilirubinemia may also have arisen from either a reduced rate of bilirubin conjugation in the liver or decreased hepatic uptake of bilirubin (VanWagner & Green, 2015).

Contrary to our results, a previous study revealed a normal albumin level following CIS injection, despite a notable drop in the levels of total protein and globulin (Neamatallah et al., 2018). This could be due to differences in rat age, species, and the timing and mode of treatment.
The deterioration in liver structure and function due to cisplatin exposure in our study could be caused by the induction of oxidative stress, arising from an imbalance between oxidant and antioxidant levels driven by the increased generation of reactive oxygen species (ROS) in CIS-treated rats and a reduced ROS-scavenging capacity. This was evidenced by the increase in the hepatic MDA level and the decrease in enzymatic antioxidants such as hepatic SOD and CAT. Furthermore, the hepatic tissue of rats given CIS is more vulnerable to oxidative stress due to the reduction of GPX levels. Similar to our findings, several studies reported increased oxidative parameters (MDA) in the liver tissue of rats exposed to CIS, along with a decrease in enzymatic antioxidant activity, including the liver tissue levels of CAT, SOD and GPX (Bentli et al., 2013; Omar et al., 2016).

In this study, the oxidative stress in the liver induced by CIS exposure appears to cause inflammation, as indicated by the downregulation of IL-10 and the significant upregulation of IL-1β, IL-6 and TNF-α in the hepatic tissue, as well as apoptosis, confirmed by the overexpression of the hepatic caspase-3 immune stain. This coincides with the results of previous studies that assayed these markers in the livers of rats exposed to CIS (Neamatallah et al., 2018; Tahoun, Elgedawy & El-Bahrawy, 2021).

The protein TNF-α is linked to apoptosis and has a role in inflammatory responses, and IL-10 can also be released by apoptotic cells. Thus, our findings indicate that CIS-induced hepatotoxicity is linked to inflammatory and apoptotic pathways, as caspase activation is the first stage in the onset of apoptosis brought on by a variety of triggers.

Moreover, the oxidative stress generated in the cisplatin-treated group is responsible for the marked deposition of collagen fibers in the liver tissue, indicating the fibrosis seen in our study, similar to a previous study that revealed a noticeable accumulation of collagen fibers (He et al., 2006).

The results of this research showed that co-administration of either curcumin or resveratrol with cisplatin treatment decreases liver toxicity in rats, as indicated by improvement in liver structure (amelioration of pathological changes) and function (amelioration of liver enzyme and bilirubin levels).

Similar to our results, multiple researchers found that co-administration of curcumin with cisplatin treatment decreased the hepatotoxicity induced by CIS in rats (El-Gizawy et al., 2020; Palipoch et al., 2014; Ahmed, El-Deib & Ahmed, 2010; Diab et al., 2014). The beneficial use of curcumin in this study to reduce CIS-mediated hepatotoxicity is suggested to rest on its anti-inflammatory effect, via its ability to eradicate free radicals and reduce pro-inflammatory cytokine levels (TNF-α, IL-6 and IL-1β); its antiapoptotic effect, through reducing the immune expression of caspase-3; and its antifibrotic effects, as stated by previous studies (El-Gizawy et al., 2020; Louisa et al., 2023; He et al., 2006; Gao et al., 2022).
Similarly, multiple studies found that co-administration of resveratrol with cisplatin treatment decreased the hepatotoxicity induced by CIS in rats (Ibrahim et al., 2021; Wang et al., 2009; Abd-Elhafiz & Issa, 2021). The hepatoprotective effect of resveratrol in this study is suggested to operate through its ability to reduce pro-inflammatory cytokine levels (TNF-α, IL-6 and IL-1β); its antiapoptotic effect, through decreased expression of caspase-3; and its antifibrotic effects, as revealed by different studies (Al-Baqami & Hamza, 2021; Abd-Elhafiz & Issa, 2021; Liu et al., 2018).

Based on the above findings, the present study is the first to compare the co-administration of either resveratrol or curcumin with cisplatin using multiple techniques in a single study, to demonstrate the potential protective effects of both compounds; it found that both compounds have antifibrotic, anti-inflammatory, antiapoptotic and antioxidative effects that ameliorate the hepatotoxic effects of cisplatin.

CONCLUSIONS
Co-administration of either curcumin or resveratrol with cisplatin treatment could ameliorate the hepatotoxicity caused by cisplatin in rats via anti-inflammatory and oxidative stress-apoptotic pathways, in spite of the low absorption rates of both resveratrol and curcumin.

Limitations of the study
The combination therapy used in this study offers only limited predictions about whether real cancer patients would take it as a co-medication. Also, the low absorption rates of both resveratrol and curcumin appear to limit their in vivo biological effects, which represents a major barrier to the development of therapeutic applications for these compounds. Hence, newer delivery systems for these polyphenolic compounds, including nanoparticles, should be investigated in future studies.

Table 1 Investigation of the levels of serum liver enzymes in the study groups. * Significant differences between the CIS and control groups. # Significant differences between CIS plus CUR- or RES-treated groups and the CIS group.

Table 2 Investigation of the levels of oxidative/antioxidative stress markers in the study groups. Notes: SOD, superoxide dismutase; GPx, glutathione peroxidase; CAT, catalase; MDA, malondialdehyde; H2O2, hydrogen peroxide. * Significant differences between the CIS and control groups. # Significant differences between CIS plus CUR- or RES-treated groups and the CIS group.

Table 3 Investigation of the levels of serum inflammatory parameters in the study groups.

Table 4 Investigation of the liver fibrosis and apoptosis parameters in the study groups.
Cross-sectional Study on Technological Pedagogical Content Knowledge (TPACK) of Mathematics Teachers

Teachers are best known to play a vital role in the educational system, especially in implementing a new curriculum. Adopting the Philippines' curriculum requires continuous monitoring and assessment to ensure the transparent improvement of its programs. This study examines the level of TPACK of mathematics preservice teachers (PSTs) and mathematics teacher educators (MTEs) and determines whether a significant difference exists between them. This study also establishes the relationship between the MTEs' TPACK and their technology integration. The participants of the study include 174 PSTs and 41 MTEs. The research instrument adopted considered the central components of mathematical TPACK, with a Cronbach's alpha of 0.967. Results revealed that the mathematics PSTs had an average TPACK, while the MTEs' TPACK was high. A significant difference existed between the TPACK of PSTs and MTEs, indicating that PSTs need to be more exposed to learning the interconnections of the three knowledge bases, which can be seen demonstrated by MTEs in their instruction. A significant positive relationship existed between the MTEs' TPACK and their technology integration in class, indicating that regular use of technology in the instructional environment is associated with higher TPACK among MTEs. The implications of the findings for practice in teacher education programs are likewise discussed.

Introduction
TPACK focuses on the interrelated components of a teacher's knowledge of content (CK), pedagogy (PK), and technology (TK). It is considered a vital part of today's educational system, as it addresses the growing demand for technology integration without devaluing the content and its delivery in classroom instruction. TPACK is an emergent knowledge every teacher must be informed of, to continuously track new trends in education and be sufficiently knowledgeable about the TPACK components to integrate them effectively into the classroom environment [13]. Students also benefit from the TPACK framework, since most of them were born in this technology era. They can work better with integrated technology and gain a deeper conceptual grasp of the subject matter. Thus, by adding Koehler & Mishra's [13] concept of technology to Shulman's PCK model [24], students become more interactive in the learning process [5]. Technology in education uses modern tools and equipment to encourage more interaction among students, ensuring knowledge acquisition in the most efficient and effective way [21]. However, the Philippine educational system performed poorly on the 2017 Global Innovation Index, recording a dismal rank of 113th among 127 countries. It ranked 76th among 137 countries for the quality of mathematics and science education according to the Global Competitiveness Index for 2017-2018 [17]. According to the World Bank [17], the Philippine educational system underperformed its East Asia and Pacific counterparts. The students' scores were below average in international examinations, i.e., the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). The country's current issues in the educational sector paved the way for the recent implementation of the K to 12 Basic Education Curriculum and the much-discussed ASEAN integration, which demands substantial curriculum innovation [11].
In particular, secondary mathematics teachers need to embrace change in mathematics, including integrating technology. Osinem [18] pointed out two positive social effects of technology-driven education: developing employable individuals and fostering collaborative/cooperative learning. The utilization of technology in the teaching and learning process enables students to acquire skills needed in their future employment. It also enhances student-teacher-stakeholder relationships without the hindrance of geographical distance [18]. The DepEd K-12 Curriculum Guide for Mathematics [14], issued by the Department of Education, recognized appropriate mathematics tools for teaching the subject. These include concrete and manipulative objects, measuring devices, calculators and computers, smartphones and tablet PCs, the Internet, interactive whiteboards, and mathematics software packages (e.g., educational dynamic programs such as GeoGebra, Matlab, Bagatrix, Graphmatica, and others). Accompanying the curriculum innovations has been the agenda of strengthening quality teaching across all education systems throughout the country. The study conducted by Proctor, Finger & Albion [20] cited the McKinsey report's observation that "The quality of an education system cannot exceed the quality of its teachers." Teachers are best known for their primary role in educating a child. It has been said that teachers' style of explaining mathematics depends mainly on the conceptual grasp they acquired during their college classes. They often teach as they were taught, modeling themselves after their college mathematics teachers [2]. "Quality teaching requires developing a nuanced understanding of the complex relationship between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations" [13]. A teacher who aims to achieve successful technology integration in the teaching and learning process needs to consider all these interrelated components, rather than being solely a subject matter, pedagogy, or technology expert. Preparing preservice teachers for ICT-based classroom instruction is attracting more attention in many teacher education institutes [3]. Preservice teachers will be the future mathematics teachers of today's 21st-century learners, who are expected to be excellent critical thinkers, problem solvers, communicators and collaborators, and to be creative, innovative, and technologically literate [20]. Reference [2] indicated that several units or courses during college have touched on using ICT by demonstrating or suggesting how it could be integrated into instruction. However, preservice teachers narrated that only a few lecturers integrated ICT for classroom use. We see a gap between educators teaching only theories to these preservice teachers and what actually transpires in their classroom applications. There is a weak connection between what educators aim to achieve for their preservice teachers, what they teach, and what they model in their instructional environment. An Australian project called Teaching Teachers for the Future, with its partnership in ICT learning study, argued that preservice teachers are the future leaders of ICT-rich provision in schools. They should be at the vanguard of change; their ideas and enthusiasm for ICT-based instruction are crucial [2].
As technology integration is considered essential preparation for preservice teachers, there is a need to assess their Technological Pedagogical Content Knowledge (TPACK) to audit our graduates' current status. The study also establishes whether the utilization of technology in education is translated into teaching practice by mathematics teacher educators. This critical finding can be fed back to them to spearhead their professional growth. Thus, this study intends to assess the level of technological pedagogical content knowledge of mathematics preservice teachers (PSTs) and mathematics teacher educators (MTEs). Specifically, this study is guided by these research questions: 1. What is the level of the mathematics preservice teachers' (PSTs) and mathematics teacher educators' (MTEs) TPACK? 2. Is there a significant difference between the TPACK of the PSTs and MTEs? 3. Is there a significant relationship between MTEs' TPACK and their technology integration in class? 4. What technologies are used by the MTEs, and with what frequency of use? 5. What are the MTEs' reason(s) for using technology in their mathematics classes?

Materials and Methods
This study utilized a descriptive-correlational research design. The participants were the one hundred seventy-four (174) mathematics PSTs officially enrolled in the second semester of school year 2016-2017 and the forty-one (41) MTEs handling mathematics classes for the math major students in the Bachelor of Secondary Education curriculum. Of these 174 students, 95% were regular students, while 5% were irregular students. The study took place at Visayas State University in Region 8. In developing the instrument for this study, the researcher adopted instruments from [12,15,16,23,25] and took into consideration Guerrero's [8] four central components of mathematical TPACK: (1) Conception and Use of Technology, (2) Technology-Based Mathematics Instruction, (3) Management, and (4) Depth and Breadth of Mathematics Content. This study's TPACK assessment tool consisted of 52 items distributed over the four components of mathematical TPACK. The instrument used a researcher's rating scale with the following descriptive ratings and corresponding descriptions of the participant's TPACK: Very Low (0-10% competent), Low (11-30% competent), Average (31-70% competent), High (71-90% competent), and Very High (91-100% competent). The survey instrument also included questions regarding the technologies used by the MTEs in their mathematics classes, the frequency of use, and the reason(s) for using those technologies. The instrument also used a scale with the following descriptive ratings and corresponding descriptions of the participant's frequency of use of technology integration: Never (not used this technology at all), Rarely (used only once in a semester), Sometimes (used twice or thrice in a semester), Often (used four to ten times in a semester), and Regularly (used more than ten times in a semester). The survey instrument underwent validation by six mathematics experts and was pilot tested on fourth-year mathematics major education students and mathematics teacher educators at West Visayas State University, Iloilo City. The sample used for the pilot test comprised about 93 PSTs and 27 MTEs. The instrument was found to be reliable, with an overall Cronbach's alpha of 0.967.
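To make the reliability analysis concrete, the following Python sketch shows how Cronbach's alpha is computed from an item-response matrix; the simulated 120-respondent, 52-item data are placeholders, not the pilot-test responses, so the printed alpha only illustrates the calculation behind the reported 0.967.

```python
# Hedged sketch: Cronbach's alpha for a Likert-style instrument.
# Responses are simulated placeholders, not the actual pilot-test data.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of ratings."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# 120 simulated respondents answering 52 items on a 1-5 scale, built around a
# shared "ability" factor so that items correlate and alpha comes out high.
ability = rng.normal(3, 0.8, size=(120, 1))
responses = np.clip(np.rint(ability + rng.normal(0, 0.6, size=(120, 52))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```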
Concerning the tool's sub-constructs, the reported Cronbach's alpha values are as follows: Conception and Use of Technology, 0.827; Technology-Based Mathematics Instruction, 0.907; Management, 0.949; and Depth and Breadth of Mathematics Content, 0.912. The researcher asked permission from the University President and the Dean of the College of Education at West Visayas State University to have the survey instrument validated by the fourth-year mathematics major education students and the mathematics teacher educators after they had signed the consent form. The result of the validation process was analyzed, and the necessary revisions to the instrument were made. After the instrument's validation, the researcher administered it to fourth-year Bachelor of Secondary Education (BSEd) major in mathematics students and the mathematics teacher educators handling mathematics classes in the BSEd curriculum, after being granted permission by the VSU President and the Dean of the College of Education. Signed consent forms collected from the participants indicated their participation in the study. Completing the survey instrument took roughly an hour. A focus group discussion was also conducted with selected PSTs and MTEs to further substantiate the survey results. After the data were gathered, means and standard deviations were used to determine the TPACK levels of the mathematics PSTs and MTEs, the technologies that the MTEs used in their mathematics classes, and the frequency of their use of these technologies. Percentages were also used to identify the MTEs' reasons for using technology. For inferential analyses, a t-test for two independent samples was used to ascertain whether a significant difference existed between the mathematics PSTs' and MTEs' TPACK, and the Spearman rank-order correlation coefficient was used to test whether significant relationships existed between the MTEs' TPACK and their use of technology in mathematics classes.

Results and Discussions
In general, this study aims to describe the TPACK level of mathematics PSTs and of MTEs handling mathematics classes in the BSEd mathematics curriculum. Level of Mathematics Preservice Teachers' and Mathematics Teacher Educators' Technological Pedagogical Content Knowledge. The results for the PSTs' and MTEs' TPACK are shown in Table 1. By inspection of the means, the mathematics PSTs had average TPACK (M=3.47, SD=0.94), which implies that the PSTs were 31-70% competent to teach technology-based mathematics instruction. Some components of the PSTs' TPACK demonstrated high levels, such as Conception and Use of Technology (M=3.79, SD=0.87) and Depth and Breadth of Mathematics Content (M=3.65, SD=0.91). According to Guerrero [8], the first component (Conception and Use of Technology) relates technology to pedagogical content knowledge by focusing on how the teacher can use technology to make the subject matter more comprehensible and accessible to students. This component becomes a basis for the decisions related to instruction and curricula that teachers make in rendering the subject matter more accessible to students. The final component (Depth and Breadth of Mathematics Content) relies on teachers' knowledge of mathematics content and their embracing the responsibility to understand their content areas with both breadth and depth in light of technology integration.
The finding above agrees with the results of the study of [20], wherein 345 preservice teachers showed high levels of interest in technology integration for both personal and professional purposes. However, they believed that they utilized ICT for teaching and learning purposes only, despite their strong conception of ICT's value in improving students' learning outcomes. The result also agrees with [22], which revealed that preservice teachers had lower TPACK scores than in-service teachers. Although the PSTs showed a high level of self-perceived competence in their technological knowledge, they still found it difficult to apply some technologies in their classroom instruction. This is evident in the other two components, which showed only an average level: Technology-Based Mathematics Instruction (M=3.28, SD=0.94) and Management (M=3.16, SD=1.12). The second component (Technology-Based Mathematics Instruction) includes teachers' knowledge of and ability to maneuver through various instructional issues specifically related to technology supporting mathematics teaching and learning [8]. The third component (Management) covers management issues related to teaching and learning with technology, including a teacher's understanding of how to handle students' attitudes toward technology and their behavior when using technology. Ten mathematics PSTs were randomly selected for a focus group discussion to explain the average levels of the second and third components. Nine out of ten students said that little emphasis is given to the use of technology as part of their teachers' instructional repertoire, and that most of their teachers lack the skill to troubleshoot minor technical problems (i.e., network problems and operating the LCD projector). Further, students narrated that some of their teachers do not use PowerPoint presentations effectively. [22] disagrees with this result: their study revealed that in-service teachers had the lowest mean score in the technological knowledge domain. However, the result of this study is similar to the results of [25], wherein 37 mathematics teacher educators from ten state teacher education institutions in Central Luzon, Philippines, handling mathematics classes in the Bachelor of Secondary Education, Mathematics curriculum, had high levels of technological pedagogical content knowledge. The MTEs considered themselves highly knowledgeable about content, pedagogy, and technology in mathematics teaching. Nevertheless, when students were asked how their mathematics educators used and integrated technology in their mathematics classes, most of them indicated that the mathematics teacher educators seldom used technology integration (M=2.08, SD=0.84) in their mathematics classes. Based on the study of [25], there is a discrepancy between the MTEs' perceived TPACK and the students' perception of the MTEs' TPACK: the MTEs' high TPACK level was not transparently demonstrated across the three essential knowledge bases of content, pedagogy, and technology. Table 2 shows that significant differences exist in three components of TPACK between the mathematics PSTs and MTEs: (1) Technology-Based Mathematics Instruction (t = -4.65, p = .000), (2) Management (t = -2.02, p = .047), and (3) Depth and Breadth of Mathematics Content (t = -2.62, p = .011). On the other hand, no significant difference existed in one component, Conception and Use of Technology (t = -.78, p = .441).
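The two inferential procedures used in this study can be sketched in a few lines of Python with scipy; the arrays below are simulated stand-ins for the per-respondent scores (the PST mean and SD are borrowed from the text, while the MTE parameters are assumptions), so the printed statistics are illustrative only.

```python
# Hedged sketch of the inferential tests reported above; data are simulated
# stand-ins (PST mean/SD taken from the text, MTE parameters assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pst_tpack = rng.normal(3.47, 0.94, size=174)  # simulated PST overall TPACK scores
mte_tpack = rng.normal(4.00, 0.60, size=41)   # simulated MTE scores (assumed mean/SD)

# Independent-samples t-test (PSTs vs. MTEs), significance judged at p < .05
t_stat, p_val = stats.ttest_ind(pst_tpack, mte_tpack, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Spearman rank-order correlation between MTE TPACK and technology-integration
# frequency; the frequency scores are constructed to correlate positively.
integration_freq = 0.6 * mte_tpack + rng.normal(0.0, 0.3, size=41)
rho, p_rho = stats.spearmanr(mte_tpack, integration_freq)
print(f"rho = {rho:.3f}, p = {p_rho:.4f}")
```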
The results indicated that the two groups' knowledge was similar in making educational technology decisions aimed at making the subject matter more accessible and understandable. However, the MTEs outperformed the PSTs in knowledge of (1) maneuvering through various instructional issues, specifically the use of technology in support of the mathematics teaching and learning process; (2) management issues specifically related to technology-based mathematics instruction; and (3) the breadth and depth of mathematics content knowledge as influenced by the use of technology in instruction. The findings conform to the study of [9], wherein in-service teachers and preservice teachers performed the same in decision-making about the use of educational technology, but in-service teachers performed better than preservice teachers in terms of depth, breadth, and practical, contextual, and pedagogical knowledge. The authors of that study also described the differences, citing that "the rationales upon which preservice teachers based their instructional decisions were more superficial, uncritical, and relied largely on consideration of students and classroom-related facts of the case, compared to in-service teachers' responses, which were more detailed, better elaborated, more interpretive, and critical of the school context." The t-test results further revealed a significant difference in the general TPACK of PSTs and MTEs, t = 2.87, p = .005. This implies that the PSTs' TPACK and the MTEs' TPACK were statistically different from each other; it can be inferred that the MTEs' TPACK is higher than that of the PSTs. The finding conforms to [22], indicating that mathematics teachers' TPACK was significantly higher than preservice mathematics teachers' TPACK. The finding may also be supported by Bate et al. (2013), wherein preservice teachers expressed sentiments about their experience in their undergraduate studies: they claimed little confidence in using mathematics software despite lengthy discussions of its use. Such software, or technology as a whole, should be integrated into teachers' discussion of concepts or theories so that students can model these strategies in their actual field of teaching. The finding is also similar to Aquino's (2015) study, wherein there was a significant difference between the TPACK of preservice biology teachers and the perceived TPACK modeled by their faculty.

Relationship between Mathematics Teacher Educators' Technological Pedagogical Content Knowledge and Their Use of Technology Integration
The Spearman rank-order correlation coefficient results in Table 3 reveal a significant relationship between the MTEs' TPACK and their use of technology integration (r=.929, p=.000); the computed p-value is less than .001. It can be inferred that the MTEs' TPACK is transparent in their use of technology in mathematics instruction: the higher the MTEs' TPACK, the more frequent the MTEs' usage and integration of technology in their mathematics classes. This finding is similar to the study of [25], wherein there was a moderate positive but significant linear correlation between the MTEs' level of TPACK and the extent of technology integration in mathematics classes (r=0.50, df=35, p < .01).

Mathematics Teacher Educators' Frequency of Use of Technology Integration
Technology in this study was divided into two categories: hardware and software.
Hardware technologies refer to the tangible tools and equipment supporting mathematics instruction, such as laptops/desktops, calculators, LCD projectors, and others. In contrast, software technologies refer to the intangible tools that support mathematics instruction, i.e., MS Word, MS Excel, MS PowerPoint, GeoGebra, Matlab, SPSS, Mathematica, and others of this type. As shown in Table 4, the study results revealed that Mathematica, Bagatrix, and Graphmatica were among the software technologies identified as used by the MTEs in their mathematics classes. Based on the results, the MTEs regularly used a laptop/desktop (M=3.57, SD=0.57), a scientific calculator (M=3.57, SD=0.79) and an LCD projector (M=3.57, SD=0.54) for hardware technology, and a word processor, i.e., MS Word (M=3.71, SD=0.49), and a presentation program, i.e., MS PowerPoint (M=3.71, SD=0.48), for software technology. This means that the MTEs used these technologies more than ten times in a semester. The MTEs often used a spreadsheet, i.e., MS Excel (M=3.29, SD=0.95), GeoGebra (M=2.86, SD=1.07), and website resources (M=2.86, SD=1.46) for software technology, meaning that they used these technologies four to ten times in a semester. The MTEs sometimes used a MS …

Mathematics Teacher Educators' Reasons for Using Technology
When MTEs were asked about their purpose for integrating technology in their BSEd-Mathematics classes, the MTEs indicated various reasons, as shown in Table 5. Results revealed that a total of 35 (85%) indicated numerical computation, 41 (100%) graphical presentation, 23 (56%) interactive learning, 41 (100%) tabular presentation, and 11 (27%) symbolic manipulation. Most of the MTEs' purposes for integrating technology in their mathematics classes were graphical and tabular presentation. The MTEs' use of graphical presentation in a simple, clear, and effective manner facilitates comparisons of values, trends, and relationships in quantitative data (In & Sangseok, 2017). Also, MTEs use tabular presentation to organize data for further statistical treatment and decision-making. They use technologies that help them present complicated figures or graphs accurately, to lessen students' misconceptions of the concepts introduced (In & Sangseok, 2017). According to a study conducted by Goos (2010), teachers use technology to support effective mathematics teaching and learning, and these enhanced students' opportunities to use technology to (1) improve speed, accuracy, and access to a variety of mathematical representations, (2) improve the display of mathematical solution processes and support students' collaborative work, and (3) support new goals or teaching methods for a mathematics course. In an article issued by the Center for Technology in Learning (2007), there are two main reasons why teachers use technology: computation and representation. Technology can reduce the effort devoted to tedious computations and increase students' focus on more important mathematics. Equally important, technology can represent mathematics in ways that help students understand concepts. In combination, these features can enable teachers to improve both how and what students learn. The findings of this study have to be seen in light of some limitations that could be addressed in future research. This study did not include the participants' profiles among the variables under study.
Quantitative variables such as average age, sex distribution, academic abilities, and others were not included among the investigated variables.

Conclusions
Most of the mathematics preservice teachers had an average technological pedagogical content knowledge, which means that they are only 31-70% competent to teach technology-based mathematics instruction. This result may be credited to their current status as inexperienced in the field of education. The PSTs may not yet have acquired enough knowledge of teaching pedagogy, especially with respect to their management capabilities, where they had the lowest mean among the four components. The improvement of their TPACK is likely to take time, if not a few years, and they need to move beyond teacher preparation programs into the actual teaching profession to build their TPACK [20]. On the other hand, the mathematics teacher educators have a high level of TPACK, corresponding to being 71-90% competent to teach technology-based mathematics instruction. The MTEs may have been well exposed to, and have attended, various seminar-workshops or trainings in which learning of these three important knowledge bases was given greater emphasis. With this, our next target is to transfer this high TPACK of the MTEs to our PSTs, so that when they are in the actual field of teaching, they can imitate, use, and demonstrate appropriate technology in their mathematics lessons.

Recommendations
Based on the findings and conclusions of this study, the following recommendations are advanced: 1. Teacher education institutions should provide a program of opportunities [for teachers and students] to acquire the knowledge and experiences needed to integrate appropriate technology in teaching and learning mathematics. 2. Universities should procure and invest in the equipment needed to improve technology integration in the curriculum. 3. Teacher education institutions, curriculum designers, and policymakers may use the baseline data on the TPACK of mathematics PSTs and MTEs to plan and design learning materials that make teachers' TPACK visible in the teaching-learning process.

Implications for Practice
1. The average TPACK of mathematics PSTs implies that schools, institutions, and colleges offering teacher education programs must regularly assess students' technology experiences in the BS programs. This will ensure that all preservice teachers have the necessary knowledge bases (TPACK) and confidence to incorporate technology into the curriculum, especially in this technology era, where students are considered digital natives [19]. 2. The high TPACK of MTEs implies that college departments and universities must nurture and even enhance this knowledge by providing them with seminar-workshops on how to capitalize on these knowledge bases, so that educators can demonstrate them and make them transparent to students. 3. The significant difference in the technological pedagogical content knowledge of mathematics preservice teachers and mathematics teacher educators implies that mathematics PSTs need to be more exposed to learning the interconnections among the three knowledge bases (content, pedagogy, and technology), which can be seen demonstrated by the MTEs in their instruction. Mathematics preservice teachers are soon-to-be teachers in the actual teaching field, and much of what they can offer their classes is modeled on their previous instructors or professors. 4.
The significant relationship between the mathematics teacher educators' technological pedagogical content knowledge and their technology integration in mathematics classes implies that the regular use of appropriate technology in the instructional environment is associated with a higher level of TPACK among MTEs. 5. The mathematics teacher educators' regular use of hardware technologies, but only occasional use of software technologies, in their mathematics classes implies that mathematics educators must be more willing to learn and embrace emerging technologies that have potential use in the instructional environment. 6. The various reasons identified for mathematics teacher educators' use of technology in their mathematics classes imply that technology helps students and teachers get answers more quickly and accurately, but it has more to offer: if technological tools are seen as providing access to new understandings of relations, processes, and purposes, technology's role becomes that of a conceptual construction kit rather than a mere efficiency aid.
The energetic effect of hip flexion and retraction in walking at different speeds: a modeling study

In human walking, power for propulsion is generated primarily via the ankle and hip muscles. The addition of a 'passive' hip spring to simple bipedal models appears more efficient than using only a push-off impulse, at least when hip-spring-associated energetic costs are not considered. Hip flexion and retraction torques, however, are not 'free', as they are produced by muscles demanding metabolic energy. Studies evaluating the inclusion of hip actuation costs, especially during the swing phase, and the energetic benefits of hip actuation are few and far between. It is also unknown whether these possible benefits/effects depend on speed. We simulated a planar flat-feet model walking stably over a range of speeds. We asked whether the addition of independent hip flexion and retraction remains energetically beneficial when considering a work-based metabolic cost of transport (MCOT) with different efficiencies of doing positive and negative work. We found that asymmetric hip actuation can reduce the estimated MCOT relative to ankle actuation by up to 6%, but only at medium speeds. The corresponding optimal strategy is zero hip flexion actuation and some hip retraction actuation. The reason for this reduced MCOT is that the decrease in collision loss is larger than the associated increase in hip negative work. This leads to a reduction in total positive mechanical work, which results in an overall lower MCOT. Our study shows how ankle actuation, hip flexion actuation and retraction actuation can be coordinated to reduce the MCOT.

To unravel the mechanisms of such gait adaptations, it is essential to understand the energetic effects of ankle and hip actuation. The pioneering work of Kuo (2002) revealed that an impulsive push-off applied via the trailing leg just before heel strike can substantially reduce the energy losses associated with the collision of the leading leg. It can thus reduce the total mechanical work required to maintain a periodic gait. In Kuo's model, however, the toe-off impulse was applied at the (point) foot, to represent the push-off forces of the stance leg generated by muscles at the ankle, knee and hip joints (Kuo, 2002). Ankle and hip actuation have different functional roles during walking. Ankle actuation mainly contributes to center-of-mass acceleration during push-off and swing leg initiation before toe-off (Zelik & Adamczyk, 2016). Hip actuation mainly plays three roles: (i) push-off via the hip extension torque during the late stance phase (DeVita & Hortobagyi, 2000; Winter, 1983); (ii) weight acceptance during the first half of the stance phase (Winter, 1980); and (iii) acceleration and deceleration of the swing leg during the early and late swing phase, respectively (Doke, Donelan & Kuo, 2005; Muybridge, 2012). The last role enables the hip muscles to modulate step length and frequency, which is the focus of our study. Several modeling studies demonstrated energetic benefits of spring-like elasticity around the ankle and hip (Bregman et al., 2011; Duindam, 2006; Hasaneini, 2014; Kerimoglu et al., 2021; Kuo, 2002; O'Connor, 2009; Zelik et al., 2014). The addition of a torsional hip spring can reduce the collision loss by reducing the step length, thus requiring less positive mechanical work in a periodic gait (Kuo, 2002). When hip actuation is considered a conservative spring with zero net mechanical work over a gait cycle, its associated metabolic cost is often ignored (Kuo, 2002; Zelik et al., 2014).
As already noted by Kuo (2002) and Zelik et al. (2014), generating hip torques is not free: hip torques demand metabolic energy due to muscle (de-)activation and cross-bridge cycling (Homsher & Kean, 1978; Woledge, Curtin & Homsher, 1985). This raises the question of whether a reduced collision loss outweighs the increase in metabolic cost due to hip actuation. Hasaneini et al. (2013) optimized the work-based metabolic cost of a model with telescoping ankle push-off and hip actuation during the swing and stance phases. They found an optimal gait with ankle push-off and hip actuation contributing equally to the metabolic cost. Why this actuation strategy is most energy efficient, however, has not been answered. Kuo (2001) tested whether minimizing the metabolic energy in simple walking models pinpoints the relationship between walking speed and step length in humans. When modeling the metabolic cost of the push-off impulse as proportional to its mechanical work, and the metabolic cost of the spring-like hip torque as proportional to its (peak) force rate, it does. Yet, Kuo (2001) noted that the work done by the spring-like hip torque was not included in the metabolic cost. If accounted for, it may have resulted in burst-like hip impulses as observed in humans (Doke, Donelan & Kuo, 2005). Adding hip actuation can be beneficial in terms of reducing the MCOT (Hasaneini et al., 2013; Kuo, 2001; Kuo, 2002; Zelik et al., 2014). It is also essential for achieving high walking speeds (Dean & Kuo, 2009). Whether and how hip actuation, when modelled as independent hip flexion and retraction actuation, can reduce the MCOT compared to ankle-only actuation over a variety of walking speeds is an open question. To address this, we investigated the effect of hip flexion and retraction actuation on the MCOT at various speeds. The model we used is a planar flat-feet walker actuated by ankle and hip torques. The ankle actuation was similar to that of Zelik et al. (2014), which enables a non-instantaneous ankle push-off or double stance phase. Our hip flexion and retraction actuation were modelled by two independent hip springs switching on before and after the zero-crossing of the hip angle, respectively. We compared the MCOT of stable periodic gaits with varying hip flexion and retraction actuation at each speed, and then identified and analyzed the optimal actuation strategy in terms of the lowest MCOT. We further analyzed how the mechanical and metabolic energy components were influenced by different levels of hip flexion and retraction actuation.

METHODS
We employed a simple planar flat-feet walker model to investigate the effect of hip actuation on the estimated MCOT over a range of walking speeds. The amount of hip flexion and retraction actuation was varied at each speed, and the effects on the estimated MCOT were studied. We identified stable periodic gaits at each speed that minimized the estimated MCOT, and analyzed why these gaits are optimal.

Model
Our model consisted of four rigid segments (with inertia) representing two straight legs and two flat feet, connected by three frictionless hinge joints, one representing both hips and two representing the ankle joints. The model's geometric and mass distribution parameters were almost identical to those of Kuo's anthropomorphic walker (Kuo, 2002), except that we replaced the circular feet with flat feet (length 0.15, with the center-of-mass located at the foot midpoint). The ankle joint in our model allowed for a non-instantaneous double stance phase; see Fig.
1; see Table 1 for parameter settings. To account for variations in body mass and limb morphology, the total mass and leg length were used to normalize the model parameters: masses are given as a proportion of the total mass m_tot, time was rescaled by √(l/g), speed by √(gl), ankle and hip spring stiffness by m_tot·g/l, the ankle damping coefficient by m_tot·√(g/l), and work by m_tot·g·l, with g denoting the gravitational constant. As a result, a dimensionless value of 1 for speed and step length corresponds to a speed of 3.1 m/s and a step length of 1 m. From here on we consider all configuration parameters in normalized units unless specified otherwise. We defined generalized coordinates φ = (φ_1, φ_2, φ_3, φ_4), where each φ_i is the angle between the positive x-axis and the line from the proximal to the distal joint. The subscripts refer to the segments numbered along a kinematic chain starting with the toe of the stance foot (i.e., 1 = stance foot, 2 = stance leg, 3 = swing leg, 4 = swing foot). Each segment's configuration was determined by four parameters: mass m_i, length l_i, distance d_i of the segment's center-of-mass to the segment's distal end, and moment of inertia j_i relative to the center-of-mass. With this definition, d_2 + d_3 equals the leg length l; see Fig. 1.

As illustrated in Fig. 1, a full cycle of the planar flat-feet walker consisted of the following phases: heel-constrained phase, foot-constrained phase, toe-constrained phase and double stance phase, where 'constrained' indicates that the corresponding part of the foot segment contacts the ground. Depending on whether heel-off (pre-emptive push-off) occurs before or immediately after contralateral heel strike, the toe-constrained phase may or may not take place. As such, a full gait cycle consisted of either three or four phases (Fig. 1).

Figure 1. The planar flat-feet walker and its gait cycle. The walker may or may not have a toe-constrained phase, depending on whether the heel leaves the ground before contralateral heel strike. For ankle actuation, the walker is implemented with an ankle spring k_a and a pulse torque T_a, which is activated from peak ankle dorsiflexion until the end of the gait cycle (toe-off). Dampers were added to the ankle joint of the swing leg after toe-off and of the leading foot after heel strike to reduce oscillations. For hip actuation, the walker is implemented with a hip flexion spring k_hf and a retraction spring k_hr, activated before and after the hip angle reaches zero.

The transitions between the different phases were detected by the following events, in temporal order within a gait cycle: 1. Toe-off: the vertical ground reaction force on the trailing toe becomes zero. The toe-off moments of the swing foot and of the contralateral foot were defined as the beginning and the end of a gait cycle, respectively. 2. Toe strike: the stance toe hits the ground while the heel remains on the ground; the toe strike was assumed to be instantaneous and inelastic (no slip and no bounce). 3. Heel-off: the ground reaction force moves to the toe from any other point on the foot; for a four-phase gait this event occurs before heel strike, while for a three-phase gait it occurs immediately after heel strike. 4.
Heel strike: the vertical position of the leading heel reaches the ground; in addition, the swing leg rotates clockwise (φ̇_3 < 0) to avoid detection of foot-scuffing and to consider only (stable) long-period gaits with swing leg reversal before heel strike (Kwan & Hubbard, 2007); the heel strike was assumed to be instantaneous and inelastic.

Similar to Zelik et al. (2014), the walker was actuated at the ankle by torques generated by a spring with stiffness k_a, in addition to a constant torque T_a added after ankle reversal (from peak ankle flexion until toe-off) (see Fig. 2A for an illustration). Prior to ankle push-off, elastic energy could be stored by the ankle spring and was subsequently released during ankle push-off (Fig. 2A). To reduce undesirable oscillations, dampers with a small damping coefficient c_a were added to the ankle joint(s) whenever the toe of the corresponding foot segment was not in contact with the ground. For instance, the curve in stance ankle torque from toe-off to contralateral toe strike in Fig. 2A is an effect of this damper and the spring. The negative work done by the dampers during the swing phase was found to be less than 2% of the collision loss and was ignored in our analysis. The total ankle torque during push-off (no damper in this phase) is given by the spring torque plus the pulse torque:

τ_a = −k_a·φ_a + T_a,

where φ_a denotes the ankle angle relative to the spring's rest configuration. The total ankle torques outside the push-off phase were defined similarly, but with the constant T_a set to zero.

Figure 2. Prior to ankle push-off, elastic energy was stored in the ankle spring. After ankle reversal (from peak ankle flexion until toe-off), the walker was actuated at the ankle by torques generated by the spring in addition to the pulse torque T_a. The walker could also be actuated by spring-like hip flexion and retraction torques, which were active only before and after the hip angle reached zero. Full-size DOI: 10.7717/peerj.14662/fig-2

The walker was also actuated at the hip by flexion and retraction torques, which were active before and after the hip angle reached zero, respectively (see Fig. 2B for an illustration). The hip flexion and retraction torques were used to influence step frequency and step length. The torque-angle relation for hip flexion and retraction was modelled independently by two springs with separate stiffnesses:

τ_hf = −k_hf·φ_h (active before the hip angle φ_h crosses zero), and
τ_hr = −k_hr·φ_h (active after the zero-crossing).

The energy losses from the inelastic collisions of the heel/toe could be compensated by performing net positive work around the ankle and/or hip joint. This collision loss equaled the opposite of the (ground) impulsive work W_impulsive, which was computed as half the dot product of the impact velocity and the (ground reaction) impulse:

W_impulsive = ½ (v_bx−·S_x + v_by−·S_y),

where v_bx− and v_by− are the horizontal and vertical pre-collision impact velocities of the collision point (i.e., the heel), and S_x and S_y are the horizontal and vertical ground reaction impulses; for a full derivation, we refer to Font-Llagunes & Kövecses (2009), and for an intuitive proof, see Supplemental Material B.

Numerical simulations and optimizations
We simulated the dynamics of the walker using a Runge-Kutta (4,5) integrator, setting the absolute and relative tolerances to 10^-6. We found stable periodic gaits by enforcing two conditions: (a) all elements of the end state after walking five steps should be close to the initial state q_0, within an error of 10^-6; (b) the maximum Floquet multiplier (the error multiplication factor from step to step at the post-impact state; see, e.g., Hurmuzlu & Moskowitz, 1987; Wisse & Schwab, 2005) should be less than 1.
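To make the collision-loss bookkeeping concrete, the following Python sketch evaluates the impulsive-work expression above for illustrative pre-impact conditions; the numbers are hypothetical values in the paper's normalized units, not output from the simulations.

```python
# Hedged sketch: collision loss as half the dot product of the pre-impact heel
# velocity and the ground reaction impulse (Font-Llagunes & Kovecses, 2009).
# All quantities are in the paper's dimensionless units; values are made up.
import numpy as np

v_pre = np.array([0.40, -0.12])    # heel velocity just before impact (x, y)
impulse = np.array([-0.05, 0.10])  # ground reaction impulse (S_x, S_y)

w_impulsive = 0.5 * v_pre @ impulse  # impulsive work (negative at a collision)
collision_loss = -w_impulsive        # energy that push-off work must restore
print(f"W_impulsive = {w_impulsive:.4f}, collision loss = {collision_loss:.4f}")
```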
We systematically varied the speed v = 0.16 : 0.02 : 0.54, where 0.54 is the highest speed at which an ankle-actuation periodic gait could be found. At each speed, we found optimal gaits with varied hip flexion and retraction actuation: (1) zero hip flexion and retraction actuation (Results section "Energetics of ankle actuation"); (2) hip flexion actuation with zero retraction actuation (Results section "Can MCOT be reduced by adding only hip flexion actuation?"); (3) hip flexion and retraction actuation (Results section "Can adding hip retraction actuation in addition to hip flexion actuation reduce MCOT?"). In all three cases, the hip actuation parameters k_hf and k_hr were manipulated or constrained to zero. To minimize the risk of obtaining local-minimum solutions during the optimization process, we used a grid search, sweeping one of the ankle actuation parameters, k_a, over a feasible range. The remaining parameter optimization problem was solved using the Matlab function "fmincon" (with the SQP algorithm) to find the combination of the ankle control parameter T_a and the eight initial state parameters q_0 that minimizes the cost function (MCOT), subject to the constraint of a stable and periodic gait at a given desired speed v, for given k_a, k_hf and k_hr. We used the following cost function as an estimate of the overall 'metabolic' energy required to travel a unit distance (Schmidt-Nielsen, 1972):

MCOT = (η_+ W_+ + η_− |W_−|) / (m_tot · g · s),

where m_tot is the normalized total mass and g is the gravitational constant, which are both 1, s is the step length, W_+ is the total positive work from the ankle and hip joints, W_− is the total negative work, η_+ is the inverse of the efficiency of generating positive work and η_− is the inverse of the efficiency of generating negative work. Based on Margaria (1968), we set the efficiencies of positive and negative work to 0.25 and 1.2, respectively, i.e., η_+ = 1/0.25 and η_− = 1/1.2. In general, the sum of the positive and negative internal work (from ankle and hip) and the external work (ground impulsive work, gravity work) equals the change in kinetic energy, which is zero over a periodic gait cycle. Thus, the total negative mechanical work is equal but opposite to the total positive mechanical work from the ankle and hip joints. For an example of this relation, see Supplemental Material C, which depicts the kinetic energy change and the mechanical (internal and external) work performed within a gait cycle for a periodic gait. Note that the negative work performed at the collision was not included in the metabolic cost, but the collision loss must be compensated by the same amount of positive work performed at the ankle and hip; thus the cost of the collision loss is implicitly included in the MCOT.

RESULTS
We compared the MCOT for only ankle actuation, and for ankle actuation with hip flexion and/or retraction actuation, when walking at the same speed. We assessed whether the MCOT could be decreased by adding hip actuation and what the mechanisms were for any difference in MCOT. We focused on the range of speeds at which the model could walk stably and periodically both with and without hip actuation, i.e., 0.16-0.54 in dimensionless units or 0.50-1.69 m/s.

General model behavior
Before comparing the MCOT for these different actuation strategies, we first introduce the basic mechanisms by which the ankle and/or hip actuate the walker by performing (joint) mechanical work, and briefly discuss how metabolic work for ankle/hip actuation is expended in performing mechanical work. We first illustrate the kinematics of the model (blue stick diagram in Fig. 3A) and how mechanical power is generated at the ankle joint over a gait cycle for an optimal only-ankle-actuation gait.
As indicated by the blue curve of ankle power over a gait cycle in Fig. 3B, starting from mid-stance the ankle performs negative work before ankle reversal, followed by a large burst of positive work during push-off. The MCOT for this gait was 0.41. The ankle push-off work, which compensates for the energy losses (here due to both collision loss and ankle negative work), is performed mostly before heel strike. As shown by Kuo (2002) and Ruina, Bertram & Srinivasan (2005), such a pre-emptive ankle push-off strategy decreases the collision loss by changing the direction of the pelvis velocity before heel strike.

Figure 3. Comparison of kinematics and mechanical power for two actuated periodic gaits. (A) Stick diagram of the periodic gaits for only ankle actuation (blue) and ankle actuation with equal amounts of hip flexion and retraction actuation (red) at speed 0.47. The addition of hip flexion and retraction actuation reduces step length. (B) Time-normalized ankle power for the ankle-actuated optimal gait (blue) and for a periodic gait with a symmetric hip spring (red). The positive ankle power indicates ankle push-off. Note that toe-off (black vertical dashed line) occurs shortly after heel strike (red vertical dashed line), indicating an almost instantaneous double stance phase. (C) Time-normalized hip power for ankle actuation with a hip spring. Hip power on the swing/leading leg (purple) and stance/hind leg (orange) differs because of the different angular velocities of the two legs. For only ankle actuation, hip power is zero. Note that the peak hip power is much smaller in magnitude than the peak ankle power. Full-size DOI: 10.7717/peerj.14662/fig-3

Over a gait cycle, the negative mechanical work performed at the ankle and hip, as well as the collision losses, need to be compensated by an equal amount of positive mechanical work at the ankle and hip joints. To understand the energetic effect of hip actuation, it is essential to understand both (1) how hip actuation performs mechanical work and (2) how hip actuation influences the collision loss. Figure 3A shows a stick diagram and Fig. 3C shows the mechanical power at the hip over a gait cycle for a gait with ankle actuation and equal hip flexion and retraction actuation at the same speed. The MCOT for this gait was 0.50. The first noticeable change in Fig. 3A is that the step length is substantially shorter than in the gait with only ankle actuation, leading to a lower collision loss. Figure 3B shows that the addition of symmetric hip actuation reduces the positive ankle power over the gait cycle. Figure 3C illustrates how mechanical power is generated at the hip: during the single stance phase, the hip flexion and retraction torques accelerate and decelerate the swing leg, which increases step frequency and decreases step length. During the (short) double stance phase, push-off was aided by the hip extension torques, as indicated by the positive hip power on the leading leg. By mainly actuating the swing leg, which has a smaller mass than the pelvis, hip actuation influences the swing foot trajectory, step length and collision loss. As mentioned in the Methods section, the metabolic work in our model was obtained by summing all the positive and negative mechanical work performed by the ankle and hip actuators, divided by the corresponding efficiencies of performing positive or negative work. The MCOT is then computed by dividing the metabolic work per step by the step length.
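The metabolic bookkeeping just described can be sketched in Python; the efficiencies follow Margaria (1968) as used in the Methods, while the work and step-length values are hypothetical placeholders, and the unit conversion uses the paper's normalization (a dimensionless speed of 1 ≈ 3.1 m/s for a 1 m leg).

```python
# Hedged sketch of the work-based MCOT computation described in the Methods.
# Efficiencies follow Margaria (1968): 25% for positive, 120% for negative work.
# Work and step-length values are hypothetical, not simulation output.
import math

ETA_POS = 1 / 0.25  # inverse efficiency of positive work
ETA_NEG = 1 / 1.2   # inverse efficiency of negative work

def mcot(w_pos: float, w_neg_mag: float, step_length: float,
         m_tot: float = 1.0, g: float = 1.0) -> float:
    """Metabolic cost of transport in normalized units (m_tot = g = 1)."""
    return (ETA_POS * w_pos + ETA_NEG * w_neg_mag) / (m_tot * g * step_length)

print(f"MCOT = {mcot(w_pos=0.08, w_neg_mag=0.08, step_length=0.95):.2f}")

# Converting dimensionless speed back to physical units (l = 1 m, g = 9.81 m/s^2):
v_dimless = 0.38
print(f"v = {v_dimless * math.sqrt(9.81 * 1.0):.2f} m/s")  # about 1.19 m/s
```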
In a periodic gait, the total metabolic cost for performing negative work is higher than the metabolic cost directly associated with this negative work. This is because negative mechanical work must be compensated by an equal amount of positive mechanical work, which is metabolically costly. As such, only about

$$\frac{1/1.2}{1/0.25 + 1/1.2} \approx 17\%$$

of the total metabolic cost is associated with actually performing negative work. In general, to understand the energetic effect of adding independent hip flexion and retraction actuation, (1) a mechanical energy analysis is required to explain how hip actuation influences ankle/hip positive/negative work and collision loss; and (2) a metabolic energy analysis is required to explain how these mechanical work components, weighted by the metabolic efficiencies of performing positive or negative work, together with step length determine the MCOT. These analyses are applied in the following sections.

Energetics of ankle actuation

Before investigating the effects of hip actuation on MCOT, we first present, for different speeds, the MCOT and step length for only ankle actuation, which serves as a baseline for the addition of hip actuation. At each speed, eight state parameters and two control parameters (ankle spring stiffness and ankle pulse torque) were optimized to obtain the optimal ankle actuation gait with the lowest MCOT. Figure 4 shows the MCOT and step length for optimal ankle actuation gaits at different speeds. From Figs. 4A and 4B, it can be seen that both MCOT (0.19-0.51) and step length (0.5-1.4) increase monotonically with speed (0.16-0.54). A monotonic increase of MCOT with speed, and an almost linear relation between speed and step length, was also found by Kuo (2002).

Can MCOT be reduced by adding only hip flexion actuation?

Hip flexion actuation provides direct control over the swing leg, but it is not clear if, and if so how, the addition of only flexion actuation can reduce MCOT. To investigate this, we studied the model's mechanical/metabolic energy at a low, medium and high speed (0.21, 0.38 and 0.54, respectively). At each of these three speeds, we added increasing amounts of hip flexion actuation, and for each hip flexion actuation we searched for the optimal parameters resulting in a periodic gait with the lowest MCOT. While we succeeded in finding periodic gaits for a range of hip flexion actuation, the feasible range was quite small at high speeds, rendering the hip flexion actuation there negligible. Figures 5A and 6A show the MCOT as a function of hip flexion stiffness for a low and a medium speed, and show that adding hip flexion actuation does not lower MCOT. In fact, the gait with the lowest MCOT was the gait with zero flexion actuation (indicated by the blue dots in Figs. 5A and 6A). The stick diagrams for zero hip flexion actuation (and thus only ankle actuation) and for the highest value of hip flexion actuation (leading to a 16.8% increase in MCOT) at a low walking speed are shown in Fig. 5B. From this figure, it is clear that the hip flexion actuation raised the swing leg, which caused a greater collision loss, mainly due to larger (negative) impulsive work in the vertical direction, as shown in Fig. 5C (and similarly for a medium speed, see Fig. 6C). The vertical impulsive work is the product of the vertical impact velocity and the vertical impulse. Figures 5D and 6D show that both the vertical impact velocity and the vertical impulse increased when adding hip flexion actuation. An additional reason for an increase in MCOT (Fig. 5A) could be a decrease in step length.
However, the step length decreased only slightly at a low speed (see Fig. 5B) compared to the increase in metabolic work per step (see the solid curve in Fig. 5F), and step length even increased at a medium speed (see Fig. 6B). Figures 5E and 5F show the mechanical and metabolic work components for a gait at a low speed. From these two figures it is clear that the increase in hip positive mechanical (and metabolic) work exceeds the reduction in ankle positive mechanical (and metabolic) work, leading to higher total metabolic work per step. To conclude, hip flexion actuation causes an increase in MCOT mainly through the higher collision loss from a larger vertical impact velocity and vertical impulse, which requires larger total mechanical, and thus metabolic, work at the ankle and hip joints to compensate.

Can adding hip retraction actuation in addition to hip flexion actuation reduce MCOT?

As shown above, adding hip flexion actuation resulted in a higher MCOT compared to ankle actuation, mostly due to the larger vertical impact velocity and vertical impulse leading to an increase in collision loss. Adding hip retraction actuation could potentially reduce the collision loss by reducing the vertical impact velocity and impulse. However, the reduced collision loss would come at the cost of higher hip negative work, which, as discussed before, requires an equal amount of positive work to compensate. Moreover, hip retraction actuation reduces step length, which leads to a higher MCOT. Motivated by these tradeoffs, in this section we investigated the effect on MCOT of adding hip retraction actuation for a given hip flexion actuation. We found that only ankle actuation led to the lowest MCOT at both low and high speeds, as discussed later in this section. At medium speeds, the addition of (optimal) hip retraction actuation reduced MCOT compared to any given flexion actuation gait, as can be seen in Fig. 7A. Interestingly, for zero hip flexion actuation, adding optimal retraction actuation led to a gait with a lower MCOT than the only-ankle-actuation gait (Fig. 7A). To understand why adding only retraction actuation is optimal, we investigated the effects of adding retraction actuation (with no hip flexion actuation) on collision loss, impact velocity and impulse, as illustrated in Figs. 7C and 7D. From these figures, it can be seen that the collision loss decreases with increasing hip retraction actuation. The stick diagram in Fig. 7B provides an intuitive reason why this is so: the swing heel stays closer to the ground for the optimal hip retraction actuation gait, resulting in a lower vertical impact velocity (Fig. 7D). Figure 7E shows that increasing retraction actuation reduced collision loss and only slightly increased hip negative work, resulting in overall lower total positive and negative mechanical work. The optimal retraction actuation led to a reduction in metabolic work per step (15%, see Fig. 7F), but also a reduction in step length (9.6%, see Fig. 7B), together resulting in about a 5.7% reduction in MCOT compared to only ankle actuation (Fig. 7A). At low and high speeds, adding hip flexion actuation with optimal retraction actuation led to a higher MCOT. Figure 7G shows that at a low speed, adding hip retraction actuation reduced collision loss. However, the increase in required hip negative work was larger than the reduction in collision loss, which resulted in more positive work from the ankle and hip and thus higher total metabolic work.
Combined with the shorter step length caused by adding only hip retraction actuation, this led to a higher MCOT at these lower speeds. Figure 7H shows that at a higher speed, increasing hip retraction actuation actually led to a larger collision loss due to a larger vertical impact velocity. As such, both the collision loss and the negative hip work increased with hip retraction actuation, requiring substantially more total positive work and higher metabolic work per step, resulting in a higher MCOT. To conclude: ankle actuation with some hip retraction actuation was optimal in terms of MCOT at medium speeds (0.32-0.44), with average and maximal reductions of 4.6% and 6% in MCOT compared to only ankle actuation. At all other speeds, only ankle actuation was optimal. The optimal gaits with hip retraction actuation at medium speeds had a low swing heel trajectory above the ground, decreasing the vertical impact velocity, which reduced collision loss.

DISCUSSION

We evaluated the independent effect of hip flexion and retraction actuation on the MCOT at different speeds in a simple model of human walking. We found that ankle actuation only was optimal at low and high speeds, and only at medium speeds did the addition of hip retraction reduce MCOT (by maximally 6%) compared to ankle actuation.

Effects of hip actuation on collision loss

The relation between impact velocity, impulse and (ground) impulsive work (Eq. (4)) has been shown analytically in Font-Llagunes & Kövecses (2009); in Supplemental Material B we provide a more intuitive proof. To the best of our knowledge, this relation has not yet been applied to demonstrate the effect of hip actuation on collision loss. We found that adding hip flexion actuation leads to a higher collision loss due to both a larger vertical impact velocity and a larger vertical impulse. The opposite is true for adding hip retraction actuation. However, reducing the collision loss does not necessarily yield a lower MCOT, because a lower impact velocity requires the hip retraction actuation to perform more negative work. This comes with an increase in total positive work and metabolic work per step and results in a higher MCOT.

Effects of hip actuation on MCOT

MCOT is determined by metabolic work per step and step length, and metabolic work is computed from ankle and hip mechanical work. To analyze the mechanisms of optimal gaits, it is useful to study the effects of hip actuation on the mechanical work components (ankle/hip positive/negative mechanical work, collision loss) and step length, and how these components contribute to the MCOT. Here, adding hip flexion actuation increased the collision loss substantially due to a larger vertical impact velocity and vertical impulse, resulting in higher total positive mechanical work, metabolic work per step and MCOT. Moreover, at medium speeds (0.32-0.44), adding hip retraction actuation with zero flexion actuation led to a larger (percentage) reduction in metabolic work per step than the reduction in step length. This resulted in a reduction in MCOT of maximally 6% compared to ankle actuation only. At low and high speeds, such an energetic benefit of hip retraction actuation was absent. Apparently, at low speeds the collision loss is already small and the reduction in collision loss does not outweigh the increase in hip negative work, leading to higher total mechanical work (Fig. 7G) and a higher MCOT.
At high speeds, adding hip retraction actuation causes an increase in collision loss (Fig. 7H) due to a higher vertical impact velocity, resulting in a higher MCOT. We investigated the effects of hip flexion and retraction actuation on MCOT in a 'simple' model with the aim of understanding the mechanisms underlying these effects. These predictions depend on specific model features, such as the actuation types, and on the assumed metabolic cost function. Therefore, we elaborate next on how these model details and assumptions influence the predictions we made about optimal gaits and the underlying mechanisms.

Model simplifications

Our model lacked, e.g., knee, trunk and muscle dynamics, because we sought to highlight the fundamental mechanisms underlying the optimal gaits. Here, we justify some of the simplifications we made and also discuss some pitfalls. Our ankle actuation model consisted of an ankle spring and a pulse torque, which allowed for adjustments of push-off timing and magnitude, and the resulting angle-torque relation is roughly similar to that in human walking (Zelik et al., 2014). The pulse torque was initiated when the angle changed from dorsiflexion to plantarflexion (see Fig. 2A), while in humans the increase in ankle torque is more gradual (Shamaei, Sawicki & Dollar, 2013). This results in gaits with step lengths larger than 1 m at medium and high speeds in our model, because the shorter rise time of the ankle torque (compared to human ankle torque) can lead to earlier peak angular acceleration of the ankle and thus a larger ankle angle at contralateral heel strike. We modelled spring-like hip actuation, which differs from the burst-like hip torques observed in human walking (Doke, Donelan & Kuo, 2005). However, spring-like and burst-like hip actuation can modulate the step frequency similarly (Kuo, 2002). Our model was also able to generate hip extension torques after heel strike (see Fig. 3C), which is an important source of work for push-off in human walking (Browne & Franz, 2017; Umberger, 2010; Winter, 1983). Therefore, the ankle and hip actuation can be considered realistic in their roles in push-off propulsion and swing leg control, and thus capture important characteristics of human walking.

Metabolic cost simplifications

The metabolic cost of muscle contraction can generally be partitioned into muscle (de-)activation and cross-bridge cycling (Homsher & Kean, 1978; Woledge, Curtin & Homsher, 1985). There have been several attempts to model the link between muscle mechanics and muscle energetics (Anderson & Pandy, 2001; Bhargava, Pandy & Anderson, 2004; Lichtwark & Wilson, 2005; Umberger, Gerritsen & Martin, 2003) or to directly predict muscle energetics from cross-bridge models (e.g., Huxley models; cf. Huxley, 1957; Julian, 1969; Lemaire et al., 2016). The extent to which these models are capable of adequately predicting the metabolic cost of muscle contraction and, in addition, of locomotion involving complex musculoskeletal models is still under debate. Several studies have investigated the relative importance of various muscle mechanical factors (e.g., force production, force rate and work; cf. Doke & Kuo, 2007; Kuo, 2001; Umberger & Rubenson, 2011) in predicting metabolic cost, but the reported results were not consistent (Beck et al., 2022; van der Zee & Kuo, 2021), and the predictions were "very sensitive to the metabolic model, muscle model and neural controller" (Hicks et al., 2015; Miller, 2014).
All in all, accurately predicting metabolic cost is far from straightforward. Here, we chose metabolic work as a proxy for the overall metabolic cost of walking. The validity of this proxy is supported by the fact that in both isolated leg swinging (Doke, Donelan & Kuo, 2005) and locomotion (Riddick & Kuo, 2022), joint mechanical power and metabolic power are monotonically related, suggesting that lower metabolic work can serve as a proxy for lower metabolic cost and higher energy efficiency. Still, we overestimated the metabolic cost at the ankle and hip joints (the dimensionless MCOT for humans is 0.23 versus 0.38 for our optimal gait at the human preferred speed), because humans generate burst-like rather than spring-like hip torques, and because of the storage and release of elastic energy in tendons. For instance, the Achilles tendon was shown to passively store and release up to 50% of the total mechanical work involved in a gait cycle of running (Ker et al., 1987; Sasaki & Neptune, 2006). As a result, our optimization results should be interpreted under the assumption that all ankle and hip joint torques are actively generated by muscles, with implications for human walking discussed in the next paragraph. The exact differences between the predictions made from our model and predictions that could have been made from other (more complicated) models are beyond the scope of the current work.

Implications for human walking

We investigated the effect of hip flexion and retraction actuation on the MCOT in a simple model. The energetic effect of varied hip and ankle actuation was previously investigated in modeling (Kuo, 2002; Neptune, Sasaki & Kautz, 2008) and experimental studies (Lewis & Ferris, 2008; Pieper et al., 2021; Teixeira-Salmela et al., 2008; Umberger & Martin, 2007). These studies indicated that strong hip actuation increases MCOT when the hip actuation cost is included, and is beneficial when this actuation cost is ignored. In our model, we likewise found that the MCOT decreased with larger 'free' symmetric hip actuation; see Supplemental Material D. In human walking, hip flexion torques (~0.4 and ~1 Nm per kg of body mass at slow and fast speeds) are larger than hip retraction torques (0 and ~0.3 Nm per kg of body mass at slow and fast speeds) during the swing phase (Winter, 1984). The substantial hip flexion torques found at all speeds in humans thus cannot be explained by our model prediction that any hip flexion actuation is inefficient. The reasons for this disagreement may be that we overestimated the metabolic cost of hip actuation compared to humans, that step lengths in human walking are generally smaller than our model predicts, and that hip flexion torques may serve other objectives such as trunk angular control (Nott et al., 2010), trunk stabilization and gait robustness (Deng, Zhao & Xu, 2017; Rummel & Seyfarth, 2010; Wisse, Hobbelen & Schwab, 2007). For instance, Deng, Zhao & Xu (2017) found that adding hip torsional springs between the torso and the legs is necessary for trunk stabilization in a simple compass walker model. Hasaneini et al. (2013) used a telescoping ankle- and hip-actuated walker model with a trunk to optimize the MCOT, and found that ankle push-off and hip actuation contributed equally to the metabolic cost.
Note that our model did not include the trunk because our main goal was to study the energetic effect of hip actuation, whereas for a walker model with a trunk this energetic effect is entangled with trunk stabilization, making it difficult to interpret. The absence of hip retraction torques at slow speeds in human walking is consistent with our predictions, suggesting that the energy inefficiency of hip retraction actuation at slow speeds is likely responsible for its absence in human walking. The considerable magnitude of hip retraction torques in normal-speed walking (~0.1 Nm per kg of body mass; see Winter, 1984) also agrees with our prediction that some retraction actuation at medium speeds is energy efficient. At high speeds, humans generate larger hip retraction torques, which disagrees with our prediction that ankle actuation only is optimal. The reason for this disagreement may be that a higher step frequency requires at least some hip retraction actuation, and that hip retraction torques serve other objectives, such as improving gait robustness, as has been suggested in modeling studies (Hobbelen & Wisse, 2008; Wisse, Atkeson & Kloimwieder, 2005). Taken together, our study showed at best limited energetic benefits of hip flexion and retraction actuation across speeds, whereas humans in general apply larger hip flexion and retraction actuation than our predictions, suggesting that hip actuation in humans likely plays other roles, such as trunk stabilization and improving gait robustness.

CONCLUSIONS

We studied the effects of independent hip flexion and retraction actuation on the MCOT at different speeds. Ankle actuation only is optimal at low and high speeds. Adding hip retraction actuation can lead to a modest decrease in MCOT compared to ankle actuation only (maximally 6%) at medium speeds. The mechanism for this lower MCOT is that the reduction in collision loss exceeds the associated increase in hip negative work (both of which must be compensated by positive mechanical work), so that the metabolic work per step falls by more than the step length does. Hip flexion actuation, in contrast, does not appear beneficial because it increases the collision loss through a larger vertical impact velocity and vertical impulse.

ADDITIONAL INFORMATION AND DECLARATIONS

Funding

Sjoerd M. Bruijn and Jian Jin are funded by a VIDI grant (no. 016.Vidi.178.014) from the Dutch Organization for Scientific Research (NWO). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Grant Disclosures

The following grant information was disclosed by the authors: Dutch Organization for Scientific Research (NWO): 016.Vidi.178.014.
9,233
2022-09-19T00:00:00.000
[ "Engineering" ]
Recurrence analysis of phase distribution changes during boiling flow in parallel minichannels

In this paper, flow boiling in three parallel minichannels with a common inlet and outlet area was examined. The synchronization between flow distributions in the minichannels was analyzed locally (via image analysis) and through the synchronization between inlet and outlet pressure fluctuations. These processes were studied using cross recurrence plots. The analysis of pixel brightness changes inside the minichannels was applied to identify the similarity of flow pattern changes inside the minichannels. The results reveal that the processes of synchronization have a negative impact on the water inlet and outlet temperature and on the inlet and outlet pressure oscillations: during synchronization, high-amplitude oscillations of temperature and pressure occur. This behavior is caused mainly by reverse flow. It is shown that recurrence analysis of inlet and outlet pressure oscillations can be used to assess boiling synchronization in minichannels.

Introduction

In systems with parallel minichannels, dynamic instabilities are accompanied by synchronization of processes in neighboring minichannels: temperature fluctuations [1], pressure oscillations [2] and changes of two-phase flow patterns [3]. Two-phase flow instabilities are related to periodic or non-periodic oscillations of heat and mass transfer [4]. In heat-exchangers with minichannels, a thinner gap between the channels and a common inlet area induce the processes of synchronization [5]. Recent advances in technology have led to greater attention being given to investigations of the performance of mini/microchannel devices. Two-phase flow in mini/microchannels is characterized by the ability to remove a large amount of heat in a limited space. However, maldistribution during these processes is observed, which reduces thermal and hydraulic performance [6]. Research is still being carried out in order to investigate and control the occurring maldistribution (e.g., non-uniform heating, flow maldistribution, uneven pressure distribution) [7][8][9]. In this paper, flow boiling in three parallel minichannels with a common inlet area was examined using cross recurrence plots (CRP). A qualitative analysis of the CRPs considering only diagonal lines was performed by applying the diagonal cross-recurrence profile (DCRP) analysis [10]. The synchronization between flow distributions in the minichannels was analyzed locally (image analysis) and through the synchronization between inlet and outlet pressure fluctuations. An asymmetry of synchronization between pairs of channels was observed. This asymmetry is a result of the heat-exchanger construction, which consists of two parts: one made of copper, the other of Teflon. This geometry was designed to avoid heating the water in the common inlet plenum. In this construction, the axis of the common inlet plenum cannot be aligned with the axis of the middle channel (because of the O-rings). The results reveal that the processes of synchronization have a negative impact on the water inlet and outlet temperature and on the inlet and outlet pressure oscillations. These phenomena are caused mainly by reverse flow. It is shown that recurrence analysis of inlet and outlet pressure oscillations can be used to assess boiling synchronization in minichannels.
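For reference, the core of the cross recurrence computation used below can be sketched in a few lines. This is a minimal illustration under simplifying assumptions: the delay embedding with parameters m and τ is omitted, and the function names are ours, not those of the MATLAB CRP package used by the authors.

```python
import numpy as np

def cross_recurrence_plot(x, y, eps):
    """CRP of two trajectories x (N, d) and y (M, d) embedded in the same
    phase space: CRP[i, j] = 1 where ||x_i - y_j|| <= eps, else 0."""
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return (dist <= eps).astype(int)

def diagonal_cross_recurrence_profile(crp, max_lag):
    """Recurrence rate RR_t on each diagonal at lag t.  The density of
    recurrence points on a diagonal equals sum_l l*P_t(l) / (N - |t|),
    so this matches the diagonal-line-histogram definition of RR.
    t = 0 is the main diagonal (line of synchronization, LOS)."""
    return {t: float(np.diagonal(crp, offset=t).mean())
            for t in range(-max_lag, max_lag + 1)}
```

The lag at which RR_t peaks indicates the time shift at which the two signals are most similar; synchronization corresponds to a peak at lag t = 0.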
Experimental setup and data characteristics

In Fig. 1, the experimental setup and an example frame of the recorded video are presented. The heat-exchanger with three parallel minichannels was heated using electric power (Fig. 1a, q_sup). The positioning of the minichannel element was based on O-rings. A vertical row of three K-type thermocouples was installed under the minichannels in the copper block (Fig. 1a). The internal dimensions of the minichannels were 0.25 mm (width) × 0.50 mm (depth) × 32 mm (length), with a 0.25 mm wide wall between the channels. The minichannels were covered with a plexiglass cover which allowed observation of flow boiling inside the minichannels. A high-speed camera (Fig. 1a-8) was used to record flow patterns inside the minichannels. The videos were recorded at 2000 fps. The working liquid was distilled water. A gear pump (Fig. 1a-1) was used to pump the water to the compressible volume tank (Fig. 1a-2). A Coriolis mass flow meter (Fig. 1a-3) was used to measure the water mass flow rate. Thermocouples (Fig. 1a, T_in, T_out) were placed in the water inlet and outlet common areas. Pressure sensors were used to measure the pressure in the inlet and outlet common areas (Fig. 1a-4, 6). Two data acquisition systems were used to record all signals at a frequency of 1 kHz. The registration of signals and films was synchronized. The system did not contain any automatic control of the mass and heat flux; therefore, the pressure oscillations generated during the experiment were related to the studied phenomenon of flow boiling. During the experiment the electric power was constant, but the water flow rate varied in the range ṁ = 175-570 g/h. The average heat flux (q) vs. water flow rate (ṁ) is shown in Fig. 2, where a 3rd-degree polynomial was used to approximate the values of the average heat flux. When the water flow rate was low, the boiling was intense and the values of heat flux were high. With increasing water flow rate, a decrease in heat flux was observed (the boiling was less intense).

Table 1: Flow characteristics based on visual observations.

In Table 1, examples of different flow patterns observed near the inlet common area are presented. When the water flow rate was low, the boiling was intense and a higher content of vapor in the channels was observed (ṁ = 175 g/h). We also observed vapor reverse flow in the inlet common area, which created vapor bubbles in this area (ṁ = 175 g/h, ṁ = 348 g/h). With increasing water flow rate (ṁ = 383 g/h), fragmentation of vapor slugs occurred and the amount of liquid in the minichannels increased. Short slugs and vapor bubbles were filling the minichannels near the inlet area (ṁ = 383 g/h). During less intense boiling, the changes of flow patterns occurred more rarely and a higher content of liquid phase was observed in the channels (ṁ = 472 g/h). An analysis of pixel brightness changes near the inlet area based on experimental data gathered in the same experimental setup (Fig. 1) has been performed previously [11]. The authors observed that during intense boiling, reverse flow occurred, resulting in the formation of vapour bubbles in the inlet area (further called 'reverse flow bubbles', RFB). The creation and movement of the RFB influenced the flow patterns observed in the minichannels near the inlet area. During intense boiling, the RFB were created (oscillated) almost periodically in the inlet area and pushed back into the minichannels.
This caused the occurrence of rather long vapour slugs in the minichannels (Table 1, ṁ = 175 g/h). During less intense boiling, the RFB were created rather separately at each minichannel. Rarely, vapour from an RFB was pushed back into the minichannels, and in this way small vapour bubbles and short slugs were filling the minichannels (Table 1, ṁ > 383 g/h).

Phase distribution during boiling flow

In order to analyze the changes of phase distribution in the minichannels, the pixel brightness in the middle part of each minichannel was summed in an area (called a 'gate') denoted in Fig. 1b with the letter S, located 16 mm away from the inlet area. When vapor filled the minichannel, the pixel brightness was lower than when liquid occupied the channel. However, the front of vapor slugs was often represented by white pixels (light reflection occurred). The dimensions of the gates were 0.40 mm × 0.25 mm. The width of the gate corresponded to the width of the minichannel, while the length of the gate corresponded to the length of a short slug. The sum of pixel brightness indicated the phase distribution (presence of vapor or water) at a particular time. In Fig. 3a, pixel brightness changes (sum) in two minichannels during very intense boiling are presented as a function of time. High oscillations of pixel brightness correspond to the occurrence of short and long vapor slugs, whose fronts reflect the light during the experiment. Figure 3b shows pixel brightness changes in two neighboring minichannels during less intense boiling; here a rather constant level of pixel brightness corresponds to the occurrence of liquid flow in the analyzed part of the minichannel. Due to the small dimensions of the analyzed parts of the minichannels (0.4 mm × 0.25 mm), the calculated sum of pixel brightness is directly related to the changes of phase distribution in the gates.

In order to quantitatively analyze the phase distribution changes and the process of flow synchronization in the parts of neighboring minichannels, the diagonal cross-recurrence profile (DCRP) analysis was applied [12]. We applied the CRP algorithm to the recorded visual data in order to benchmark the similarity of the states occurring in parallel minichannels during flow boiling. The cross recurrence plot (CRP) is a special variation of the recurrence plot. Compared to a recurrence plot, the CRP is used to describe the similarity between the states of two dynamical systems x_i and y_j (i = 1,...,N; j = 1,...,M) which are embedded in the same phase space. The cross recurrence plot is a matrix with dimensions N × M described by the following relationship [13]:

$$\mathrm{CRP}_{i,j} = \Theta\left(\varepsilon - \lVert x_i - y_j \rVert\right), \quad i = 1,\dots,N;\; j = 1,\dots,M$$

where Θ is the Heaviside step function and ε is the diameter of the sphere inside which the distance of two points is tested. The CRP shows all system states in which points on the trajectory of one dynamical system are close to points on the trajectory of the other dynamical system. If the distance between points x_i and y_j is less than or equal to ε, then CRP_{i,j} = 1; otherwise CRP_{i,j} = 0. In Fig. 4, examples of CRPs calculated based on the pixel brightness changes (sum) in two neighboring minichannels (1 and 2) are shown. For a higher water flow rate (Fig. 4b), a higher density of the CRP is observed. In Fig. 4a, b the processes observed in both gates are very dynamic; states with only liquid or only vapor occur rarely in the gates. The recurrence in this case is calculated between different phase distributions in the gates. For a water flow rate equal to 570 g/h (Fig. 4c), the CRP is almost completely blackened.
This is related to a very high content of liquid inside the minichannels. In this case the recurrence is related to the occurrence of only one phase in both gates, and the dynamics of phase change is low. Thus, further cross recurrence analysis of pixel brightness was performed for ṁ < 400 g/h. A quantitative evaluation of CRPs is performed using cross recurrence quantification analysis (CRQA) [12]. The diagonal cross recurrence profile (DCRP) [14] is a type of CRQA analysis. This method is limited to the analysis of diagonal lines visible on the CRP. The most important coefficient used in the DCRP analysis is the recurrence rate (RR), calculated using the following formula [12]:

$$RR_t = \frac{1}{N - t} \sum_{l=1}^{N-t} l\, P_t(l)$$

where t is the characteristic time shift defining the distance from the main diagonal line, and P_t(l) is the histogram of the l-length continuous diagonal lines (t = 0 denotes the main diagonal line, also called the line of synchronization, LOS). The RR coefficient counts the recurrence points on diagonal lines. The analysis carried out in this way yields a graph of the RR coefficient as a function of the lag t, which is called the diagonal cross recurrence profile. The highest synchronization between the two time series is observed for lag = 0 and for the highest RR.

Figure 5 presents RR functions vs. water flow rates for k = 4000 diagonal lines (lags). The analysis of the RR function vs. water flow rates was performed for ṁ < 400 g/h. Figure 5a shows RR functions for minichannels 1 and 2, Fig. 5b for minichannels 2 and 3, and Fig. 5c for minichannels 1 and 3. The RR functions obtained for the two pairs of minichannels 1-2 and 1-3 are very similar (Fig. 5a, c). For water flow rates in the range of 210-281 g/h the RR function increases: similar flow patterns and flow dynamics are identified in the minichannels. The maximum of RR for both of those pairs of minichannels (1-2 and 1-3) is observed for ṁ = 281 g/h and indicates flow boiling synchronization in the analyzed parts of the minichannels. In this case, very intense flow boiling with reverse flow, long vapor slugs filling the minichannels and several small bubbles were observed (Table 1). A high content of vapor was filling the minichannels. When flow boiling becomes less intense (ṁ > 281 g/h), the RR function decreases. A lower value of RR corresponds to a change of two-phase flow patterns and flow dynamics in neighboring minichannels. Figure 5b represents an analysis of synchronization between minichannels 2 and 3. During very intense boiling (ṁ = 175-210 g/h) the first maximum of the RR function is observed (ṁ = 175 g/h). The second maximum of the RR function is shifted towards flows with a slightly higher water flow rate (ṁ = 316 g/h) than in the case of minichannels 1-2 and 1-3 (Fig. 5a, c). The maximum value of RR is also higher than in Fig. 5a and c. This indicates that the highest boiling synchronization is observed between minichannels 2 and 3. We suppose that the flow between those minichannels may be influenced by the intense and periodic movement of the RFB described in Sect. 2. The maximum values of RR in Fig. 5 occur mostly for lag = 0, but also for other values of the lag. This is caused by the fact that the highest synchronization was observed in the gates when short slug flow occurred: if a vapor slug was longer than the length of the gate, then the maximum of RR was observed at other time lags (not only at lag = 0). This is also visible in the CRPs in Fig. 4, where the diagonal lines are not parallel to the main diagonal line, so the analysis based on time lags is not always clear.
The DCRP analysis of pixel brightness changes inside the minichannels was applied to identify the similarity of flow pattern changes inside the minichannels. However, it lacks information regarding flow synchronization between inlet and outlet pressure fluctuations. Moreover, in [15] the authors stated that the analysis of pressure distribution can be a potentially effective method for detecting flow maldistribution and its intensity. We therefore performed a DCRP analysis of the pressure oscillations recorded at the inlet and outlet common areas.

Pressure analysis

In order to assess flow synchronization between the inlet and outlet common areas, we analyzed pressure oscillations recorded at the inlet and outlet of the minichannels. The pressure oscillations registered at different water flow rates were used for the DCRP analysis. Each signal was normalized before the analysis. Then, the CRPs were created. Two examples of analyzed pressure signals are shown in Fig. 6. Additionally, the standard deviations of the inlet pressure (σ_pin) and outlet pressure (σ_pout) were calculated and included in the Fig. 6 caption. The standard deviation of the inlet and outlet pressure oscillations is much higher during intense boiling (Fig. 6a). The parameters m, τ and ε for the CRP analysis were determined for the pressure oscillation signal recorded at the inlet to the minichannels. The value of ε was set to 15% of the maximum attractor diameter. The value of m was estimated using the false nearest neighbors method [16]. The value of τ was determined using the mutual information method [16]. All diagonal lines appearing on the CRPs were quantified by the RR coefficient, calculated using the DCRP function from the CRP package in MATLAB. Figure 7 shows the RR coefficient for the main diagonal line (lag = 0) calculated based on image analysis (RR1-2, RR2-3, RR1-3) and based on inlet and outlet pressure fluctuations (RRp).

Conclusions

In Fig. 7, the RR coefficient for the main diagonal line is presented for the analyzed water flow rates. The greatest synchronization of pixel brightness changes in two pairs of minichannels (1-2 and 1-3) was observed for water flow rates in the range of 246-350 g/h. For minichannels 2-3 the amplitude of RR is much higher and the peak is observed for water flow rates in the range of 280-350 g/h. This indicates that the highest boiling synchronization is observed between minichannels 2 and 3. The asymmetry in this case stems from the geometry of the setup, which affected the movement of the 'reverse flow bubble' in the inlet common plenum. The RR coefficient calculated between the inlet and outlet pressure signals (RRp, lag = 0) has its first peak for water flow rates in the range of 280-350 g/h. This corresponds to the high boiling synchronization observed for minichannels 2 and 3, which was influenced by RFB movement. In order to assess the flow stability during high boiling synchronization, the standard deviations of the inlet and outlet temperature and pressure were calculated and presented in Fig. 8. The highest synchronization of phase distribution during flow boiling was identified for water flow rates from 280 g/h up to 350 g/h (Fig. 7). The beginning of the synchronization is closely related to reverse flow and RFB movement, as the standard deviation of the inlet temperature and pressure is highest there (Fig. 8).
Fig. 7: The RR coefficient (DCRP analysis) for the main diagonal line (lag = 0) for pixel brightness changes in minichannels (RR1-2, RR2-3, RR1-3) and for the inlet and outlet pressure signal (RRp) vs. water flow rate (ṁ).

Fig. 8: The standard deviation of boiling parameters registered at ṁ = 175 g/h, 281 g/h, 316 g/h, 383 g/h and 570 g/h: (a) inlet and outlet temperature (SD(Tin), SD(Tout)); (b) inlet and outlet pressure (SD(pin), SD(pout)).

It can be concluded that the processes of synchronization have a negative impact on temperature and pressure oscillations. During synchronization processes an increase in temperature and pressure oscillations is observed: the heat flux decreases and the heat exchange is not uniform [6,8]. With increasing water flow rate, reverse flow occurs more rarely, the oscillations of the water inlet temperature decrease, and the standard deviation of the inlet pressure decreases. The oscillations of the outlet temperature and outlet pressure also decrease: the process of boiling becomes less intense and is less likely to generate reverse flow. The decrease in (or lack of) synchronization causes a decrease in the intensity of reverse flows. The results show that recurrence analysis of inlet and outlet pressure oscillations can be used for the assessment of boiling synchronization in minichannels.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Data availability statement The data that support the findings of this study are not openly available due to the large amount of processed data and are available from the corresponding author upon reasonable request in a controlled access repository where relevant.

Conflict of interest The authors have no competing interests to declare that are relevant to the content of this article.
4,625.6
2022-12-15T00:00:00.000
[ "Engineering" ]
Orbital: An Electric and Magnetic Field

Quantum mechanics is a non-intuitive subject that is very difficult to understand. Various investigations have been carried out on understanding basic quantum mechanics, and many attempts have been made to identify appropriate techniques that help conceptual understanding of the basics of the orbital. In this study the orbital is considered as an electric and magnetic field. The electric and magnetic field is generated due to the continuous motion of electrons; hence the orbital is nothing more than an electric and magnetic field. It is because of the electric and magnetic field of the orbital that it possesses electric and magnetic momentum. Taking all these points into consideration, it could be concluded that an orbital is no more than an electric and magnetic field.

Introduction

The orbital is defined as the place or space where there is a probability of locating an electron. As is known, electrons are found only in these small spheres; outside them there is no place, space or area where there is a probability of finding an electron. The idea of energy quantization was introduced into atomic physics in 1913 with the first explanation of the hydrogen electronic structure by the Dane Niels Bohr [1]. Inspired by Planck's theory [2,3] of black-body radiation, Bohr proposed that the electrons in the hydrogen atom can only exist in stationary states with a well-defined energy. Transitions between these states occur by absorption or emission of energy. Bohr held that electrons in such states follow classical circular orbits around the nucleus. The idea of orbitals as probability functions was still to come. Influenced by the interpretation of the Compton effect, the Frenchman Louis de Broglie [4] suggested, in 1924, that the accepted wave-particle duality for photons could be extended to any moving particle, which would then have a wavelength associated with it. The somewhat mysterious wave of de Broglie was the predecessor of the wave function. The wave function contains all of the important properties of the electron: knowing it, we can calculate the value of any measurable quantity. The probabilistic interpretation of the wave function was proposed, also in 1924, by the German Max Born. The wave function is simply related to the position of the electron in space: the square modulus of the wave function is the probability density for finding the particle at the position (x, y, z). The sum of all probabilities over full space is unity, since the particle must be somewhere. Helped by that interpretation, from 1925 to 1927 a pool of young physicists developed the complete theoretical machinery for obtaining the wave function and extracting information from it. An important step was taken in 1925, when the Austrian Erwin Schrödinger [5], inspired by the de Broglie theory [4], proposed a wave approach to quantum mechanics. For the simplest case of a free particle, the Schrödinger equation is

$$\hat{H}\psi = E\psi, \qquad \hat{H} = -\frac{\hbar^2}{2m}\nabla^2$$

where $\hat{H}$ is an operator describing the kinetic energy, which acts on the wave function (ħ is Planck's constant divided by 2π and m the particle mass). If the particle is confined to a limited region of space (a box), the solution of the wave equation leads to a discrete set of energy values. Energy quantization appears, therefore, associated with the localization of the wave function. For a particle under a potential, the operator $\hat{H}$ has to include a potential term.
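As a concrete illustration of this confinement-induced quantization (a standard textbook result, added here for clarity rather than taken from the original paper), the stationary states of a particle in a one-dimensional box of width L with impenetrable walls are:

```latex
% Particle in a 1-D box of width L (infinite walls): localizing the wave
% function to 0 < x < L forces a discrete energy spectrum.
\[
  \psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
  \qquad
  E_n = \frac{n^2\pi^2\hbar^2}{2mL^2},
  \qquad n = 1, 2, 3, \dots
\]
```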
Like the energy, any measurable physical quantity, or observable, has associated with it an operator, which acts on the wave function to yield the wave function multiplied by a real number: the value of the observable that should be measured. In 1927, the German Werner Heisenberg [6] identified incompatible observables, meaning that certain pairs of observables cannot both be measured with arbitrary precision. That is the case for (linear) momentum and position. This so-called uncertainty principle expresses the impossibility of preparing a state for which both position and momentum can be determined with arbitrarily small uncertainties. Also in 1927, with the first observations of electron diffraction by Clinton Davisson and Lester Germer [7] in the United States, and by George Thomson [8] in Great Britain, the fundamental aspects of de Broglie's theory were confirmed. Further observations confirmed the validity of Born's interpretation, the Schrödinger equation and Heisenberg's uncertainty principle. Until now, none of the predictions of quantum mechanics have been contradicted by experiment. When dealing with atomic systems and going beyond the old quantum theory, the classical notion of a particle trajectory has to be abandoned, since, in contrast to Newtonian mechanics, a well-defined position and momentum are no longer possible at a given time. We can only describe the probability for the particle to be at a certain position, or the probability for it to have a certain momentum. Trajectories are replaced by diffuse spatial distributions. These distributions can be represented by surfaces on which all points have the same value of probability density ψ², the so-called isodensity surfaces. Electrons surrounding atoms are concentrated in regions of space described by atomic orbitals. The boundaries of an atomic orbital are conventionally drawn at the surface enclosing 90% of the probability, but the distribution extends to infinity. From the Schrödinger equation we can calculate the wave function of the hydrogen atom and therefore the probability for the position the electron can take [5]. For hydrogen, the energy depends on the principal quantum number n, which is an integer (n = 1, 2, ...). Angular momentum is also an observable. It is found that the angular momentum is quantized according to

$$|\vec{L}| = \sqrt{l(l+1)}\,\hbar$$

with l the angular momentum quantum number (l = 0, ..., n−1). The z-component of the angular momentum is given by

$$L_z = m_l \hbar$$

where m_l is the magnetic quantum number (m_l = −l, ..., +l). In the lowest-energy state (ground state) of the hydrogen atom the electron has a spherical distribution in space, since the wave function has spherical symmetry. At higher energy the orbitals may take other shapes. Spence et al. [9,10] also claim that it is acceptable to use words like "orbital" in different senses, provided that this does not lead to confusion. However, it was precisely because the conflation of the terms orbital and charge density does cause confusion that there is currently some theoretical interest directed at performing orbital-free density functional calculations [11]. But in order not to seem too dogmatic, it must be said that some alternative interpretations of quantum mechanics, such as Bohm's theory, do regard electrons as having definite trajectories. However, this theory has not yet received any experimental evidence that might lead one to prefer it to the currently accepted Copenhagen interpretation [12].
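The 90% boundary convention mentioned above can be made concrete with a short numerical check for the hydrogen ground state. This is our own illustration (atomic units and the standard 1s wave function), not code from the original paper:

```python
import numpy as np

A0 = 1.0  # Bohr radius in atomic units

def psi_1s(r):
    """Ground-state (1s) hydrogen wave function in atomic units."""
    return np.exp(-r / A0) / np.sqrt(np.pi * A0**3)

def radial_probability_1s(r):
    """Radial probability density P(r) = 4*pi*r^2 * |psi|^2."""
    return 4.0 * np.pi * r**2 * psi_1s(r)**2

# Radius enclosing 90% of the probability, found by numerical integration:
r = np.linspace(0.0, 20.0, 20001)
cdf = np.cumsum(radial_probability_1s(r)) * (r[1] - r[0])
r90 = r[np.searchsorted(cdf, 0.90)]
print(f"90% of the 1s probability lies within r = {r90:.2f} a0")  # about 2.66 a0
```

The printed radius, about 2.66 Bohr radii, is the conventional "90% boundary" one would draw for the 1s orbital.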
Results and Discussion

As usual, an orbital may be defined as the place or space where there is maximum probability of finding an electron. Electrons can be found only within these small spheres; outside them there is no place, space or area where there is a probability of finding an electron. There is only a probability of finding an electron; there is no exactness in the location of an electron. The point to be considered here is why electrons exist only in spheres. The answer is that in a multielectron system there occurs some boundary for a particular set of electrons, which does not allow another set of electrons to collapse into it in an isolated atom. In an actual sense, after studying various phenomena it could be concluded that an orbital is nothing other than a field (both electric and magnetic) which is generated around an electron due to the circulation of the electron around its own axis. The electron, being a negatively charged body, is always in motion as per Bohr's theory, and any moving charged body develops some sort of field, either electric or magnetic, as per Coulomb's law [13]. A negatively charged moving electron could develop a much more intense field than a neutral body; so, in an actual sense, an orbital is nothing less than a field developed due to the motion of an electron (Fig. 1). Also, an electric field can be felt only when another field is brought into contact with the first field. The other mechanism supporting the view that an orbital is actually a field is that an orbital possesses orbital angular momentum. If we simply consider an orbital as a place or space, how could it develop angular momentum? Considering the orbital as a field, it does possess momentum, i.e., orbital angular momentum. It has been shown that orbitals possess characteristic shapes: the s-orbital is spherical, i.e., the field developed due to the circulation of an electron in an s-orbital is the same on all sides, or the electric or magnetic flux is the same along all three axes from the point of its origin. In the case of the p-orbital, which has a dumbbell shape, the alignment of the field is only along one axis at once; it could be along the X, Y or Z axis, or it could follow all three axes at once, depending on the presence of electrons in the respective orbitals. The same trend holds for electrons located in the d and f subshell orbitals, respectively (Fig. 2). While discussing the probability of finding an electron, two important concepts are taken into consideration: the radial wave probability and the angular wave probability. It is by the utilization of these two concepts that it has been found that ψ² possesses a physical value whereas ψ alone does not: a single ψ could give information about either the distance of the electron from the nucleus or the angle at which the electron is located from the nucleus, whereas the total ψ², taking both the distance factor and the angle factor into consideration, gives information about the distance and the angle at which the electron is located. An electron always remains in circular motion because it has to develop some force which keeps the electron on its own track or orbit.
The forces that keep the electron in motion and aligned without any support are the centripetal and centrifugal forces which develop due to the circular motion of the electron. During the course of its motion, the electron loses energy in order to maintain pace and to remain on a circular path. The energy released acts as a barrier which also has a circular direction, and so simply acts as an orbital. So an orbital is simply the field developed due to the circulation of an electron, which makes this electron behave like a magnet. During the course of bond formation, as per the valence bond theory and the molecular orbital theory, it is simply stated that the nuclear pull of one atom attracts the electron pairs of another atom, leading to the formation of molecular orbitals. Although there are many mathematical proofs regarding bonding and anti-bonding molecular orbital formation on the basis of the interaction between the electronic cloud and the nuclear portion of the concerned atoms leading to the formation of molecules, the main point of formation could be the orbital-orbital interactions, i.e., the field generated from one atom in the form of an orbital interacts with the field of another atom. Only those atoms interact which have comparable field strength, that is, orbitals having the same energy; this could be the reason that not every atom in the periodic table interacts with every other atom in the periodic table (besides some other chemical properties). The orbitals of the concerned atoms also interact on the basis of the direction of the orbitals, either clockwise or anticlockwise: if the orbital directions of the connecting atoms are the same, either both clockwise or both anticlockwise, a bonding orbital is formed; and if one atom has a clockwise direction and the other an anticlockwise direction, an anti-bonding molecular orbital may form.

Conclusion

On the basis of the above results it could be concluded that bond formation is due to orbital interaction, meaning thereby that it is actually the fields which get connected. Total bond formation involves only the orbitals, although their generation is entirely due to the circulation of the electrons, both on their own axes and around the nucleus. Electrons are simply the sources for the generation of the field, which acts as an orbital. So, in total, it could be concluded that an orbital is nothing other than a field, either electric or magnetic.
2,976.4
2021-01-01T00:00:00.000
[ "Physics", "Education" ]
Grating Lobe Suppression with Element Count Optimization in Planar Antenna Array

This paper describes a novel approach to suppressing the grating lobe level through element count optimization in a planar antenna array. Rectangular lattice (RL) and triangular lattice (TL) structures are chosen for determining the achievable array element patterns (EP) and further suppressing the grating lobe level. The element spacing and number of elements (10 × 20 array) are taken into account for each lattice. Grating lobe peaks are observed for the 200-element planar array at the maximum scan angle (θ) at the set frequency of 3 GHz. Further, it is found that a 14° boresight elevation of the rectangular lattice produces a transformed field of view, which permits a reduction in element count of 20.39% compared with a 10° boresight elevation. Finally, typical values of elevation, element count and array size (25 cm²) are trained using an artificial neural network (ANN) algorithm, and the element count is predicted after testing the network. The network shows a high success rate.

Introduction

Planar antenna array designs with wide operational bandwidths would result in benefits such as the ability to use a single array for wideband or widely separated signals and the ability to share a common aperture for multiple functions. Since fewer openings would be required in a host platform needing to communicate on widely spaced frequency bands, the use of wideband arrays could reduce integration cost and also ease other system-level requirements [1]. Significant analytical and empirical effort is usually required in order to design wideband arrays. This is in part due to grating lobes between array elements, which complicate the array design. Grating lobes result when array elements located in close proximity interact in a manner that alters the element count and element patterns. Because of the difficulty associated with predicting these effects, the grating lobe level is traditionally considered an obstacle to array design. Grating lobes typically tend to make the active element pattern more directive than the ideal element pattern, indicating high scan roll-off of the array. One of the important array design parameters is element spacing. It is often desirable to design a planar array with larger element spacing so that more real estate can be made available for transmission lines and discrete components [2]. However, to avoid the formation of high grating lobes, the element spacing is limited to less than 1 λ0 for a broadside beam design and less than 0.6 λ0 for a wide-angle scanned beam. In designing a wide-angle scanned planar array, rectangular and triangular lattice structures are taken as shown in Figure 1. Selection of the maximum element spacing corresponding to the minimum number of controlled elements in limited-scan arrays results in the presence of array factor grating lobes in real space [3]. A high lobe level is usually undesirable, since it corresponds to lowering the array gain by taking part of the radiated power away from the main lobe [3]. The grating lobes of a planar array antenna are conveniently shown in the projected coordinate space, as given in Equation (1), by making use of the direction cosines u and v, where

$$u = \sin\theta \cos\varphi, \qquad v = \sin\theta \sin\varphi \tag{1}$$
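Before turning to the lattice-specific conditions, the grating-lobe geometry in u-v space can be illustrated with a short script. This is our own sketch, not code from the paper: it assumes the standard rectangular-lattice result that grating lobes replicate the main beam at u = u0 + pλ/dx, v = v0 + qλ/dy, and that only lobes inside the unit circle u² + v² < 1 are visible.

```python
import numpy as np

def grating_lobe_locations(u0, v0, dx, dy, lam, pq_range=3):
    """Grating lobe positions in u-v space for a rectangular lattice:
    (u, v) = (u0 + p*lam/dx, v0 + q*lam/dy) for integers p, q.  Only lobes
    inside the unit circle u^2 + v^2 < 1 (visible space) radiate as real
    angles; (p, q) = (0, 0) is the main beam itself."""
    lobes = []
    for p in range(-pq_range, pq_range + 1):
        for q in range(-pq_range, pq_range + 1):
            if (p, q) == (0, 0):
                continue
            u = u0 + p * lam / dx
            v = v0 + q * lam / dy
            if u**2 + v**2 < 1.0:
                lobes.append((round(u, 3), round(v, 3)))
    return lobes

# Illustration at 3 GHz (lam = 0.1 m): a 0.6*lam spacing scanned to 60 deg
# lets the first grating lobe enter visible space, while 0.5*lam does not.
lam = 0.1
v0 = np.sin(np.radians(60))
print(grating_lobe_locations(0.0, v0, 0.5 * lam, 0.6 * lam, lam))  # [(0.0, -0.801)]
print(grating_lobe_locations(0.0, v0, 0.5 * lam, 0.5 * lam, lam))  # []
```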
For a rectangular lattice array with axes parallel to the array edges, the incipient grating lobes in the array plane, given by Equation (2), determine the optimum lattice spacing [3]:

$$u_p = u_0 + \frac{p\lambda}{d_x}, \qquad v_q = v_0 + \frac{q\lambda}{d_y} \tag{2}$$

Analysis of Grating Lobes

The grating lobe constraint restricts the array element spacing and is a result of the array periodicity. The periodicity imposes constraints on element spacing in order to avoid the formation of unwanted radiation peaks called grating lobes. The first grating lobe is parallel to the array when the beam is scanned to an angle θ′ off boresight, with the length of the array L = Nd (N = number of elements and d = inter-element spacing). The element separation at which the first grating lobe peak emerges from the plane of the array for a beam steered to θ′ is given as [3]

$$d = \frac{\lambda}{1 + \sin\theta'} \tag{3}$$

For θ′ = 60°, the separation d can be as large as 0.536λ before the grating lobe peak emerges from the plane of the array, as defined by Equation (3). For element (k, l), the phase of the element relative to the (1, 1) element, ψ_kl, is given as

$$\psi_{kl} = -k_0\left[(k-1)\,d_x u_0 + (l-1)\,d_y v_0\right] \tag{4}$$

where k0 = 2π/λ0, and the planar array radiation is given by Equation (5),

$$F(\theta,\varphi) = S(\theta,\varphi) \sum_{k}\sum_{l} e^{\,j k_0\left[(k-1) d_x (u-u_0) + (l-1) d_y (v-v_0)\right]} \tag{5}$$

where S(θ, φ) is the array element radiation pattern. The grating lobes occur [4] when each exponent is an integer multiple of 2π, i.e., when

$$u = u_0 + \frac{p\lambda_0}{d_x}, \qquad v = v_0 + \frac{q\lambda_0}{d_y}$$

where p and q are integer numbers, subject to the condition that these lobes fall within the unit circle u² + v² < 1 (Equation (6)). In u-v space, the area within the circle of radius one corresponds to real angles θ and φ (−90° < θ < 90°, 0° < φ < 360°) and is called visible space; the area outside it is called invisible space, with complex angles. For an element distance equal to one wavelength, grating lobes occur in the principal planes (u = 0, v = 0). Choosing the larger spacing for d_y/λ, and with v = 0, generates a lobe (u_0, v_{−1}) as shown in Figure 2. This often yields unsatisfactory results, since grating lobes change the element performance from the isolated response. The array designer must balance avoiding undesired mutual coupling effects against eliminating grating lobes due to the element spacing at the high end of the frequency band [4] [5]. This limits the array bandwidth when designing with traditional elements. Elements that are typically used in wideband arrays also tend to be deep and not amenable to conformal applications. These limitations have prevented array designers from providing array systems that are wideband, planar, and free from grating lobes over a large scan volume. The spacing of the array's elements should be chosen such that grating lobes do not occur when the main beam is steered within the boundaries of the specified field of view [5] [6]. In addition, it is desirable to space the elements as far apart as possible, in the context of the grating lobe constraint, in order to minimize array cost and complexity, since larger separations permit a given aperture to be filled using fewer elements [7].

Optimization of Element Count in Planar Arrays

An artificial neural network is used for this problem, optimizing the element count while taking into account the grating lobe level for a given array area and a variation of the boresight elevation from 10° to 40° with a step of 2°.
The ability of these networks to generalize relationships between inputs and outputs is key to their effectiveness [8]. The accuracy of a properly trained network depends on the accuracy of the data used to train it; therefore, care must be taken while generating training data, whether the data are generated by simulation or experimentally [9]. The generated data patterns are trained and used to retrieve the actual and predicted values for both the rectangular and triangular lattice structures. The inputs of the network are expressed as a vector {X}, the hidden layer is represented by h(X), and the output layer is denoted by {Y}, as represented in Figure 3 [10].

Results and Discussion

Grating lobe patterns are generated for the case of a rectangular lattice planar array antenna structure with element spacing of dx = 0.29λ and dy = 0.5λ. The optimized element count for the triangular lattice element pattern provides element savings of 13%, 13.2% and 25.03% for 10°, 14° and 26° bore sight elevations, respectively, relative to the optimum rectangular lattice. The data patterns are trained and then tested to determine the predicted element count. Table 1 shows the actual and predicted element count values for the inputs selected for the rectangular lattice structure; the network achieves a high success rate. Table 2 shows the actual and predicted element count values for the inputs selected for the triangular lattice structure; here too the network achieves a high success rate.

Conclusion

An approach to grating lobe suppression with array element count optimization using a neural network has been presented. The analysis of grating lobes and the conditions applied for their suppression are explained and evaluated at the maximum scan angle. As the bore sight elevation value is varied for the rectangular and triangular structures, the grating lobes are observed for different inter-element spacings using MATLAB simulation. The element count decreases as the bore sight elevation increases. Comparison of the two lattice structures yields the proper optimization of element count across the range of elevation values. As observed, an optimum element saving of about 25.03% at 26° elevation is achieved for the triangular array when compared with the optimum rectangular array. Element count optimization for grating lobe suppression is obtained using a radial basis function ANN. The network shows a high success rate in predicting the element count for both RL and TL structures.

Figure 2. Grating lobe locations for a two-dimensional rectangular grid array.

Table 1. Actual and predicted values of optimum element count (rectangular lattice).

Table 2. Actual and predicted values of optimum element count (triangular lattice).
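For readers who want the flavor of the radial basis function step, the sketch below is a minimal stand-in for the paper's ANN: Gaussian RBF features over bore sight elevation with a least-squares output layer. The training targets are synthetic placeholders, since the paper's MATLAB-derived element counts are not reproduced here:

```python
import numpy as np

# Synthetic training data: elevation sweep 10..40 deg in steps of 2, with
# fake element counts standing in for the simulated values.
rng = np.random.default_rng(0)
elev = np.arange(10, 41, 2, dtype=float)
count = 200 * (1 - 0.01 * (elev - 10)) + rng.normal(0, 2, elev.size)

centers = elev[::2]   # RBF centres placed on a subset of training inputs
width = 4.0           # assumed Gaussian kernel width, in degrees

def design(x):
    # One Gaussian RBF feature column per centre.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Linear output layer fitted by least squares.
w, *_ = np.linalg.lstsq(design(elev), count, rcond=None)

test = np.array([14.0, 26.0])
pred = design(test) @ w
print(dict(zip(test, np.round(pred, 1))))
```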
Solar cycle variation of the statistical distribution of the solar wind ε parameter and its constituent variables

We use 20 years of Wind solar wind observations to investigate the solar cycle variation of the solar wind driving of the magnetosphere. For the first time, we use generalized quantile-quantile plots to compare the statistical distribution of four commonly used solar wind coupling parameters, the Poynting flux, B², the ε parameter, and vB, between the maxima and minima of solar cycles 23 and 24. We find the distribution is multicomponent and has the same functional form at all solar cycle phases; the change in distribution is captured by a simple transformation of variables for each component. The ε parameter is less sensitive than its constituent variables to changes in the distribution of extreme values between successive solar maxima. The quiet minimum of cycle 23 manifests only in lower extreme values, while cycle 24 was less active across the full distribution range.

Introduction

The 11 year solar cycle is variable, with an extended minimum observed in cycle 23 and a notably quiet maximum in the most recent cycle 24 [Lockwood, 2013; Zerbo and Richardson, 2015]. Satellites in the solar wind upstream of Earth provide a comprehensive data set covering several solar cycles. Variables observed in situ are combined into solar wind parameters that aim to capture the driving of the magnetosphere by the solar wind [e.g., Gonzales, 1990]. This letter focuses on the systematic changes in the statistical distributions of these solar wind parameters between different phases of the solar cycle. The Poynting flux S is the energy flux density carried by the electromagnetic fields of the solar wind plasma. Along with S, we will consider three of the most commonly used parameters: the ε parameter [Perreault and Akasofu, 1978], in which S is scaled to account for energy transport into the magnetosphere [Koskinen and Tanskanen, 2002]; the westward electric field, estimated as v_x B_z [Burton et al., 1975]; and finally, B², found to be more closely related to solar activity [Kiyani et al., 2007]. There has been extensive work on the statistics of solar wind variables. Early work [Burlaga and King, 1979] identified an approximately lognormal probability density function (PDF) of the magnetic field strength. Feynman and Ruzmaikin [1994] showed that the PDF of the interplanetary magnetic field (IMF) field strength could not be exactly lognormal due to nonzero kurtosis, which was substantiated later by Burlaga and Ness [1996], who determined the PDF to be lognormal with an exponential tail but also discussed the possibility of a Pareto tail. Koons [2001] successfully fitted the Gumbel class of extreme value distribution to annual maxima of the 60 MeV proton flux. Moloney and Davidsen [2010] found that the block maxima of the ε parameter follow the Fréchet distribution; however, this fit overestimated the highest values. Since the Fréchet class is the limiting distribution for a variable that has a PDF with a power law tail [Schumann et al., 2012], a Fréchet fit to block maxima of ε would support Burlaga's earlier proposal of a lognormal PDF with a Pareto tail. Burlaga and Lazarus [2000] also found evidence for lognormal distributions of the solar wind speed, density, and temperature, and that these distributions vary with the phase of the cycle. They attributed this variation to the dominance of corotating streams in the solar wind at solar minimum [Tsurutani et al., 2006].
Common to all these findings is the multicomponent nature of the PDFs of solar wind variables. A complementary approach to the statistical analysis of solar wind variables is the use of bursts. These are defined as the integrated signal over periods where the variable continuously exceeds a given threshold; they have been used to compare the solar wind with both solar flares [Moloney and Davidsen, 2011] and geomagnetic indices [Freeman et al., 2000a]. The distribution of burst sizes has been found to be power law with an exponential roll-off [Wanliss and Weygand, 2007; Freeman et al., 2000b], as has the distribution of waiting times of bursts in B_z,S [D'Amicis et al., 2006]. Of these fits, Wanliss and Weygand [2007] found the power law exponents of the ε and (v_x B_z)_S distributions to be solar cycle dependent; however, when looking at higher thresholds, Moloney and Davidsen [2014] found no such dependency. In the related studies of geomagnetic indices, Freeman et al. [2000a] found the distribution of burst lifetimes of AU and AL to be multicomponent, including a power law and exponential cutoff similar to ε, and Hush et al. [2015] found a multicomponent distribution of AE burst size, including a solar cycle-dependent exponential component at extreme values. While these studies have been successful in fitting parts of the distribution of solar wind parameters, there are still open questions, including how the PDF changes over the solar cycle and which of the constituent variables within a parameter such as ε drives these changes. In this paper, we address these questions, with the first application of quantile-quantile (QQ) plots [Gilchrist, 2000] to the comparison of observed distributions of each solar wind parameter measured at different phases of the solar cycle. We examine the changes in the cumulative distribution functions (CDFs) of the coupling parameters S, B², ε, and (v_x B_z)_S in cycles 23 and 24, between each maximum and minimum, between successive maxima, and between successive minima. In each case the distribution is multicomponent, and we find that over a broad range each subcomponent has a functional form that remains unchanged across all phases of the solar cycle. The change in CDF, and hence PDF, is captured by a transformation of variables for each subcomponent; this transformation takes the form P(x) → P(λx + μ), independent of the underlying PDF functional form. Our results quantify the effect of the quiet minimum of solar cycle 23 and the quiet maximum of cycle 24 on the likelihood of observed values of these parameters. We see systematic changes in the PDFs of the constituent variables of ε and identify which variable drives the variation in the statistics of ε. We find that the statistics of ε can be less sensitive to these changes than those of its constituent variables. In section 2 we introduce the data set and solar wind parameters. In section 3 we describe the QQ plot method and apply it to the solar wind Poynting flux. In section 4 we repeat this analysis for the parameters B², solar wind ε, B_z and v_x B_z, and for all parameters across the full solar cycles. We conclude in section 5.

Data

Data from the Magnetic Fields Investigation and Solar Wind Experiment on the Wind spacecraft were provided by the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC) OMNI data set. Data were only taken at those times when the spacecraft was situated in the upstream solar wind.
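A minimal sketch of the burst measure defined above (each burst is the integral of the signal over a continuous excursion above a threshold); the time series, threshold choice and 60 s cadence are illustrative assumptions rather than Wind data:

```python
import numpy as np

def burst_sizes(x, threshold, dt=60.0):
    """Return the integrated above-threshold area of each burst in x."""
    above = x > threshold
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1   # rising edges of the mask
    ends = np.where(edges == -1)[0] + 1    # falling edges
    if above[0]:
        starts = np.r_[0, starts]          # burst in progress at series start
    if above[-1]:
        ends = np.r_[ends, above.size]     # burst still open at series end
    return np.array([np.trapz(x[s:e] - threshold, dx=dt)
                     for s, e in zip(starts, ends)])

rng = np.random.default_rng(1)
signal = rng.lognormal(0.0, 1.0, 10_000)   # heavy-tailed stand-in series
sizes = burst_sizes(signal, threshold=np.quantile(signal, 0.99))
print(f"{sizes.size} bursts, median size {np.median(sizes):.2f}")
```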
During preprocessing, the 15 s cadence interplanetary magnetic field (IMF) and 92 s ion plasma velocity measurements are interpolated to 1 min resolution. The plasma velocity measurements include both fast and slow streams, and all measurements are in geocentric solar magnetospheric (GSM) coordinates. One year of data was taken spanning each of the minima and maxima of cycle 23 and cycle 24, using inclusive dates December 1995 to November 1996 (cycle 23 minimum), October 1999 to September 2000 (cycle 23 maximum), August 2007 to July 2008 (cycle 24 minimum), and November 2013 to October 2014 (cycle 24 maximum). Data gaps are reasonably evenly distributed between successive maxima and minima; they typically cover ∼10% of the data, ranging from ∼5% for the minimum of solar cycle 24 to ∼26% for the maximum of cycle 23. The remaining sample sizes are of order 400,000 samples in each data set. The Wind satellite orbits L1 and is, for our yearlong samples at maxima and minima, within 100 R_E of the Sun-Earth line. It is outside this range for ∼8% of the full data set. The Poynting flux, S, is the characteristic energy flux carried by the solar wind. For ideal magnetohydrodynamics, the electric field is E = −v × B, so that the component of Poynting flux along the Sun-Earth line is approximated by

S = v(B_y² + B_z²)/μ0. (1)

The ε parameter is based on the Poynting flux but is scaled to account for the interaction between the solar wind and magnetosphere. In SI units it is

ε = (4π/μ0) v B² sin⁴(θ/2) l0², (2)

where v is the ion plasma speed, B is the magnitude of the IMF, l0 is a length scale taken to be 7 R_E, and θ = tan⁻¹(B_y/B_z) is the clock angle. The ε parameter depends on all three components of the IMF, while the Poynting flux depends on B_y and B_z only. Energy transfer between the solar wind and the magnetosphere is in part ordered by the flux of southward directed IMF (i.e., negative B_z in GSM), due to the increased rate of magnetospheric reconnection. We therefore calculate the solar wind parameters S and B² for both the full data set and the subset of data where B_z < 0, and the magnetosphere coupling parameters ε and v_x B_z at times of southward IMF only. After removing the northward IMF values, the southward IMF subsets of data contained approximately 200,000 data points. The subscript S will denote the southward IMF only subsets of each parameter.

QQ Plots for Poynting Flux

The quantile-quantile (QQ) plot is a tool for comparing two statistical distributions. The distribution of an independently and identically distributed (iid) random variable X can be described both by its probability density function (PDF), P(x), and the corresponding cumulative distribution function (CDF), C(x), which are related via

C(x) = ∫ from −∞ to x of P(x′) dx′. (3)

C(x) is the likelihood that the value of variable X will occur below a value x (X ≤ x). The CDF can be inverted, so that for any likelihood q, a value x(q) can be calculated such that the likelihood of a measurement of X occurring below x(q) is q. The value x(q) is referred to as the qth quantile of the distribution of variable X; for example, the q = 0.5 quantile of the distribution, x(0.5), is the median: the value below which 50% of the data lie. Consider two iid random variables, X1 and X2, distributed according to PDFs P1(x1) and P2(x2) with corresponding CDFs C1(x1) and C2(x2). The quantiles of the two distributions are x1(q) = C1⁻¹(q) and x2(q) = C2⁻¹(q), where C⁻¹(q) is the inverse of the CDF C(x(q)). The QQ plot then has as its coordinates the quantiles x1(q) and x2(q), where the likelihood q is a parametric coordinate.
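A sketch of how equations (1) and (2) would be evaluated on 1 min records; the constants follow the definitions above, while the sample values are placeholders rather than Wind measurements:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, SI
RE = 6.371e6         # Earth radius, m
L0 = 7 * RE          # length scale l0 used in epsilon

def poynting_flux(v, By, Bz):
    """Equation (1): Sun-Earth component of S (W/m^2) for E = -v x B."""
    return v * (By**2 + Bz**2) / MU0

def epsilon(v, Bx, By, Bz):
    """Equation (2): the epsilon parameter (W) in SI form."""
    B2 = Bx**2 + By**2 + Bz**2
    clock = np.arctan2(By, Bz)   # clock angle, quadrant-aware form of tan^-1(By/Bz)
    return (4 * np.pi / MU0) * v * B2 * np.sin(clock / 2) ** 4 * L0**2

# Illustrative single 1-min sample (not Wind data): 400 km/s, ~5 nT field.
v = 400e3
Bx, By, Bz = 2e-9, 3e-9, -3e-9
print(f"S   = {poynting_flux(v, By, Bz):.2e} W/m^2")
print(f"eps = {epsilon(v, Bx, By, Bz):.2e} W")
```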
If X1 and X2 have the same distribution, their quantiles will be the same, and so a straight "y = x" line of gradient 1 will be recovered. Any other straight line on the QQ plot would be described by

x2(q) = λx1(q) + μ, (4)

in which case the CDFs are related by a transformation of variables: C2(x) = C1((x − μ)/λ). With equation (3) this yields

P2(x) = (1/λ)P1((x − μ)/λ), (5)

so that the corresponding PDFs are related by the same transformation of variables. The scaling parameter λ preserves the normalization, so that the integrated PDF is unity. Hence, if the QQ plot is linear, the functional form of the underlying PDF of both variables X1 and X2 is the same, subject to a transformation of variables from x to (x − μ)/λ. The parameter λ denotes a change in scale, while μ is a shift in location. A positive μ indicates an increase in mean, so that the likelihood of large values of X increases as the whole PDF is shifted. If λ > 1, a given likelihood q corresponds to a larger value of X, resulting in the stretching of the PDF tail. This in turn modifies the raw higher moments of the distribution; however, the standard measures of skewness and kurtosis can be normalized to account for the change in scale [Gilchrist, 2000]. Both λ and μ can be found from the data with a linear regression to the QQ plot; the fitted values are given in Table S1 in the supporting information. In Figure 1, the left column compares the successive minima, the middle column compares the successive maxima, and the right column compares each cycle's maximum to its minimum. In each case we compare the 0.0001 to 0.9999 quantiles. A plot in the same format is given for additional quantities in the supporting information, along with a discussion of uncertainties. Figures 1ai-1aiii show the results for the solar wind Poynting flux. In each case the QQ plots are multicomponent, with three approximately linear regions. A linear region of the QQ plot indicates that the functional form of the PDF in that region does not change; thus, a transformation of variables captures how the distribution is changing over the solar cycle, as described above. These ranges are indicated on the plots. We perform a three-component piecewise-linear fit to these plots. The R² statistic associated with each linear fit (given in Tables S1-S3 in the supporting information) exceeds 0.9 in almost all cases, indicating that the linear transformations provide a good representation of the change in distribution. The QQ plot of Poynting flux with southward IMF, S_S, comparing successive minima (Figure 1ai) shows high and extreme components that transform according to equation (5), with a decrease in the scale of these values but a large (∼10%) increase in their mean. The high and extreme components appear above the y = x reference line, so that the increased activity of the minimum of cycle 24 relative to cycle 23 is manifest mainly in these extreme values, rather than in the bulk of the distribution. Conversely, we can see that in Figure 1aii the quantiles all lie below the y = x line, so that the maximum of cycle 24 was significantly less active than the maximum of cycle 23. Again, the QQ plot can be divided into linear regions, which we denote as bulk, high, and extreme components. These linear relationships again indicate that within each of the components of the underlying PDF, the functional form is the same at both solar maxima. The crossover points between these regions of the CDF occur at q = 0.9800 and q = 0.9988. Unlike the change between successive minima, the bulk component of the distribution changes scale significantly between the successive maxima. The high component of the PDF decreases in scale but slightly increases in mean, while the opposite is true for the extreme component.
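A compact sketch of the QQ construction and the linear regression that recovers λ and μ in equations (4) and (5); synthetic lognormal samples stand in for the measured parameters, and a single line is fitted over the full quantile range, whereas the paper fits three components piecewise:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
x1 = rng.lognormal(0.0, 1.0, n)                   # "cycle 23" stand-in
x2 = 0.5 * rng.lognormal(0.0, 1.0, n) + 0.2       # "cycle 24": scaled and shifted

# Quantile range 0.0001 to 0.9999, as in the text.
q = np.linspace(1e-4, 1 - 1e-4, 2000)
q1 = np.quantile(x1, q)
q2 = np.quantile(x2, q)

# Straight-line fit q2 = lam*q1 + mu recovers the transformation of variables.
lam, mu = np.polyfit(q1, q2, 1)
resid = q2 - (lam * q1 + mu)
r2 = 1 - resid.var() / q2.var()
print(f"lambda = {lam:.3f}, mu = {mu:.3f}, R^2 = {r2:.4f}")
```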
We repeated this analysis comparing the full 11 year data sets of cycles 23 and 24 (see supporting information) and find that cycle 23 is overall more active. Between the successive maxima, the translations of the high and extreme components are sensitive to whether the full data set or the southward IMF subset is included in the CDF. The bulk component undergoes approximately the same transformation of variables in both cases. The change of scale of the high component increases from λ = 0.209(2) for S_S to λ = 0.366(5), while for the extreme component λ is roughly the same, changing from λ = 1.32(20) to λ = 1.42(10). The shift in location μ is ∼50% smaller for the high component and ∼20% larger for the extreme component once the northward IMF values of S are included. The QQ plot thus has three well-defined linear components that translate according to equation (5) between phases of the solar cycle, irrespective of whether or not northward IMF values are included. However, the changes in scale and location (λ and μ) of the underlying PDF are more pronounced if we only consider the southward IMF subset. Figure 1aiii compares the distribution of Poynting flux at the maximum and minimum of each cycle. Again, the QQ plot is composed of several approximately linear regions, indicating the multicomponent and invariant functional form of the PDF discussed above. For both solar cycles, the bulk component occurs close to the origin, so that on this scale the QQ plots appear to show only the high and extreme components. These components again undergo a simple translation (as in equation (5)) between solar minimum and maximum; however, the transformations are different for the two cycles. In cycle 23, the high component has a large change in scale of λ = 9.18(17), while the extreme component shifts in location, μ = 0.291(31), but has a smaller change in scale (λ = 1.57(32)). For cycle 24, the high component remains closer to the y = x line (λ = 0.422(46), μ = 0.0587(43)) and so shows a less pronounced change between solar maximum and minimum. The extreme component does have a significant transformation, with both a large change of scale, λ = 5.90 ± 1.98, and a large shift in location, μ = −0.618(266). Therefore, the quietness of cycle 24 is manifest in the bulk and high components, which are the same at maximum and minimum; only the extreme values above the q = 0.9989 quantile show increased likelihood at maximum relative to the cycle minimum.

QQ Plots for B², ε_S, B_z,S, and (v_x B_z)_S

Figures 1bi-1biii through 1ei-1eiii plot the above solar cycle phase comparisons for the B², B_z,S, ε_S, and (v_x B_z)_S parameters, with the latter three calculated using only values with southward IMF. The QQ plots can again be split into multiple approximately linear components, with R² values consistently above 0.9, so that the PDF functional form is approximately invariant for all parameters; it, again, simply translates as in equation (5). Quantiles of both the full and southward IMF only B² data sets are shown in Figures 1bi-1biii. Qualitatively, these panels show the same behavior as in Figures 1ai-1aiii, indicating that S simply tracks B². In all three plots, the transitions between components occur at similar quantiles for both B²_S and S_S. However, although the trends are the same, the λ and μ parameters differ between S_S and B²_S, suggesting that the solar wind speed does not affect the form of the transformations but does alter the details.
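To make the transformation-of-variables picture concrete, the sketch below fits λ and μ from matched quantiles of two synthetic samples and then checks that mapping x → λx + μ superposes the empirical CDFs, as equations (4) and (5) imply:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.lognormal(0.0, 1.0, 200_000)
x2 = 0.5 * rng.lognormal(0.0, 1.0, 200_000) + 0.2

# Fit the QQ line from matched sorted subsamples (equal-size arrays).
lam, mu = np.polyfit(np.sort(x1)[::100], np.sort(x2)[::100], 1)

# Compare the empirical CDF of x2 with that of the mapped variable lam*x1 + mu.
grid = np.quantile(x2, np.linspace(0.01, 0.99, 50))
cdf2 = np.searchsorted(np.sort(x2), grid) / x2.size
cdf1_mapped = np.searchsorted(np.sort(lam * x1 + mu), grid) / x1.size
print(f"max CDF mismatch after mapping: {np.abs(cdf2 - cdf1_mapped).max():.4f}")
```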
The comparison of the solar maxima (Figure 1bii) also shows the same sensitivity as the Poynting flux to whether all values or the southward IMF only subset is used. Between successive solar minima, the QQ plot for ε_S (Figure 1ci) is qualitatively similar to those of S_S and B²_S (Figures 1ai and 1bi). However, when we consider how the distribution translates between successive solar maxima (Figure 1cii), the distribution of the ε_S parameter transforms in a remarkably different way to the other variables. The PDF retains its functional form and is translated by a single change in scale over the full range. The value of λ for this transformation is 0.501(1), which is similar to the values for the bulk components of S_S and B²_S. The high and extreme values of ε_S thus do not retain the distinct changes between successive maxima seen in S_S and B²_S. Similar behavior of ε_S is found when comparing the maximum and minimum of cycle 23 in Figure 1ciii; that is, the high and extreme components are indistinguishable and translate with a single change of scale, λ = 4.81(9). Now, ε depends upon all three components of the magnetic field, whereas the Poynting flux depends on the y and z components only; ε also includes a factor that depends on the clock angle. In Figure S1, we plot, in the same format as Figure 1, the changes in Bx² and Byz² = By² + Bz², along with the clock angle contribution to ε. We see that the distribution of the clock angle contribution does not change between the different phases of the solar cycles; therefore, this does not explain the relative insensitivity of ε. However, the distributions of Bx² and Byz² translate in opposite directions on the plot; for example, the extreme components have λ < 1 and λ > 1, respectively (values are λ = 0.0602(408) and λ = 1.50(20)). When these are combined within a single parameter, they may tend to suppress these changes. We have repeated this analysis (see supporting information) for the Milan and Newell coupling parameters [Milan et al., 2008; Newell et al., 2007] and find that they are intermediate in sensitivity between ε and the Poynting flux. In common with the Poynting flux, they only depend on the y and z components of the IMF. Within Figures 1di-1diii and 1ei-1eiii, the distributions of the variables B_z,S and (v_x B_z)_S show little change between the two minima except at the extreme values (Figures 1di and 1ei), with less sharp transitions between components. The comparison of the maxima for these variables (Figures 1dii and 1eii) shows three linear components, but again with less clear transitions; this is reflected in the lower R² values, which are 0.9722 and 0.9670 for the high component of B_z,S and (v_x B_z)_S, respectively, compared to 0.9938 and 0.9829 for S_S and B²_S. The similarity of the QQ plots for B_z,S and (v_x B_z)_S again suggests that the magnetic field drives the variation of the distribution of the constructed parameters. The solar wind speed also contributes to the change in distribution between the maximum and minimum of cycle 24: compare Figures 1diii and 1eiii with Figure S1aiii in the supporting information. In Figures 1aiii, 1biii, and 1ciii we see similar qualitative behavior across the parameters. The bulk and high components of the cycle 24 QQ plot are close to the y = x line in most cases, indicating that the distribution of each variable for cycle 24 was the same at maximum and minimum up to the highest quantiles.
The outstanding case is in Figure 1diii, where the bulk component of the B_z,S distribution transforms in the same manner for both cycles 23 and 24, up to q = 0.9800. The relatively low activity of cycle 24 is also evident; for all variables, the quantiles of cycle 24 lie below those of cycle 23. The extreme component occurs between q = 0.9989 and q = 0.9995 for all variables in cycle 24, and for all except ε_S and (v_x B_z)_S in cycle 23, suggesting that in all these variables (with the exception of ε_S) there is an extreme tail which transforms differently to the rest of the distribution. Finally, we compared the distributions of the two full cycles; see the supporting information for QQ plots and a detailed discussion. Overall, cycle 23 shows higher activity, despite its quiet minimum. For all four parameters, ranges are again found over which the QQ plots are linear, suggesting that the transformation of variables in equation (5) captures the change in the PDF.

Summary

We investigated how the probability density functions of four solar wind-magnetosphere coupling parameters change over the two most recent solar cycles using quantile-quantile plots. The distributions of the Poynting flux, B², ε_S and the westward electric field (v_x B_z)_S were compared between cycle 23 minimum and cycle 24 minimum, between cycle 23 maximum and cycle 24 maximum, and between the maximum and minimum of each of cycles 23 and 24. We found that for all parameters the PDF has a multicomponent functional form which does not change over the solar cycle or between cycles. Instead, a linear transformation of variables for each component is required to map that region of the PDF from one solar cycle phase to another. These transformations can be found by fitting least squares regressions to linear regions of the QQ plot. We identified three regions of the distributions in the QQ plots: the bulk component, up to q ∼ 0.96, and beyond this the high and extreme components. The bulk component undergoes a change in scale between solar maxima but is roughly the same at both minima. The high and extreme components for S, B² and (v_x B_z)_S are each associated with a unique change in both scale and location. The ε_S parameter behaves differently to S_S, B²_S and (v_x B_z)_S, as the change in its PDF between successive maxima is captured by a single change in scale over the full range, so the high and extreme values are insensitive to the changes in their distribution seen in S_S and B²_S. The QQ plots for ε_S resemble those of B², and likewise the QQ plots for (v_x B_z)_S track those of B_z,S, implying that changes in B drive changes in the PDFs of the solar wind coupling parameters. Cycle 23 exhibited a quieter minimum than cycle 24, which is manifest in the distribution as larger amplitude (quantile) high and extreme components at the minimum of cycle 24 than cycle 23, while the bulk component was roughly the same for both minima. The overall activity of cycle 23 was higher than that of cycle 24, seen across the distribution at the maximum of cycle 23 compared to the maximum of cycle 24. The changes in the distribution of S and B² between successive maxima were also found to be more pronounced in the distribution of values when the IMF is southward. As the functional form of the distribution of each solar wind variable does not change between the extrema of cycles 23 and 24, it could be an intrinsic property, and as such would remain the same in future cycles.
This qualitative property of the full distribution provides a benchmark for models and may support prediction of the likelihood of extreme space weather events. However, as we have shown, this depends on the variable chosen, as the ε parameter is less sensitive to changes in the likelihood of its large values than the S, B² and (v_x B_z)_S variables.
Knock-In Rat Lines with Cre Recombinase at the Dopamine D1 and Adenosine 2a Receptor Loci

Abstract

Genetically modified mice have become standard tools in neuroscience research. Our understanding of the basal ganglia in particular has been greatly assisted by BAC mutants with selective transgene expression in striatal neurons forming the direct or indirect pathways. However, for more sophisticated behavioral tasks and larger intracranial implants, rat models are preferred. Furthermore, BAC lines can show variable expression patterns depending upon genomic insertion site. We therefore used CRISPR/Cas9 to generate two novel knock-in rat lines specifically encoding Cre recombinase immediately after the dopamine D1 receptor (Drd1a) or adenosine 2a receptor (Adora2a) loci. Here, we validate these lines using in situ hybridization and viral vector mediated transfection to demonstrate selective, functional Cre expression in the striatal direct and indirect pathways, respectively. We used whole-genome sequencing to confirm the lack of off-target effects and established that both rat lines have normal locomotor activity and learning in simple instrumental and Pavlovian tasks. We expect these new D1-Cre and A2a-Cre rat lines will be widely used to study both normal brain functions and neurological and psychiatric pathophysiology.

Introduction

Dopamine and adenosine are important chemical messengers in the brain, vasculature, and elsewhere in the body. Within the brain, one key site of action is the striatum (including nucleus accumbens), a critical component of basal ganglia circuitry involved in movement, motivation, and reinforcement-driven learning (Denny-Brown and Yanagisawa, 1976; Marsden, 1982; Gerfen and Surmeier, 2011; Berke, 2018). Most (90-95%) striatal neurons are GABAergic medium spiny neurons (MSNs) with two distinct subclasses (Gerfen and Surmeier, 2011). "Direct pathway" neurons (dMSNs) express dopamine D1 receptors and project primarily to the substantia nigra pars reticulata/globus pallidus pars interna (SNr/GPi), whereas "indirect pathway" neurons (iMSNs) express both dopamine D2 receptors and adenosine A2a receptors, and project primarily to the globus pallidus pars externa (GPe). Although our understanding of their distinct functions is incomplete, dMSNs and iMSNs have complementary roles promoting and discouraging motivated behaviors, respectively (Collins and Frank, 2014). The investigation of dMSNs and iMSNs has been transformed by transgenic mice. Random genomic insertion of BACs (bacterial artificial chromosomes) encoding dopamine receptor promoters driving fluorescent protein expression confirmed the near-total segregation of striatal D1 and D2 receptors (Shuen et al., 2008; Matamales et al., 2009) and enabled identification of dMSNs/iMSNs in brain slices (Day et al., 2006). BAC lines in which dopamine receptor promoters drive Cre recombinase expression (D1-Cre, D2-Cre, etc.) have allowed in vivo identification and manipulation of neuronal subpopulations in striatum (Kravitz et al., 2010, 2012; Cui et al., 2013; Barbera et al., 2016) and cortex (Kim et al., 2017). Targeting of iMSNs is further improved using an A2a promoter, rather than D2, because A2a receptors are selectively expressed on iMSNs while D2 receptors are also expressed on other striatal cells and synapses (Alcantara et al., 2003). However, for many experiments, rats are more suitable than mice. Their larger size means they can bear complex intracranial implants without loss of mobility.
Furthermore, rats can learn more sophisticated behavioral tasks, including those investigating reinforcement learning (Hamid et al., 2016) and behavioral inhibition (Schmidt et al., 2013). The advent of CRISPR/Cas9 methods has facilitated the generation of knock-in rat lines (Mali et al., 2013; Jung et al., 2016), and knock-ins are more likely to have faithful expression patterns compared to BACs, for which (for example) different D1-Cre lines show markedly different expression (Heintz, 2004). Here, we describe the generation of transgenic D1-Cre and A2a-Cre rat lines using CRISPR/Cas9. We then demonstrate the specificity of iCre mRNA expression in the intended cells, in both dorsal striatum (DS) and nucleus accumbens. Next, we confirm Cre-dependent expression to demonstrate that Cre is functional and appropriately confined to the direct or indirect pathways. Finally, we demonstrate normal locomotor activity, learning and motivation in simple behavioral tasks.

Materials and Methods

All animal procedures were approved by the relevant Institutional Animal Care and Use Committees.

Genetic engineering

CRISPR/Cas9 (Mali et al., 2013) was used to generate genetically modified rat strains. Two single guide RNA (sgRNA) targets and protospacer adjacent motifs (PAMs) were identified downstream of the rat Adora2a termination codon. sgRNA targets were cloned into plasmid pX330 (Addgene #42230, a gift of Feng Zhang) as described. Guide targets were C30G1: CTAAGGGAAGAGAAACCCAA, PAM: TGG, and C30G2: GGCTGGACCAATCTCACTAA, PAM: GGG. Purified pX330 plasmids were co-electroporated into rat embryonic fibroblasts with a PGKpuro plasmid (McBurney et al., 1994). Genomic DNA was prepared after transient selection with puromycin (2 μg/ml). A 324-bp DNA fragment spanning the expected Cas9 cut sites was PCR-amplified with forward primer GGGATGTGGAGCTTCCTACC and reverse primer GCAGCCCTGACCTAACACAG. DNA sequencing of the amplicons showed that C30G1-treated, but not C30G2-treated, cells contained overlapping chromatogram peaks, indicative of multiple templates that differ because of nonhomologous end-joining repair of CRISPR/Cas9-induced chromosome breaks, resulting in the presence of small deletions/insertions (indels). sgRNA C30G1 was chosen for rat zygote microinjection. A DNA donor was synthesized (BioBasic, cloned in pUC57) to introduce the following elements between codon 410 and the termination codon of Adora2a: a glycine-serine-serine linker with porcine teschovirus-1 self-cleaving peptide 2A (P2A; Kim et al., 2011), followed by iCre recombinase (Shimshek et al., 2002) with hemagglutinin tag YPYDVPDYA (Kolodziej and Young, 1991) and a termination codon with the bovine growth hormone polyadenylation sequence (Goodwin and Rottman, 1992). To mediate homologous recombination, a 5' arm of homology (1804 bp of genomic DNA 5' to codon 410) and a 3' arm of homology (1424 bp of genomic DNA downstream of the termination codon) were used. The 20-bp sequence of C30G1 was omitted from the 3' arm of homology to prevent CRISPR/Cas9 cleavage of the chromosome after insertion of the DNA donor. A similar approach was used for Drd1a. Two sgRNAs were identified downstream of the Drd1a termination codon, C31G1: TTCCTTAACAGCAAGCCCAA, PAM: GGG, and C31G2: CTGAGGCCACGAGTTCCCTT, PAM: GGG. A 293-bp DNA fragment spanning the expected Cas9 cut sites was PCR-amplified with forward primer TGGAATAGCTAAGCCACTGGA and reverse primer CTCCCAAACTGATTTCAGAGC.
Both sgRNAs were found to be active after transfection in rat fibroblasts by T7 endonuclease 1 (T7E1) assays (Sakurai et al., 2014). Briefly, DNA amplicons were melted and re-annealed, then subjected to T7E1 digestion. The presence of indels produced by nonhomologous end-joining repair of Cas9-induced double strand breaks resulted in lower molecular weight DNA fragments for both sgRNA targets, and C31G1 was chosen for zygote microinjection. A DNA donor was synthesized (BioBasic, cloned in pUC57) to introduce the following elements between Drd1a codon 446 and the termination codon: a glycine-serine-serine linker with P2A, followed by iCre recombinase with V5 peptide tag GKPIPNPLLGLDST (Yang et al., 2013) and a termination codon with the bovine growth hormone polyadenylation sequence. To mediate homologous recombination, a 5' arm of homology (1805 bp of genomic DNA 5' of codon 446) and a 3' arm (1801 bp of genomic DNA downstream of the termination codon) were used. The 20-bp sequence of C31G1 was omitted from the 3' arm of homology to prevent cleavage of the chromosome after insertion. Rat zygote microinjection was conducted as described (Filipiak and Saunders, 2006). sgRNA molecules from a PCR-amplified template were obtained by in vitro transcription (MAXIscript T7 Transcription kit followed by MEGAclear Transcription Clean-Up kit, Thermo Fisher Scientific). The template was produced from overlapping long primers (IDT DNA) that included one gene-specific sgRNA target and the T7 promoter sequence, annealed to a long primer containing the sgRNA scaffold sequence (Lin et al., 2014). Cas9 mRNA was obtained from Sigma-Aldrich. Circular DNA donor plasmids were purified with an endotoxin-free kit (QIAGEN). Knock-in rats were produced by microinjection of a solution containing 5 ng/μl Cas9 mRNA, 2.5 ng/μl sgRNA, and 10 ng/μl of circular donor plasmid. Before rat zygote microinjection, fertilized mouse eggs were microinjected with the nucleic acid mixtures to ensure that the plasmid DNA mixtures did not cause zygote death or block development to the blastocyst stage. Rat zygotes for microinjection were obtained by mating superovulated Long-Evans female rats with Long-Evans male rats from an in-house breeding colony. A total of 353 rat zygotes were microinjected with A2a-Cre reagents, of which 289 survived and were transferred to pseudopregnant SD female rats (Strain 400, Charles River), resulting in 60 rat pups; 401 rat zygotes were microinjected with D1-Cre reagents, of which 347 survived and were transferred, resulting in 95 pups. Genomic DNA was purified from tail tip biopsies (QIAGEN DNeasy kit) to screen potential founders for correct insertion of iCre.

Colony management and genotyping

Lines were maintained by backcrossing with wild-type Long-Evans rats (Charles River or Harlan). Offspring were genotyped using real-time PCR (Transnetyx), using insertion-spanning primers (Table 2). Genome sequencing was performed at the UCSF Institute for Human Genetics using blood samples (1 ml per rat) from 5th generation backcrossed rats. Libraries were prepared from fragmented DNA (Kapa Hyper Prep) and sequenced (Illumina NovaSeq 6000, S4 flow cell, paired-end mode, read lengths 150 bp). Sequencing reads were aligned to the rat genome (RGSC Rnor_6.0) using the Burrows-Wheeler Aligner (BWA-MEM). We used GATK HaplotypeCaller, Samtools, Bedtools, Pysam, and MATLAB for variant calling, subsequent analysis and visualization.
To determine the location of the inserted iCre cassette, we selected reads that did not align as a pair to the rat genome, which includes reads where only one mate or no mate of the pair aligned to the genome. These unaligned reads will include matches to the inserted iCre cassette sequence, which is not part of the reference genome. We searched for paired-end reads where one mate is aligned to the iCre cassette and then examined where in the genome the other mate is aligned. To further verify the integrity of our lines, we examined potential off-targets (D1-Cre: 197 sites; A2a-Cre: 557 sites) predicted by an in silico sgRNA off-target prediction algorithm (CRISPOR.net, RGSC Rnor_6.0). CRISPR/Cas9-induced mutations in exons are of particular concern. Only two potential off-targets were predicted to be in exons in the D1-Cre line and none in the A2a-Cre line. After analyzing assembled genomic sequence data, we found the D1-Cre sequence contained a one base pair deletion at chr18:49935989 (Zfp608) and a single nucleotide variant (SNV) at chr1:258074844 (Cyp2c). On inspection, these changes were present in both the D1-Cre line and the A2a-Cre line, consistent with natural variation in the Long-Evans strain rather than off-target mutations from the D1-targeting sgRNA. Among the 195 predicted off-targets for the D1-Cre strain located in introns, we observed changes at 11 locations (eight contained SNVs and three contained indels). For the 557 predicted intronic off-target locations in the A2a-Cre strain, 48 locations showed changes (39 contained SNVs, six contained indels, and three contained both). Closer inspection of the indels revealed that 100% were present in both the D1-Cre and A2a-Cre lines. This is again consistent with Long-Evans strain variation rather than off-target changes.

In situ hybridization

Frozen brains (n = 6, one male and two females from each line) were stored at -80°C (overnight to two weeks), then sectioned on a cryostat at 20 μm and mounted on glass slides. Sections were fixed in 4% PFA at 4°C for 15 min and dehydrated through 50%, 75%, 100% and fresh 100% EtOH at room temperature (RT) for 5 min each. Slides were dried completely for 5 min. A hydrophobic barrier (Advanced Cell Diagnostics) was drawn around each section. Slides were rinsed twice in 1× PBS (~1-3 min) and incubated with Protease IV reagent (Advanced Cell Diagnostics) for 30 min at RT. Fluorescent probes (RNAScope, Advanced Cell Diagnostics; iCre catalog 312281, Drd1a catalog 317031-C2, and Adora2a catalog 450471-C3) were added (2 h, 40°C), followed by manufacturer-specified washing and amplification. DAPI was added to the slides before coverslipping (Prolong Gold, Thermo Fisher Scientific). We used MIPAR software (https://www.mipar.us) to segment cell boundaries and fluorescent puncta using separate processing pipelines. To define nuclear boundaries, the DAPI channel of each image was first histogram-equalized to compensate for uneven illumination (512 × 512 pixel tiles) and convolved with a pixelwise adaptive low-pass Wiener filter (5 × 5 pixel neighborhood size) to reduce noise. The image was then contrast-adjusted (saturating the top and bottom 1% of intensities). Bright objects were segmented using an adaptive threshold (pixel intensity >110% of mean in the surrounding 30-pixel window). Image erosion followed by dilation further reduced noise (five-pixel connectivity threshold, 10 iterations). The Watershed algorithm was applied to improve object separation.
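The paper lists Pysam among its analysis tools; the following minimal sketch (not the authors' script) implements the read-pair screen described above, assuming a coordinate-sorted, indexed BAM aligned against the rat genome plus the donor cassette added as an extra contig. The file and contig names are hypothetical:

```python
import pysam

# Collect, for every read pair with one mate on the cassette contig, the
# genomic location of the other mate.
locations = {}
with pysam.AlignmentFile("d1cre_gen5.bam", "rb") as bam:   # hypothetical file
    for read in bam.fetch("iCre_cassette"):                # hypothetical contig
        if read.is_paired and not read.mate_is_unmapped:
            mate_chrom = read.next_reference_name
            if mate_chrom != "iCre_cassette":
                pos = read.next_reference_start
                locations.setdefault(mate_chrom, []).append(pos)

for chrom, positions in sorted(locations.items()):
    print(chrom, len(positions), min(positions), max(positions))
# A single tight cluster (e.g., at the Drd1a locus) indicates one correct,
# single-copy insertion; scattered clusters would suggest random integration.
```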
Objects >5000 pixels (i.e., clustered nuclei) were identified and reprocessed to improve separation. Since mRNA fluorescent puncta can be located in the endoplasmic reticulum, we dilated the boundaries of each segmented nucleus by five pixels to include these regions. To segment fluorescent puncta, each of the three probe channels was first preprocessed using a top-hat filter (9- to 15-pixel radius) and a Wiener filter (15 × 15 pixel neighborhood size), followed by contrast adjustment (saturating the top and bottom 1% of intensities). Bright regions were segmented using the extended-maxima transform (8-connected neighborhood, H-maxima of 5). A Watershed algorithm followed by erosion was used to improve object separation. Objects of less than five pixels were rejected as noise. The location of each punctum is defined as the centroid of the segmented object. For each fluorescent probe image channel, we counted the number of segmented puncta lying within a nuclear boundary. To determine the puncta threshold for specific versus non-specific probe hybridization, we estimated the "baseline" number of puncta expected per nucleus by chance from non-specific hybridization. We first calculated the puncta count per pixel for all puncta lying outside of cell nuclei and then multiplied this value by the number of pixels for each DAPI-labeled nucleus. This background puncta count was assumed to follow a Poisson distribution, and we defined our threshold for categorizing a cell as "positive" for a given mRNA probe as the 95th percentile of this distribution. Consistency was calculated as the percentage of Drd1a+ (or Adora2a+, in the case of A2a-Cre) nuclei that are also positive for iCre. Specificity was calculated as the percentage of iCre+ nuclei that are also Drd1a+ (or Adora2a+). Off-target consistency and specificity were calculated the same way but substituting Drd1a+ and Adora2a+ for each other in the above two equations.

In vivo opto-tagging

Rats (n = 2, one male from each line) were injected with 1.0 μl of hSyn-Flex-ChrimsonR-TdTomato virus (UNC Vector Core) bilaterally in ventral striatum (AP: +1.75, ML: ±1.6, DV: -7.0 from brain surface) and implanted with two 64-channel drivable tetrode arrays, each with a fixed optical fiber extending centrally through the array to a depth of 6.5 mm. After three weeks for transfection, the tetrodes were lowered into the ventral striatum and recorded wideband (1-9000 Hz) at 30,000 samples/s using an Intan digital headstage. Recording ended with a brief laser stimulation protocol (1 mW, 638 nm, 1-10 ms pulses at 1 Hz). The rat was awake, unrestrained, and resting quietly throughout the recording. Units were isolated offline using automated spike sorting software (MountainSort; Chung et al., 2017) followed by manual inspection. For a unit to be considered a successfully identified Cre+ neuron, it had to meet several criteria: (1) evoked spiking within 10 ms of laser onset, reaching the p < 0.001 significance level in the stimulus-associated latency test (Kvitsiani et al., 2013); (2) a peak firing rate (z-scored) of >10 during both 5- and 10-ms laser pulses; (3) a Pearson correlation coefficient >0.9 between the average light-evoked wave form and the average session-wide wave form.

Imaging

Images were taken with a Nikon spinning disk confocal microscope with a 40× objective (Plan Apo Lambda NA 0.95). For viral tracing, images (2048 × 2048 pixels at 16-bit depth) were stitched in FIJI.
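A minimal sketch of the Poisson background-threshold rule used for the puncta counts above; the densities, nucleus areas and counts are illustrative placeholders, not measured values:

```python
import numpy as np
from scipy.stats import poisson

# Background density: puncta per pixel, estimated outside all nuclei.
puncta_outside = 12_000        # placeholder count outside nuclei
pixels_outside = 40_000_000    # placeholder pixel area outside nuclei
density = puncta_outside / pixels_outside

# Per-nucleus expected background = density * nucleus area (in pixels).
nucleus_areas = np.array([900, 1400, 2100])   # placeholder segmented areas
counts = np.array([1, 3, 25])                 # placeholder observed puncta

mu = density * nucleus_areas                  # Poisson mean per nucleus
threshold = poisson.ppf(0.95, mu)             # 95th percentile of background
positive = counts > threshold                 # "positive" call per nucleus
print(threshold, positive)
```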
Behavior

Rats were maintained on a reverse light-dark schedule (12/12), testing was conducted during the dark phase, and rats were at least 70 d old at the start of the studies. Males and females were used for instrumental and Pavlovian studies. Males were used to evaluate cocaine-induced locomotor activity because of well-established sex differences in response to cocaine (Becker and Koob, 2016) and an insufficient number of available females to examine them separately. Instrumental and Pavlovian procedures were conducted in operant chambers as described (Derman and Ferrario, 2018). Rats [D1-Cre-, n = 9 (three males, six females); D1-Cre+, n = 16 (seven males, nine females); A2a-Cre-, n = 8 (six males, two females); A2a-Cre+, n = 16 (10 males, six females)] were food restricted to 85-90% of free-feeding body weight. For instrumental training, a food cup was flanked by two retractable levers. First, rats were given two sessions in which 20 food pellets (45 mg, Bioserv #F0021) were delivered into the food cup on a variable-interval 60 s schedule (VI60). Next, rats underwent instrumental training in which responses on the "active" lever resulted in delivery of a single pellet (fixed ratio 1; FR1) and responses on the other "inactive" lever had no consequences. Rats were trained to an acquisition criterion of 50 pellets within 40 min. The same rats then underwent Pavlovian conditioning using two auditory conditioned stimuli (CSs; tone and white noise, 2 min; four presentations of each CS per session, 5 min ITI, 12 sessions, 1 h/session). Fifteen seconds following CS+ onset, four pellets were delivered on a VI30 schedule. The CS- was presented an equal number of times, unpaired with pellets. Food cup entries were recorded in 10-s bins, and entries during the first 10 s of CS presentations were used to evaluate conditioned anticipatory responding [i.e., before unconditioned stimulus (US) delivery]. Locomotor activity was assessed in a subset of the rats trained above (Cre-, n = 7; D1-Cre+, n = 7; A2a-Cre+, n = 7) using procedures similar to those of Vollbrecht et al. (2016). Rats were allowed to feed freely for at least 5 d before locomotor testing. Testing was conducted in rectangular plastic chambers (25.4 × 48.26 × 20.32 cm) outfitted with photocell arrays around the base perimeter. Beam breaks were measured using CrossBreak software (Synaptech; University of Michigan). Rats were habituated to the testing chambers (30 min) and given two injections of saline (1 ml/kg, i.p.) separated by 45 min. Next, the acute locomotor response to cocaine was assessed. After a 30-min habituation, rats were given a saline injection followed 45 min later by cocaine (15 mg/kg, i.p.) and remained in the chambers for an additional 60 min. Locomotor activity was recorded in 5-min bins throughout and reported as crossovers (a beam break at one end of the cage followed by a beam break at the opposite end of the cage).

Figure 1. Details of insertion design and founder line screening. A, Schematic of insertion cassettes into the Adora2a (above) and Drd1a (below) genes. NLS, nuclear localization sequence; HA, influenza hemagglutinin protein tag YPYDVPDYA; V5, peptide tag GKPIPNPLLGLDST; bGH, bovine growth hormone polyadenylation sequence. B, PCR primer loci (above) and corresponding gels (below) demonstrating G0 screening of the A2a-Cre line. The top row of gels indicates that rats 507, 509, 516, 520, and 527 are transgenic for iCre. The bottom gels show that rats 520 and 527 have iCre inserted correctly at both the 3' and 5' junctions. See Table 1 for full primer sequences for screening both lines. pc, single copy detection; nc, unrelated rat tail DNA; H2O, water control; E, empty. C, Reads from whole genome sequencing aligned to a wild-type rat genome demonstrate that, for each transgenic line, the iCre cassette is inserted only once in the genome and at the target locus. Each row corresponds to one paired-end read, where one mate of the pair is aligned to the inserted cassette (red) and the other mate to the genome (black). Sequence reads with at least a 100-bp match to the inserted cassette are shown. All such pairs map to only one location in the genome.
Molecular design

The D1-Cre and A2a-Cre rat lines were designed so that the native Drd1a or Adora2a promoter drives expression of both the native receptor and the codon-improved Cre recombinase (iCre) sequence in a single transcription event (Fig. 1A). The use of iCre over Cre has been shown to enhance recombinase expression and limit epigenetic silencing in mammalian cells (Shimshek et al., 2002). For each line, a unique single guide RNA (sgRNA) was generated to induce double strand breaks at the terminus of the receptor coding sequence and microinjected into Long-Evans rat zygotes along with Cas9 and a circular plasmid containing the donor gene cassette. After correct recombination of the donor cassette, the 3' end of the target receptor sequence is joined in frame with the "self-cleaving" peptide P2A (to separate the Cre protein after translation), followed by Cre with a nuclear localizing signal affixed at the amino terminus and a peptide tag (HA for Adora2a, V5 for D1) to facilitate antibody-based detection.

Founder screening, germline transmission, and full genome sequencing

DNA samples from G0 potential founders were screened with primers to detect iCre in the genome (for primer sequences, see Materials and Methods). From this screen, 21/96 potential D1-Cre and 9/60 potential A2a-Cre founders were positive for iCre. Positive rats were then screened with additional primers across the junctions between native and introduced DNA stretches, to discriminate between correct and random genomic integration events (Fig. 1B). This yielded 7/21 correct D1-Cre insertions and 7/9 correct A2a-Cre insertions. The iCre insert was then completely sequenced in these rats (14 total) to confirm complete integration. These G0 founders were mated with wild-type Long-Evans rats, and the G1 pups were genotyped for iCre-specific insertion as above to verify germline transmission. Colonies from one successful founder for each line were established and maintained by back-crossing to wild-type Long-Evans rats from commercial vendors (see Materials and Methods); all experimental results shown are from rats back-crossed for at least three generations. After five generations of back-crossing, we took one female rat each from the D1-Cre and A2a-Cre lines and sequenced their entire genomes to confirm that iCre was present in the intended location and nowhere else (Fig. 1C). Average sequencing depths for the D1-Cre and A2a-Cre lines were 80× (1,503,983,138 reads) and 71× (1,358,732,834 reads), respectively. To determine the location of the inserted gene cassette, we identified paired sequence reads for which one mate of the pair aligned to the rat genome (Rnor_6.0) and the other mate aligned to the inserted gene cassette. All such reads aligned to the genome in the expected location in each line (24/24 D1-Cre, 25/25 A2a-Cre), indicating correct, single copy insertion (Fig. 1C). Partial sequence matches between the sgRNA and genomic locations away from the intended target may induce "off-target" cleavage events. Any off-target changes are likely to be progressively diluted over successive generations of back-crossing. We nonetheless performed an extensive screen and found no evidence for off-target events (see Materials and Methods).

Consistent and specific Cre expression in Drd1a-expressing or Adora2a-expressing cells

The knock-in design ought to produce iCre mRNA expression that is highly faithful to the natural distribution of Drd1a (or Adora2a) mRNA.
To assess this, we used triple fluorescent in situ hybridization, together with DAPI labeling of cell nuclei. Probe sets targeting iCre, Adora2a receptor and Drd1a receptor mRNA with distinct color labels were multiplexed and visualized simultaneously (Fig. 2A). mRNA expression was quantified in three distinct striatal subregions: the DS, the nucleus accumbens core, and the nucleus accumbens medial shell. Automated software was used to define cell boundaries and count fluorescent puncta per cell for each probe (Fig. 2A). To further assess the specificity and consistency of iCre mRNA expression, we defined thresholds for considering neurons as positive for a given probe. Given the wide distributions of puncta counts, the choice of threshold is non-trivial; it forces a trade-off between Type I and Type II errors. Therefore, rather than picking an arbitrary threshold, for each probe we chose the 95% upper confidence limit, assuming a Poisson background distribution of puncta (see Materials and Methods). Using these thresholds (marked by red lines on the Fig. 2B scatterplots), we estimated A2a-Cre specificity (% of iCre+ that are also Adora2a+) to be 93.5% (DS), 91.8% (core), and 89.2% (shell), and consistency (% of Adora2a+ that are also iCre+) to be 82.8% (DS), 77.4% (core), and 86.2% (shell). In the D1-Cre line, we estimated specificity (% of iCre+ that are also Drd1a+) to be 89.1% (DS), 87.4% (core), and 81.8% (shell), and consistency (% of Drd1a+ that are also iCre+) to be 77.5% (DS), 70.1% (core), and 74.6% (shell). If we use even higher thresholds for Drd1a and Adora2a (e.g., >30 puncta/cell), we can be essentially certain of cell identity, and assessed this way consistency was close to 100% for both lines (Fig. 2B).

Figure 2. Confirmation and quantification of iCre production in D1+ and A2a+ MSNs. A, Left of each column: example 40× images of FISH labeling used for quantification, taken from DS (scale bars = 50 μm). Right of each column: close-up images (top) aligned with their corresponding automated software output (bottom). Gray regions indicate DAPI boundaries and colored dots indicate puncta within DAPI boundaries, using the same color scheme as the raw images. Gray dots indicate the locations of puncta detected outside of DAPI boundaries. B, Scatterplots of raw puncta counts for each cell show selective iCre mRNA co-localization with the target receptor mRNA. Black, dark red, and red lines indicate the 50th (i.e., median), 95th, and 99.9th confidence limits, respectively. Subpanels are grouped into rows by region and into columns by genotype. Atlas images depict the locations of confocal images used for mRNA quantification. Barplots show specificity and consistency of on-target and off-target expression in each rat (n = 3 rats per line).
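The specificity and consistency definitions above reduce to simple overlaps of per-cell boolean calls; a minimal sketch with placeholder data rather than the paper's cell counts:

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells = 5000
icre_pos = rng.random(n_cells) < 0.45      # placeholder iCre+ calls
target_pos = icre_pos.copy()               # start perfectly co-localized...
flip = rng.random(n_cells) < 0.08          # ...then inject some disagreement
target_pos[flip] = ~target_pos[flip]

both = (icre_pos & target_pos).sum()
specificity = both / icre_pos.sum()        # % of iCre+ that are also target+
consistency = both / target_pos.sum()      # % of target+ that are also iCre+
print(f"specificity {specificity:.1%}, consistency {consistency:.1%}")
```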
Cre-dependent protein expression

We next examined whether iCre mRNA expression results in functional Cre protein confined to the appropriate basal ganglia pathway. To this end, we injected DS with a virus for Cre-dependent expression of a fluorescent protein (AAV-CAG-FLEX-tdTomato) and examined the expression pattern four weeks later. Consistent with pathway-specific expression of functional Cre protein, injection into the D1-Cre line resulted in clear expression in the striato-nigral pathway, while injection into the A2a-Cre line produced labeling in both DS and GPe, but no expression in the SNr (Fig. 3A). One important use of Cre lines is to enable positive identification of recorded neuron subtypes in awake behaving animals (Kravitz et al., 2010), via Cre-dependent opsin expression and monitoring of neuronal responses to light pulses. We found that both the D1-Cre and A2a-Cre rat lines can be used for this purpose. In rats from each line, we injected a virus (AAV-hSyn-FLEX-ChrimsonR-TdTomato) into the accumbens core for Cre-dependent expression of the red-shifted opsin Chrimson (Klapoetke et al., 2014; Fig. 3B, left), followed by a custom optrode (Mohebi et al., 2019). After allowing three weeks for opsin expression, we readily observed light-responsive single units (Fig. 3B, middle). In a representative example session from a D1-Cre rat, 17 neurons were identified as dMSNs, as they showed both a reliable response to red light stimulation and the wave form properties typical of MSNs (Fig. 3B). As these cells were intermingled within the larger MSN cluster, it would not have been possible to identify them without this optogenetic tagging procedure.

Figure 4. Instrumental and Pavlovian discrimination are similar between transgenic lines and Cre- littermate controls. A, The average total number of responses on the active and inactive lever did not differ between groups, and all groups preferentially responded on the active lever; *p < 0.05, active versus inactive responses. B, The total time to reach the acquisition criterion does not differ between groups. C, The average rate of food cup entries during the first 10 s of CS+ presentations increases across two-session training blocks and is similar between groups. D, The average rate of food cup entries during the first 10 s of CS- presentations is low, does not change across training blocks and is similar between groups. E, The average latency to approach the food cup following CS+ onset gets faster across training and is similar between groups. F, The average latency to approach the food cup following CS- onset becomes slower across training and is similar between groups. Note the scale difference between panels E and F; the dotted line in panel F indicates 15 s on the y-axis to facilitate comparison. All data represented as mean ± SEM.

Figure 5. Cocaine significantly increases locomotor activity compared to saline, and the magnitude of this response is similar across groups; *p < 0.005, locomotor activity in response to cocaine versus saline. All data represented as mean ± SEM.

Normal acquisition and performance of instrumental and Pavlovian discrimination and cocaine-induced locomotor activity

Given that behavioral comparisons are likely to be made across these two independent transgenic lines, and between Cre+ rats and Cre- controls, we assessed acquisition and expression of instrumental responding for food, Pavlovian conditioned approach, and cocaine-induced locomotor activity in these lines.
In the instrumental discrimination task, presses on an active lever were reinforced with food pellet delivery (fixed ratio 1; FR1), whereas presses on an inactive lever were never reinforced. Rats were trained to an acquisition criterion of earning 50 pellets within 40 min. Figure 4A shows the average number of active and inactive lever responses, and Figure 4B depicts the average time to reach the acquisition criterion in each group. As expected, active lever responding was greater than inactive lever responding, and this did not differ between groups (two-way repeated-measures ANOVA, main effect of lever: F(1,90) = 193.2, p < 0.0001; n.s. main effect of group: F(3,90) = 1.379, p = 0.2545; n.s. group x lever interaction: F(3,90) = 0.408, p = 0.747). The time to reach the acquisition criterion did not differ between groups (two-way repeated-measures ANOVA, n.s. main effect of lineage: F(1,45) = 2.593, p = 0.1143; n.s. main effect of genotype: F(1,45) = 1.578, p = 0.2155; n.s. lineage x genotype interaction: F(1,45) = 0.1086, p = 0.7433). Following instrumental training, the acquisition and expression of Pavlovian conditioned approach were assessed in the same rats. During each session, one auditory cue was paired with food pellet delivery (CS+), whereas a second auditory cue was never paired with food (CS-). Rats received 12 training sessions (60 min) in which each CS (tone or white noise, counterbalanced for CS+/CS- assignment) was randomly presented four times per session. Acquisition of Pavlovian conditioned food cup approach was similar across transgenic lines and between Cre- and Cre+ groups. Specifically, Figure 4C,D shows the average number of food cup entries during the first 10 s of the CS+ and CS-, respectively, in two-session blocks. Anticipatory food cup entries during CS+ presentations increased across training blocks and did not differ between groups (two-way repeated-measures ANOVA). Thus, acquisition and maintenance of discriminatory conditioned approach were similar across transgenic lines, and between Cre- and Cre+ groups. To provide an additional measure of learning, we also examined the latency to enter the food cup following CS presentations. The average latency to enter the food cup following the onset of the CS+ decreased across training blocks, and this decrease did not differ between groups, demonstrating that all groups were similarly motivated to respond to reward-predictive cues (two-way repeated-measures ANOVA, main effect of training block: F(5,225) = 16.95, p < 0.0001; n.s. main effect of group: F(3,45) = 1.239, p = 0.307; n.s. group x training block interaction: F(15,225) = 0.3964, p = 0.9791; Fig. 4E). In contrast, the average latency to enter the food cup following the onset of the CS- increased across training blocks and did not differ between groups (two-way repeated-measures ANOVA, main effect of training block: F(5,225) = 12.38, p < 0.0001; n.s. main effect of group: F(3,45) = 0.6639, p = 0.578; n.s. group x training block interaction: F(15,225) = 0.4812, p = 0.9485; Fig. 4F). Together, the results from these behavioral studies show that introduction of Cre into either D1 or A2a neurons does not disrupt normal acquisition or expression of instrumental and Pavlovian discriminations.
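For readers reproducing this kind of mixed design (one between-subjects factor, one within-subjects factor), a hedged sketch using the pingouin package is shown below; the column names and synthetic data are illustrative assumptions, not the study's analysis code:

```python
# Mixed-design ANOVA: group (between) x lever (within), one row per rat x lever.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
groups = ["D1-Cre+", "D1-Cre-", "A2a-Cre+", "A2a-Cre-"]
rows = []
for rat in range(12):                      # hypothetical: 3 rats per group
    g = groups[rat % 4]
    for lever, mean in [("active", 120), ("inactive", 20)]:
        rows.append({"rat": rat, "group": g, "lever": lever,
                     "responses": rng.poisson(mean)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="responses", within="lever",
                     between="group", subject="rat")
print(aov[["Source", "F", "p-unc"]])       # main effects and interaction
```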
Locomotor habituation and cocaine-induced locomotor activity were used to assess general striatal function in both lines (Oginsky et al., 2016). Cre+ rats and their Cre- littermates were placed in standard locomotor chambers equipped with photocell beams around the perimeter. After a 30-min habituation period, they were given two intraperitoneal injections of saline (1 ml/kg). Both lines showed typical habituation to the locomotor chambers, and short-lived responses to saline injection that decreased with repeated injection (two-way repeated-measures ANOVA, main effect of time: F(21,378) = 10.42, p < 0.0001; Fig. 5A). Locomotor activity was similar across groups (Fig. 5C). Thus, all genotypes showed a significant increase in locomotor activity following cocaine versus saline injection, and this effect did not differ between genotypes. These data suggest that there is no overt striatal dysfunction due to Cre expression, and that behavioral responses to elevations in dopamine are similar across the D1-Cre and A2a-Cre lines.

Discussion

We have demonstrated successfully targeted, functional knock-in of Cre recombinase at the Drd1a and Adora2a loci, without off-target insertions, as assessed by multiple methods including whole-genome sequencing. Comparable behavioral performance across lines, and between Cre+ and Cre- littermates, in several basic behavioral procedures provides further confidence that there are no unexpected deleterious effects of the genetic manipulation or of co-production of Cre recombinase with the endogenous receptors. Within striatum, we showed that Cre expression was consistent and selective for the correct populations of direct-pathway D1+ and indirect-pathway A2a+ cells, respectively. Thus, these D1-Cre and A2a-Cre transgenic rats enable selective monitoring or manipulation of dMSNs and iMSNs with high specificity. Although we fully expect Cre to be correctly targeted in other brain regions too, further characterization will be required to confirm this. D1-Cre and A2a-Cre transgenic rats offer clear advantages over currently available transgenic models. First, the greater capacity of rats to learn complex behaviors makes them stronger candidates for a wider range of tasks compared to mice. Second, the increased carrying capacity afforded by rats facilitates the chronic implantation of larger devices (i.e., high channel-count headstages, graded-refractive-index lenses). Third, knock-ins can be used with higher confidence that the genetic modification was selective and specific to the target, compared to BAC lines. A long-standing question in basal ganglia research has been the degree to which the striatal MSN population can be fully divided into distinct D1+ and D2+/A2a+ subpopulations. Based on BAC transgenic mice, overlap has been reported to range from 4% to 5% in DS and nucleus accumbens core, and up to 17% in shell (Bertran-Gonzalez et al., 2008; Wei et al., 2018). Our quantification of Drd1 and Adora2a mRNA expression found overlap to be consistently very low in all striatal subregions examined, including shell, providing additional evidence for a fundamentally segregated striatal architecture. Since Cre expression was highly specific to the intended striatal pathways, these rats are powerful tools for pathway-specific neuron identification and manipulation. One caveat is that a subset of fast-spiking, parvalbumin-positive (PV+) interneurons also express D1 receptors (Bracci et al., 2002), and may thus also express Cre in D1-Cre rats. However, PV+ cells make up only ~0.7% of striatal neurons (Luk and Sadikot, 2001) and, at least in electrophysiological studies, can be readily differentiated from MSNs (Kawaguchi, 1993; Koós and Tepper, 1999; Berke, 2008).
We chose to examine behavior during simple instrumental and Pavlovian tasks as well as cocaine-induced locomotor activity, as these behaviors rely heavily on striatal function. Although behavioral differences might emerge under other, more complex task conditions, the lack of any overt differences between the D1-Cre and A2a-Cre transgenic lines, or between Cre- and Cre+ littermates, strongly suggests that these rats are well-suited for behavioral and systems neuroscience studies. Beyond striatum, A2a receptors are found in the cortex, globus pallidus, hippocampus, thalamus, cerebellum (Rosin et al., 1998), and throughout the cardiovascular system. Similarly, D1 receptors are located in prefrontal cortex, hippocampus, thalamus, and hypothalamus (Fremeau et al., 1991). In coordination with a rapidly expanding set of optical and genetic tools, these rats increase our ability to address fundamental questions about brain circuitry and the mechanisms underlying neurologic and psychiatric disorders.
Nitrogen-Doped Banana Peel–Derived Porous Carbon Foam as Binder-Free Electrode for Supercapacitors

Nitrogen-doped banana peel–derived porous carbon foam (N-BPPCF), successfully prepared from banana peels, is used as a binder-free electrode for supercapacitors. The N-BPPCF exhibits superior properties, including a high specific surface area of 1357.6 m2/g, a large pore volume of 0.77 cm3/g, a suitable mesopore size distribution around 3.9 nm, and super-hydrophilicity due to nitrogen-containing functional groups. It can easily be brought into contact with an electrolyte to facilitate electron and ion diffusion. A comparative analysis of the electrochemical properties of BPPCF electrodes is also conducted under similar conditions. The N-BPPCF electrode offers a high specific capacitance of 185.8 F/g at 5 mV/s and 210.6 F/g at 0.5 A/g in 6 M KOH aqueous electrolyte, versus 125.5 F/g at 5 mV/s and 173.1 F/g at 0.5 A/g for the BPPCF electrode. The results indicate that the N-BPPCF is a binder-free electrode that can be used for high-performance supercapacitors.

Introduction

Supercapacitors, also known as ultracapacitors or electrochemical capacitors (ECs), have attracted significant attention since the first patent was filed in 1957, followed by successful commercialization for hybrid electric vehicles (HEVs) in the 1990s [1]. Versus conventional capacitors and Li-ion batteries, supercapacitors offer superior performance, including high power capability, good operating voltage, long cycle life (>100,000 cycles), low cost, low maintenance, superior safety, environmental benignity, and fast charge-propagation dynamics [2,3]. Recently, supercapacitors have shown advantages over other electrochemical energy storage (EES) devices in many fields requiring high reliability and short load cycles, including portable electronic devices, electric vehicles (EVs), memory back-up systems, etc. Porous carbon, along with metal oxides and conductive polymers, is the most widely used electrode material for supercapacitors because it offers a large surface area, low cost, and easy processing [4-6]. Porous carbon offers a high capability for charge separation/accumulation at the electrode/electrolyte interface, depending on the charge-storage mechanism [7,8]. Generally, porous carbon is derived from organic molecules (e.g., acetonitrile) [9], polymers (e.g., polypyrrole) [10], meta-aminophenol formaldehyde resin [11], monolithic carbide [12], etc. This often involves synthetic steps using toxic reagents and complicated synthesis procedures [13]. These concerns, as well as requirements for tailored materials, have led scientists to develop sustainable, cheap, safe and environmentally friendly porous carbon for use as supercapacitor electrodes. Here, we report nitrogen-doped banana peel–derived porous carbon foam (N-BPPCF) for use as a binder-free electrode for supercapacitors. The high porosity provided by the framework of the banana peel (BP) offers a high specific surface area and suitable pore size distribution for efficient contact between the electrolytes and the active materials. This, in turn, provides more active sites for electrochemical reactions and outstanding specific capacitance values [14-16]. To the best of our knowledge, this is the first report to describe the use of banana byproducts to generate carbon foam as an electrochemical reagent. This has significant implications for both the chemical and environmental communities and is an excellent example of green synthesis.
Results and Discussion

To understand the formation mechanism of N-BPPCF, a schematic illustration is proposed in Figure 1a. The pristine BP was first air-dried, hydrothermally heated and freeze-dried to yield a brown BP precursor. This precursor has a ribbon-pattern-like structure 6 cm long and 2 cm wide, similar to non-processed BP. After the carbonization and nitrogen doping, black N-BPPCF was created with a length of 4.5 cm and a width of 1.5 cm. Figure 1b,c shows typical scanning electron microscopy (SEM) images of the BPPCF and N-BPPCF porous carbon foam morphology, respectively. We used transmission electron microscopy (TEM) and high-resolution TEM (HRTEM) to further investigate the microstructural details of the BPPCF and N-BPPCF superstructures. Figure 2 shows TEM and HRTEM images at different magnifications of BPPCF and N-BPPCF. Both samples showed a porous structure with a possible pseudographite phase (Figure 2e,f). Moreover, the N-BPPCF produces a high specific surface area (SSA) of 1357.6 m2/g, a pore volume of 0.77 cm3/g and a Barrett-Joyner-Halenda (BJH) adsorption average mesopore size distribution around 3.9 nm (Figure 3 and Table 1). This is because the additional NH3 treatment at 900 °C further activates the carbon [17]. The N-BPPCF exhibited a higher SSA and bigger pore volume than those of BPPCF. The porous-structured N-BPPCF offers good contact with electrolytes, and these pores strongly favor immediate electron and ion transmission [18].
Figure 4a shows the XRD patterns of BPPCF and N-BPPCF. There is a broad peak at 2θ of about 23° corresponding to the (002) plane reflection of graphite. In addition, there is a small shoulder peak at 2θ of 44° which corresponds to the (100) plane reflection of graphite. These two broadened peaks reveal the possible presence of an amorphous phase [19,20] and a possible pseudographite nature [21] within the carbonaceous BPPCF and N-BPPCF.

Table 1. Specific surface area, porosity parameters and nitrogen content of the BPPCF and N-BPPCF samples. In the table, S_BET stands for the BET surface area, while the BJH pore diameter, total pore volume and mesopore volume are abbreviated as D_BJH, V_TPV and V_meso, respectively.

Samples | S_BET (m2/g) | D_BJH (nm) | V_TPV (cm3/g) | V_meso (cm3/g)

XPS

All the different oxygen species were formed after thermal annealing of BP. The oxygen content in N-BPPCF decreases slightly with the increase of the nitrogen atomic percentage, since the BPPCF was treated with ammonia gas. In the high-resolution N 1s spectra, the peak can be attributed to the intensities of four components: pyridinic N (397.6 eV), pyrrolic N (399.1 eV), graphitic N (401.0 eV) and pyridine N oxide (402.8 eV). The total nitrogen content in N-BPPCF was 8.7%, higher than that of BPPCF (4.2%), as shown in Table 2. In N-BPPCF, pyridinic N constitutes 22.6 at. %, quaternary (graphitic) N constitutes 53.3 at. %, pyrrolic N constitutes 15.3 at. % and pyridine N oxide constitutes 8.8 at. % (Table 3).
Due to the doped nitrogen atoms acting as functional groups, the N-BPPCF has good surface hydrophilicity and easily contacts electrolytes [17]. Figure 4c shows the CV performance of the BPPCF and N-BPPCF samples in 6 M KOH at a scan rate of 5 mV/s. The CV curves of both samples were rectangular, which is attributed to the ideal capacitive behavior of a porous carbon electrode. The rectangular shape of the CV curves is not seriously distorted, even at high scan rates (Figure 5a,b). This indicates that the porous carbon is suitable for aqueous electrolytes and that there is little concentration polarization within the pores due to ion transport limitations [22]. The N-BPPCF offers a high specific capacitance (185.8 F/g at 5 mV/s) versus 125.5 F/g at 5 mV/s for BPPCF (Figure 5d). With increasing scan rate, the N-BPPCF shows smaller discharge capacitances, such as 179.5 F/g at 10 mV/s, 169.9 F/g at 20 mV/s and 161.9 F/g at 30 mV/s. Even at scan rates of 40 mV/s and 50 mV/s, the N-BPPCF delivered discharge capacitances of 154.7 F/g and 148.0 F/g, respectively, i.e., 83.3% and 79.7% of the maximum capacitance at 5 mV/s. It was easy for N-BPPCF to achieve specific capacitance values over 140 F/g, whereas the BPPCF was limited to below 130 F/g. This is principally because the N-BPPCF offers a higher specific surface area that, in turn, increases the capacitance [8,11,19]. We used galvanostatic charge/discharge (GCD) measurements to further investigate the electrochemical performance of the BPPCF and N-BPPCF at various current densities (Figure 6). The charge/discharge curves of both samples are linear and symmetrical, without any obvious ohmic (IR) drop. The N-BPPCF offers high specific capacitances of 210.6 F/g and 178.5 F/g at 0.5 A/g and 1.0 A/g, respectively. These are much larger than the 173.1 F/g and 136.3 F/g, respectively, for BPPCF. At current densities of 1.5 A/g and 2.0 A/g, the N-BPPCF can still deliver discharge capacitances of 164.3 F/g and 155.0 F/g. This confirms excellent rate capability, with 78.0% and 73.6% of the maximum capacitance (210.6 F/g at 0.5 A/g) retained. Even at the high current density of 2.5 A/g, a discharge capacitance of 146.9 F/g can be achieved.
The corresponding capacitance retention of N-BPPCF is 69.8%, which is superior to the 61.7% of BPPCF (Figure 6d). All data indicated that the N-BPPCF has great electrochemical performance that is superior to that of BPPCF. Of note, the results are comparable to those reported in the literature, such as 175 F/g at 0.5 A/g in 6 M KOH [11] and 212 F/g at 0.5 A/g in 6 M KOH [13]. Figure 7 shows the capacitance of BPPCF and N-BPPCF.
Both samples exhibited good reversible capacitance, with a capacitance retention rate of about 100% upon cycling after 500 cycles at 0.5 A/g. Even at 2.5 A/g, N-BPPCF as well as BPPCF showed good cyclic capacitance retention of 100% after 5000 cycles. This underlines the excellent charge-discharge stability of the N-BPPCF as well as the BPPCF. The N-BPPCF architecture offers excellent performance and practical sample preparation for supercapacitors. Of course, bananas are one of the most popular fruits worldwide, with more than 100 million tons produced every year. This results in significant organic waste from the peels. Using this byproduct as an electrode material is both environmentally sound and powerful from an electrochemical perspective. This coincides with other work focused on biomass [23], including bagasse [24], rice husk [25], dead leaves [26], paulownia flower [27], tamarind fruit shell [28], etc. Furthermore, the as-prepared N-BPPCF has nitrogen-containing functional groups that enhance the capacity, surface wettability and electronic conductivity of carbon materials due to nitrogen doping [29-31]. The N-BPPCF was easily achieved by treating the BPPCF with ammonia to incorporate nitrogen-containing functional groups [32,33]. The as-prepared, binder-free N-BPPCF could be directly used as an electrode without any conductive additives or binders. Recently, interest in binder-free electrodes has grown due to their efficiency and activity [34,35]. We believe that N-BPPCF is a powerful new binder-free electrode for supercapacitors. It has significant potential for use as-is or in similar superstructures using other porous carbon foams.

Experimental Section

The raw banana peel (BP) was air-dried, collected and kept in a glass dryer prior to use. The air-dried BP (1.5 g) was first added to 50 mL deionized water, transferred into a 100 mL Teflon autoclave and hydrothermally treated at 120 °C for 5 h. Hydrothermal BP was obtained after filtering and washing with deionized water three times. Subsequently, the as-prepared hydrothermal BP was freeze-dried at −50 °C for 12 h to obtain the BP precursor. The carbonization and nitrogen-doping process was carried out in two steps. The as-prepared BP precursor was first calcined at 900 °C for 5 h in Ar atmosphere to obtain BP-derived porous carbon foam (BPPCF). Second, the as-obtained BPPCF was reduced in NH3 atmosphere at 900 °C for 1 h, yielding the nitrogen-doped BPPCF (i.e., N-BPPCF). The BPPCF and N-BPPCF were characterized by X-ray diffraction (XRD) analysis on a Bruker D8 Advance X-ray diffractometer (Karlsruhe, Germany) with Cu Kα radiation (λ = 1.5406 Å). Scanning electron microscopy (SEM) images were taken on a Hitachi SU8010 microscope (Tokyo, Japan). Transmission electron microscopy (TEM) images were obtained with a Tecnai G2 F30 field emission transmission electron microscope (Hillsboro, OR, USA). A Micromeritics ASAP 2020 apparatus (Norcross, GA, USA) was employed to determine the Barrett-Joyner-Halenda (BJH) pore structure and Brunauer-Emmett-Teller (BET) specific surface area. The X-ray photoelectron spectroscopy (XPS) data were obtained with an AMICUS/ESCA 3400 electron spectrometer from Kratos Analytical (Manchester, UK) using Mg Kα (20 mA, 12 kV) radiation. The binding energies were referenced to the C 1s line at 284.8 eV from adventitious carbon. The electrochemical performance of the as-obtained BPPCF and N-BPPCF was evaluated using a standard three-electrode cell.
The electrochemical performance, including cyclic voltammetry (CV) and galvanostatic charge/discharge (GCD) curves, was measured using a CHI 660E electrochemical workstation at ambient conditions. To fabricate the working electrode in the three-electrode configuration, the as-obtained samples were cut into squares with an edge length of 10.0 mm. A platinum sheet (10.0 mm × 10.0 mm) and a Hg/HgO electrode served as the counter electrode and the reference electrode, respectively. The potentials were reported relative to the Hg/HgO reference electrode, and the electrochemical measurements of the electrodes were recorded after stabilization. CV measurements were carried out at ambient temperature using 6 M KOH aqueous solution as electrolyte; the potential scan rates ranged from 0.5 to 50 mV/s within a potential range of −0.2 to 1.0 V vs. Hg/HgO. The specific capacitances of the electrodes can be calculated using Equation (1) from the measured CVs and Equation (2) from the galvanostatic discharge branches, respectively [3,36]:

C = ∫I dV / (ν × ΔV × m) (1)

Here, C is the gravimetric specific capacitance (F/g), I is the current (A), ν is the scan rate (mV/s), ΔV is the potential window (V) and m is the total mass (g) of the sample.

C = (I × t) / (ΔV × m) (2)

Here, C is the gravimetric specific capacitance (F/g), I is the discharge current (A), ΔV is the potential window (V), m is the total mass (g) of the sample, and t is the discharge time (s).
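To make Equations (1) and (2) concrete, the following sketch computes gravimetric capacitances from a CV sweep and a galvanostatic discharge. The numbers and function names are hypothetical, and Equation (1) is implemented in one common convention (some authors additionally divide by 2 when integrating over a full CV cycle):

```python
import numpy as np

def capacitance_from_cv(current_a, potential_v, scan_rate_v_s, mass_g):
    """Equation (1): C = (integral of |I| dV) / (nu * dV * m)."""
    q = np.trapz(np.abs(current_a), potential_v)   # integrated |I| dV (A·V)
    dv = potential_v.max() - potential_v.min()     # potential window (V)
    return q / (scan_rate_v_s * dv * mass_g)

def capacitance_from_gcd(current_a, discharge_time_s, window_v, mass_g):
    """Equation (2): C = I * t / (dV * m)."""
    return current_a * discharge_time_s / (window_v * mass_g)

v = np.linspace(-0.2, 1.0, 200)                    # V vs. Hg/HgO
i = np.full_like(v, 2.0e-3)                        # ~2 mA plateau current (hypothetical)
print(capacitance_from_cv(i, v, 5e-3, 2.0e-3))     # 5 mV/s sweep, 2 mg electrode
print(capacitance_from_gcd(1.0e-3, 505.4, 1.2, 2.0e-3))  # 0.5 A/g on a 2 mg electrode
```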
Conclusions

In conclusion, nitrogen-doped banana peel–derived porous carbon foam (N-BPPCF) was successfully prepared and used as a binder-free electrode for supercapacitors. The N-BPPCF shows excellent electrochemical performance, including a high specific capacitance of 185.8 F/g at 5 mV/s from CV measurements and 210.6 F/g at 0.5 A/g from galvanostatic charge/discharge measurements. We hope that the N-BPPCF architecture will offer an additional route to binder-free electrodes, and that it shows potential for the synthesis of many other porous carbon foams.
CBCT-based volumetric and dosimetric variation evaluation of volumetric modulated arc radiotherapy in the treatment of nasopharyngeal cancer patients

Objective: To investigate the anatomic and dosimetric variations of volumetric modulated arc therapy (VMAT) in the treatment of nasopharyngeal cancer (NPC) patients based on weekly cone beam CT (CBCT). Materials and methods: Ten NPC patients treated by VMAT with weekly CBCT for setup corrections were reviewed retrospectively. Deformed volumes of targets and organs at risk (OARs) in the CBCT were compared with those in the planning CT. Delivered doses were recalculated based on weekly CBCT and compared with the planned doses. Results: No significant volumetric changes in targets, brainstem, or spinal cord were observed. The average volumes of the right and left parotids measured from the fifth CBCT were about 4.4 and 4.5 cm3 less than those from the first CBCT, respectively. There were no significant dose differences between average planned and delivered doses for targets, brainstem and spinal cord. For the right parotid, the delivered mean dose was 10.5 cGy higher (p = 0.004) than the planned value per fraction, and the V26 and V32 increased by 7.5% (p = 0.002) and 7.4% (p = 0.01), respectively. For the left parotid, the D50 (dose to 50% of the volume) was 8.8 cGy higher (p = 0.03) than the planned value per fraction, and the V26 increased by 8.8% (p = 0.002). Conclusion: Weekly CBCTs were applied directly to study the continuous volume changes and resulting dosimetric variations of targets and OARs for NPC patients undergoing VMAT. Significant volumetric and dosimetric variations were observed for the parotids. Replanning after 30 Gy will benefit parotid protection.

Due to its sharp dose gradients, intensity-modulated radiotherapy (IMRT) has been accepted as the primary treatment modality for nasopharyngeal cancer (NPC) patients [1,2]. Studies have confirmed that the dosimetric advantages of IMRT over conventional treatment translate into clinical outcomes with reduced parotid toxicity [3]. However, geometric and anatomic changes during the long course of IMRT treatment have limited the clinical benefits of IMRT [4]. Onboard cone beam CT (CBCT) has been applied to resolve critical aspects of IMRT, such as patient setup and target localization [5]. CBCT using a kilovoltage (kV) imaging system mounted on a linear accelerator has emerged as a significant technique for registering soft tissue [6]. Anatomic changes in head and neck cancer patients throughout the radiation therapy course due to tumor shrinkage, body weight loss, and soft tissue changes have been reported [7,8]. Daily CBCT for setup purposes during image-guided radiotherapy (IGRT) has been conducted to assess soft tissue changes [9,10]. However, dosimetric variation and accuracy are of more concern during the radiotherapy course. The feasibility and accuracy of CBCT-based dose calculation are still under investigation due to the severe scatter problem of CBCT images [11]. Weekly computed tomography during IMRT of head and neck patients has been conducted to study the spatial variability and dosimetric differences between planned and delivered doses [12,13]. However, rescanning and replanning with weekly CT are not favored because of the time consumption and additional machine occupancy [14]. Weekly CBCT employed directly for dosimetric verification of IMRT or VMAT in the treatment of NPC patients is a promising solution.
In a previous study, we achieved reasonable dose calculation accuracy for head-and-neck cancer patients based on CBCT with a region of interest (ROI) mapping method [15]. The purpose of this study is to evaluate the anatomic changes and the related dosimetric effects, based directly on weekly CBCT, for NPC patients undergoing volumetric modulated arc therapy (VMAT) treatment.

Patient characteristics and planning

This study was approved by the Institutional Review Board and performed at the 1st Affiliated Hospital of Wenzhou Medical University. We retrospectively reviewed 10 consecutive NPC patients treated by dual-arc VMAT between January 2011 and November 2012 with weekly CBCT for setup error corrections. All the patients had diagnosed NPC of various AJCC stages, as summarized in Table 1. Five patients received induction chemotherapy with paclitaxel and cisplatin. One was treated with concurrent chemotherapy with paclitaxel, and the other four were treated by radiotherapy only. Patients were immobilized with a thermoplastic head mask and scanned on a planning kilovoltage CT scanner (Philips Medical Systems, Eindhoven, The Netherlands) with a 3-mm slice thickness. Target and normal tissue delineations have been reported in our previous study and are summarized here only briefly [16]. Gross tumor volume (GTV) was delineated as the mass shown in the enhanced CT images and/or MRI images, including the nasopharyngeal tumor, retropharyngeal lymphadenopathy, and enlarged neck nodes. The clinical target volume (CTV) was defined as the GTV plus a margin of potential microscopic spread, encompassing the inferior sphenoid sinus, clivus, skull base, nasopharynx, ipsilateral parapharyngeal space, and the posterior third of the nasal cavity and maxillary sinuses. High-risk nodal regions, including the bilateral upper deep jugular nodes, submandibular nodes, jugulodigastric, mid-jugular, low jugular, and supraclavicular nodes and the posterior cervical nodes, were included. The planning target volume (PTV) was created by adding a 3 mm margin to the CTV to account for setup variability. Prescription doses were 70 Gy and 56 Gy for GTV and CTV in 28 fractions, respectively. OARs consisting of the brainstem, spinal cord, and left and right parotids were constrained during optimization. Dual-arc VMAT plans were generated on the Philips Pinnacle3 treatment planning system (TPS) (clinical version 9.2; Philips, Fitchburg, WI, USA). Optimization parameters and the optimization process have been reported in our previous study [17]. Briefly, the first arc rotates clockwise with a start angle of 181° and a stop angle of 180°, and the second arc rotates counterclockwise from 180° to 181°. During the optimization, a leaf motion of 0.46 cm/deg and a final arc spacing of 4° were employed.

CBCT imaging and number-to-density curve calculation

VMAT plans were delivered on an Elekta Synergy linac (Elekta Ltd., Crawley, UK) with an integrated onboard kV-CBCT. CBCT images were acquired on the first treatment day (CBCT 1) with patients in the treatment position prior to radiation delivery, and then weekly thereafter. The acquisition parameters were 120 kV, 25 mA, and 40 ms per projection with the F0 filter. A total of about 650 projections were acquired for a full rotation in about 2 min. The S20 collimator cassette was used for all patients, giving a nominal irradiated scan length at the isocenter of approximately 26 cm. A region of interest (ROI) CT number mapping method was used to generate the CBCT number to physical electron density conversion curve for dose calculation, using a Catphan-600 phantom, module CTP503 (Phantom Laboratory, NY) [18]. This process has been reported in our previous study [15] and is summarized here: (1) register the planning CT images and kV-CBCT images in the Pinnacle TPS; (2) map the ROIs from the conventional CT dataset to the CBCT dataset, and record the mean CBCT number values of these ROIs; and (3) generate the kV-CBCT number to physical electron density calibration curve based on the density values measured on the conventional CT. Typical CT number to density curves for CT and CBCT are presented in Figure 1.
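A minimal sketch of the ROI-mapping calibration described above, assuming hypothetical (CBCT number, relative electron density) pairs measured in the Catphan inserts; the actual calibration points depend on the scanner and acquisition protocol:

```python
import numpy as np

# Hypothetical calibration pairs: mean CBCT number in each mapped ROI,
# paired with the electron density known from the planning-CT calibration.
cbct_numbers = np.array([-950.0, -480.0, -60.0, 0.0, 120.0, 900.0])
densities    = np.array([  0.00,   0.49,  0.95, 1.00, 1.10,  1.52])

def cbct_to_density(hu):
    """Convert CBCT numbers to relative electron density by piecewise-linear
    interpolation between calibration points (clamped at the ends)."""
    return np.interp(hu, cbct_numbers, densities)

print(cbct_to_density(np.array([-700.0, 50.0, 400.0])))
```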
Volumetric and dosimetric evaluation

Volumetric changes and the resulting dosimetric effects based on CBCT images were investigated using the RayStation TPS (version 3.5, RaySearch, Stockholm, Sweden). The RayStation TPS was commissioned with the same beam data as the Pinnacle system. The dose deviations between RayStation and Pinnacle were within 1.5% during the commissioning process. All VMAT plans with the initial CT data were exported from the Pinnacle TPS to the RayStation TPS through a DICOM service, and the dose distributions were recalculated based on the same CT number to density calibration curve. Weekly CBCT images were also imported into the RayStation TPS through the DICOM service. For each patient, each weekly CBCT image was rigidly registered to the planning CT individually. The rigid registration was performed automatically, with final manual adjustment for better alignment. After the rigid registration, a deformable registration was also performed automatically using vertex-to-vertex correspondence between the reference image set and the target image sets. That is, the user can convert an ROI with a contour shape to a new ROI with a triangle-mesh shape. The new ROI can be used as a controlling ROI, which means that it has the same number of vertices in all image sets and point-to-point correspondence of the vertices. As a result, each weekly CBCT image had one rigid and one deformable registration to the original planning CT. Contours for target volumes and OARs on the weekly CBCTs were generated automatically by mapping the contours in the planning CT to the CBCT with the deformable registrations. A physician carefully evaluated all contours, and corrections were performed where necessary. For each patient, the beam arrangements and optimization parameters of the initial treatment plan on the planning CT were directly applied to the weekly CBCTs. Using the CBCT number to density calibration curve, the fractional doses based on the weekly CBCTs were recalculated. To compare the planned dose on the initial planning CT with the delivered dose on weekly CBCT, the dose to 95% (D95) and 90% (D90) of the GTV and CTV, and the volume of CTV irradiated by 110% of the prescription dose (V110), were recorded and compared. The dose to 1% (D1) of the brainstem and spinal cord, the dose to 50% (D50) of the parotids, the mean dose (Dmean), and the volumes of the parotids receiving 26 Gy (V26) and 32 Gy (V32) were also recorded and compared.
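The dose-volume metrics just defined can be extracted from a per-voxel structure dose array as in the following sketch (synthetic dose values; helper names are illustrative):

```python
import numpy as np

def d_x(dose_gy, x_percent):
    """DVH Dx: dose received by at least x% of the structure volume."""
    return np.percentile(dose_gy, 100.0 - x_percent)

def v_x(dose_gy, threshold_gy):
    """DVH Vx: percentage of the structure volume receiving >= threshold_gy."""
    return 100.0 * np.mean(dose_gy >= threshold_gy)

rng = np.random.default_rng(42)
parotid_dose = rng.normal(28.0, 8.0, size=20000).clip(0)  # synthetic voxel doses (Gy)
print(f"D50   = {d_x(parotid_dose, 50):.1f} Gy")
print(f"Dmean = {parotid_dose.mean():.1f} Gy")
print(f"V26   = {v_x(parotid_dose, 26):.1f} %")
print(f"V32   = {v_x(parotid_dose, 32):.1f} %")
```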
Statistical analysis

Descriptive statistics were calculated to characterize the dosimetric and volumetric changes of targets and OARs. Comparisons between the planned dose on the initial CT and the recalculated doses based on weekly CBCTs were analyzed using one-way ANOVA. When an overall significant difference was observed, the post hoc Tukey test was used to determine which pairwise comparisons differed. All statistical analyses were conducted with SPSS 17.0 software (SPSS Inc., Chicago, IL). Differences were considered statistically significant when p < 0.05.

Results

Figure 2 shows a typical planning CT with manual contours and the weekly CBCTs with deformed contours (Figure 2: planning CT with manual contours, CBCTs with deformed contours, and a typical fusion image). Detailed average volume changes for targets and OARs are listed in Table 2. The average volume of the CTV on the first CBCT was smaller than that in the planning CT; however, no statistically significant volume changes were observed during the treatment course. There was also no general trend or significant volume change for the GTV and brainstem. The average volumes of the parotids on the first CBCT (CBCT 1), acquired before radiotherapy, were also close to those in the planning CT, but the average parotid volumes decreased continuously during the treatment course. The average volumes of the right parotid and left parotid measured from the fifth CBCT (CBCT 5) were about 4.4 and 4.5 cm3 less than those measured from CBCT 1, respectively. Individual volume changes for both parotids are presented in Figure 3. Dosimetric differences resulting from volume changes and geometric errors are summarized in Table 3. There were no significant dose differences between the average planned dose and the recalculated delivered dose for either GTV or CTV. There were also no significant differences in the average maximum doses of the brainstem and spinal cord between planned and delivered doses. Doses delivered to the parotids demonstrated some significant differences. Detailed pairwise-comparison p values between the planned dose and the recalculated CBCT doses for the parotids are presented in Table 4. As presented in Tables 3 and 4, the D50 of the right parotid increased significantly from CBCT 3 at the 10th fraction, being 7.1 cGy per fraction higher (p = 0.02) than the planned dose. The mean dose of the right parotid was 10.5 cGy per fraction higher (p = 0.004) than the planned dose from CBCT 4 at the 15th fraction. The V26 and V32 of the right parotid from CBCT 4 increased by 7.5% (p = 0.002) and 7.4% (p = 0.01) compared to the planned values, respectively. The D50 of the left parotid was 8.8 cGy per fraction higher (p = 0.03) than the planned dose from CBCT 4. The V26 of the left parotid from CBCT 4 increased by 8.8% (p = 0.002) compared to the planned value.

Discussion

Anatomic and dosimetric variations of NPC in radiotherapy have long been concerns. In this study, relying on CBCT-based dose calculation, weekly CBCTs were applied directly to study the volumetric changes and resulting dosimetric effects in 10 consecutive NPC patients who underwent VMAT treatment. Due to the limited field of view, the CBCT may not span the complete longitudinal dimension of the target volume for some NPC patients. The calculation grids in the initial planning CTs of two patients were adjusted and shortened in the longitudinal direction to match the target volumes in the initial CTs and in the CBCTs. The average volumes of the CTV and spinal cord on the first CBCT were smaller than those in the planning CT. However, except for the parotids, no significant volume changes of targets and OARs were observed based on CBCT 1 in this study. The volume changes of the parotids were unique to each individual, as shown in Figure 3.
According to Table 2, the average weekly shrinkage of the right and left parotids was 4.4% and 4.7%, respectively. This is close to the reported gland shrinkage of 4.9%/wk in the study of Robar et al., in which weekly CT was applied to study the spatial variability of OARs and the resulting dosimetric effects during IMRT for 15 head and neck patients [12]. Currently, the design of onboard CBCT is far from optimal, and its quality is adversely influenced by many factors, such as scatter, beam hardening and intra-scan organ motion. The question of whether CBCT images can be used directly for radiation dose calculation has been raised and investigated. Based on reliable CBCT HU-to-density calibration curves, studies have demonstrated the reliability and accuracy of CBCT-based dose calculation [19,20]. Our previous study also demonstrated that the ROI mapping method is an effective and simple method for CBCT-based dose calculation [15]. Therefore, we applied the same method in this study to investigate the dosimetric effects during VMAT based directly on weekly CBCT. There were no significant differences between the planned and delivered doses for GTV and CTV. This is consistent with the study of Zhang et al., in which a planning CT and weekly repeat CTs were scanned to study the actual dose variability of targets and OARs for 11 NPC patients during IMRT [21]. The average maximum doses, represented by D1, for the brainstem and spinal cord were not significantly different between planned and delivered doses. However, patient 1 demonstrated 16.8% and 26.0% dose increases for the brainstem and spinal cord, respectively, on CBCT 5. This indicates a random dosimetric variability for the brainstem and spinal cord, similar to the results reported in the study of Robar et al. [12]. The increased dose on CBCT 5 could be caused by dramatic anatomic changes resulting from a sharp volume shrinkage of the parotids, as shown in Figure 3. Both parotids shifted towards a greater delivered dose during the VMAT treatment, which is consistent with the studies based on weekly CT during IMRT treatment [12,21]. Compared to the planned dose, the delivered D50, Dmean, V26 and V32 of the parotids increased significantly after CBCT 4, which was obtained at the 15th fraction at doses of 30 Gy and 37.5 Gy for CTV and GTV, respectively. This implies that replanning after 30 Gy will benefit parotid protection for NPC patients during VMAT treatment. However, the time trend of the dosimetric changes of the parotids was less obvious compared to the time trend of their volume changes, as shown in Table 3. The small dosimetric fluctuations among CBCTs could be caused by the errors of CBCT-based dose calculation [15]. Another explanation could be the limited number of patients included in this study. Additional work on a larger patient population is warranted to decide the proper adaptive replanning time for NPC patients. CBCT and megavoltage CT (MVCT) [4] have been widely employed during radiotherapy for geometric error corrections. Direct dose calculation based on CBCT and MVCT will certainly provide a more convenient and straightforward way than weekly repeated CT images for adaptive replanning [12,13]. However, due to the intrinsic limitations of CBCT, extensive work on the reliability and accuracy of CBCT-based dose calculation is also warranted to evaluate whether our findings are accurate enough to actually translate into guidelines.
Conclusion

CBCT-based dose calculation was applied directly to study the anatomic changes and resulting dosimetric variations for NPC patients undergoing VMAT treatment. Continuous volume changes in the parotids were observed with weekly CBCT. Significant dosimetric variations in the parotids were present by the 15th fraction (CBCT 4). No significant volumetric or dosimetric variations were observed for the other OARs and targets. Replanning after 30 Gy may be useful in VMAT for NPC.
RG flows in 6D N=(1,0) SCFT from SO(4) half-maximal 7D gauged supergravity

We study $N=2$ seven-dimensional gauged supergravity coupled to three vector multiplets with $SO(4)$ gauge group. The resulting gauged supergravity contains 10 scalars consisting of the dilaton and 9 vector multiplet scalars parametrized by the $SO(3,3)/SO(3)\times SO(3)$ coset manifold. The maximally supersymmetric $AdS_7$ vacuum with unbroken $SO(4)$ symmetry is identified with a $(1,0)$ SCFT in six dimensions. We find one new supersymmetric $AdS_7$ critical point preserving $SO(3)_{\textrm{diag}}\subset SO(3)\times SO(3)\sim SO(4)$ and study a holographic RG flow interpolating between the $SO(4)$ and the new $SO(3)$ supersymmetric critical points. The RG flow is driven by a vacuum expectation value of a dimension-four operator and describes a deformation of the UV $(1,0)$ SCFT to another supersymmetric fixed point in the IR. In addition, a number of non-supersymmetric critical points are identified, and some of them are stable with all scalar masses above the BF bound. RG flows to non-conformal $N=(1,0)$ super Yang-Mills with $SO(2)\times SO(2)$ and $SO(2)$ symmetries are also investigated. Some of these flows have physically acceptable IR singularities since the scalar potential is bounded above. These provide physical RG flows from the $(1,0)$ SCFT to non-conformal field theories in six dimensions.

Introduction

The AdS/CFT correspondence has attracted a lot of attention during the past twenty years. The original proposal in [1] discussed many examples in various dimensions. These examples included the duality between M-theory on $AdS_7\times S^4$ and the $(2,0)$ superconformal field theory (SCFT) in six dimensions. The $AdS_7\times S^4$ geometry arises from the near-horizon limit of M5-branes. In terms of $N=4$ seven-dimensional gauged supergravity with $SO(5)$ gauge group, the $AdS_7$ geometry corresponds to the maximally supersymmetric vacuum of the gauged supergravity; see for example [2]. In this paper, we will explore the $AdS_7$/CFT$_6$ correspondence with sixteen supercharges. The SCFT dual to the $AdS_7$ background in this case would be a $(1,0)$ six-dimensional SCFT. Six-dimensional gauge theories with $N=(1,0)$ supersymmetry are interesting in many respects. In [3], it has been shown that these theories admit non-trivial RG fixed points. Examples of these field theories also arise in string theory [4]; see also the review in [5]. After the AdS/CFT correspondence, a supergravity dual of a $(1,0)$ field theory with $E_8$ global symmetry was proposed in [6]. The dual gravity background has been identified with orbifolds of the $AdS_7\times S^4$ geometry in M-theory. The operator spectrum of the $(1,0)$ six-dimensional SCFT has been matched with the Kaluza-Klein spectrum in [7,8]. As in lower dimensions, it is more convenient to study the $AdS_{d+1}$/CFT$_d$ correspondence in the framework of $(d+1)$-dimensional gauged supergravity. A consistent reduction ansatz can eventually be used to uplift the lower-dimensional results to string/M-theory in ten or eleven dimensions. A suitable framework for the holographic study of the above $(1,0)$ field theories is the half-maximal gauged supergravity in seven dimensions coupled to $n$ vector multiplets. The supergravity theory has $N=2$, or sixteen, supercharges, in exact agreement with the number of supercharges of the six-dimensional $(1,0)$ superconformal symmetry. This was proposed a long time ago in [9].
With the pure gauged supergravity and the critical points found in [10] and [11], holographic RG flows to a non-supersymmetric IR fixed point and to a non-conformal $(1,0)$ gauge theory were studied in [12] and [13]. Pure $N=2$ gauged supergravity in seven dimensions admits only two $AdS_7$ vacua, one being maximally supersymmetric and the other being non-supersymmetric but stable. To obtain more $AdS_7$ critical points, a matter-coupled supergravity theory is needed. This was constructed in [14], but without the topological mass term for the 3-form field, which is dual to the 2-form field in the supergravity multiplet. Without this term, the scalar potential of the matter-coupled gauged supergravity does not admit any critical point, only a domain wall, as can be verified from the scalar potential explicitly given in [14]. Although it was mistakenly claimed in [15] that the topological mass term is not possible, the theory indeed admits this term, as shown in [16], in which the full Lagrangian and supersymmetry transformations of this massive gauged supergravity were given. This provides the starting point for the present work. In this paper, we are interested in the gauged supergravity with $SO(4)$ gauge group. This requires three vector multiplets, since six gauge fields are needed in order to implement the $SO(4)$ gauging. The theory can be obtained from a truncation of the maximal $N=4$ gauged supergravity [17]. In addition to the dilaton, there are nine extra scalars from the vector multiplets parametrized by the $SO(3,3)/SO(3)\times SO(3)\sim SL(4,\mathbb{R})/SO(4)$ coset manifold. We will explore the scalar potential of this theory in the presence of the topological mass term and identify some of its critical points. These critical points will correspond to new IR fixed points of the $(1,0)$ SCFT identified with the maximally supersymmetric critical point with $SO(4)$ symmetry. We will also study RG flows between these critical points as well as RG flows to non-conformal field theories. The paper is organized as follows. We briefly review the matter-coupled gauged supergravity in seven dimensions and give relevant formulae which will be used throughout the paper in section 2. Some critical points of the seven-dimensional gauged supergravity with $SO(4)$ gauge group are explored in section 3. A number of supersymmetric and non-supersymmetric critical points and the corresponding scalar masses are also given in this section. In section 4, we study supersymmetric deformations of the UV $N=(1,0)$ SCFT to a new superconformal fixed point in the IR and to non-conformal SYM in six dimensions. Both types of solutions can be obtained analytically. The paper closes with some conclusions and comments on the results in section 5.

N = 2, SO(4) gauged supergravity in seven dimensions

We begin with a description of $N=2$ gauged supergravity coupled to $n$ vector multiplets. All notations are the same as those of [16]. The gravity multiplet in seven-dimensional $N=2$ supersymmetry contains the field content $(e^m_\mu, \psi^A_\mu, A^i_\mu, \chi^A, B_{\mu\nu}, \sigma)$, and a vector multiplet has the field content $(A_\mu, \lambda^A, \phi^i)$. Indices $A,B$ label the doublet of the $USp(2)_R\sim SU(2)_R$ R-symmetry. Curved and flat space-time indices are denoted by $\mu,\nu,\dots$ and $m,n,\dots$, respectively. $B_{\mu\nu}$ and $\sigma$ are the two-form and the dilaton fields. For supergravity coupled to $n$ vector multiplets, there are $n$ copies of $(A_\mu, \lambda^A, \phi^i)^r$ labeled by an index $r=1,\dots,n$, and indices $i,j=1,2,3$ label triplets of $SU(2)_R$.
The 3n scalars $\phi^{ir}$ parametrize the SO(3, n)/SO(3) × SO(n) coset manifold. The corresponding coset representative will be denoted by $L = (L_I{}^i, L_I{}^r)$, with $I = 1, \ldots, n+3$ (2.2). The inverse of L is given by $L^{-1} = (L^I{}_i, L^I{}_r)$, where $L^I{}_i = \eta^{IJ} L_{Ji}$ and $L^I{}_r = \eta^{IJ} L_{Jr}$. Indices i, j and r, s are raised and lowered by $\delta_{ij}$ and $\delta_{rs}$, respectively, while the full SO(3, n) indices I, J are raised and lowered by $\eta_{IJ} = \mathrm{diag}(-,-,-,+,+,\ldots,+)$. There are also a number of relations among the components of L, which we do not reproduce here.

Gaugings are implemented by promoting a global symmetry $\tilde{G} \subset SO(3, n)$ to a gauge symmetry. Consistency of the gauging imposes a condition on the structure of $\tilde{G}$, namely that $\eta_{IJ}$ is invariant under the adjoint action of $\tilde{G}$. General semisimple gauge groups take the form $\tilde{G} \sim G_0 \times H \subset SO(3, n)$, with $G_0$ being one of the six possibilities SO(3), SO(3, 1), SL(3, R), SO(2, 1), SO(2, 2) and SO(2, 2) × SO(2, 1), and H being compact with $\dim H \le n + 3 - \dim G_0$. In this paper, we are interested in the SO(4) gauged supergravity corresponding to $G_0 = SO(3)$ and $H = SO(3)$.

To obtain AdS_7 vacua, we need to consider the gauged supergravity with a topological mass term for a 3-form potential; the 3-form field is dual to the 2-form $B_{\mu\nu}$. With all modifications to the Lagrangian and supersymmetry transformations as given in [16], the bosonic Lagrangian involving only the scalars and the metric, together with the resulting scalar potential, can be written down explicitly; the constant h characterizes the topological mass term. We also need the fermionic supersymmetry transformations with all fields but the scalars vanishing. In these expressions, SU(2)_R indices on spinors are suppressed, and $\sigma^i$ are the usual Pauli matrices. In the remainder of this section, we focus on the n = 3 case with $\tilde{G} = SO(4) \sim SO(3)_R \times SO(3)$. In this case, the structure constants for the gauge group are given by $f_{ijk} = g_1\,\epsilon_{ijk}$ and $f_{rst} = g_2\,\epsilon_{rst}$, where $g_1$ and $g_2$ are the coupling constants of SO(3)_R and SO(3), respectively.

Critical points of N = 2, SO(4) seven-dimensional gauged supergravity

In this section, we will compute the scalar potential of the SO(4) gauged supergravity and study some of its critical points. Although complicated, it is possible to compute the scalar potential for all ten scalars. However, the long expression would make any analysis more difficult. Consequently, we will proceed by studying the scalar potential on a subset of the ten scalars, as originally proposed in [18]. In this approach, the scalar potential is computed on a scalar submanifold which is invariant under some subgroup $H_0$ of the full gauge symmetry SO(4). This submanifold consists of all scalars which are singlets under the unbroken subgroup $H_0$. All critical points found on this submanifold are automatically critical points of the potential on the full scalar manifold. This can be seen by expanding the full potential to first order in scalar fluctuations, which in turn contain both $H_0$ singlets and $H_0$ non-singlets. By a simple group theory argument, the non-singlet fluctuations cannot produce $H_0$ singlets at first order; their coefficients, the variations of the potential with respect to the non-singlet scalars, must accordingly vanish. This approach proves more convenient and more efficient. However, the truncation is consistent only when all relevant $H_0$ singlet scalars are included on the chosen submanifold; with only some of these singlets, consistency is not guaranteed.
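The invariant-submanifold argument above can be illustrated with a small numerical toy model. The following Python sketch is our own construction, not from the paper: it restricts a symmetric potential to the subspace of singlets under a subgroup, finds a critical point there, and verifies that the full gradient vanishes.

```python
import numpy as np
from scipy.optimize import minimize

# Toy potential on R^4, invariant under SO(2) rotations of (x3, x4):
# V depends on x3, x4 only through s = x3^2 + x4^2, so x3 = x4 = 0 is an
# invariant ("singlet") subspace, playing the role of the H0-singlet scalars.
def V(x):
    x1, x2, x3, x4 = x
    s = x3**2 + x4**2
    return (x1**2 - 1)**2 + x1 * x2 + x2**2 + s + 0.3 * s * x1**2

# Minimize the restricted potential V(x1, x2, 0, 0) over the singlet subspace.
res = minimize(lambda y: V([y[0], y[1], 0.0, 0.0]), x0=[0.8, -0.3])
x_star = np.array([res.x[0], res.x[1], 0.0, 0.0])

# Verify that the FULL gradient vanishes there: the derivatives with respect
# to the non-singlet directions x3, x4 vanish automatically by symmetry.
eps = 1e-6
grad = np.array([(V(x_star + eps * e) - V(x_star - eps * e)) / (2 * eps)
                 for e in np.eye(4)])
print("critical point:", x_star, "full gradient:", grad)  # grad ~ 0 up to solver tolerance
```

The x3, x4 components of the gradient vanish identically by the symmetry, mirroring the group-theory argument in the text.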
Critical points on SO(3)_diag scalars

We begin with the simplest case, namely the potential on the SO(3)_diag singlet scalars, consisting of the dilaton σ and a single vector multiplet scalar φ. Notice that there is no critical point when h = 0, as mentioned before; in this case the SO(4) supergravity admits a half-supersymmetric domain wall as a vacuum solution. For φ = 0, the above potential reduces to the potential of pure N = 2 gauged supergravity with SO(3) gauge group studied in [10] and [11]. There are two critical points in the pure gauged supergravity: one preserves all of the supersymmetry, while the other completely breaks it. It can readily be verified, by using the supersymmetry transformations of $\psi_\mu$, χ and $\lambda^r$, that the first one is supersymmetric. We can bring the supersymmetric point to σ = 0 by choosing $g_1 = -16h$; the two critical points are then characterized by the value $V_0$ of the cosmological constant. Although non-supersymmetric, the second critical point had been shown to be stable in [11]. In the presence of matter scalars, however, this is no longer the case, as can be seen from the scalar masses given below. The AdS_7 radius L is determined in our conventions by the cosmological constant $V_0$. The BF bound in seven dimensions is $m^2 L^2 \ge -9$. Therefore, the non-supersymmetric critical point of pure gauged supergravity is unstable in the matter-coupled theory. This is very similar to the situation in six-dimensional N = (1, 1) gauged supergravity pointed out in [19]. Scalar masses at the supersymmetric point are given in the table below. In the dual (1, 0) SCFT, these scalars correspond to dimension-four operators via the relation $m^2 L^2 = \Delta(\Delta - 6)$.

There is one non-trivial supersymmetric point, preserving SO(3)_diag. At this point, the scalar masses are computed as follows. In the corresponding table, we have decomposed all ten scalars into representations of the SO(3)_diag residual symmetry. This can be done by the following decomposition: under SO(3)_R × SO(3), the nine scalars transform as (3, 3); they then transform as 3 × 3 = 1 + 3 + 5 under SO(3)_diag. Notice that the 3 scalars are massless, corresponding to the Goldstone bosons of the symmetry breaking SO(3) × SO(3) → SO(3)_diag.

There is also one non-supersymmetric critical point, which is stable, as can be seen from its mass spectrum. For $g_2 = g_1$, we find another non-supersymmetric critical point, which is however unstable: the mass of the 5 scalars violates the BF bound.

Critical points on scalar manifolds with smaller residual symmetry

To find other critical points, we can consider smaller residual symmetries. Breaking SO(3)_diag to SO(2)_diag, we find that there are two singlets from SO(3, 3)/SO(3)×SO(3). With the corresponding coset representative, and setting $g_1 = -16h$, we obtain the scalar potential on this submanifold. This potential does not admit any supersymmetric critical points unless $\phi_1 = \phi_2$, which is the previously found SO(3)_diag point. When $\phi_1 = 0$, the above scalar submanifold preserves SO(2) × SO(2) symmetry, but there is no critical point except for $\phi_2 = 0$. We are not able to obtain any new critical points from this potential. We now move to scalar fields invariant under SO(2)_R ⊂ SO(3)_R. There are three singlets, corresponding to $Y_{11}$, $Y_{12}$ and $Y_{13}$. Denoting the associated scalars by $\phi_i$, i = 1, 2, 3, we find a simple potential which does not admit any non-trivial critical points.
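As a quick numerical aside before turning to the flow solutions: the relation $m^2 L^2 = \Delta(\Delta - 6)$ and the BF bound quoted above are straightforward to encode. The following minimal Python sketch (the function names are ours) maps a scalar mass to the dual operator dimension and checks stability.

```python
import math

def dimension_from_mass(m2L2, d=6):
    """Dual operator dimension from m^2 L^2 = Delta(Delta - d),
    taking the standard branch Delta_+ = d/2 + sqrt(d^2/4 + m^2 L^2)."""
    disc = d * d / 4.0 + m2L2
    if disc < 0:
        raise ValueError("below the BF bound m^2 L^2 >= -d^2/4")
    return d / 2.0 + math.sqrt(disc)

def is_bf_stable(m2L2, d=6):
    # BF bound in AdS_{d+1}; for AdS_7 (d = 6) this is m^2 L^2 >= -9
    return m2L2 >= -d * d / 4.0

print(dimension_from_mass(-8.0))  # 4.0: a dimension-four operator, as drives the flow
print(dimension_from_mass(40.0))  # 10.0: the IR dimension found in section 4
print(is_bf_stable(-9.5))         # False: such a mass violates the bound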
Supersymmetric RG flows

We now consider domain wall solutions interpolating between the critical points identified in the previous section. These solutions generally have an interpretation in terms of RG flows in the dual field theories in six dimensions. We are mainly interested in supersymmetric RG flows, which can be obtained by solving the BPS equations coming from the supersymmetry variations of the fermionic fields $\psi_\mu$, χ and $\lambda^r$. A stable non-supersymmetric AdS_7 critical point also admits a well-defined dual CFT, but in most cases finding the corresponding flow solutions requires a numerical analysis. Accordingly, we will not consider non-supersymmetric flows in this paper.

An RG flow to a supersymmetric SO(3) fixed point

There is one supersymmetric AdS_7 critical point with SO(3) symmetry. In this subsection, we find the domain wall solution interpolating between this point and the trivial critical point at σ = φ = 0. Using the standard domain wall metric
$ds^2 = e^{2A(r)}\, dx^2_{1,5} + dr^2$, (4.1)
where $dx^2_{1,5}$ is the flat metric on six-dimensional space-time, and the projection condition $\gamma_r \epsilon = \epsilon$, we can derive the corresponding BPS equations. The solution is easily found; in particular,
$A = \frac{1}{8}\left[2\phi - \sigma - 2\ln\left(2 - 2e^{4\phi}\right) + 2\ln\left(g_1 + g_2 + (g_1 - g_2)e^{2\phi}\right)\right]$. (4.10)
Near the UV point, σ ∼ 0 and φ ∼ 0 with $g_1 = -16h$, and the asymptotic form of the solution, with $\tilde r \sim r$ near σ ∼ 0, shows that the flow is driven by vacuum expectation values (vevs) of relevant operators of dimension ∆ = 4. In the IR, the behavior (4.12) of the solution shows that the operator dual to φ acquires an anomalous dimension and has dimension 10 in the IR. This is consistent with the value of $m^2 L^2$ given previously.

RG flows to non-conformal field theories

A supersymmetric flow to a non-conformal field theory in pure gauged supergravity has been studied in [13]. We study similar solutions in the matter-coupled gauged supergravity; these solutions generalize the one given in [13].

Flows to SO(2) × SO(2), 6D Super Yang-Mills

We first consider the SO(2)_R singlet scalars. With $\gamma_r \epsilon = \epsilon$, the BPS equations for these three singlets, denoted by $\phi_i$, i = 1, 2, 3, together with σ and A, can be written down directly. These equations clearly admit only one critical point, at $\phi_i = 0$. For $\phi_1 = \phi_2 = 0$, the solution preserves SO(2)_R × SO(2) symmetry; this is easily seen to be a consistent truncation. The solution to these equations can be given in closed form, where, as in the previous case, $\tilde r$ is related to r via $d\tilde r/dr = e^{-\sigma/2}$. Near the UV point, the asymptotic behavior of $\phi_3$ and σ can be read off from the solution. In the IR, we consider $\phi_3 > 0$ and $\phi_3 < 0$ separately. For $\phi_3 > 0$, there is a singularity where $\phi_3 \to \infty$ as $16h\tilde r \sim C_1$. With $C_2 = 0$, we find the explicit IR form of the solution. As $16h\tilde r \sim C_1$, the relation between r and $\tilde r$ is found to be $16hr - C = \frac{5}{6}\,(16h\tilde r - C_1)^{6/5}$, with C being another integration constant. As expected from the general DW/QFT correspondence [20,21,22], the metric in the IR takes the form of a domain wall, where the multiplicative constant has been absorbed into a rescaling of the $x^\mu$ coordinates. Flows to non-conformal field theories usually encounter singularities in the IR; as can be seen from the above metric, there is a singularity at 16hr ∼ C. A criterion for determining whether a given singularity is physical has been given in [23]. The condition rules out naked time-like singularities, which are clearly unphysical. According to the criterion of [23], the IR singularity in the solution is acceptable if the scalar potential is bounded above. One way to understand this criterion has been given in [24] for four-dimensional gauge theories.
We will follow this argument and briefly discuss the meaning of the criterion of [23] in the context of six-dimensional field theories. Near the IR singularity, the scalars $\phi_i$, assumed to be canonically normalized, and the metric warp factor A behave logarithmically in $(r - r_0)$, where we have chosen the integration constants so that the singularity occurs at $r = r_0$. In the IR, the bulk action for these scalars is dominated by the kinetic terms, since the potential is irrelevant there: the potential diverges only logarithmically, while the kinetic terms go like $(r - r_0)^{-2}$. According to the AdS/CFT correspondence, the one-point function, i.e. the vacuum expectation value, of the operators $O_i$ dual to $\phi_i$ can then be evaluated, and we find that $\langle O_i \rangle$ diverges for κ < 1/6. We therefore expect solutions with κ < 1/6 to be excluded. In four dimensions, it has been shown that this is related to the scalar potential becoming unbounded above. In the present case, we will see in the solutions given below that this is indeed so, namely that all solutions with κ < 1/6 have V → ∞.

It can be checked, using the scalar potential given in (3.10), that as $16h\tilde r \sim C_1$ the solution in (4.20) gives V → −∞. The solution is therefore physical and describes a supersymmetric RG flow from the (1, 0) SCFT to six-dimensional SYM with SO(2)×SO(2) symmetry. For $C_2 \neq 0$, the solution takes a modified form; this is also physical, since it leads to V → −∞. For $\phi_3 < 0$ and $16h\tilde r \sim C_1$, the above solutions give, for any value of $C_2$, IR asymptotics with V → −∞; this solution is then physically acceptable. The solution with all $\phi_i \neq 0$ turns out to be very difficult to find, although the above BPS equations suggest that $\phi_1 = \phi_2 = \phi_3$; most probably a numerical analysis would be needed, so we do not investigate this case further.

Since there are no further interesting truncations, we now consider a solution to the above equations with $\phi_1, \phi_2 \neq 0$. Finding the solution for a general value of $g_2$ turns out to be difficult. However, for $g_2 = g_1 = -16h$, we can find an analytic solution. The first step is to combine (4.27) and (4.28) into a single equation, which can be solved in closed form. Changing to a new radial coordinate $\tilde r$ via $d\tilde r/dr = e^{-\frac{\sigma}{2} - \phi_2}$, we obtain the solution to equation (4.27). To find the solution for σ, we change to another new coordinate R via $dR/dr = -e^{-\frac{\sigma}{2} - \phi_2 - 2\phi_1}$. Equations (4.27), (4.28) and (4.29) can then be combined into an equation whose solution, after using the previous results, is given in (4.37). As in the previous case, we consider the two possibilities $\phi_1 > 0$ and $\phi_1 < 0$ separately. For $\phi_1 > 0$, we can find the relation between R and $\tilde r$ by using $dR/d\tilde r = -e^{-2\phi_1(\tilde r)}$. This results in
$8hR = 8h\tilde r - \ln\left[2\left(e^{C_1} + e^{16h\tilde r}\right)\right]$. (4.38)
In terms of $\tilde r$, the σ and A solutions become (4.39) and (4.40). Near the IR singularity at $16h\tilde r \sim C_1$, we have $\phi_2 \sim -\phi_1$ for all values of $C_2$. In the IR, the solution behaves differently for $C_3 = 16e^{C_1}$ and $C_3 \neq 16e^{C_1}$; this is because the logarithmic terms in (4.39) and (4.40) diverge, in this limit, when $C_3 \neq 16e^{C_1}$. For $C_3 \neq 16e^{C_1}$, the IR asymptotics give rise to V → ∞, which is physically unacceptable. If instead $C_3 = 16e^{C_1}$, the resulting IR solution gives V → −∞, so this singularity is acceptable. We see that flows with $\phi_1 > 0$ are physical provided that $C_3 = 16e^{C_1}$. For $\phi_1 < 0$, the solution
$\phi_1 = -\frac{1}{2}\ln\frac{1 + e^{C_1 - 16h\tilde r}}{1 - e^{C_1 - 16h\tilde r}}$
gives
$8hR = 8h\tilde r - \ln\left[2\left(e^{C_1} - e^{16h\tilde r}\right)\right]$.
Accordingly, the solutions for σ and A become
$\sigma = -\frac{2}{5}\left[2\phi_1 + \phi_2 + \ln\left(1 - \frac{C_3\, e^{16h\tilde r}}{4\left(e^{C_1} - e^{16h\tilde r}\right)^2}\right)\right]$, (4.44)
together with the corresponding expression (4.45) for A. In this case, the logarithmic term in (4.45) diverges as $16h\tilde r \sim C_1$ when $C_3 \neq 0$, while the logarithmic term in (4.44) vanishes; when $C_3 = 0$, the situation is reversed. Unlike the $\phi_1 > 0$ case, the value of $C_2$ is important, since there are two possibilities, $\phi_1 = \mp\phi_2$, depending on whether $C_2 = \frac{1}{8}$ or $C_2 \neq \frac{1}{8}$. We begin with the first case, $C_2 = \frac{1}{8}$ and $C_3 \neq 0$. The IR behavior of the solution leads to the metric
$ds^2 = (16hr - C)^2\, dx^2_{1,5} + dr^2$. (4.47)
When $C_3 = 0$, the solution in the IR instead behaves as $A \sim \frac{3}{10}\ln(16h\tilde r - C_1)$, with a metric of the same domain-wall form. Both cases lead to V → −∞. Therefore, the solution with $\phi_1 < 0$ and $C_2 = \frac{1}{8}$ is physical for all values of $C_3$. For $C_2 \neq \frac{1}{8}$, we find the IR behavior of the solution both for $C_3 \neq 0$ and for $C_3 = 0$, and both lead to V → ∞. We then conclude that flows with $\phi_1 < 0$ and $C_2 \neq \frac{1}{8}$ are not physical for any $C_3$. It would be very interesting to have interpretations of these results in terms of six-dimensional gauge theories.

Conclusions

We have studied some critical points of N = 2, SO(4) gauged supergravity in seven dimensions and have found one new supersymmetric AdS_7 critical point with SO(3) symmetry. Recently, many new AdS_7 × M_3 solutions have been identified in massive type IIA theory [25]; it would be interesting to see whether the new supersymmetric AdS_7 vacuum obtained here could be related to the classification in [25]. We have also found a number of non-supersymmetric AdS_7 critical points and checked their stability by computing all of the scalar masses. We have found that, although the non-supersymmetric critical point originally found in pure gauged supergravity had been shown to be stable, it is unstable in the presence of the vector multiplet scalars. On the other hand, new stable non-supersymmetric points are discovered here and should correspond to new non-trivial IR fixed points of the (1, 0) SCFT. An analytic RG flow solution interpolating between the SO(3) supersymmetric critical point and the trivial point with SO(4) symmetry has also been given. To the best of the author's knowledge, this is the first example of a holographic RG flow between two supersymmetric fixed points of the (1, 0) field theory in six dimensions. We have further studied supersymmetric flows to non-conformal field theories and identified the physical flows. These provide more general flow solutions than those considered in [12] and [13] and could be useful in holographic studies of the dynamics of six-dimensional gauge theories, similar to the analysis of [26]. Finding a field theory interpretation of the gravity solutions obtained in this paper would also be interesting.
Charmed baryon spectrum from lattice QCD near the physical point

We calculate the low-lying spectrum of charmed baryons in lattice QCD on the $32^3\times64$, $N_f=2+1$ PACS-CS gauge configurations at the almost physical pion mass of $\sim 156$ MeV/c$^2$. By employing a set of interpolating operators with different Dirac structures and quark-field smearings for the variational analysis, we extract the ground and first few excited states of the spin-$1/2$ and spin-$3/2$, singly-, doubly-, and triply-charmed baryons. Additionally, we study the $\Xi_c$-$\Xi_c^\prime$ mixing and the operator dependence of the excited states in a variational approach. We identify several states that lie close to the experimentally observed excited states of the $\Sigma_c$, $\Xi_c$ and $\Omega_c$ baryons, including some of the $\Xi_c$ states recently reported by LHCb. Our results for the doubly- and triply-charmed baryons provide predictions for future experiments.

I. INTRODUCTION

Recent experimental results from the LHCb Collaboration on the Ω_c, Ξ_c and the doubly-charmed Ξ_cc states have put further emphasis on the relevance of hadron spectroscopy. There now exist 31 observed charmed baryons, 25 of which are classified with at least three stars by the Particle Data Group (PDG) [1]. Charmed baryons provide a unique laboratory to study the strong interaction and confinement dynamics due to their composition of light and charm quarks. Studying the excited states of the charmed baryons has the potential to reveal their internal dynamics and the nature of their excitation mechanisms. Experimentally, the singly-charmed baryon sector is the most accessible. Within this sector, the Λ_c channel is the best established: in addition to the ground state, there are four excitations with total spin up to 5/2, although the assigned quantum numbers still await confirmation. Of the three Σ_c states listed by the PDG, two are the lowest J^P = 1/2^+ and 3/2^+ states, and the Σ_c(2800) is their only observed excitation. This state has been detected in the Λ_c π channel by the Belle [2] and BABAR [3] Collaborations; its quantum numbers have not been measured. In contrast, the Ξ_c sector is quite rich, since it can have flavor-symmetric and flavor-antisymmetric wave functions. There are up to seven Ξ_c excitations observed by the Belle [4-9], BABAR [10,11] and, very recently, LHCb [12] Collaborations in the energy range of 2920 to 3120 MeV/c^2. The PDG considers the existence of three of them to be very likely or certain, while the confidence for the other two is lower; the LHCb states are not yet included in the review. These excited states appear in the invariant-mass distributions of several B_c + K or B_c + π channels, where B_c denotes a singly-charmed baryon whose strangeness depends on the channel, and in the ΛD channel, where the charm quark is confined in the meson system. This unique behavior makes the Ξ_c system a good laboratory to study the internal excitation dynamics of the charmed baryons and the diquark correlations. The quantum numbers of these states remain undetermined. The LHCb Collaboration has also reported precise measurements of the masses and decay widths of five new Ω_c^0 states [13], observed in the Ξ_c K channel in the energy range from 3000 to 3120 MeV/c^2. Their spin-parity quantum numbers also remain undetermined, and several works in the literature investigate the nature of these states, assigning conflicting spin-parity quantum numbers.
It is a triumph of the experiments to identify this many states in such narrow energy windows. The lowest-lying states of the singly-charmed baryons are already established by experimental studies, and lattice QCD results agree well with those observations. The Ξ_cc is, for the time being, the only observed doubly-charmed baryon. It was first observed by the SELEX Collaboration [14,15], but its result was not confirmed by other experiments; later, the LHCb Collaboration reported the same particle with a different mass [16]. Lattice QCD predictions for the mass of the Ξ_cc lie above the SELEX value but agree very well with the LHCb value.

The lowest-lying charmed baryon states have been studied by various lattice groups as well. Early investigations utilized the quenched approximation [58-62], while recent studies employ up to 2+1+1-flavor dynamical gauge configurations with several lattice spacings, volumes and light-quark masses to estimate the baryon masses at the physical point [63-75]. We summarize the recent studies of several lattice groups in Table I. There is remarkable agreement between the results of the different groups, which utilize different types of quark actions and approaches to the physical point. Most of those studies were motivated by the observation of the Ξ_cc baryon by LHCb, and their focus has thus been on the lowest-lying positive parity baryons. Extracting the excited states, however, is a challenge compared to calculating the ground states. The majority of the attention has been on the light-quark sector, especially on the Roper resonance and the Λ(1405), and only a few groups have studied the excited states of the charmed baryons.

The RQCD Collaboration reported results for the singly- and doubly-charmed baryons, including excited states [69]. They employ several 2+1-flavor gauge ensembles with a fixed lattice spacing but two different volumes and varying light-quark masses, the lightest corresponding to a pion mass of m_π ∼ 260 MeV/c^2. All the sea and valence quarks (including the charm quark) are treated via a non-perturbatively improved, stout-smeared Clover action. The bare charm-quark mass is tuned to reproduce the 1S spin-averaged charmonium mass. In addition to spectrum calculations, they also investigate the light-flavor dependence of the singly- and doubly-charmed states. To this end, the operator set they use consists of interpolating fields based on SU(4) symmetry and on heavy quark effective theory (HQET) pictures. In order to access the excited states, they perform a variational analysis over a set of interpolating fields with three different quark-field smearings. Their chiral extrapolations follow a different approach compared to the other groups, since they start from an SU(3) symmetric point for the light and strange quarks and vary the masses while keeping the singlet quark mass fixed in their descent to the physical point. This leads to fits based on Gell-Mann-Okubo relations for the charmed baryons. The lowest-lying extracted states are in good agreement with other lattice determinations and with experimental values where available.

The Hadron Spectrum Collaboration (HSC) extracts the charmed baryon spectrum including positive and negative parity baryons with total spin up to J = 7/2. They use N_f = 2+1 anisotropic lattices generated with a tree-level, tadpole-improved Clover fermion action with a pion mass of m_π = 391 MeV/c^2.
The anisotropic Clover action is used for the charm quark as well, with its mass parameter tuned non-perturbatively so as to reproduce the dispersion relation of the η_c meson. Using a large set of continuum interpolating operators, including nonlocal covariant-derivative operators, subduced to the irreducible representations of the cubic group, they form the basis for the variational correlation-matrix analysis and extract the spectrum of the singly-, doubly- and triply-charmed baryons [71-75]. Although the systematics are left unchecked and the pion mass is unphysical, their pioneering results provide valuable insight into the charmed baryon spectrum.

In this work, we follow a conventional approach by using local operators only. Notable improvements of this study compared to previous works that extract the excited baryon spectrum are the fully relativistic treatment of the charm quark, which suppresses the O(am_Q) discretization errors, and the use of gauge configurations with almost physical light quarks, which eliminates the chiral-extrapolation systematics. We also perform variational analyses over sets of operators with different Dirac structures and quark smearings, and over their combinations. Preliminary results of this work have been presented in Ref. [76]. This paper is structured as follows: we outline the approach to extract the baryon energies and the formulation of the variational analysis in Section II. Details of our lattice setup, the heavy-quark action that we employ, and the choice of baryon operators are given in Section III. A detailed discussion of the variational analyses and the states we extract is presented in Section IV. Section V holds the summary of our findings.

II. EXTRACTING EXCITED STATES

For a given interpolator, χ_i, the two-point correlation function contains the contributions from all the states that couple to the corresponding quantum numbers,

$C_{ij}(t) = \sum_B \frac{Z_i^B \bar{Z}_j^B}{2E_B}\, e^{-E_B t}$, (1)

where E_B is the energy of the baryon state B and the Zs are the operator-state overlaps. The desired parity is isolated by applying the parity projection operator, $P_\pm = \frac{1}{2}(1 \pm \gamma_4)$, to $C_{ij}(t)$. Using a set of operators that couple to the same quantum numbers, one can utilize a variational approach to extract the tower of states. One forms an N × N correlation-function matrix, where each element, $C_{ij}(t)$, is an individual correlation function as given in Equation (1). Then, by solving the generalized eigenvalue problem [77,78],

$C(t)\,\phi^\alpha = \lambda_\alpha(t, t_0)\, C(t_0)\,\phi^\alpha$, (3)

together with the analogous left-eigenvector equation, one extracts the left and right eigenvectors, $\psi^\alpha$ and $\phi^\alpha$, and uses them to diagonalize the correlation-function matrix to access the energies of the states, $E_\alpha$. One can alternatively utilize the individual eigenvalues, $\lambda_\alpha(t, t_0) \sim e^{-E_\alpha (t - t_0)}\,(1 + O(e^{-\Delta E_\alpha t}))$, of the left and right eigenvalue equations in Equation (3) to extract the energies of the states. Both approaches give complementary results with some caveats [79]; we prefer the method outlined above. Note that a suitable combination of the reference time slice t_0 and the time slice of the eigenvectors, t′, is chosen with respect to the quality and stability of the signal. Additionally, t′ may or may not be chosen equal to t. Once the correlation-function matrix is diagonalized, one can follow the standard techniques and perform an effective-mass analysis for each state α; the corresponding exponential fit form is referred to as Equation (4) below.
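To make the variational step concrete, here is a minimal Python sketch of the GEVP of Equation (3) and the resulting effective energies. It assumes a correlator array C of shape (T, N, N); the function names are ours, not from the paper's code.

```python
import numpy as np
from scipy.linalg import eig

def gevp(C, t, t0):
    """Solve C(t) v = lambda(t, t0) C(t0) v for an N x N correlator matrix.

    C : (T, N, N) array of correlation functions C_ij(t).
    Returns eigenvalues (descending) and the right eigenvectors.
    """
    lam, vecs = eig(C[t], C[t0])
    order = np.argsort(-lam.real)          # largest eigenvalue <-> lowest state
    return lam.real[order], vecs[:, order]

def effective_energy(C, t, t0):
    """aE_alpha(t) = ln[lambda_alpha(t) / lambda_alpha(t+1)], in lattice units."""
    lam_t, _ = gevp(C, t, t0)
    lam_tp1, _ = gevp(C, t + 1, t0)
    return np.log(lam_t / lam_tp1)          # noisy data can make entries nan
```

Note that sorting the eigenvalues independently on each time slice, as above, can misassign states once the signals become noisy; this is precisely why the analysis below fixes the eigenvectors at a reference time slice t′ (see Section IV A).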
A. Quark Actions

We employ the 32^3 × 64, 2+1-flavor gauge configurations generated by the PACS-CS Collaboration [80]. These configurations are generated with the Iwasaki gauge action (β = 1.9) and with the non-perturbatively O(a)-improved Wilson (Clover) action (c_sw = 1.715) for the sea quarks. We perform our simulations on the κ^sea_ud = 0.13781 subset, which has almost physical light quarks corresponding to m_π = 156(9) MeV/c^2, as measured by PACS-CS. The hopping parameter of the strange sea quark is fixed to κ^sea_s = 0.13640, while the lattice spacing is determined to be a = 0.0907(13) fm (a^-1 = 2.176 GeV). We use the Clover action for the valence u/d and s quarks. The hopping parameters of the valence light quarks are set equal to those of the sea quarks, κ^val_u/d = κ^sea_ud. Due to an overestimation of the mass of the Ω^- with κ^val_s = κ^sea_s, however, we re-tune the hopping parameter of the valence strange quark to κ^val_s = 0.13665 in order to match the physical Ω^- mass on these configurations. Details of this tuning are discussed in Ref. [81]. We employ a relativistic heavy-quark action for the charm quark, in which the Ψs are the heavy-quark spinors and the fermion matrix carries the free parameters r_s, ν, c_B and c_E, to be tuned in order to remove the discretization errors appropriately. We adopt the perturbative estimates r_s = 1.1881607, c_B = 1.9849139 and c_E = 1.7819512 [82], and the non-perturbatively tuned value ν = 1.1450511 [66]. We re-tune the charm-quark hopping parameter to κ_Q = 0.10954007 non-perturbatively, so as to reproduce the relativistic dispersion relation of the 1S spin-averaged charmonium state. With these parameters, the masses of the η_c and the J/ψ are m_ηc = 2.984(2) GeV/c^2 and m_J/ψ = 3.099(4) GeV/c^2, and the hyperfine splitting is ∆E_(V-PS) = 116(4) MeV/c^2, in agreement with its experimental value. Further details of our charm-quark tuning can be found in Ref. [81].

B. Baryon operators

The baryon operators that we employ are tabulated in Table II in a shorthand notation, while their explicit forms, including an explicit example of the N-like operator, can be found in Table III. Note that we do not distinguish between u and d quarks, since they are degenerate in our lattice setup. For the spin-1/2 baryons, we form three individual operators by using the Dirac structures listed in Table III. The χ_4-type operator with Dirac structure [Γ_1, Γ_2] = [γ_5γ_4, 1] corresponds to the time component of an operator with [Γ_1, Γ_2] = [γ_5γ_µ, γ_5], which couples to both spin-1/2 and spin-3/2 particles. It has been shown that projecting out the spin-1/2 component of such an operator results in two terms: a linear combination of χ_1 and χ_2, and a term containing the χ_4 operator [83]. Furthermore, the χ_4-type operator is distinct from χ_1 and χ_2 from a chiral-transformation perspective [84], making it a viable choice for the basis set of spin-1/2 operators. Note that we limit ourselves to a single Dirac structure for the spin-3/2 baryons (see Table III). Among the operators discussed in this section, the ones coupling to the Ξ_c and Ξ′_c states deserve special attention. The Ξ_c (Ξ′_c), which belongs to an SU(3) anti-triplet (sextet), is anti-symmetric (symmetric) with respect to the exchange of the s and u/d quarks, and the respective operators must share this property. For the Ξ_c, this can be achieved by both N-like and Λ-like operators, which will both be used in this work. Note that our N-like Ξ_c operator was referred to as "HQET" in Ref. [69].
For the Ξ′_c, we employ a different operator combination with the correct symmetry properties, as shown in Table III. While the Ξ_c and Ξ′_c states decouple in the SU(3) limit, they can in principle mix in our setup due to the breaking of the SU(3) symmetry. This mixing can be studied by computing cross-correlators of the Ξ_c and Ξ′_c operators; the results of such an analysis are discussed in Section IV.

C. Simulation details

The quark fields of the interpolating operators are Gaussian-smeared in a gauge-invariant manner at the source, (x, y, z, t) = (16a, 16a, 16a, 16a), for all the baryons, with three different sets of smearing parameters corresponding to rms radii of ∼ 0.2, 0.4 and 0.7 fm for the quark wave functions. Sink operators are smeared in the same manner. However, we find that the signal deteriorates rapidly with increasing sink-operator smearing. For this reason, we analyze the spin-1/2 baryons with smeared-source, point-sink correlation functions with a fixed source smearing for all the quark fields. The correlation functions depend only mildly on the smearing of the singly-represented quarks, and the plateau regions become independent of the smearing after a certain number of iterations. Therefore, we apply the smearing to the quark fields depending on their flavor and multiplicity. We treat the u-, d- and s-quarks on an equal footing and consider them as light quarks in comparison to the charm quark. (Table III lists the interpolating operators with generic Dirac structures for the spin-1/2 and spin-3/2 baryons; C = γ_2γ_4 is the charge-conjugation operator, and the [Γ_1, Γ_2] choices and quark contents are given in the text and in Table II.) When the interpolating operator is formed by two light quarks and a charm quark, we fix the smearing of the charm quark to 0.7 fm, the widest of the smearings that we have, in order to decouple its effects, and perform the variational analyses over the smearings of the remaining light quarks. The smearing parameters of the individual light quarks are set equal. This holds for all the baryon fields, with the exceptions of the Ω^(*)_cc, in which case the smearing of the strange quark is fixed to 0.7 fm and the smearings of the charm quarks are varied, and the Ω_ccc, for which the treatment is the same as for the light quarks. For the spin-3/2 baryons, we use smeared-source, smeared-sink correlators to form an operator basis from an operator with fixed Dirac structure. A discussion of the operator basis is given in Section IV A 1. Parity is selected by applying the parity projection operator, P_±, to the individual correlation functions. We bin our data with a bin size of 15 measurements to account for the autocorrelations on this ensemble and estimate the statistical errors via a single-elimination jackknife analysis. We performed our computations using a modified version of the Chroma software system [85] on CPU clusters, along with the QUDA library [86,87] for the valence u/d- and s-quark propagator inversions on GPUs; the charm-quark inversions are done on CPUs.
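The error analysis just described, binning followed by a single-elimination jackknife, can be sketched as follows. This is a generic implementation of ours, not the authors' code.

```python
import numpy as np

def jackknife_binned(data, bin_size=15):
    """Binned single-elimination jackknife.

    data : (N_meas, ...) array of per-configuration measurements.
    Measurements are first averaged into bins of `bin_size` to suppress
    autocorrelations; returns the jackknife mean and error.
    """
    n_bins = data.shape[0] // bin_size
    binned = data[:n_bins * bin_size].reshape(
        (n_bins, bin_size) + data.shape[1:]).mean(axis=1)
    total = binned.sum(axis=0)
    samples = (total - binned) / (n_bins - 1)   # leave-one-bin-out averages
    mean = samples.mean(axis=0)
    err = np.sqrt((n_bins - 1) * ((samples - mean) ** 2).mean(axis=0))
    return mean, err
```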
A. Variational analysis

To obtain the individual states from a set of operators, one solves the generalized eigenvalue problem on each time slice, t, against a reference time slice, t_0, as discussed in Section II. To ensure the consistency of this step, it is necessary to check that the solutions are stable with respect to t_0, since it can be chosen freely. Another concern is associating the eigenvalues with the states. Eigenvalues are sorted in increasing order on each time slice. However, due to the faster deterioration of the higher states' signals, their eigenvalues fluctuate heavily as time evolves and can sometimes be smaller than the eigenvalue associated with a lower state. This situation might mislead the analysis if not addressed properly. In order to make sure that the eigenvalues are associated with the correct states, we fix the time slice of the eigenvectors, t′, that is used to diagonalize the correlation-function matrix, to a specific value. This procedure, however, introduces an extra parameter dependence into the analysis, which we check for each channel over a range of t′ values. The dependencies on t_0 and t′ can be tracked by investigating the respective eigenvectors, whose components should be stable when changing both fictitious time parameters. We illustrate such a consistency check in Figure 1. We perform this check for each channel and select a (t′, t_0) combination that optimizes the signal quality. A common choice is t′ ≥ 2a.

1. Operator dependence

a. Operator basis: Having three operators with differing Dirac structures, it is possible to analyze not only the full 3 × 3 correlator but also various combinations of 2 × 2 correlators. While the full information is contained in the 3 × 3 case, the 2 × 2 correlators can provide valuable and comprehensible information about which state couples to which operator. For this purpose, we here investigate the correlators with different operator sets. We find that the variational analyses over two different sets of spin-1/2 operators, namely {χ_1, χ_4} and {χ_1, χ_2}, give two distinct second eigenvalues for the positive parity states. The {χ_4, χ_2} set produces results similar to those of {χ_1, χ_2}. For negative parity, only the {χ_1, χ_4} combination yields mostly well-separated second eigenvalues, whereas the second eigenvalues of the {χ_1, χ_2} and {χ_4, χ_2} bases lie closer to the first eigenvalues. When we extend the operator basis to the {χ_1, χ_2, χ_4} set and solve the corresponding 3 × 3 variational system, the 2 × 2 results are reproduced. These findings are illustrated in Figure 2 for the positive and negative parity Ξ_c, Ω_c and Ξ_cc baryons, where we show the fit results from a plateau approach. These representative baryons are chosen such that they correspond to the different operator characteristics, i.e. Λ-like, singly-charmed N-like and doubly-charmed N-like, respectively.

b. N-like operators: Although we use the same N-like operators for the singly-charmed and the doubly-charmed spin-1/2 baryons, it is reasonable to expect a different behavior when we solve the variational system, since they belong to different layers of the mixed-flavor SU(4) representations.

c. Λ-like operators: The Λ_c and Ξ_c belong to the totally flavor-antisymmetric SU(4) anti-quadruplet and hence are studied via the flavor-octet Λ-like operators. The behavior of these operators, depicted in Figures 2c and 3, shows similarities to the N-like Ξ_cc case. One might naively expect the first term of the Λ-like operator (see Table III) to give the dominant contribution, which would make it essentially the same as the N-like operator. Indeed, by rearranging the latter two terms of the Λ-like operator via Fierz transformations, one can show that the coefficient of the $(q_1^{Ta}(x)\, C\gamma_5\, q_2^{b}(x))\, q_3^{c}(x)$ term of the operator is five times that of the other resulting terms. The same argument holds for the other Dirac structures as well.
This dominance is realized in our comparisons of the Ξ_c(1/2^+) results illustrated in Figure 4, where we obtain an almost identical signal for the ground states calculated via the Λ-like and the N-like operators. Additionally, the flavor decomposition of the Λ_c studied in Ref. [88] by three of the present authors shows that the negative parity Λ_c baryon consists of a mixture of flavor-singlet and flavor-octet wave functions. The flavor-octet interpolating operator that we employ for the Λ_c baryon may therefore be inadequate to resolve the lowest-lying negative parity state by itself. A similar conclusion was reached in Ref. [89]. The first excited negative parity state, on the other hand, is dominated by a flavor-octet wave function, and it is possible that this state is contaminating our lowest Λ_c(1/2^-) signal, which would be a plausible explanation of the apparent overestimation of its mass (see Table IV and Figure 8). We analyze the Ξ_c channel with two different types of operators, one being the Λ-like and the other the N-like operator given in Table III. We find that both give consistent results for the positive parity case, while there is a difference for negative parity. As shown in Figure 2f, the N-like operator couples to a lower-lying state for the {χ_1, χ_4} basis. Similar differences between these operators for the negative parity sector have been reported by the RQCD Collaboration [69].

d. Ξ_c-Ξ′_c mixing: We perform a correlation-matrix analysis consisting of the Ξ′_c and the N-like and Λ-like Ξ_c operators in order to investigate the possible mixing between these baryons. We construct the correlation-function matrices for this analysis in two steps. First, we solve a variational system over the {χ_1, χ_4} basis for each element of the correlation matrix and take the lowest-lying state; we find that this approach helps to isolate the ground states better. We then solve another 2 × 2 correlation matrix with both Ξ_c and Ξ′_c ground-state operators to investigate the mixing effects. For positive parity Ξ_c and Ξ′_c, we analyze the cross-correlators between the flavor-octet SU(4) Ξ_c and the Ξ′_c, and between the N-like Ξ_c and the Ξ′_c, individually. We find that the Ξ_c and Ξ′_c signals separate nicely, and the N-like and Λ-like Ξ_c operators produce consistent signals with negligible mixing (see Figure 4). The magnitudes of the eigenvectors also confirm that the Ξ_c and Ξ′_c states have distinct signals. In the case of negative parity, there appears to be a non-negligible mixing between the two states, dependent on the variational parameters. Specifically, the Λ-like Ξ_c has a negligible Ξ′_c component, while the N-like Ξ_c state has up to a 10% Ξ′_c admixture, although the effect seems to depend on the variational parameters. The reason why the negative parity Λ-like operator gives signals close to the Ξ′_c is understood to be related to the overestimation of the mass obtained for that operator rather than to a mixing effect. The Ξ′_c appears to have at most a 5% admixture of the N-like Ξ_c. In all, we see that for negative parity the mixing is not completely negligible, but nevertheless quite small.

2. Smearing dependence
a. Spin-1/2 baryons: We observe that the ground-state signals remain stable with respect to the smearing radius. The excited-state signals, on the other hand, show a clear dependence on the smearing radius of the source quark fields. This is readily visible for every case given in Figure 2. For both positive and negative parity, states that are clearly separated from the ground state tend to decrease as the smearing radius increases, with no apparent plateau behavior. Note that all the energies are extracted via a plateau approach, which depends on the choice of the fit windows. Extracting the energies from two-exponential fits is more reliable for the ∼ 0.2 and 0.4 fm smearings, where those fit results coincide with those of the ∼ 0.7 fm smearing extracted via a plateau approach or a two-exponential fit. This indicates that the signals of the widest smearing are the most reliable for estimating the energy levels. When we enlarge the operator basis by combining two operators with two different smearings and perform a 4 × 4 analysis, we end up with quite noisy solutions due to the currently limited statistics, which renders a conclusive analysis impossible. We do, however, observe an apparent degeneracy in three out of four solutions.

b. Spin-3/2 baryons: We find that solving a 3 × 3 variational system with smeared-smeared operators only provides no additional information compared to a 2 × 2 system with the smearings at hand. One solution turns out to be indistinguishable from another, so we focus on the solutions from the two narrower smearings, which give less noisy signals.

B. Charmed baryon spectrum

The energy levels from the diagonalized correlation functions are extracted by fitting the data to the form given in Equation (4). Additional exponential terms are employed to stabilize the fits against excited-state contributions. In most of the cases where the signal forms a plateau in the effective-mass plots, the masses of the lowest states extracted from one-exponential fits agree with the multi-exponential fit results within their error bars. Yet a two-exponential form stabilizes the fits and improves the accuracy of the results; this is especially true when analyzing the widest smearing case. The extracted energies are compiled in Table IV. Since we are at the isospin-symmetric point, m_u = m_d, our results should be understood as the isospin-averaged masses of the respective states. As discussed in Section IV A 2, a variational analysis over a set of different smearings for a fixed operator returns solution eigenvectors that couple to the widest smearing. Therefore, we always use an operator basis with quark smearings fixed to the widest one. For the spin-1/2 cases, we perform 3 × 3 variational analyses with a fixed smearing over the operator sets {χ_1, χ_2, χ_4} and extract signals of three states for each channel. The third energy level, with the largest energy, is however usually lost to noise already at relatively early time slices, or decays to the ground-state signal due to inaccuracies in the diagonalization procedure of Equations (3) and (4). For instance, in the case of the positive parity spin-1/2 Ξ_c baryons, we find that the state dominantly coupling to the χ_2 operator decays to the ground-state signal before showing a plateau that might be a candidate signal for an excited state (blue rectangles in the top left plot of Figure 5).
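The two-exponential fits referred to above can be made concrete with a small fitting sketch. This is a generic helper of ours built on scipy, not the authors' actual fit code; jackknife-estimated values and errors would be supplied as inputs.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, A0, E0, A1, dE):
    # Ground state plus one excited-state term; using E1 = E0 + |dE|
    # keeps the second exponential above the ground-state energy.
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-(E0 + np.abs(dE)) * t)

def fit_correlator(t, corr, err, p0=(1.0, 0.5, 0.5, 0.5)):
    """Two-exponential fit of a diagonalized correlator on a chosen window.

    t, corr, err : arrays of time slices, correlator values and errors
    (in lattice units). Returns best-fit parameters and naive errors.
    """
    popt, pcov = curve_fit(two_exp, t, corr, sigma=err, p0=p0,
                           absolute_sigma=True, maxfev=10000)
    return popt, np.sqrt(np.diag(pcov))
```

In practice one would repeat the fit on each jackknife sample to propagate the errors consistently.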
Signals of possible third states for the spin-1/2, positive parity Σ_c, Ξ_c and Ω_c channels emerge at early time slices of the effective-mass analyses but are quickly lost to noise. It is usually possible to identify a fit region of 2-3 points for the narrowest smearing, but we find the energy extracted via this approach to be unreliable, since the fit window is very small and the smearing dependence of the state cannot be established. The positive parity spin-1/2 Ξ_cc and Ω_cc signals mimic the behavior of the Ξ_c, with signals appearing that one could potentially identify as distinct states. However, we find that those states are rather unstable under changes of the variational parameters. In addition, the extracted energies are highly dependent on the extraction method, plateau approach or two-exponential fit. Therefore, even though we show their signals in the plots, we do not extract or report any corresponding energy values. In general, we find that the negative parity sector appears to be richer than the positive parity one. Indeed, we could identify three distinct states for most of the negative parity spin-1/2 channels. Isolating the low-lying states via a plateau approach is a challenge here, since multiple energy levels appear in a narrow energy range; two-exponential fits are very helpful in such cases to disentangle and extract the states more accurately. Effective-mass plots illustrating the above discussions are given in Figure 5.

a. Mass differences: The hyperfine splittings, i.e. the mass differences between the spin-3/2 and spin-1/2 states, of the Σ_c, Ξ_c and Ω_c channels are reproduced in good agreement with the experimental values. The mass differences between the positive and negative parity states also agree well with the available experimental results. The first excited states of the positive parity baryons lie quite high, 400 MeV to 1 GeV above the ground states. A common pattern is that more than one negative parity state for the singly- and doubly-charmed spin-1/2 baryons appears in between the positive parity ground and first excited states. The first two negative parity states of the Σ_c, Ξ′_c, Ω_c, Ξ_cc and Ω_cc channels lie close to each other; the splittings between those states are smaller for the Ω_c and Ω_cc baryons than for the Σ_c, Ξ′_c and Ξ_cc. The situation is different for the Λ_c and the Ξ_c baryons, where the negative parity states are roughly 300 MeV apart.

b. Scattering states: It is essential to examine the relevant thresholds for the negative parity states in order to check whether they could correspond to scattering states. It is possible for the negative parity ground states to couple to S- or D-wave scattering states of a positive parity baryon and a negative parity meson. The relevant thresholds, which respect the isospin, spin, parity, strangeness and charm quantum numbers, are plotted together with the extracted negative parity energies in Figure 6. The two-particle scattering energies are calculated via $E = \sqrt{M_1^2 + p_1^2} + \sqrt{M_2^2 + p_2^2}$, where $M_i$ is the mass of particle i and $p_i = 2\pi n / L$ its lattice momentum. We use the π mass quoted in the PACS-CS paper [80] and the experimental K mass, since we use a strange-quark mass re-tuned to its physical value via the K mass input [89], along with the positive parity baryon masses from Table IV of this work, in calculating the threshold energies.
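The threshold formula can be evaluated with a few lines of Python. The helper below is ours, with L = 32 and a^-1 = 2.176 GeV from Section III A as defaults; the masses in the usage example are illustrative, not the values used in the paper.

```python
import numpy as np

def two_particle_threshold(M1, M2, n1=0, n2=0, L=32, a_inv=2.176):
    """Non-interacting energy E = sqrt(M1^2 + p1^2) + sqrt(M2^2 + p2^2).

    Masses in GeV; p_i = 2*pi*n_i / (L*a), with n_i an integer momentum mode,
    L = 32 sites and a^-1 = 2.176 GeV as in the text.
    """
    p_unit = 2.0 * np.pi * a_inv / L      # one unit of lattice momentum, in GeV
    p1, p2 = n1 * p_unit, n2 * p_unit
    return np.hypot(M1, p1) + np.hypot(M2, p2)

# e.g. a baryon + kaon threshold at rest, with illustrative masses in GeV:
print(two_particle_threshold(2.45, 0.494))
```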
The Λ + D threshold has to be estimated differently, since we calculate neither the Λ baryon nor the D meson in this work. In estimating this threshold, we take the experimental Λ mass and multiply it by a correction factor, Λ^our_c/Λ^exp_c, to account for our overestimation of the Λ_c mass; the uncertainty of this value is assumed to be the same as that of Λ^our_c. The D meson mass is taken to be its experimental value, with its uncertainty neglected. The momenta p_1 and p_2 are set to zero. An inspection of Figure 6 shows that some of the Ξ_c baryon signals may contain scattering-state contributions because of their vicinity to various thresholds. Indeed, several of the extracted negative parity states lie close to at least one related threshold. We also find some states that lie above the thresholds to be close to their respective boosted (n > 0) thresholds.

c. The Σ_c(2800): The mass of this state measured by Belle is about 40 MeV/c^2 lower than that of BABAR. It is noted in the PDG listings that the state observed by BABAR might be a different Σ_c excitation. Given that these states have been seen in the Λ_c π invariant-mass spectra, a straightforward assignment for the quantum numbers would be J^P = 1/2^-. From a quark-model perspective (see paragraph f.), there are three possible low-lying negative parity spin-1/2 Σ_c excitations: two λ-modes with diquark spin j = 0 and j = 1, and a ρ-mode with diquark spin j = 1. In the heavy-quark limit, the S-wave Σ_c(2800) → Λ_c π transitions of the j = 1 λ- and ρ-modes would be forbidden due to the violation of the spin-parity conservation of the light-quark degrees of freedom. A heavy quark effective theory calculation estimates a very large decay width, of the order of 885 MeV, for the j = 0 λ-mode [56], which rules out the 1/2^- quantum number for the Σ_c(2800). On the other hand, a D-wave transition is possible and points to the J^P = 3/2^-, 5/2^- possibilities. Note that the three extracted negative parity Σ_c states are well above their respective two-particle thresholds, so that the two-particle contribution to the signals should be suppressed.

d. Excited Ξ_c and Ξ′_c states: The experimental spectrum of the Ξ_c and Ξ′_c channels consists first of the respective J^P = 1/2^+ ground states and the first Ξ_c(1/2^-) excited state, which are all experimentally well established and which we reproduce well in our work. The energy levels above the lowest three are less well established, both experimentally and theoretically. Above 2.9 GeV/c^2, the PDG reports the five states Ξ_c(2930), Ξ_c(2970), Ξ_c(3055), Ξ_c(3080) and Ξ_c(3123), for none of which the spin and parity quantum numbers have been measured. Very recently, the spectrum of these states has received an update by a new measurement of the LHCb Collaboration [12] in the Λ_c^+ K^- channel. According to this measurement, the Ξ_c(2930) (observed earlier by the Belle [4] and BABAR [10] Collaborations in the same channel) should be considered a previously unresolved combination of two independent states, Ξ_c(2923) and Ξ_c(2939). The third state observed in Ref. [12], Ξ_c(2965), either corresponds to the already seen Ξ_c(2970) or is another, entirely new resonance. Let us discuss potential interpretations of our findings with regard to this rather rich experimental spectrum. We find two negative parity spin-1/2 Ξ_c states in the vicinity of the lowest three (or four) states above 2.9 GeV/c^2, Ξ_c(2923), Ξ_c(2939), Ξ_c(2965) and potentially Ξ_c(2970), which suggests that such quantum numbers can be assigned to at least two of these states.
While our numerical results are not precise enough to draw firm conclusions, our obtained spectrum is most naturally interpreted as either Ξ_c(2923) or Ξ_c(2939), and similarly Ξ_c(2965) or Ξ_c(2970), being a Ξ_c(1/2^-) state. The already known Ξ_c(2970) state has been observed in the Λ_c K π channel (also proceeding approximately half of the time via the intermediate Σ_c(2455)K channel) and in the Ξ′_c π and Ξ_c(2645)π channels by the Belle [5-7] and BABAR [11] Collaborations. These decay channels imply several possible quantum numbers, J^P = (1/2^±, 3/2^±, 5/2^±), for this state, which is not in contradiction with the above potential assignment. The Ξ_c(3055) was observed by the Belle and BABAR Collaborations in the Σ_c K channel [8,11], and in the ΛD channel by the Belle Collaboration only [9]. Finally, the Ξ_c(3080) was reported by the Belle Collaboration [9] in the Σ_c K, Σ*_c K and ΛD channels, and by the BABAR Collaboration [11] in the Λ_c K π channel via the Σ_c(2455)K channel. Similar to the Ξ_c(2970) case, these decay channels suggest several quantum numbers, such as J^P = (1/2^±, 3/2^±, 5/2^±). Our second Ξ_c(1/2^+) state appears to be the most probable candidate for this resonance.

e. Excited Ω_c states: The five new excited Ω_c^0 states reported by the LHCb Collaboration [13] were seen in the Ξ_c K channel. One would hence naively expect these states to have negative parity. A first dedicated lattice QCD calculation has confirmed this expectation by assigning negative parity to these states [57], with total spin ranging from J = 1/2 to 5/2. The two Ω_c(1/2^-) states that we extract lie in this energy region. We should reiterate that, since we only employ local three-quark operators, we are limited in our ability to resolve all molecular, radial or orbital excitation modes of the higher-lying states. Our results should hence be considered as indicative in identifying potential compact three-quark states among the experimentally observed energy levels in the Ξ_c and Ω_c channels. Conversely, the levels that we are not able to reproduce could be candidates for molecular or orbitally excited states. It is, however, at present too early to assign definite quantum numbers without a thorough scattering-state analysis, since some of our negative parity states lie close to the thresholds. The values in Table IV are illustrated in Figure 7, together with the relevant experimental results; the latest Ξ_c results from the LHCb Collaboration are shown as well. The similarities between the Λ_c and Ξ_c, and between the Σ_c, Ξ′_c and Ω_c, are evident, as expected from their flavor structures.

f. Interpretation from a quark-model perspective: The quark model (QM) has been useful in giving a pictorial and intuitive interpretation of the mass spectrum obtained by lattice QCD computations. The QM derives the energy and structure of a system by considering constituent valence quarks and their interactions. For the excited states, in particular, it can clarify what the essential degrees of freedom in a specific excitation are. For heavy-quark baryons, heavy-quark spin symmetry plays an important role: as the coupling of a heavy quark to the magnetic component of the gluon field is suppressed by a factor of 1/m_Q, the heavy-quark spin is approximately conserved. For singly-charmed baryons, this symmetry is manifested by the appearance of heavy-quark spin doublets, in which (j - 1/2, j + 1/2) spin-pair states approach each other with increasing quark mass.
Here, j denotes the total angular momentum of the considered baryon minus the heavy-quark spin, i.e. the angular momentum carried by the light degrees of freedom. We briefly compare the present lattice QCD results with the QM predictions and study how the essential excitation modes arise in the spectrum. Quite remarkably, multiple features of the QM predictions are confirmed in the obtained lattice QCD spectrum of the charmed baryons.

1. Our lattice QCD results for the positive parity ground states agree completely with the QM assignments; i.e., the spin, parity, isospin and flavor representations, and the mass orderings, are consistent. The QM predictions for the splitting between the spin-1/2 and 3/2 states are also in quantitative agreement with the obtained lattice results.

2. Among the positive parity ground states, the Ξ_c is the most interesting, because it contains three different valence quarks: c, s and u/d. In the QM, the total spin of the s and u/d quarks can be either S = 0 (Ξ_c) or S = 1 (Ξ′_c). The existence of two low-lying positive parity states is indeed realized in lattice QCD as well as in experiment. In the QM, the distinction between the Ξ_c and Ξ′_c is guaranteed by the flavor SU(3) symmetry, while the SU(3) breaking with m_s ≠ m_u/d will mix the two states. The QM predicts, however, that the mixing is suppressed for the ground states due to heavy-quark spin symmetry, which is confirmed by our lattice QCD results.

3. Low-lying negative parity singly-charmed baryons are described in the QM as orbital P-wave excitations. They fall into two classes, λ-mode and ρ-mode [17,28]. The λ-mode is characterized by a P-wave excitation between the charm quark and the center of mass of the light quarks, while the ρ-mode is an excitation between the light quarks. The QM predicts the λ-modes to be lighter than the ρ-modes for singly-heavy baryons. The QM spectrum depends on the flavor structure: for the flavor anti-triplet Λ_c and Ξ_c, there is a set of (1/2^-, 3/2^-) states in the λ-mode, and (1/2^-), (1/2^-, 3/2^-) and (3/2^-, 5/2^-) states in the ρ-mode. Thus, among the three 1/2^- states, the QM predicts one λ-mode state to be lighter than the other two. This structure is indeed seen in the Λ_c and Ξ_c spectra given in Table IV and Figure 7. The next 1/2^- state is about 300 MeV higher, which can be regarded as the mass splitting between the λ- and ρ-mode states. On the other hand, the flavor-sextet baryons, Σ_c, Ξ′_c and Ω_c, have two λ-mode 1/2^- states, one of them accompanied by a 3/2^- state. In terms of heavy-quark spin symmetry, we have a (1/2^-, 3/2^-) spin doublet and an isolated singlet 1/2^-. The two lower λ-mode states come close in energy but can be distinguished by the total angular momentum of the light-quark system. Thus we expect two 1/2^- and one 3/2^- states as the lowest negative parity excitations of the Σ_c, Ξ′_c and Ω_c. Indeed, these three states turn out to be almost degenerate in the lattice QCD spectrum of these channels in Table IV and Figure 7. Other states are much higher in energy, which again confirms the predicted QM assignments.

In all, the low-lying spectra of both the positive and negative parity charmed baryons confirm the effectiveness of the QM in assigning the quantum numbers and symmetry properties of heavy baryons.
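The heavy-quark doublet counting used in point 3 can be reproduced with a toy enumeration; the following is our own sketch, not the authors' code.

```python
from fractions import Fraction

def hq_doublet(j_light):
    """Total spins of a heavy-quark spin doublet for light-cloud angular momentum j."""
    j = Fraction(j_light)
    return sorted({abs(j - Fraction(1, 2)), j + Fraction(1, 2)})

# P-wave sextet baryons (Sigma_c, Xi'_c, Omega_c): lambda-mode with j = 0 and 1
for j in (0, 1):
    print(j, [str(x) for x in hq_doublet(j)])
# j = 0 -> ['1/2'] (isolated singlet); j = 1 -> ['1/2', '3/2'] (spin doublet):
# in total two 1/2- states and one 3/2- state, as stated in the text.
```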
g. Comparison to other lattice results: We compare our results to other lattice determinations and experimental values in Figure 8. Our positive parity ground states are in good agreement with the experimental results and the calculations of the other lattice groups, with the exception of the Λ_c, which is overestimated in our work. Taken altogether, this is a good indication that we are close to the physical point. The first excited positive parity states also mostly agree with the predictions of the HSC [71,72] and the RQCD Collaboration [69]. For negative parity, there are notable differences between our and RQCD's results, especially for the doubly charmed baryons. For the excited states of the Ξ_cc and Ω_cc, there are discrepancies between our extracted spectrum and that of RQCD, while our results are similar to those obtained by the HSC [72]. Although we do not show the corresponding HSC spectrum in Figure 8, the pattern they extract in their preliminary studies for the negative parity spin-1/2 singly charmed baryons [75] is similar to our results as well. Such a qualitative agreement for the low-lying spectrum is quite encouraging since, in contrast to the HSC, which utilizes both local and non-local operators, we only use local operators.
V. SUMMARY AND CONCLUSIONS
We have calculated the ground and the first few excited states of the charmed baryons on 2+1-flavor gauge configurations with a pion mass of ∼156 MeV/c^2. The charm quark is treated relativistically by employing a relativistic heavy-quark action to remove O(am_Q) discretization errors. The states are extracted via a variational approach over a set of interpolating fields with different Dirac structures and quark-field smearings. By performing separate variational analyses with multiple subsets of the operator basis, we have studied the Dirac-structure and smearing dependence of the excited states. Our results indicate that the excited-state signals are highly susceptible to the width of the quark smearing. Additionally, solutions of a variational analysis over a set of smeared operators with fixed Dirac structure couple dominantly to the operator that is smeared the widest within our employed smearing-parameter range. These results highlight the importance of forming the variational basis from different Dirac structures, since relying on smeared operators only might miss some parts of the spectrum. In comparing the operator dependence of the extracted positive and negative parity states, we have extended the SU(4) operator basis of the Ξ_c baryons to include not only Λ-like but also N-like operators. Both operators give consistent results for the positive parity case, while a difference appears for the negative parity states. We have also investigated the Ξ_c-Ξ_c′ mixing by studying the cross-correlators of this system. Our masses of the low-lying states agree well with the available experimental results and previous lattice determinations. Consequently, the hyperfine splittings and the mass differences between the positive and negative parity states are reproduced, which is a good check of the relativistic action we employ for the charm quark. Excited states in the positive parity channel lie 400 MeV to 1 GeV above the ground states, depending on the quantum numbers. One or more negative parity states appear in between. This pattern is consonant with the QM expectations. Although we identify several states that are close to observed excited Σ_c, Ξ_c and Ω_c baryons, mostly in the negative parity channels, some of the signals are in close proximity to the related two-particle thresholds.
Without a thorough scattering state analysis with multiple volumes and two-particle operators, the contamination from the thresholds remains unidentified. From a qualitative point of view, the spectrum we extract is similar to what has been reported by the Hadron Spectrum Collaboration (HSC). This is quite encourag-
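As an illustration of the variational extraction described in the summary, here is a minimal sketch of the generalized eigenvalue problem (GEVP) with synthetic data; the matrix size, energies and overlaps are placeholders of ours, not the paper's operator basis:

import numpy as np
from scipy.linalg import eigh

# Synthetic 2x2 correlator matrix C_ij(t) = sum_n Z[i,n] Z[j,n] exp(-E_n t);
# the "energies" (lattice units) and overlaps below are illustrative only.
E = np.array([1.0, 1.6])
Z = np.array([[1.0, 0.4],
              [0.6, 1.0]])
T = 16
C = np.array([[[(Z[i] * Z[j] * np.exp(-E * t)).sum()
                for j in range(2)]
               for i in range(2)]
              for t in range(T)])

# GEVP: C(t) v = lambda(t, t0) C(t0) v; eigenvalues behave as exp(-E_n (t - t0)).
t0 = 2
lams = np.array([np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
                 for t in range(t0 + 1, T)])

# Effective energies from ratios of eigenvalues at successive time slices.
E_eff = np.log(lams[:-1] / lams[1:])
print(E_eff[0])   # approaches the input energies [1.0, 1.6]

In a real analysis the correlator matrix comes from the measured interpolating-field contractions, the eigenvectors are tracked across time slices, and plateaus of E_eff are fitted; the sketch only shows the linear-algebra core of the method.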
11,079.6
2020-04-20T00:00:00.000
[ "Physics" ]
Keep the Ball Rolling in AI-Assisted Language Teaching: Illuminating the Links Between Productive Immunity, Work Passion, Job Satisfaction, Occupational Success, and Psychological Well-Being Among EFL Teachers
Artificial intelligence (AI) is revolutionizing education by fundamentally altering the methods of teaching and the processes of learning. Given such circumstances, it is essential to take into account the mental and psychological well-being of teachers as the architects of education. This research investigated the links between teacher immunity (TI), work passion (WP), job satisfaction (JS), occupational well-being (OW-B) and psychological well-being (PW-B) in the context of AI-assisted language learning. To achieve this objective, 392 Iranian teachers of English as a foreign language (EFL) were given the Language Teacher Immunity Instrument, the Work Passion Scale, the Job Satisfaction Questionnaire, the Occupational Well-Being Scale, and the Psychological Well-Being at Work Scale. After data screening, the study used confirmatory factor analysis and structural equation modeling to identify and quantify the links among TI, WP, JS, OW-B, and PW-B. The findings emphasize the crucial role that TI and WP play in balancing teachers' JS, OW-B, and PW-B while applying AI in their language instruction. The broader ramifications of this research are explored.
Introduction
In a broad sense, the success of any nation is contingent upon the educational system administered in that country. According to Simmons et al. (2019), education has the potential to be beneficial and efficient provided instructors fulfill the critical role they are expected to play in ensuring that students attain the educational goals imposed by the education system. As stated by Wessels and Wood (2019), the teaching profession is considered to be among the most successful professions in a given community. Because they are the foundation of any educational system, teachers are regarded as the architects of a country, since they are responsible for the care and teaching of future generations. It is a well-known truth that teachers are considered to be the backbone of a country that is both healthy and happy, because they are the only instructors capable of devotedly managing the challenging process of nation-building (Ryff & Keyes, 1995; Saaranen et al., 2013). To accomplish this important goal, it is imperative that educators fulfill their professional responsibilities effectively, especially in AI-assisted language learning. If the objective is to establish a setting where teachers are able to perform their jobs effectively, then some aspects, such as the level of work satisfaction, should receive sufficient attention from those in charge of education. AI, a swiftly expanding discipline of computer science, is developing intelligent computers capable of simulating human intelligence and performing tasks that are typically executed by humans.
Transportation, finance, and healthcare are among the industries that are increasingly adopting AI technology. AI's capacity to make decisions with remarkable speed and precision has the potential to radically alter many different types of enterprises. Using machine learning techniques, AI learns new things and becomes better at what it does. With the help of algorithms, machines can analyze massive databases, spot trends, and find insights that humans cannot fathom (Licardo et al., 2024). Despite AI being extensively used in several industries, its full potential in the context of EFL instructors' psychological well-being remains unfulfilled. The purpose of the current study was to assess the connections between TI, WP, JS, OW-B, and PW-B in the context of EFL education.
Literature Review
The concept of AI was first proposed by McCarthy, who defined it as a scientific and technical concept parallel to the development of intelligent machines (dos Santos & Rosinhas, 2023). AI is a fast-progressing discipline in computer science that focuses on creating robots and software capable of performing activities that typically require human intellect (Terra et al., 2023). These activities manifest the cognitive processes of humans, which include learning, thinking, problem-solving, recognizing patterns, and making predictions (Siemens et al., 2023). AI implementations may emerge in several modalities, using either physical or virtual components, and can function within self-governing or decentralized frameworks. Furthermore, these implementations have the ability to materialize as astute, autonomous entities with the capacity to engage with their surroundings and exercise judgment (Luxton, 2016). AI may be classified into two categories: narrow or weak AI, which focuses on specialized activities, and general or strong AI, which has the capability to perform intellectual tasks at a level equivalent to humans (Kay, 2012). Recent inquiries indicate that the use of AI in instruction has a beneficial effect on language learning (Kohnke et al., 2023), providing interactive learning affordances (Chiu et al., 2023). Applying AI in language learning might have both positive and negative effects on teachers, who are at the core of their students' education and who guide their learning step by step. Immunity is a biochemical defense mechanism that activates the body's naturally inherent defenses and repels infections, according to Hiver (2015). Its purpose is to shield the inside from outside forces that might cause harm or distress (Hiver, 2017). Teacher immunity, as described by Hiver and Dörnyei (2017), is an approach that effectively addresses many conflicts and difficulties encountered in the field of education. As stated by Haseli Songhori et al. (2018), one end of the teacher immunity spectrum represents teachers' levels of passion for teaching, mental wellness, and openness to change, while the other end represents teachers' levels of educational expectations, weariness, and dropout.
An offshoot of complexity theory, self-organization theory lies at the heart of the teacher immunity construct (de Boer, 2005). The process of self-organization involves the transformation of a dynamic system's overall functioning through the interaction of its components. This transformation occurs in four distinct phases: activation, integration, adjustment, and equilibrium (Randi, 2004). When confronted with challenges, language teachers may exhibit their immunity in two primary ways: productive or maladaptive responses (Hiver, 2015; Hiver & Dörnyei, 2017). Productive immunity includes emotions such as optimism, devotion, enthusiasm, resilience, and motivation. Apathy, conservatism, cynicism, and resistance to change may be attributed to maladaptive immunity. Additionally, maladaptive immunity is characterized by a biological counterpart that functions in a similar manner. This distinction emphasizes how adaptive and maladaptive immunity influence individual behavior and broader system dynamics. Another teacher-associated concept is WP. It is an incentivizing process that enables people to efficiently tackle diverse activities. This enthusiasm is evident in workers' willingness to undertake important tasks that demand their energy, ultimately incorporating these behaviors as fundamental to their identity (Vallerand et al., 2003). Vallerand et al. (2003) proposed a dichotomous paradigm for passion, distinguishing two distinct types: harmonious and obsessive passion. Harmonious passion emerges when someone willingly engages in an activity and integrates it into their sense of self. It refers to purposefully engaging in meaningful and essential things, which helps create a sense of harmony with one's complete being. Obsessive passion is distinguished by the integration of control into an individual's psyche as they internalize the activity. This fixation is driven by internal compulsion and/or external influences such as self-esteem or societal approval, or by overwhelming enthusiasm (Vallerand et al., 2003). The impact of harmonious and obsessive passion on people results in diverse interactions between passion and work demands. The latter refers to occupations that require exertion and are linked to particular costs (Vallerand et al., 2007). Consequently, these activities have the capacity to exert control over workers, leading to feelings of discomfort, unease, and fatigue (Vallerand et al., 2003). Overwhelming expectations may fuel workers' motivations, leading to an obsessive zeal that compels them to approach their job responsibilities in inflexible and insufficient ways. This ultimately results in reduced levels of wellness for staff members (Cabrita & Duarte, 2023). Therefore, it is reasonable to assume that WP mediates the relationship between job demands and emotional health in the workplace. Enthusiasm for one's work is highly related to intrinsic motivation.
According to self-determination theory, both intrinsic and extrinsic motivations drive human action (Ryan & Deci, 2017). A driving force that complements motivation, passion enhances motivation, promotes wellness, and infuses daily activities with significance (Cabrita & Duarte, 2023). Due to the joy and fulfillment experienced when performing, people tend to favor some pursuits over others. Moreover, participating in activities that ignite our passions and shape our identities, thereby offering a consistent sense of satisfaction, can significantly influence an individual's psychological well-being (Vallerand et al., 2007). In a nutshell, the desire to reach a goal is dictated by teachers' level of passion, and the process of being motivated is what gets them there. Within educational settings, the level of JS experienced by teachers may be seen as an indicator of their likelihood to stay in their profession, a factor that influences their level of dedication, and ultimately a factor that contributes to the overall efficacy of the school (Shan, 1998). In this regard, Buitendach and de Witte (2005) contended that JS plays a significant role in shaping teachers' viewpoints and evaluations of their work. This perception, in turn, may greatly impact their objectives and accomplishments inside the school system. JS refers to an individual's emotional reactions to certain characteristics, environments, and conditions related to their work (Werang et al., 2017). Regarding teachers, the term pertains to their affective reactions towards their occupation and professional circumstances (Zhang, 2021). JS may manifest either in a broad or a particular manner. The former refers to a general sense of contentment with one's work, while the latter is more specific and pertains to certain parts of the profession (Lopes & Oliveira, 2020). JS is determined by the extent to which one's wants and wishes are fulfilled in comparison to the actual practices in the workplace (Baluyos et al., 2019). In order to evaluate their careers, educators look at what makes their jobs special. JS may be defined as a teacher's subjective perception of and attitude towards their profession. Similar to other attitudes, it encompasses an intricate combination of awareness, sentiments, and behavioral inclinations (Werang et al., 2017). A teacher who experiences a high degree of JS has favorable views towards their workplace. Conversely, a teacher who is unsatisfied with their employment harbors negative attitudes towards the working atmosphere. Hence, this favorable or adverse attitude might influence the conduct of instructors in the school setting. Employment satisfaction pertains to an individual's comprehensive attitude towards their employment. JS is the emotional response, favorable or unfavorable, that arises from evaluating one's experience in a job. Professional factors such as subject knowledge, teaching effectiveness, competence, and academic credentials play a role in teachers' JS (Michaelowa, 2002). OW-B refers to the state of well-being in the context of the workplace. Building on Ryff's (1989) and Warr's (1994) generic definitions of well-being, van Horn et al. (2004) related it to a complementary and multidimensional phenomenon.
Van Horn et al. (2004) focused on emotional state as a critical affective aspect of work well-being. They approached its measurement through the emotional state of employees, such as JS, emotional exhaustion, and organizational commitment. They proposed two further dimensions, psychosomatic well-being (e.g., pain or aches due to work stress or long working hours) and cognitive well-being (e.g., attention and engagement), to extend the existing concept of OW-B. OW-B has received interest in education and positive psychology. It is commonly observed that the state of well-being is naturally reflected in teachers' classroom practices (Chan et al., 2023). Therefore, investigating occupational well-being recognizes the teachers' presence in their work lives, offering ideas about improving workplace well-being initiatives for the practical outcomes of teaching. The PW-B of teachers is a critical factor that has a substantial effect on their performance and, therefore, affects the achievements of their students. This may be because students are generally impacted by the caliber of their instructors. Furthermore, there is a considerable emphasis on teachers' well-being as a possible means to alleviate work stress and discontent (Parker, 2012). Teacher well-being includes aspects such as managing stress, mental health, overall life satisfaction, and a sense of fulfillment. Well-being among students and teachers is associated with both a more favorable emotional state and improved academic achievement (Paterson & Grantham, 2016). Teacher PW-B refers to a broad spectrum of favorable emotions and states of being at the workplace, together with general contentment with life and one's professional path (McInerney et al., 2015). Moreover, there are several distinct perspectives on the concept of well-being. Several academics have identified self-acceptance, a sense of purpose, personal growth, supportive connections, empowerment, and appreciation for nature as essential factors for achieving a state of thriving (Mercer, 2021).
Objectives of the Current Research
Given the insufficient amount of research carried out in the field of education and the relevance of the identified components for improving language instruction, the aim of this research was to investigate the potential interplay between TI, WP, JS, OW-B, and PW-B of EFL teachers involved in AI-integrated language teaching. A conceptual framework was developed to demonstrate the interaction among TI, WP, JS, OW-B, and PW-B, as illustrated in Figure 1. The conceptual framework was informed by recent research and theories in the field. The assessment was conducted using confirmatory factor analysis (CFA) and structural equation modeling (SEM), and the findings were subsequently presented. The following research questions were formed:
Context and Participants
All 392 survey participants were EFL instructors; 105 men and 287 women participated. They were teaching in Iranian private language institutions that are equipped with AI as part of their language teaching. The participants had been teachers for 1 to 25 years, and their ages ranged from 22 to 48. They had completed training courses offered by the institutions they worked in, designed to help them incorporate AI into their lessons, scale teaching resources and materials, and provide suggested teaching strategies for specific subjects in the curriculum.
The data was collected via online forms, most notably Google Forms, in 2023. The scales were employed in the target language (English) to maintain the authenticity of the instruments. Data loss was very unlikely because of the meticulous planning that went into the computerized survey. The distribution of the data was initially analyzed using the Kolmogorov-Smirnov test. The data's normality was validated by data screening, confirming that parametric procedures would be reliable. With the data assumed to follow a normal distribution, the software LISREL 8.80 (https://ssicentral.com/index.php/products/lisrel/) was used to perform CFA and SEM.
Instruments
The Language Teacher Immunity Instrument (LTII) developed by Hiver (2017) was applied to determine the level of immunity possessed by participants. The 39 questions that make up this instrument are organized into seven subscales, and each of these subscales has a six-point response scale (1 = strongly disagree; 6 = strongly agree). The subscales include seven items on teaching self-efficacy, five items each on burnout, resilience, and attitudes toward teaching, six items each on openness to change and classroom affectivity, and five items on coping. Cronbach's alphas for these subscales were satisfactory, ranging from 0.72 to 0.83. Work passion was assessed using the Work Passion Scale (WPS) created by Vallerand and Houlfort (2003). The scale comprises fourteen items. Seven questions are used to evaluate harmonious passion (α = 0.76), while the other seven examine obsessive passion (α = 0.77). The anchors of the scale range from 1 (strongly disagree) to 7 (strongly agree). The WPS's reliability as determined by Cronbach's alpha was satisfactory for the present inquiry, with values of 0.76 and 0.77. Furthermore, the Job Satisfaction Questionnaire (JSQ) developed by Spector (1985) was used to assess the degree of instructors' job satisfaction. It includes 36 statements that pertain to different aspects of job satisfaction. The questionnaire uses a Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). The research results confirmed the good reliability of the JSQ, as evidenced by Cronbach's alpha values varying from 0.71 to 0.83. A total of twelve questions are included in Warr's (1990) Occupational Well-Being Scale (OWS), which is used to assess the level of well-being experienced by educators across two dimensions: physical and emotional. Anxiety, contentment, depression, and enthusiasm were the job-related emotions that participants were asked to rate, indicating the degree to which they had experienced these emotions in the past few weeks. There were six answer alternatives, ranging from 1 (never) to 6 (always). The Cronbach's alpha estimated reliability of the OWS was satisfactory (α = 0.75). This research used the Psychological Well-Being at Work (PWBW) scale developed by Dagenais-Desmarais and Savoie (2012) to assess the psychological well-being of educators. The PWBW has 25 assertions, each rated on a scale ranging from 0 (disagree) to 5 (completely agree). The PWBW comprises five distinct components: interpersonal fit at work, thriving at work, feeling of competence at work, perceived recognition at work, and desire for involvement at work. The Cronbach's alpha values for the PWBW yielded acceptable results (ranging from 0.71 to 0.95).
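The reliability and normality checks reported here have simple closed forms; the following is a minimal sketch with hypothetical item responses (the real analysis used LISREL 8.80, and the simulated data below is ours, not the survey data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical responses: 392 teachers x 7 Likert items (1-6), standing in
# for one LTII subscale; a common latent factor induces item correlations.
latent = rng.normal(0.0, 1.0, size=(392, 1))
items = np.clip(np.rint(3.5 + latent + rng.normal(0.0, 1.0, size=(392, 7))), 1, 6)

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(f"alpha = {cronbach_alpha(items):.2f}")

# Kolmogorov-Smirnov normality check on the standardized scale score,
# mirroring the data-screening step reported above.
total = items.sum(axis=1)
z = (total - total.mean()) / total.std(ddof=1)
stat, p = stats.kstest(z, "norm")
print(f"KS = {stat:.3f}, p = {p:.3f}  (p > 0.05 is taken to support normality)")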
Results
This section provides a summary of the data analyzed. In Table 1, the descriptive data from the instruments and scales administered in this study are shown. On the WPS, the second instrument used, obsessive passion showed the higher mean (M = 31.93, SD = 5.74). Contingent rewards had the highest mean score in the JSQ (M = 21.67, SD = 5.09). The mean score on the OWS was 47.28 (SD = 7.16). Furthermore, interpersonal fit at work had the highest mean score in the PWBW (M = 19.71, SD = 3.73). Subsequently, the Kolmogorov-Smirnov test was conducted to examine the distributions. As shown in Table 2, the p values of all the instruments and their components exceeded 0.05, indicating that the data were normally distributed, which justified using parametric approaches in the data analysis stage. As displayed in Table 3, significant associations were found across various subcomponents such as job satisfaction, occupational well-being, and psychological well-being, with particularly strong associations noted in areas like teaching self-efficacy and attitudes toward teaching. After this computation, the statistical program LISREL 8.80 was used in combination with CFA and SEM to examine the structural relationships between TI, WP, JS, OW-B, and PW-B. Model fit was assessed using the following indicators: the chi-square magnitude, the root-mean-square error of approximation (RMSEA), the normed fit index (NFI), the goodness-of-fit index (GFI), and the comparative fit index (CFI). These metrics evaluate how well the model and data match. The fit results for this study are displayed in Table 4. Table 4 presents the fit criteria for the two models assessed in the study. The Model 1 fit criteria are met by the chi-square/degrees of freedom ratio of 2.845, the RMSEA of 0.069, the GFI of 0.931, the NFI of 0.920, and the CFI of 0.942. Additionally, this table shows that every fit index associated with Model 2 is suitable: the chi-square/degrees of freedom ratio of 2.647, the RMSEA of 0.065, the GFI of 0.931, the NFI of 0.935, and the CFI of 0.956 show that the fit criteria have been met. The structural models are presented in Figures 2 and 3, and these variables are also included in Table 6. During the course of the analysis, it was discovered that TI had a noteworthy and favorable influence on JS (β = 0.67, t = 14.25), OW-B (β = 0.55, t = 10.36), and PW-B (β = 0.82, t = 21.48). Furthermore, WP had a significant and favorable impact on JS (β = 0.47, t = 7.52), OW-B (β = 0.41, t = 5.27), and PW-B (β = 0.44, t = 6.38) (Table 6). When it comes to the JS, TI, and WP subscales, there is a strong and essential connection between JS and the following subscales: teaching self-efficacy (β = 0.71, t = 14.33), burnout (β = 0.62, t = 12.10), resilience (β = 0.73, t = 15.76), attitudes toward teaching (β = 0.69, t = 14.08), openness to change (β = 0.68, t = 13.43), classroom affectivity (β = 0.78, t = 18.56), coping (β = 0.63, t = 12.88), harmonious passion (β = 0.48, t = 8.59), and obsessive passion (β = 0.46, t = 6.42).
Discussion
The aim of this study was to determine the interplay among TI, WP, JS, OW-B, and PW-B. The research revealed a significant and favorable link between TI, WP, JS, OW-B, and PW-B in the EFL setting while the instruction was integrated with AI. The findings of the first part of the research indicate that those who effectively and efficiently fortified their instruction would have been more adept at managing challenging circumstances and conflicts in the workplace. The results of the present study aligned with Rahmati et al.'s (2019) findings, which emphasized the importance of promoting contemplation as a method of enhancing TI. More precisely, the results indicate that there is a correlation between the level of tenacity in following instructions, enthusiasm and determination in teaching, self-awareness, and attention toward others.
Productive immunity, in accordance with self-organization theory's principles, functions as a means of protection against various obstacles encountered in the workplace (Hiver, 2017). The research revealed a robust association between language teachers' efforts to adapt to changes and their cognitive capacities in this domain. It might be argued that higher-level cognitive functions enhance self-awareness and that efficient and productive immunity is a consequence of self-organization. Job satisfaction fosters emotional equilibrium, leading to improved immune function and therefore increasing teachers' dedication to perseverance, purpose in the classroom, excitement, and their awareness of themselves and others. Another perspective that may be used to comprehend the results of this research is self-organization theory. Language instructors may adapt to the novel circumstances brought about by AI in the classroom by employing productive immunity. In addition, the study's findings suggest that EFL teachers who adopt productive immunity throughout their careers have a better understanding of their instructional environment and the factors that affect their effectiveness. Previous research (e.g., Amirian et al., 2023; Namaziandost & Heydarnejad, 2023; Rahmati et al., 2019) has identified noteworthy correlations between professional achievement, self-efficacy, resilience, and exhaustion (which are among the subscales of the LTII). However, the absence of previous research specifically examining the correlation between TI, WP, and JS precludes any ability to draw comparisons between this finding and others. Consequently, this study can inspire further research on the well-being of teachers in the era of AI applications in language education. The results of the second research question (Do EFL instructors' TI and WP offer any indication of their OW-B in AI-integrated instruction?) indicate that TI and WP predict the state of OW-B in AI-integrated language instruction. This result may be supported by positive psychology principles. Similar to other domains within positive psychology, language education employs self-help principles to enhance the learning experience (Seligman, 2018). Thus, instructors who exhibit TI and WP are more likely to achieve intrapersonal and interpersonal mindfulness, which can lead to more success. They are less certain and more ardent in the classroom. The study found that positive interactions and peer support enhanced not only the resiliency and determination of EFL instructors in their classrooms but also their sense of purpose and significance. While definitive evidence linking TI, WP, and OW-B is still lacking, the research conducted by Zhang (2021) implies that increased engagement is linked to more persistent behavior as a teacher, which indirectly supports this result.
Additionally, it was demonstrated that TI and WP significantly influence a teacher's level of PW-B. Teachers who are immunized and dedicated are more likely to feel a sense of mission and importance in their teaching. Teachers who have acquired the appropriate immunity are more likely to develop a sense of professional fulfillment, which in turn improves their overall health and satisfaction. Tolerance, self-efficacy in the classroom, fatigue, perseverance, teaching attitudes, adaptability readiness, and responsiveness in the classroom are all potential contributors to professional engagement. It would appear that educators who possess positive relationships with both students and colleagues, exhibit fruitful immunity, and develop engaging and impactful lesson plans are more inclined to exhibit feelings of competence and self-assurance in their vocation. This sense of competence and reliability may increase job satisfaction and contentment, thereby fostering psychological health as a whole (Noori, 2023). Additionally, it can be inferred that educators are less prone to feeling exhausted and emotionally distressed when they have a sense of autonomy in their work and possess the necessary abilities and resources to confront any challenges that may emerge. This can be attributed to their exceptional capacity to manage stress and navigate challenging situations effectively, which eventually brings about improved psychological health. Educators who possess adaptive immunity are more likely to be inclined toward improvement, a trait that positively correlates with their mental and psychological health and assists them in managing stressors (Rahmati et al., 2019). Moreover, the results of the study indicate that educators who have immunity demonstrate an unwavering dedication to accomplishing their scholastic objectives and achieving success. The implications of these findings for the design and implementation of AI-based programs and initiatives that aim to enhance the well-being of EFL instructors are substantial. As previous research has shown (e.g., Jamal, 2023; O'Dea & O'Dea, 2023), AI can improve educators' skills by providing them with access to various tools and resources that may help them become more effective teachers. Assessment systems that are driven by AI may also provide instructors with real-time feedback on the performance of their students. This gives teachers the ability to modify their instructional methods to better comply with the requirements of the students. Additionally, AI may assist educators in personalizing learning by enabling them to develop classes that are tailored to meet the requirements of each of their students.
Conclusion and Pedagogical Implications
The study underscores the significance of TI, WP, JS, OW-B, and PW-B, potentially providing educators with insights to enhance their pre-service and in-service curricula, especially in AI-supported language learning.
The potential influence of educators' TI and WP on their responses to reform initiatives suggests that the findings of this research may inspire language instructors to employ strategies for productive immunization and engagement when instructing via AI. The integration of AI in language education not only maintains effective teaching methods but also enhances language instruction, ensuring continuous progress in effective teaching practices (keeping the ball rolling). Furthermore, it is highly recommended that policymakers consider the findings of the current study so as to develop a holistic comprehension of the elements that make certain programs and instructors effective while rendering others ineffectual. Policymakers, language educators, and instructors must acknowledge the significance of language instructor immunity, considering the novelty and efficacy of this concept. Further investigation may be warranted to address certain constraints that were identified in the current study. To begin with, further research is suggested to enhance the applicability of the results across different higher education institutions nationwide, given that the participants were selected via convenience sampling and AI-based applications were used in teaching. Future research may employ mixed-methods designs to examine the correlations between TI, WP, JS, OW-B, and PW-B, extending this quantitative investigation, so as to offer a more detailed understanding of the matter. Moreover, due to the cross-sectional design of the current investigation, additional long-term studies are required to examine the relationships between TI, WP, JS, OW-B, and PW-B. Furthermore, additional descriptive variables, such as the demographics of the language instructors, were not examined in this study; therefore, it is suggested that future research use demographic information regarding language instructors. Last but not least, additional investigation is necessary to determine the degree to which productive immunity, physiological well-being, buoyant inclinations, and learner engagement can serve as predictors of teacher success.
Figure 2: path diagram. Figure 3: t values.
Table 1: Descriptive Statistics of Psychological and Professional (P&P) Measures in Teaching.
Table 2: Results of Kolmogorov-Smirnov Test on the Distribution of Factors Related to EFL Teacher Well-Being. Since all instruments and their subscales had p values greater than 0.05, the data followed a normal distribution and parametric methods were deemed appropriate to evaluate it. This study employed a Pearson product-moment correlation to investigate the relations between TI, WP, JS, OW-B, and PW-B. Results are displayed in Table 3.
Note. TI = teacher immunity; WP = work passion; JS = job satisfaction; OW-B = occupational well-being; PW-B = psychological well-being. * A dash indicates that data was not available. ** Correlation is significant at the 0.01 level (2-tailed).
Table 4: Comparison of Fit Indices in Models Exploring Factors in EFL Teachers' Well-Being.
Table 5: Review of Model 1's Outcomes.
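The fit indices summarized in Table 4 can be checked against the conventional cutoffs with a small helper; the sketch below is ours (not part of LISREL), with the cutoffs taken from the criteria stated in the Results section:

def fit_ok(chi2_df, rmsea, gfi, nfi, cfi):
    # Conventional SEM fit cutoffs: chi-square/df < 3, RMSEA < 0.08,
    # and GFI/NFI/CFI > 0.90.
    return {
        "chi2/df < 3": chi2_df < 3.0,
        "RMSEA < 0.08": rmsea < 0.08,
        "GFI > 0.90": gfi > 0.90,
        "NFI > 0.90": nfi > 0.90,
        "CFI > 0.90": cfi > 0.90,
    }

# Model 1 values as reported in the Results section.
for criterion, met in fit_ok(2.845, 0.069, 0.931, 0.920, 0.942).items():
    print(criterion, "->", "met" if met else "not met")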
6,419
2024-08-26T00:00:00.000
[ "Education", "Psychology", "Computer Science" ]
Critical Factors Influencing the Adoption of Smart Home Energy Technology in China: A Guangdong Province Case Study
Smart home energy technology has been verified to be successful for energy reduction in the residential sector. However, the current penetration rate of smart home energy technology is at a low level. Considering the factors of economy, policy, and demographics, Guangdong Province in China is a suitable exemplary region in which to promote smart home energy technology among urban residents. Therefore, using Guangdong as the target area, this research examined the factors influencing residents' intention to adopt smart home energy technology. A theoretical model based on the theory of planned behavior and the Norm Activation Model was developed, with special consideration of the complex technical features. A questionnaire survey was performed in Guangdong Province and the data was analyzed by PLS-SEM. The analysis results indicated that residents' attitude towards technical performance, social norm, perceived behavioral control, and personal norm all have a positive influence on the adoption intention, of which attitude towards technical performance had the strongest effect. On the other hand, the attitude towards economic performance was found not to lead to adoption intention. To explain this result, a discussion based on behavioral economics is proposed.
Introduction
Globally, the residential sector is responsible for 20% of the total energy consumption, and this is expected to increase by 10% until 2040 due to the growth of population, economic development and improvement of living standards [1,2]. Many technologies have been developed and engaged to solve the energy efficiency problem of residential buildings. In recent years, with the rapid development of information and communication technology (ICT) and smart grids, smart home technology (SHT) has become a promising measure to benefit home occupants' living environment and improve living quality. One important category of SHT is smart home energy technology (SHET), particularly aiming to provide energy management services or energy reduction measures to residents [3,4]. SHET includes integrated systems or isolated components to manage the demand side of a smart grid by monitoring and arranging the home electricity consumption and various smart home appliances [5]. SHET achieves its energy management goals in two ways: (1) providing residents with their energy consumption information, to help residents cultivate energy-saving behaviors; and (2) providing residents the ability to control the domestic appliances which can be scheduled or optimized via smart devices, so that they can utilize some electricity tariff policies to cut their energy bills [5-7]. Under the pressure of reducing the energy consumption of the residential sector, several national governments have proposed various policies or strategies to promote the use of smart technology.
In the TPB model, the subjective norm (SN) construct is affected by the prevailing external values in the social environment. In many previous studies, the power of personal norms in explaining pro-environmental or altruistic behavior has been demonstrated [34-36]. In the Norm Activation Model (NAM) theory proposed by Schwartz [37], the term "personal norm" was defined as the self-expectations or commitments under one's internal values, and it reflects one's feelings about the obligations to engage in a specific behavior [37,38].
Personal norms will have influence on the behavior intention when someone is aware of the consequences (AC) of their behavior for the benefit of others or ascribes the responsibility (AR) for those consequences to themselves [37,39,40]. Therefore, some studies have combined the TPB with the NAM to improve the explanatory power of the TPB, considering both internal personal norms and external social values. A list of previous studies in the context of energy-saving or pro-environmental behavior is shown in Table 1. However, given the complexity of human behavior and human nature, the current theories and studies are not capable of covering all the social and psychological factors, as well as personal traits, relevant to energy-saving behavior [19,41]. Generally, energy-saving behavior includes two fundamental categories: habitual behavior and purchasing behavior [42,43]. Habitual energy-saving behavior refers to daily activities that reduce energy consumption, such as setting thermostats lower, turning lights off when leaving a room, and unplugging appliances after usage [43,44]. Purchasing energy-saving behavior, also called "technology choice" [42], requires home retrofitting and financial investment in new energy-efficiency technologies [43,45], such as installation of home energy management systems [32] and purchasing energy-labeled appliances [20]. The scope of this paper will focus on purchasing behavior in the context of smart home energy technologies.
Research Hypothesis
Based on the above literature review of behavioral models, this article introduces the construct of personal norm from the NAM and develops an extended TPB model, in order to strengthen the explanatory power of the TPB in the moral dimension. Additionally, considering the complicated technical features of smart technology and the potential monetary gains or costs incurred, the original construct "attitude" in the TPB could not provide enough explanation covering all aspects of SHET. Therefore, to better understand residents' perceptions of the technical and economic performance of SHET, two new attitudinal constructs are developed in this study: attitude towards technical performance (ATTP) and attitude towards economic performance (ATEP), as shown in Figure 2. The measurement indicators assessing each model construct are obtained from the literature. The specific explanations of the constructs and measurement indicators in this theoretical model are described in the following sections.
Residents' Attitude towards Adoption Intention of SHET
Attitude is decided by one's subjective evaluation of the probable outcome that a behavior will produce [33]. It is a mental state of readiness that a person learns through experience, and it exerts influence on people's responses [50]. In the study of household electricity-saving behavior performed by Wang [51], attitude was decided by a household's evaluation of preference for electricity saving and the availability of information. Liu pointed out that residents' attitudes towards green buildings were affected by their perceptions of usefulness and environmental awareness [52]. In the context of adoption of smart home energy technology (SHET), attitude represents the residents' evaluation of the performance that the SHET will present. Currently, smart technology is still under development, constantly providing new features to users. As smart home technology is expected to be involved deeply in people's lives, and to be aware of residents' daily activities, preferences, and living habits [53], the smart living experience is very crucial when residents are deciding whether to adopt or not. Wong pointed out that technical (functional and operational) performance was an important factor influencing the adoption of smart home technologies [54]. Mert asserted that consumers' perception of a mature technology would determine one's willingness to use a smart appliance [55]. Here we come up with the hypothesis that residents' attitude towards technical performance (ATTP) of SHET is positively related with the adoption intention of SHET.
In total, seven categories of measurement indicators relevant to technical performance are investigated from previous studies, including information feedback [7], automation, controllability [56], reliability [16,54], convenience [57], privacy protection, and safety [8,16,55,58,59]. Moreover, a consumer study about smart domestic appliances organized in five European countries (Austria, Germany, Italy, Slovenia and the UK) discovered that consumers' adoption intention would depend on their perceptions of financial benefit [55]. Higher expected monetary gains and a shorter payback period would improve the evaluation of the smart home equipment [57]. Balta-Ozkan also conducted a comparative study about consumers' perceptions of smart home technology in the UK, Germany and Italy, revealing that people's perception of economic performance, such as reducing energy cost, was one key driver for smart home adoption in the three European countries [59]. Wong pointed out that a low maintenance cost during the usage phase is a significant indicator of good economic performance [54]. The benefits of energy cost saving, a lower payback period, and a higher net present value of smart home technology solutions were demonstrated by experimental simulation for single-family houses in Germany and Algeria [60]. Hence, based upon the previous research and literature reviewed, we expect that residents' attitude towards economic performance (ATEP) of SHET will have a positive impact on the adoption intention, and three measurement indicators of economic performance are investigated: saving energy expense, low maintenance cost, and cost effectiveness. The two hypotheses about attitude are listed below:
H1: Residents' attitude towards technical performance of smart home energy technology is positively related with adoption intention.
H2: Residents' attitude towards economic performance of smart home energy technology is positively related with adoption intention.
Perceived Behavioral Control
Perceived behavioral control (PBC) is defined as people's perceptions of their ability to perform a given behavior, determined by the capabilities or resources that, under people's perceptions, can facilitate the performance of this behavior [33]. PBC reflects two dimensions: the first concerns the availability of external factors, such as money, time or other resources, while the other concerns internal factors, like self-confidence in the ability to perform a specific behavior [33,61]. Besides, as smart technology is still developing and new products or features will be released to market continually, the technical compatibility of smart products with existing building systems, as well as with other smart products, is important [16]. Four measurement indicators of PBC are chosen from the past literature: knowledge and skills, financial capability, compatibility with the existing building system, and compatibility with other smart products. In previous research about energy-saving or environmentally friendly behaviors, perceived behavioral control has been widely adopted into the theoretical model and confirmed as a significant factor influencing the behavioral intention [18,20,62-64]. Saqib Ali [19] verified that PBC is positively related to residents' purchase intention of household energy-efficient appliances through a questionnaire survey in Pakistan. Therefore, this study has a similar expectation about PBC and develops the following hypothesis:
H3: Perceived behavioral control has a positive relation with residents' intention to adopt SHET.
Social Norm
Social norms, also named subjective norms, are defined by Ajzen as the perceived social pressures to engage or not to engage in a behavior, related with the expectations of important referents, such as friends, family members, etc. [33]. Cialdini categorized the social norm into two types: injunctive norm and descriptive norm [65,66]. The injunctive norm refers to whether one behavior is supported by the majority of a social group, while the descriptive norm reflects a popular behavior welcomed by the society [66]. According to the Theory of Diffusion of Innovation [28], in the decision-making process of a new technology adoption, people will be influenced by factors from the external environment, such as mass media, government policy or regulations, and their social network [67]. In a comparative study of household energy-saving behaviors in five Asian countries conducted by Hori [68], the significance of social interaction factors such as "favoring neighborhood" and "participating in community" was investigated through a questionnaire survey. Wang also verified the significance of policy in determining Beijing residents' electricity-saving behavior [51]. Therefore, policy environment, media publicity, and support from social network are selected as measurement indicators to reflect the factor of Social Norm (SN). The indicator "support from social network" reflects the injunctive type of norm, while the other two indicators reflect the descriptive type. Based on the previous research, one hypothesis is developed:
H4: Social norms have a positive influence on residents' intention to adopt SHET.
Personal Norm
Personal norm (PN) is defined as the self-expectations or commitments under one's internal values, and it reflects one's feelings about the obligations to engage in a specific behavior [37,38].
Personal norms will have influence on the behavior intention when someone is aware of the consequences (AC) of their behavior for the benefit of others or ascribes the responsibility (AR) for those consequences to themselves [37,39]. The impact of personal norm on the motivation of energy-saving or carbon-reduction behavior has been verified by numerous past studies [41,48,69]. What is more, Ritu Agarwal suggested that a person with innovativeness as a personal trait would be more likely to adopt new technology [70]. Saqib Ali also confirmed the role of innovativeness as a human trait influencing consumers' attitude towards energy-efficient appliances [19]. In a consumer acceptance analysis of home energy management systems (HEMS) for the Korean market, the authors identified social contribution, environmental responsibility, and innovativeness as influential factors [17]. In this study, referring to the previous studies, three measurement indicators are selected to assess the factor personal norm (PN): social responsibility, environmental awareness, and innovativeness. The fifth hypothesis for residents' intention to adopt SHET is proposed:
H5: Personal norm is positively related to residents' adoption intention for SHET.
A summary of the factors, the measurement indicators of the factors, and the descriptions of the indicators and their sources highlighted in the literature is provided in Table 2.
Table 2. Summary of influential factors and measurement indicators.
Technical performance attitude (ATTP):
- Automation (TP1): SHET could achieve automatic operation and requires minimized human intervention. [56]
- Reliability (TP2): The operation of SHET will not suffer major failure or malfunction. [8,16]
- Controllability (TP3): The operation of SHET could follow some guideline, work under interactive mode, and be controlled by humans via different methods. [56,71]
- Safety (TP4): SHET would not threaten residents' personal and property safety. [8,16]
- Feedback 1 (TP5): SHET could report the household's total energy usage information through smart devices, such as a smart phone or In-Home Display. [7]
- Feedback 2 (TP6): SHET could report the household's appliance-level energy usage information.
- Feedback 3 (TP7): SHET could report the household's energy consumption level among the neighborhood. [72]
- Privacy 1 (TP8): SHET could ensure residents' personal privacy would not be violated. [8,16,55]
- Privacy 2 (TP9): Service providers of SHET will not violate the privacy rights of residents.
- Convenience 1 (TP10): The functions and design of SHET could enable residents to use it conveniently. [55,57]
- Convenience 2 (TP11): The functions of SHET could improve residents' living comfort.
Economic performance attitude (ATEP):
- Energy expense saving (EP1): SHET could help the household save on energy bills.
- Low maintenance cost (EP2): The maintenance cost of SHET during the usage phase is low. [54]
- Cost effective (EP3): Considering the cost of purchase and installation, SHET is cost effective.
Perceived behavioral control (PBC):
- Knowledge and skill (PBC1): Residents need to master enough knowledge and skill to adopt SHET. [33,61]
- Financial capability (PBC2): Residents need enough financial capability to adopt SHET.
- Compatibility with building system (PBC3): The building system of an existing home could be compatible with smart home energy products. [16]
- Compatibility with smart products (PBC4): The existing smart home energy products could be compatible with other products in the market.
Social norm (SN):
- Policy environment (SN1): The policy environment (government policies or regulations) regarding SHET. [51]
- Media publicity (SN2): The marketing or advertisement information of SHET on mass media.
- Social network support (SN3): Support from family and members of one's social network for SHET adoption.

Personal norm (PN)
- Social responsibility (PN1): Residents deem themselves responsible for adopting SHET for the future of society. [17,37,38]
- Environmental concern (PN2): Residents are aware of environmental protection.

Questionnaire Survey Design and Data Collection

A quantitative analysis based on a questionnaire survey was employed in this study. Survey questions were developed from the literature highlighted in the sections above, and the questionnaire comprised two parts. The first part collected respondents' demographic information, including gender, age, educational level, household income, and usage experience of SHET. The second part contained the questions addressing the measurement indicators. Likert-scale measurement has been applied in many questionnaire studies, such as [19,20,46,73-75]. A five-point Likert scale was used to measure the variables, ranging from 1 = strongly disagree to 5 = strongly agree. The questionnaire was then distributed through an internet-based survey system to urban residents in Guangdong Province. The survey was carried out from February 2019 to March 2019. A total of 2600 questionnaires were distributed and 2391 responses were returned, a response rate of 92%. During data screening, responses with missing values or originating from rural villages were removed. Finally, 1913 responses were retained as the sample for the SEM analysis. Table 3 presents the respondents' demographic information, showing that the percentage of male respondents (60%) is higher than that of females (40%); 93.2% of respondents are young or middle-aged (18-60 years old); 63.8% have a university degree or above; and a large proportion of respondents had experience using SHET.

Structural Equation Modelling

Structural equation modelling (SEM) was employed to analyse the relationships between the model constructs and to test the hypotheses. In recent years, SEM has become one of the most important and influential statistical methods in social science research [76]. As a second-generation multivariate analysis technique, SEM can assess the measurement model and the structural model simultaneously by combining two powerful statistical methodologies: exploratory factor analysis and linear regression analysis [77,78]. SEM has two dominant approaches: covariance-based SEM (CB-SEM) and variance-based partial least squares (PLS-SEM). Compared with CB-SEM, PLS-SEM offers flexibility and advantages: fewer limitations on sample size, no strict requirement of data normality, and the ability to encompass various data formats and large numbers of variables [79-81]. PLS-SEM has therefore gained popularity in many research fields such as strategic management [82], information systems [83], business management [84,85], tourism management [86], accounting [78], technology adoption in the construction industry [87], and marketing [88]. In Table 3, under the age category, the sub-samples of juveniles and the elderly are quite small (66 and 64 respondents, respectively). Table 4 presents the normality test results for the measurement indicators; the p-values show that the data do not conform to a normal distribution. Therefore, considering the applicability and data requirements of CB-SEM and PLS-SEM, this study employs PLS-SEM to analyse the theoretical model.
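The normality screening reported in Table 4 can be reproduced with a standard test. The sketch below is a minimal illustration using the Shapiro-Wilk test on hypothetical five-point Likert responses; the column names and the generated data are placeholders, not the study's dataset:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical Likert responses (1-5) for a few indicators; the real
# study has 1913 screened responses and many more indicators.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "TP1": rng.integers(1, 6, size=200),
    "PBC1": rng.integers(1, 6, size=200),
    "SN3": rng.integers(1, 6, size=200),
})

# Shapiro-Wilk: H0 = the indicator is normally distributed.
for col in df.columns:
    w, p = stats.shapiro(df[col])
    verdict = "non-normal" if p < 0.05 else "approx. normal"
    print(f"{col}: W={w:.3f}, p={p:.4f} -> {verdict}")
# Discrete five-point scales almost always reject H0, which is one
# reason PLS-SEM (no normality requirement) is preferred over CB-SEM.
```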
The PLS-SEM analysis was performed with the software SmartPLS 3 [89] (SmartPLS 3.2.8, SmartPLS GmbH, Hamburg, Germany) in three steps: (a) assessment of the measurement model, (b) assessment of the structural model, and (c) assessment of the significance of the path coefficients [90,91]. The detailed results are presented in the following section.

Assessment of the Measurement Model

The measurement model is the outer model of SEM, representing the relationships between the latent constructs and their associated indicator variables [92]. The measurement model is evaluated by two types of validity:
• Convergent validity: outer loadings of indicators > 0.7, composite reliability (CR) > 0.7, and average variance extracted (AVE) > 0.5 [90], meaning that the indicators are reliable and more than half of the indicator variance is captured by the construct [91];
• Discriminant validity: to evaluate whether each construct in the SEM is distinct from the others [92], the criterion is that the square root of the AVE of a construct should be higher than the correlation coefficients between that construct and any other construct [90].

The assessment of convergent validity is presented in Table 5, where all indicator loadings are higher than 0.7, meaning that all measurement indicators are reliable and can be retained in the model. Both Cronbach's α and composite reliability (CR) exceed 0.7, satisfying the requirement of internal consistency, and the average variance extracted (AVE) ranges from 0.662 to 0.759, indicating that the constructs explain at least 66% of the indicator variance. Following the recommendations of Hair et al. [90,91], the convergent validity of the measurement model is thus established. Table 6 presents the assessment of discriminant validity. As shown there, the square root of the AVE of each construct (the values on the diagonal) is higher than the correlations between that construct and any other construct; following [90,92], the measurement model therefore achieves sufficient discriminant validity, implying that each construct is distinct from the others.

Assessment of the Structural Model

The primary evaluation criteria for the structural model are the significance of the path coefficients, the R² measure, and Stone-Geisser's Q² value [90]. In this study, the significance of the path coefficients was tested with a bootstrapping procedure of 5000 samples; the critical t-value is 2.33 at the 0.01 significance level (** p < 0.01). As shown in Table 7, hypotheses H1, H3, H4, and H5 are supported, while H2 is rejected: the positive influences of attitude towards technical performance, perceived behavioural control, social norm, and personal norm on the adoption intention of SHET are empirically supported, whereas a positive relationship between residents' attitude towards the economic performance of SHET and adoption intention could not be verified. The R² measure tests the explanatory power of the latent variables in the model. In the discipline of consumer behaviour, an R² of 0.20 is considered high, indicating that the model explains the research object well [90]. Cohen suggested that in behavioural science an R² value of 0.35 is substantial [93].
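For concreteness, the convergent- and discriminant-validity arithmetic described above reduces to a few lines of code. This is a minimal sketch using a hypothetical standardized loading matrix and a hypothetical inter-construct correlation, not the study's Table 5 and Table 6 values:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Hypothetical standardized outer loadings for two constructs.
blocks = {"PBC": [0.82, 0.79, 0.85, 0.81], "SN": [0.84, 0.88, 0.80]}
ave = {}
for name, lam in blocks.items():
    cr = composite_reliability(lam)
    ave[name] = average_variance_extracted(lam)
    print(f"{name}: CR={cr:.3f} (>0.7?), AVE={ave[name]:.3f} (>0.5?)")

# Fornell-Larcker check: sqrt(AVE) must exceed the inter-construct
# correlation (a hypothetical value here).
r_pbc_sn = 0.55
ok = np.sqrt(ave["PBC"]) > r_pbc_sn and np.sqrt(ave["SN"]) > r_pbc_sn
print("Discriminant validity (Fornell-Larcker):", "passed" if ok else "failed")
```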
As Table 7 shows, the R² value is 0.589, meaning that 58.9% of the variance in the adoption intention of SHET is explained by the five antecedent constructs in the proposed model. In addition, the Q² value is a standard means of evaluating the model's predictive relevance: the constructs exhibit predictive relevance if Q² is larger than zero, and here Q² = 0.574 [90]. Figure 3 presents the complete PLS-SEM results with path coefficients and indicator loadings.

Assessment of Hypotheses by Category of Demographic Information

To further analyse whether demographic factors (gender, age, education, personal income) affect the hypothesis test results, the whole sample was divided into sub-groups and PLS-SEM was re-run for each sub-group. The sub-groups and their hypothesis test results are shown in Table 8. Under the demographic category of gender, the test results for the male group are consistent with those of the whole sample in Table 7; for the female group, however, H5 is rejected, indicating that in this study the personal norms of women do not lead to the adoption of SHET. Under the category of age, for the juvenile group only H1 is supported, with H2-H5 rejected, indicating that the only factor influencing teenagers' adoption of SHET is their attitude towards its technical performance. For the young and middle-aged groups, the results are the same as for the whole sample. Notably, for the elderly group all five hypotheses are rejected, meaning that none of the factors discussed in this article would drive elderly people to welcome SHET. As for educational level, compared with the whole sample, the group below bachelor-degree level rejects H1, signifying that people without university degrees are not led to accept SHET by its technical performance; they nevertheless intend to use SHET because of social norms, perceived behavioural control, and personal norms. For those with a university education and above, the test results remain unchanged. Considering personal annual income, there is no difference between the poor and middle-class sub-groups and the whole sample: all support H1, H3, H4, and H5 but reject H2. In contrast, affluent people, whose annual personal income is higher than 300,000 Yuan, support H1 and H2 but reject H3-H5. Notably for H2, the affluent group is the only one whose attitude towards economic performance leads to the intention to adopt SHET.

Attitude Towards Technical Performance

Without considering demographic factors, the model confirms that attitude towards technical performance (ATTP) has a positive relationship with residents' intention to adopt SHET. The result implies that residents with a favorable attitude towards the technical performance and functions are more likely to purchase SHET products. This finding is consistent with the Technology Acceptance Model (TAM). TAM was specifically designed to explain the adoption of information technology and posits that "perceived usefulness", the degree to which users believe the technology's functions are useful, has a positive influence on adoption intention [94,95].
Compared with traditional information technology such as computers, smart technology displays more complicated technical features and is more deeply involved in people's daily lives. The highest path coefficient, between ATTP and adoption intention, shows that a favorable perception of the complicated technical features of smart products (automation, reliability, controllability, safety, feedback, privacy protection, and convenience) is the strongest driver of residents' intention to use SHET. The demographic information shows that 68% of the respondents have experience using SHET, implying that urban residents in Guangdong generally hold positive attitudes towards the technical functions of SHET. Therefore, to improve the adoption rate of SHET, the smart home industry should regard the enhancement of technical performance and user experience as a key objective.

Attitudes Towards Economic Performance

As shown in Table 7, the hypothesized positive relationship between attitude towards the economic performance (ATEP) of SHET and adoption intention is rejected, meaning that residents' perceptions of the economic performance of SHET, such as financial gains through saving energy, cost-efficiency, or low maintenance cost, do not lead residents to adopt these products. This empirical result contradicts the assumption of traditional economics that humans make rational choices after weighing benefits and costs [96]. This is not unique: plenty of previous research has reported similar findings. For example, Hobman [97] described how only a small minority of Australian customers participated in a cost-reflective electricity tariff program even though it succeeded in reducing peak demand and electricity expenses; Anderson [98] analyzed the technology adoption decisions made by manufacturing plants after government-funded energy audits and noted that half of the energy-efficiency projects were rejected even when the payback periods were remarkably short; and Allcott [99] pointed out that people fail to adopt energy technologies that would save them money, such as better insulation or efficient domestic appliances and lighting. These studies suggest that even when people perceive energy technologies as profitable and cost-effective, their decisions can still produce a low diffusion rate. This phenomenon is called the "energy efficiency gap" [100-102]; because it derives from consumer choices inconsistent with the assumptions of traditional economics, a burgeoning literature has begun to discuss it through the lens of behavioral economics [100,103]. Returning to the results of this study: as shown in Table 3, the characteristics of the majority of respondents, namely middle-young age (93.2% are 18-60 years old), good education (63.8% with a university degree or above), and experience with SHET (68.3%), suggest that these urban residents in Guangdong Province exhibit some traits of early adopters of energy technology [28,29,104]. Nevertheless, the favorable attitude towards economic performance demonstrated by the survey respondents does not translate into adoption intention (Table 7). Within behavioral economics, the concept of loss aversion from prospect theory offers one explanation [105,106]. Loss aversion refers to people's tendency to weigh losses more heavily than equivalent gains [106].
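To make the loss-aversion argument concrete, the asymmetry can be expressed with the prospect-theory value function. The sketch below uses the parameter estimates commonly attributed to Tversky and Kahneman (α = β = 0.88, λ = 2.25) and applies them to illustrative, hypothetical monetary amounts rather than any quantity measured in this study:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: gains are discounted,
    losses are amplified by the loss-aversion coefficient lam."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A hypothetical yearly energy saving vs. a feared loss of equal size
# (e.g., a privacy breach or system failure valued at the same amount).
gain, loss = 500.0, -500.0
print(prospect_value(gain))   # ~237: subjective value of the saving
print(prospect_value(loss))   # ~-534: same magnitude, felt >2x as bad
# The net subjective value is negative, so the "rational" saving
# does not translate into adoption intention.
```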
Although residents perceive the economic gains from using SHET, they also have concerns about potential losses from functional risks such as system failure, loss of control, or privacy leakage; when making decisions, they appear to weigh these risks more heavily than the potential financial benefit. A similar explanation is offered in a study of energy-efficient technology adoption by homeowners in New Zealand [107], whose author suggests that homeowners hold an asymmetrical perception of risk, caused by social and cognitive biases, that prevents them from adopting energy-efficiency technologies regardless of how large the energy savings would be. Additionally, the sunk cost fallacy may be another reason why the hypothesized relationship is not supported. The sunk cost fallacy refers to the tendency to continue a behavior or endeavor once an investment of time, money, or effort has been made [108]. Sunk cost effects have been observed in energy technology adoption decisions in both personal and business contexts. For example, Verstegen [109] concluded from a survey that sunk costs significantly affected the adoption of energy-saving technologies by horticultural farmers, and Kong [110] recommended that, to facilitate the diffusion of green manufacturing technology among SMEs, governments should provide financial support until savings from production cover a substantial part of the sunk costs. In the context of this study, residents may previously have purchased non-smart or energy-inefficient household appliances that still function well. Reluctant to waste these resources, residents hesitate to discard and replace them with new smart, energy-efficient products even when they perceive the economic benefits of the smart ones. To mitigate this fallacy, smart home technology companies might adopt marketing strategies that reduce the salience of the costs consumers have already incurred while emphasizing the risks of retaining old appliances, such as higher energy bills or growing carbon emissions. Industry and government might also introduce policies that reduce consumers' switching costs from old, inefficient appliances to smart energy technology, following the example of South Korea's rebate program for purchases of energy-efficient domestic appliances [111].

Perceived Behavioral Control

Overall, perceived behavioral control (PBC), derived from TPB, is confirmed to have a positive relationship with the intention to adopt SHET. This finding is consistent with many previous studies of energy-saving behavior and energy-efficient appliance adoption [19,20,44,49]. The relationship between PBC and adoption intention reflects the significance of non-motivational factors [41]; in this study, these refer to residents' perceptions of the resources and conditions they possess for adopting smart products, including knowledge, affordability, and the infrastructural conditions of their homes. The result implies that residents who believe they have more resources or more suitable conditions for using smart products are more likely to engage.
Social Norm

The positive relationship between social norms and adoption intention is confirmed by this study, in line with the backbone theory of planned behavior. This significant relationship implies that residents in Guangdong Province are influenced by external factors, such as government policies, mass media, and their social networks, when deciding to adopt SHET. The finding is supported by previous studies of energy-saving and pro-environmental behavior in different regions of China: Wang [44] and Zhang [112] both conducted questionnaire surveys in Shandong Province and confirmed the significant impact of government policies, media publicity, and education on energy-saving behavior; Zhao [51] demonstrated the importance of policies and social norms in promoting electricity-saving behavior in Beijing; and Ting [113] showed that social norms also apply to household energy saving in Jiangsu Province. Outside China, social norms have been verified as an important influence on energy-saving opportunities in American workplaces [41] and found to relate positively to purchase intention for energy-efficient products in Korea [114]. However, research conducted in other countries such as Pakistan [19] and Malaysia [20] found no positive relationship between social norms and purchase intention for energy-efficient products. These cross-country differences may derive from differences in culture, education level, and citizens' perceptions of government enforcement.

Personal Norm

This study also finds a positive impact of personal norms on the intention to adopt SHET. The personal norm is the moral extension of TPB, reflecting the moral dimension of one's internal values. The result implies that residents with a stronger awareness of energy saving are more likely to adopt SHET. The indicators reflecting the personal norm include social responsibility and environmental concern, echoing the results of past research on energy-saving behavior [20,44,47,48]. Additionally, because of the innovativeness of smart technology, an indicator reflecting one's interest in technological innovation is also employed to measure residents' internal values towards smart technology; the result confirms the reliability of this indicator. This finding echoes the study of Ali [19], in which residents with positive attitudes towards technology and innovation showed higher intention to adopt energy-efficient household appliances.

Gender

In this study, the gender difference lies in H5: the positive relationship between personal norm and adoption intention is supported for the male group but rejected for the female group. This finding is consistent with the view that "Chinese men show greater environmental awareness than Chinese women" [115]. In western countries, however, there appears to be no consensus on the effect of gender on environmental concern: some research reports that women hold stronger pro-environmental beliefs, while other studies find no relationship [116]. The influence of gender on residents' intention to adopt SHET therefore needs further survey and study.
Age

As shown in Table 8, the hypothesis test results are dominated by the young and middle-aged group because of its large proportion (93.2%). The analyses of the juvenile and elderly groups, however, present different outcomes. All five hypotheses are rejected for the elderly group, indicating that the theoretical model discussed in this paper is not applicable to elderly adults. With the coming of an aging society, much research has emerged on smart technology adoption by older adults; compared with energy saving, the elderly place more value on assisted-living functions, such as personal emergency alarms, that help them live independently at home [117]. As for juveniles, the only factor empirically supporting their adoption intention is ATTP, with H2-H5 rejected. This may be due to the widespread popularity of smartphones and the mobile internet: teenagers do not perceive smart technology as strange and can therefore form positive perceptions of its technical performance; however, lacking sufficient knowledge and skills, financial capability, and mature personal values, they do not form positive relationships between the other four factors and adoption intention.

Education

The assessment results for the sub-group with a university degree or above are consistent with those of the whole sample, while the sub-group without a bachelor's degree presents slight differences. At the significance level of * p < 0.05, H1 is rejected for the lower-education group. One explanation is that knowledge limitations make it hard for this group to form positive perceptions of complicated technical performance. This explanation is supported by the research of Mills [118], whose study of residential energy-efficient technology adoption in European countries concluded that education level strongly influences a family's attitude towards energy-efficiency technology. To address this, government or industry organizations might offer training courses to foster the perception and understanding of smart technology among people with less formal education.

Personal Income

H1-H5 were also examined for the poor, middle-class, and affluent sub-groups. The assessment results in Table 8 reveal no difference between the poor and middle class, consistent with the whole sample. The hypothesis test results for the affluent group, however, deviate considerably. As shown in Table 8, in contrast with all other sub-groups, H2 is empirically verified for the affluent group (169 respondents), indicating that wealthy people intend to use SHET when they perceive its positive economic performance. Compared with the poor and middle class, wealthy people are less likely to be trapped in the energy efficiency gap; this echoes a view from behavioral economics research on poverty that affluent people are less susceptible to such biases [119]. Furthermore, H1 is also supported for the affluent group, as for the poor and middle class, but H3-H5 are all rejected. This suggests that the affluent respondents in this study are driven purely by goals and benefits.
The only two factors they consider for SHET adoption are technical performance and economic benefit; they are not concerned with external resources and conditions, social norms, or personal norms.

Conclusions

This study developed a research model to explore the factors influencing residents' intention to adopt smart home energy technology in Guangdong Province, China. The theory of planned behavior (TPB) served as the backbone of the model, and the norm activation model (NAM) was incorporated to improve the model's explanatory power with respect to the moral dimension. Because of the innovativeness and special technical features of smart technology, the attitude construct in TPB was split into attitude towards technical performance (ATTP) and attitude towards economic performance (ATEP). Overall, the study examines the relationships between attitude towards technical/economic performance, social norm, perceived behavioral control, personal norm, and the SHET adoption intention of residents in Guangdong, which we have justified as a good exemplary case for China's situation. To test the model, a questionnaire survey was organized in Guangdong to collect data, and structural equation modelling using PLS was employed for data analysis and hypothesis testing. The results indicated that four hypotheses were supported and one rejected, confirming the positive relationships between attitude towards technical performance (ATTP), social norm (SN), perceived behavioral control (PBC), personal norm (PN), and the intention to adopt SHET. The positive impact of attitude towards economic performance on adoption intention was rejected, and two explanations from behavioral economics were proposed for this outcome. To investigate the impact of demographic factors on adoption intention, the sample was divided into sub-groups by demographic category and re-modeled with PLS-SEM. Comparison of the sub-group results revealed several differences: the gender difference lay in the personal norm factor; teenagers' adoption intention was driven solely by positive perceptions of technical performance; the theoretical model was not applicable to elderly people; educational level affected residents' attitude towards technical performance; and the high-income group considered only the two attitude factors when making adoption decisions. Some limitations of this study should be acknowledged. First, TPB and NAM are the backbone theories of this study, so the factors and measurement indicators are confined to the frameworks of these two theories; given the complexity of human behavior, adoption intention may also be affected by factors outside these frameworks that are not examined here. Second, the research data were collected through self-reported questionnaires rather than observation of actual behavior, so respondents' answers may be influenced by inherent biases arising from personal characteristics, the social environment, or demographic factors rather than reflecting real situations. Third, the descriptive analysis showed that only 3.3% of the respondents are elderly.
As China gradually becomes an aging society, more of the requirements of the elderly should be considered in future work. Finally, in the demographic analysis the sample sizes of some categorical groups are not comparable with one another; although PLS-SEM does not require a large sample, this imbalance still limits precision and calls for further effort.
Distribution of the Deposition Rates in an Industrial-Size PECVD Reactor Using HMDSO Precursor

The deposition rates of protective coatings resembling polydimethylsiloxane (PDMS) were measured with numerous sensors placed at different positions on the walls of a plasma-enhanced chemical vapor deposition (PECVD) reactor with a volume of approximately 5 m³. The plasma was maintained by an asymmetric capacitively coupled radiofrequency (RF) discharge using a generator operating at a frequency of 40 kHz with an adjustable power of up to 8 kW. Hexamethyldisiloxane (HMDSO) was leaked into the reactor at 130 sccm with continuous pumping by roots pumps with a nominal pumping speed of 8800 m³ h⁻¹ backed by rotary pumps with a nominal pumping speed of 1260 m³ h⁻¹. Deposition rates were measured versus the discharge power in an empty reactor and in a reactor loaded with samples. The highest deposition rate, approximately 15 nm min⁻¹, was observed in an empty reactor close to the powered electrodes, and the lowest, approximately 1 nm min⁻¹, was observed close to the precursor inlet. The deposition rate was about an order of magnitude lower when the reactor was fully loaded with samples, and the ratio between the deposition rates in the empty and loaded reactor was largest far from the powered electrodes. The results were explained by the loss of plasma radicals on the surfaces of the materials facing the plasma and by the peculiarities of the gas-phase reactions typical for asymmetric RF discharges.

Introduction

Many materials should be coated with a thin protective layer to provide an adequate surface finish and stability in harsh environments [1-5]. A variety of techniques have been proposed, and a few have also been commercialized [6-10]. One technique for depositing compact, hydrophobic films similar to polydimethylsiloxane (PDMS) is plasma polymerization: a suitable monomer is supplied and partially dissociated and ionized under plasma conditions [11,12]. The radicals adhere to the surface of any object exposed to the plasma and form a thin film. The structure and composition of the coating depend on the type of precursor, the plasma parameters, and the specifics of the discharge used for sustaining the gaseous plasma [13-17]. The growth kinetics is complex and difficult to control because of the large number of radicals formed in the gaseous plasma. An early report on the kinetics was presented by Bourreau et al. [18]. The authors used different sources to deposit protective coatings rich in silicon oxides: silane (SiH₄), hexamethyldisiloxane (HMDSO), and tetraethoxysilane (TEOS). They correlated the evolution of the coverage with the deposition kinetics and compared the growth rates. The profiles were independent of the substrate temperature and the deposition rate when silane was used as a precursor; with organic precursors, however, the deposition rate decreased as the deposition temperature increased. They found adsorption-desorption phenomena to be important factors in the coverage evolution. At low deposition temperatures, the film growth rate was sensitive to ion bombardment of the surface and resulted in a non-conformal deposit even for compounds with high surface mobility. Theirich et al. [19] studied the gas-phase reactions in HMDSO/O₂ mixtures at pressures between 20 and 70 Pa. The plasma was characterized by mass spectrometry and infrared spectroscopy.
They found the film homogeneity to be dominated by the precursor content and its spatial distribution in the gas or plasma phase. Three reactive intermediate species, all with a mass of 148 Da, were proposed to act as precursors for silica-like film growth, and the authors concluded that further work was needed to distinguish between these radicals. In their classic paper, Hegemann et al. [20] studied the deposition rate and three-dimensional uniformity of capacitively coupled radio-frequency (RF) plasmas used for depositing protective layers from HMDSO. The deposition rate increased with the monomer gas flow, whereas it was independent of pressure. Large differences in the deposition rates at different sample positions were reported, as well as an influence of the sample dimensions on the growth kinetics. In another paper [21], the same group investigated the deposition rate in symmetric and asymmetric electrode configurations and found that it depended on the so-called reaction parameter (power input per gas flow of the monomer). More recently, Ropcke's group [22] performed a detailed characterization of HMDSO plasma by optical emission spectroscopy (OES) in the visible spectral range and by infrared laser absorption spectroscopy (IRLAS). They used a plasma reactor with a rather large power density (discharge power per volume of the discharge chamber), of the order of 100 W per liter. They derived the concentrations of various stable and unstable plasma species, which were in the range between 10¹⁷ and 10²¹ m⁻³, and studied the influence of discharge parameters such as power, pressure, and gas mixture on the molecular concentrations. Owing to the construction principle of the reactor, the plasma generation was characterized by a certain degree of inhomogeneity, with different temperature zones (hottest, hot, and colder). This complexity was reflected in the multiple molecular species, including the HMDSO precursor and products in ground and excited states, existing in the plasma. The plasma-enhanced chemical vapor deposition (PECVD) technique for depositing protective coatings from HMDSO was commercialized decades ago despite the experimentally observed non-homogeneities and instabilities, which may lead to inadequate properties of the deposited films. Recently, Gosar et al. [16] reported that the composition of the deposited films depended on the time evolution of the plasma parameters even though the discharge parameters (power, pressure, flow rate, pumping speed) remained fairly constant. The time evolution was attributed to drifting plasma parameters, which was detrimental to the quality of the protective films, especially where a rather high power density was used to sustain the gaseous plasma. At low discharge powers, however, the properties of the deposited films were not time dependent. Film quality is a crucial parameter in the industrial application of the PECVD technique using HMDSO, so many industrial reactors operate at a very low power density to minimize the risk [23]. On the other hand, the low power density results in a poor deposition rate, as explained by the above-cited authors. The problem of plasma non-uniformity, and the resulting deviations of the film thickness from the desired value in large plasma reactors, may be suppressed by rotating the samples during plasma processing [24].
This is a standard solution in commercial reactors for depositing protective coatings in batch mode. The samples are mounted on planetaria and moved through zones with different plasma parameters. The relatively long treatment time (several minutes in commercial plasma reactors) ensures a reasonable coating thickness and uniformity. Still, the problem arising from plasma inhomogeneities is not solved, so there is a need to develop plasma reactor configurations with deposition rates that are as uniform as possible throughout the entire reactor. Commercial reactors for depositing protective coatings from the HMDSO precursor may be upgraded if the non-uniformities are known and understood. Several groups have reported non-uniformity in the plasma parameters, but only a few have measured the deposition rates in different parts of the plasma reactor [12,13,20]. The present paper provides measurements of the deposition rate performed with several sensors mounted at selected positions within a large plasma reactor. The deposition rates for an empty and a fully loaded reactor were measured to reveal the influence of the samples on the non-uniformity of the deposition rates.

Plasma-Enhanced Chemical Vapor Deposition Reactor

The industrial PECVD reactor used for the deposition of PDMS-like coatings was presented in detail in our previous paper [25]. The reactor has a cylindrical shape with a diameter of 1.9 m and a height of 1.8 m. During deposition, the reactor was pumped with two roots pumps with a total nominal pumping speed of 8800 m³ h⁻¹, backed by two rotary pumps with a total nominal pumping speed of 1260 m³ h⁻¹. Before deposition, in order to reach a base pressure as low as possible (around 0.02 Pa), the reactor was also pumped with two diffusion pumps with a total pumping speed of 35,000 L/s. HMDSO was the only gas introduced into the plasma reactor, through a calibrated flow controller. The pressure was measured with a Pirani gauge. At the HMDSO inlet flow of 130 sccm (cm³/min at STP), which is the standard flow rate used in mass production, the pressure was about 4 Pa. The plasma was characterized by optical emission spectroscopy (OES) with an AvaSpec-Mini4096CL spectrometer (Avantes, Apeldoorn, Netherlands) near one of the powered electrodes, as shown in Figure 1.
An asymmetric capacitively coupled RF discharge was used to sustain the gaseous plasma. The discharge was powered by an RF generator (PE II 10K, Advanced Energy, Denver, CO, USA) operating at 40 kHz with an adjustable power between 1 and 8 kW. A pair of powered electrodes was mounted close to the pump duct. The area of each electrode was approximately 0.4 m², while the area of the grounded electrode (the housing) was approximately 16 m², so the ratio between the areas of the grounded and powered electrodes was approximately 40. The plasma was therefore sustained by an asymmetric capacitively coupled RF discharge, and gradients in the plasma parameters were expected. The HMDSO inlet was provided through vertically oriented grounded metallic tubes, as shown in Figure 1. The tubes were positioned close to the grounded walls of the plasma reactor and had small holes separated by 15 cm; the precursor was thus introduced into the reactor unevenly.

Sensors of the Deposition Rate

Eight sensors (BDS-MF, Arzuffi, Vallezzo Bellini, Italy) were fixed on the sidewalls of the plasma reactor for real-time monitoring of the deposition rate, as shown in Figure 1 (marked S1 to S8). Sensor S1 was positioned on the rough grid that separates the discharge chamber from the polycold pump duct, which was not used in this experiment. A photo of sensor S1 is shown in Figure 2a. The other sensors were fixed on the chamber walls, i.e., on the grounded housing. Each sensor consisted essentially of a single-mode optical fiber, cleaved and exposed to the processing chamber on one side and connected to an opto-electronic signal integration system on the other.
The opto-electronic signal integration system launched light into the fiber and acquired and processed the optical power back-reflected from the cleaved fiber end. Since the deposited PDMS-like layer has a refractive index different from that of vitreous silica, the back-reflectance from the cleaved fiber end changed during deposition. This change was correlated with the change in thickness of the deposited material; the correlation was obtained by appropriate calibration and processing of the acquired signals. One such sensor was already used in our previous work [26], where the deposition rates measured in real time agreed with those obtained from time-consuming post-deposition surface analyses, namely atomic force microscopy (AFM; Solver PRO, NT-MDT, Moscow, Russia), X-ray photoelectron spectroscopy (XPS; TFA XPS, Physical Electronics, Munich, Germany), and time-of-flight secondary ion mass spectrometry depth profiling (ToF-SIMS 5 instrument, ION-TOF GmbH, Münster, Germany). Figure 2b shows a photo of an optical fiber sensor fixed on an aluminum holder mounted on the wall of the plasma reactor.
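The back-reflectance described above can be understood as thin-film interference at the fiber end face: a silica/coating/vacuum stack whose reflectance varies with coating thickness. Below is a minimal sketch of this textbook Fresnel model; the refractive indices and the 1550 nm single-mode wavelength are assumed round values for illustration, not the calibration actually used in [26]:

```python
import numpy as np

def end_face_reflectance(d_nm, wavelength_nm=1550.0,
                         n_fiber=1.45, n_film=1.42, n_vac=1.0):
    """Reflectance of a fiber end coated with a film of thickness d_nm
    (normal incidence, lossless thin-film interference)."""
    r01 = (n_fiber - n_film) / (n_fiber + n_film)     # fiber/film interface
    r12 = (n_film - n_vac) / (n_film + n_vac)         # film/vacuum interface
    beta = 2 * np.pi * n_film * d_nm / wavelength_nm  # one-pass phase
    phase = np.exp(-2j * beta)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

# Reflectance vs. thickness changes monotonically at first, then
# oscillates with a period of wavelength / (2 * n_film) ~ 546 nm;
# calibration maps the measured change back to a thickness.
for d in (0, 50, 100, 200, 400):
    print(f"d = {d:3d} nm -> R = {end_face_reflectance(d):.4f}")
```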
Optical Emission Spectroscopy (OES)

An optical lens was mounted in the PECVD reactor (Figure 1) and connected by an optical fiber, through an optical feedthrough, to a standard low-resolution optical spectrometer, an Avantes AvaSpec-Mini4096CL (Avantes, Apeldoorn, Netherlands), which measures light emission spectra. The device is based on the AvaBench 75 symmetrical Czerny-Turner design with a 4096-pixel CCD detector and a focal length of 75 mm. The range of measurable wavelengths is 200-1100 nm, with a wavelength resolution of 0.5 nm. The spectrometer has a USB 2.0 interface, enabling sampling rates of up to 150 spectra per second, and a signal-to-noise ratio of 300:1. The integration time is adjustable from 30 µs to 50 s; at integration times below 6.5 ms, the spectrometer itself internally averages spectra before transmitting them over the USB interface. The spectrometer was connected to the process computer via USB, and the integration time was set to 5 s.

Results and Discussion

The plasma in the empty discharge chamber was characterized by OES. It should be stressed that an empty chamber means that there were no samples and no planetaria (sample holders) inside the reactor. A typical OES spectrum is shown in Figure 3. The spectrum consists of the Balmer series of radiative transitions of H atoms from excited states to the first excited state. The next prominent spectral feature arises from the relaxation of CH radicals, with the bandhead at 431 nm. Other features are marginal. The OES indicates partial dissociation of the precursor molecules but otherwise does not provide significant additional information. Other radicals are also present in the reactor, but their emission is marginal. More interesting is the intensity of the spectral features versus the discharge power: Figure 4 shows quite linear curves. The emission intensity depends on the electron density and temperature as well as on the density of radicals in the ground state, and the dependence is not trivial. Still, the behavior of the lines in Figure 4 indicates more extensive dissociation of the precursor molecules, a higher electron density/temperature, or both at higher power. This observation is expected, considering that the optical lens for acquiring spectra was mounted just next to the powered electrode.
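Trends like those in Figure 4 are typically obtained by integrating the relevant emission features in each acquired spectrum. A minimal sketch is shown below; the band positions are taken from the text (the Balmer Hα line near 656 nm, the CH bandhead at 431 nm), while the spectrum array itself is fabricated for illustration:

```python
import numpy as np

def band_intensity(wavelength_nm, counts, center_nm, half_width_nm=2.0):
    """Background-subtracted intensity of an emission feature,
    integrated over center +/- half_width."""
    sel = np.abs(wavelength_nm - center_nm) <= half_width_nm
    # Crude constant background estimated from the band edges.
    edges = np.abs(np.abs(wavelength_nm - center_nm) - half_width_nm) < 0.5
    background = counts[edges].mean() if edges.any() else 0.0
    dlam = np.mean(np.diff(wavelength_nm))
    return float((counts[sel] - background).sum() * dlam)

# Hypothetical spectrum: 200-1100 nm grid with two Gaussian features.
wl = np.linspace(200, 1100, 4096)
spec = (1000 * np.exp(-((wl - 656.3) / 0.8) ** 2)    # H-alpha
        + 400 * np.exp(-((wl - 431.0) / 1.5) ** 2)   # CH bandhead
        + 50)                                        # flat background

print("H-alpha:", band_intensity(wl, spec, 656.3))
print("CH 431 :", band_intensity(wl, spec, 431.0))
```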
Figure 5 shows the measured deposition rate versus the discharge power. Interestingly, the deposition rate is rather constant over the broad range of powers from approximately 2 to 7 kW. This observation is not correlated with the data in Figure 4, which show a gradual increase in the emission intensity. The paradox can be explained by a fact already reported for small experimental systems [16]: only moderate dissociation of the precursor is sufficient for a reasonable deposition rate. Extensive dissociation of the precursor leads to the formation of various radicals that do not stick to the sample surface but are pumped out of the system; therefore, when large power densities are used to sustain plasma in HMDSO, the deposit does not resemble PDMS but rather silica. A detailed study of the transition from polymer-like films to films rich in silicon oxides was reported in [16]. The power density used in this study was at least 10 times lower than that needed for a full transition; however, there are still mild transitions towards films richer in silicon that can affect the deposition rates seen in Figure 5.

Figure 6 shows the thickness of the coating obtained from the sensors' signals versus the treatment time for the empty plasma reactor. One can observe almost perfectly linear behavior, which indicates excellent stability of the plasma parameters during the deposition of the protective coatings. The stability may be a consequence of the appropriately low pressure in the reactor, which suppresses instabilities that may arise from cluster formation [27] and the resulting loss of radicals useful for depositing the protective coating.

Both Figures 5 and 6 indicate large differences in the deposition rate at different locations, ranging from 1.6 to 14.7 nm min⁻¹. The deposition rate is largest for sensor S1, which was placed on the grid between the electrodes, as shown in Figures 1 and 2. The highest deposition rate thus occurs on a surface where it is not needed, because the radicals at the position of S1 are likely to be pumped out of the system. The high deposition rate indicates a high density of radicals capable of forming the protective coating. According to the state of the art, such radicals are partially dissociated HMDSO molecules, including those found at a mass of 148 Da [19]. In the empty chamber, these radicals are denser near the pump ducts than anywhere else in the system, as revealed in Figures 5 and 6.

Comparing Figure 5 with Figure 1, one observes the next-largest deposition rates at sensors S2 and S8, which were located somewhat farther from the pump ducts, between the gas inlet and the powered electrodes, as shown in Figure 1. The possible reasons for the favored deposition rate at these positions are discussed later in this report.
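The per-sensor deposition rates plotted in Figure 5 are simply the slopes of thickness-time traces like those in Figure 6. A minimal sketch of this reduction is given below, using fabricated placeholder data rather than the measured sensor signals:

```python
import numpy as np

def deposition_rate(time_min, thickness_nm):
    """Least-squares slope of a thickness-vs-time trace, in nm/min."""
    slope, _intercept = np.polyfit(time_min, thickness_nm, deg=1)
    return slope

# Placeholder traces for two sensors: ~15 nm/min (S1-like) and
# ~2 nm/min (S3-like), with a little measurement noise added.
t = np.arange(0.0, 30.0, 0.5)                 # 30 min deposition
rng = np.random.default_rng(1)
s1 = 15.0 * t + rng.normal(0.0, 3.0, t.size)
s3 = 2.0 * t + rng.normal(0.0, 3.0, t.size)

print(f"S1-like sensor: {deposition_rate(t, s1):.1f} nm/min")
print(f"S3-like sensor: {deposition_rate(t, s3):.1f} nm/min")
```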
The deposition rates at the positions of the sensors far from the electrodes are lower but still reasonably high: Figure 5 reveals deposition rates of about 6 nm min⁻¹ for sensors S4, S5, and S6. Conversely, sensors S3 and S7, which were placed close to the gas inlet but away from the powered electrodes, show a poor deposition rate of approximately 2 nm min⁻¹. The distribution of the deposition rate in the plasma reactor suggests a qualitative model of the gas kinetics governed by the degree of fragmentation of the precursor molecules. The injected HMDSO molecules do not interact with the solid materials until they are partially dissociated into radicals with a reasonable sticking coefficient. The plasma density far from the powered electrodes in the reactor used for these experiments is only of the order of 10¹⁴ m⁻³ [23]. Such a low electron density does not enable immediate dissociation into useful fragments, which may explain the poor deposition rates detected by sensors S3 and S7, located close to the gas inlet but away from the powered electrodes. The molecules need a prolonged residence time in the weakly ionized gaseous plasma to dissociate into useful radicals; the residence time is estimated later in this paper. The injected precursor molecules enter the plasma reactor with a significant drift velocity but quickly thermalize (they assume random motion after a few elastic collisions). Their motion is then governed by diffusion, i.e., it is random. The molecules suffer numerous collisions with plasma electrons while diffusing from the source (the gas inlet) to the positions of sensors S4, S5, and S6, so the gas at these positions is reasonably well dissociated, which favors deposition on surfaces far away from the electrodes. As mentioned above, the residence time of the injected molecules is too short to cause significant deposition at the positions of sensors S3 and S7. Sensors S2 and S8 are as close to the gas inlet as S3 and S7, yet Figure 5 indicates a deposition rate several times higher at S2 and S8 than at S3 or S7. This paradox may be explained partly by the longer residence time of molecules striking the sensor surfaces at positions S2 and S8, but the variation of the plasma density with distance from the powered electrode may be more important. An asymmetric capacitively coupled RF discharge is characterized by an oscillating sheath next to the powered electrode.
Since the frequency of these oscillations is rather low (the RF generator operates at 40 kHz), the electrons oscillate within the sheath and gain enough energy for rather extensive dissociation and ionization of the gaseous molecules within the oscillating sheath [28]. Therefore, the dissociation of the precursor molecules is more extensive next to the electrodes than in the bulk plasma far away from the powered electrodes. As a result, the deposition rate at sensors S2 and S8 is favorable despite the proximity of the gas inlet.

The radicals stick to the surfaces of any material facing the plasma; therefore, the deposition rate as determined by the sensors located in the reactor according to Figure 1 should be lower if the reactor is additionally loaded with samples. To study the influence of samples on the deposition rate, samples were mounted on the planetaria, as shown in Figure 7. About 250 medium-sized, approximately 40-cm-long samples, which represented about 100% of the total chamber capacity, were evenly distributed inside the chamber. The height and the diameter of the planetaria were 160 cm and 55 cm, respectively, and the distance between axles was around 60 cm. The planetaria were spinning at a speed of 6 rpm.

The deposition rate measurements were repeated with sensors located at the same positions as in the empty chamber. The results are shown in Figure 8. The highest deposition rate was observed for sensors S2 and S8. These sensors are located between the gas inlet and the powered electrode (Figure 1). The deposition rate at positions S2 and S8 is about an order of magnitude greater than at any other position except near the pump ducts. The presence of samples in the plasma reactor therefore influences the deposition rate significantly. Not only is it lower than in the empty reactor (compare Figures 6 and 9), but a reasonably large deposition rate is observed only in the region close to the electrodes (S2, S8, and S1). Elsewhere, the deposition rate is below 1 nm min−1.
The very low deposition rate at S4, S5, and S6, as observed in Figure 8, is explained by the loss of radicals on the surfaces of the samples. As discussed above, the plasma density away from the electrodes is low, so the loss of radicals useful for depositing the protective coating cannot be balanced by production through electron-impact dissociation. Conversely, the deposition rate close to the powered electrode (sensors S2 and S8) remains reasonably high because of the higher electron energy in the oscillating sheath.

The ratio between the deposition rate in an empty reactor and a full reactor is shown in Figure 9. The highest ratio of 10-20 is observed for sensors positioned far from the electrodes. This observation was already explained by the loss of radicals on the surface of the samples. However, the ratio is much lower for the sensors positioned close to the powered electrodes. For sensors S2 and S8, the ratio is approximately 3 for the lowest power of 1 kW and only 2 for the highest power of 7 kW. The power dependence of the ratio is explained by the fact that the electron energy in the vicinity of the powered electrodes is much higher than far from the electrodes, so a significant fraction of the injected HMDSO molecules are dissociated and thus contribute to the film growth.

The upper discussion reveals the crucial role of the residence time of molecules in the plasma reactor. Gaseous molecules diffuse in the plasma reactor because the random velocity is much higher than the drift from the gas inlet to the pump ducts. The drift velocity of gaseous molecules at the entrance to the pump ducts can be calculated if the effective pumping speed at that position is known. The effective pumping speed depends on the nominal pumping speed of the Roots pumps and the conductance of any vacuum elements mounted between the Roots pumps and the plasma reactor. The conductance is difficult to determine, but one can also determine the effective pumping speed from the measured gas flow and pressure inside the reactor by considering the constant mass flow:

p1 S1 = p2 S2. (1)

Here, p1 is the atmospheric pressure, S1 is the gas flow as measured by the flow controller, p2 is the measured pressure in the plasma reactor, and S2 is the effective pumping speed at the grid which separates the plasma reactor from the pump ducts.
Taking into account the measured values, i.e., p1 = 10⁵ Pa, S1 = 130 cm³/min = 2×10⁻⁶ m³ s⁻¹, and p2 = 4 Pa, one can estimate the effective pumping speed as:

S2 = p1 S1/p2 ≈ 0.05 m³ s⁻¹. (2)

As calculated from Equation (1), the effective pumping speed is an order of magnitude lower than the nominal pumping speed of the Roots pumps. This observation may be explained by the deviation of the real pumping speed of the Roots pumps from the nominal value (the latter is just the maximum pumping speed at optimal conditions) and the limited conductance of the vacuum elements mounted between the plasma reactor and the Roots pumps. There is a negligible pressure gradient throughout the plasma reactor, because the conductance is orders of magnitude greater than the effective pumping speed. The cross-section of the plasma reactor is the product of the reactor diameter and height, i.e., A = 3.5 m². The gas drift velocity from the source to the pump ducts is:

v = S2/A = 0.014 m s⁻¹. (3)

This value is orders of magnitude lower than the random velocity due to the thermal motion of the molecules, which is:

v_r = (8 kB T/π m)^(1/2) ≈ 200 m s⁻¹. (4)

In Equation (4), we considered room temperature (T = 300 K) and the HMDSO mass m = 162 Da. By considering the distance between the gas inlet and the grid separating the reactor from the pump ducts of l = 1 m, one can estimate the average residence time of gaseous molecules as:

t = l/v ≈ 80 s. (5)

The residence time as calculated from Equation (5) is an averaged value obtained from these simple estimates. Because the random velocity from Equation (4) is orders of magnitude higher than the drift velocity from Equation (3), the actual residence times are spread broadly around the value calculated using Equation (5), so it should be taken just as an estimate. In any case, the residence time is long enough to assure numerous collisions with plasma electrons. The long residence time is the reason for the rather large deposition rate at any position far from the gas inlet in the empty reactor. The maximal deposition is observed on the grid near the pump ducts (sensor S1) in the empty reactor. The radicals entering the pump ducts are likely to have been created well before reaching the grid.

Plasma reactors are useful only when the coatings are deposited on products mounted on the planetaria. Technologically relevant results are presented in Figure 8. The deposition rate at sensor S1 (mounted on the grid near the pump ducts) is moderate at about 2 nm min−1, which is favorable from the technological point of view. Still, a significant fraction of the radicals useful for thin-film deposition is pumped out of the reactor. However, the major deficiency of the plasma reactor is the poor deposition rate at any other position. Despite the long residence time of gaseous radicals, the deposition rate is poor because of the loss of radicals on the samples placed on the planetaria. The only useful part of the reactor, when loaded with samples, is at positions S2 and S8, i.e., close to the powered electrodes. The discharge configuration in this reactor is, therefore, inadequate. A configuration with electrodes placed opposite the pump ducts should perform better. No sensor was placed on a powered electrode because it would heat significantly. Still, according to the measured deposition rates and the above discussion, it is reasonable to assume a large deposition rate on the powered electrodes. In fact, the electrodes should occasionally be etched in chemical baths to remove the excessive deposits.
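To make the arithmetic in Equations (1)-(5) easy to check, the short Python sketch below (our illustration, not code from the study) recomputes the effective pumping speed, drift velocity, mean thermal speed, and average residence time from the measured values quoted above.

```python
import math

# Measured/quoted values from the text
p1 = 1e5            # Pa, atmospheric pressure
S1 = 2e-6           # m^3 s^-1, gas flow (130 cm^3/min, as quoted)
p2 = 4.0            # Pa, pressure in the plasma reactor
A = 3.5             # m^2, reactor cross-section (diameter x height)
l = 1.0             # m, distance from gas inlet to the pump-duct grid
T = 300.0           # K, room temperature
m = 162 * 1.66e-27  # kg, HMDSO molecular mass (162 Da)
k_B = 1.38e-23      # J K^-1, Boltzmann constant

S2 = p1 * S1 / p2                                 # Eq. (1)-(2): effective pumping speed
v_drift = S2 / A                                  # Eq. (3): gas drift velocity
v_rand = math.sqrt(8 * k_B * T / (math.pi * m))   # Eq. (4): mean thermal speed
tau = l / v_drift                                 # Eq. (5): average residence time

print(f"S2 = {S2:.2f} m^3/s")          # ~0.05 m^3/s
print(f"v_drift = {v_drift:.3f} m/s")  # ~0.014 m/s
print(f"v_rand = {v_rand:.0f} m/s")    # ~200 m/s
print(f"tau = {tau:.0f} s")            # ~70 s, of the order of the ~80 s quoted
```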
The extensive deposition of thin films on the electrodes, and thus the loss of radicals for coating the samples, is a major drawback of the reactor used in this study. The problem could be minimized using a symmetric discharge, but this is often not feasible, as in our PECVD reactor.

Despite the large spatial variation of the deposition rate, the composition of the deposited films remains similar at the positions of the different sensors. Figure 10 represents the composition of the films as deduced from XPS survey spectra. The measurements were performed in the reactor loaded with samples. The concentration of carbon is close to 50 at.%, while the concentrations of oxygen and silicon are between 25 and 30 at.% for all samples. The small variations in the composition may be attributed to the accuracy of the XPS technique or to actual variations in composition, but because the differences are marginal, it is possible to conclude that the stoichiometry of the deposited films does not vary significantly between different positions in the plasma reactor.

Conclusions

Many commercial plasma reactors for the deposition of thin films from organic precursors using the PECVD technique suffer from non-uniform deposition rates.
Moving the products to be coated by placing them on planetaria enables reasonable coating uniformity, but the efficiency is poor, because a significant fraction of the precursor radicals used as building blocks of the protective coatings is lost by adsorption on the powered electrodes and/or by pumping out of the reactor. An attempt was made to measure the deposition rates at various locations inside an industrial reactor powered by a capacitively coupled RF discharge. The plasma reactor had a volume of approximately 5 m³. The maximum deposition rate for an empty reactor was measured on a grid near the pump ducts. The next highest rates were measured close to the powered electrodes, but a reasonable deposition rate was also observed far from the powered electrodes or the pump ducts. This observation was interpreted as the formation of radicals useful for the deposition of the thin films throughout the reactor. The average residence time of approximately 80 s ensured a reasonably large production rate, despite the very low electron density in the plasma away from the oscillating sheaths next to the powered electrodes. Loading the reactor with numerous samples caused a significant difference in the deposition rates. Not only were they lower, but the distribution changed significantly. The deposition rates far from the powered electrodes dropped by more than an order of magnitude for a fully loaded chamber. Deposition rates above about 1 nm min−1 were only observed close to the powered electrodes. These observations indicate the need for a modification of the discharge configuration in industrial plasma reactors for depositing protective coatings from the HMDSO precursor using the PECVD technique.
10,633
2021-10-05T00:00:00.000
[ "Physics" ]
Energy, Economic, and Environmental Assessment of Sweet Potato Production on Plantations of Various Sizes in South China

Sweet potato (Ipomoea batatas L.) is an important starch-producing crop used worldwide. However, few studies have been conducted on the energy efficiency, cost-benefit, and greenhouse gas (GHG) emissions of sweet potato production. To address this issue, data were collected using a questionnaire in face-to-face interviews of 78 sweet potato growers and 74 reference crop (i.e., rice, maize, and potato) growers in Guangdong province. Results revealed that sweet potato production exhibited the highest energy efficiency (0.83 kg MJ−1) and economic productivity (0.85 kg CNY−1) among the four crops. The GHG emissions from sweet potato production (1165 kg CO2-eq ha−1) were significantly higher than those from rice and maize but lower than those from potato. Moreover, plantation size significantly (p < 0.05) affected inputs of labor, machinery, and diesel fuel and further affected the energy rate, energy efficiency, and GHG emissions of sweet potato production. Sweet potato production on small-size farms (<2.0 ha) exhibited the highest energy efficiency (0.97 kg MJ−1) and the lowest GHG emissions (1045 kg CO2-eq ha−1). Quartering assessments based on energy efficiency, economic productivity, and GHG emissions showed that fertilizers and labor were the major contributors to energy consumption, economic costs, and GHG emissions. Future efforts should be made to reduce fertilizer application and increase fertilizer use efficiency for sustainable sweet potato production.

Introduction

Sweet potato (Ipomoea batatas L.) has long played an important role in food culture worldwide [1]. Sweet potato contains simple fermentable sugars (e.g., sucrose, glucose, and fructose), dietary fibers, and minimal amounts of proteins, lipids, and functional components [2,3], attributes that have contributed to its role as an important food crop in many developing countries. In addition, sweet potato has been considered a preferred starch-based feedstock for industrial production [4,5]. China, the leading producer of sweet potatoes, had an annual production of 49 million tons (55% of the world's production) in 2020 [6]. Sweet potato is cultivated in 27 provinces in China, and its large production regions are distributed in the southwest, east, and south of China. Sweet potatoes once served as a major food in China; in recent years, more than 70% of sweet potatoes have been used for industrial purposes. The proportion of sweet potato used as a staple food has declined significantly compared with other crops, i.e., rice (Oryza sativa L.), maize (Zea mays L.), wheat (Triticum aestivum L.), and potato (Solanum tuberosum L.). However, sweet potato can still be viewed as an emergency food crop when the staple food crops face supply deficit risks [7]. In addition, sweet potato-based bioethanol could ensure food security, satisfy the non-grain biofuel feedstock requirement, reduce petroleum dependency, and generate development opportunities in the agricultural and agro-industrial sectors [5,8]. To further improve the efficiency and sustainability of sweet potato production, it is of great significance to comprehensively assess it from an energy, economic, and environmental perspective. Energy efficiency is an important factor in the assessment and optimization of a production system [9,10].
Enhancement of energy efficiency in agriculture has been shown to save energy resources, improve farm profits, and protect the environment [11]. Economic analysis is also a vital factor that should be considered for promoting the sustainable development of agricultural systems [1,12]. Moreover, environmental performance, especially greenhouse gas (GHG) emissions, is a central worldwide concern. The mitigation of GHG emissions from agricultural systems has been identified as an attractive strategy for stabilizing carbon dioxide emissions before 2030 [13,14]. To date, numerous studies have investigated the energy, economic, or environmental consequences of agricultural production with a focus on cereal crops (e.g., rice, maize, and wheat) [15][16][17][18], horticultural crops (e.g., potato, tomato (Solanum lycopersicum), and apple (Malus domestica)) [9,19,20], and bioenergy crops (e.g., sweet sorghum (Sorghum bicolor L.), Jerusalem artichoke (Helianthus tuberosus L.), and fodder galega (Galega orientalis Lam.)) [12,21,22]. However, to our knowledge, few studies have been published on the energy balance, economic benefit, and environmental performance of sweet potato production. Furthermore, plantation size can affect the energy consumption, input costs, and GHG emissions of crop production [11]. Nassiri and Singh [23] revealed that small farms had higher energy efficiency than larger farms for rice production. Wu et al. [14] found that plantation size was a factor in the use intensity of agricultural chemicals in China. However, this issue has not yet been sufficiently considered in sweet potato production.

Guangdong province has abundant solar radiation and water resources, creating favorable conditions for growing sweet potatoes throughout the entire year. The province was taken as a case study because Guangdong is a typical province of intensive sweet potato farming, with an annual production of 3.5 million tons, accounting for 6.6% of the crop in China [24]. Therefore, the specific objectives of this study were (i) to assess the energy and economic inputs and outputs and the GHG emissions of sweet potato production per hectare in Guangdong, China; (ii) to analyze the effect of plantation size on the energy, economic, and environmental performance of sweet potato production; and (iii) to propose operations where energy and cost savings could be realized by changing applied practices in order to increase the energy efficiency and economic productivity and reduce the GHG emissions of sweet potato production. The findings of this study may not only support the sustainable development of sweet potato but also provide an important reference for sweet potato production in other developing countries with situations similar to that of Guangdong province, China.

Survey Sites and Data Collection

Western Guangdong, Eastern Guangdong, and the Pearl River Delta are the main sweet potato-producing regions in Guangdong, accounting for approximately 90% of the province's total production. Western Guangdong is the most representative sweet potato-producing area. Therefore, Western Guangdong (the cities of Maoming, Yangjiang, and Zhanjiang), Eastern Guangdong (the cities of Chaozhou, Jieyang, Shantou, and Shanwei), and the Pearl River Delta (the city of Huizhou) were selected as survey sites in the present study. Soil properties and climate characteristics of the production areas are presented in Table 1. Farms were selected randomly from villages in the area of study.
According to pre-survey feedback and literature reports [14,23], the plantation land area was assigned to three levels, i.e., small size (less than 2.0 ha), medium size (from 2.0 to 10.0 ha), and large size (more than 10.0 ha). Face-to-face interviews were conducted in 2019 using a questionnaire to obtain input and output data for sweet potato production. Ultimately, a total of 78 fully answered and validated questionnaires were collected, comprising 52, 18, and 8 from Western Guangdong, Eastern Guangdong, and the Pearl River Delta, respectively (Figure 1). The numbers of small-, medium-, and large-size farms were 35, 19, and 24, respectively. To benchmark the feasibility and performance of sweet potato production, the input and output data of three main reference crops (i.e., rice, maize, and potato) were also collected from information provided by 74 local growers in this survey. The system boundary of sweet potato production and reference crop production included land preparation, sowing, growing, and harvesting (Figure 2). The functional unit was one hectare.
Energy Analysis

Energy inputs of crops included energy from labor, machinery, diesel fuel, fertilizers, agricultural chemicals, water for irrigation, plastic film, and seed. Energy outputs of root and tuber crops (sweet potato and potato) and cereal crops (rice and maize) consisted of the energy of the fresh roots, tubers, or grains and the air-dried crop straw. The yield of crop straw was calculated by multiplying the root, tuber, or grain yield by the corresponding field residue index [25]. Input and output data were converted into common energy units with the appropriate coefficients of energy equivalence (Table 2). Net energy, energy rate, and energy efficiency were calculated for the energy analysis according to Deng et al. [26], as follows:

Net energy (GJ·ha−1) = energy output − energy input (1)
Energy rate = energy output ÷ energy input (2)
Energy efficiency (kg·MJ−1) = fresh root, tuber, or grain yield ÷ energy input (3)

Table 2. Energy equivalents of the input and output in sweet potato, potato, rice, and maize production.

Economic Analysis

Economic inputs were land rent, labor, machinery, diesel fuel, fertilizers, agricultural chemicals, water for irrigation, plastic film, and seed. The opportunity cost of family labor was also considered. Economic outputs were the total values of roots and tubers (sweet potato and potato) or grains (rice and maize), which were calculated from the product yields and market prices in 2019. The straw of the four crops was returned to the field (not sold), so the economic output of straw was zero in this study. For the calculation of economic benefit, the net return, benefit/cost ratio, and economic productivity were determined using the following equations [12]:

Net return (CNY·ha−1) = economic output − economic input (4)
Benefit/cost ratio = economic output ÷ economic input (5)
Economic productivity (kg·CNY−1) = root, tuber, or grain yield ÷ economic input (6)

GHG Emissions

The GHG emissions from the production and use of machinery, diesel fuel, fertilizers, agricultural chemicals, plastic film, and seeds were estimated. In this study, GHG emissions of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) were converted to CO2-equivalents according to their global warming potential characterization factors of 1, 25, and 298, respectively, following the IPCC's 100-year estimates [34]. The GHG emission coefficients are given in Table 3. The GHG emissions of sweet potato and reference crop production were calculated according to the equation below:

GHG emissions (kg CO2-eq·ha−1) = Σi (Qi × Ci) (7)

where Qi represents the quantity of input i, and Ci indicates the GHG emission coefficient of input i.

Quartering Assessment

First, the 78 questionnaires on sweet potato production were categorized into four quartiles (1st, 2nd, 3rd, and 4th) based on energy efficiency, economic productivity, and GHG emissions [39,40]. Second, the mean values of the energy inputs, economic inputs, and GHG emissions for the 1st, 2nd, 3rd, and 4th quartiles were calculated. Finally, the different quartiles were compared according to the different farm practices.

Statistical Analysis

All data collected were entered into Excel 2019 spreadsheets. Analysis of variance (ANOVA) was performed using SPSS 26.0 analytical software (IBM, SPSS Inc., Chicago, IL, USA) to assess the effects of crop species and plantation size on each parameter. Duncan's test was used to assess the differences among means at the p < 0.05 level for each evaluated parameter.
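As an illustration of how Equations (1)-(7) combine, the short Python sketch below (our own, not the authors' code) computes the energy, economic, and GHG indicators for one farm record. The example values are the paper's reported sweet potato means; the fresh yield is back-calculated from the reported energy efficiency and is therefore an assumption.

```python
# Indicator functions following Eqs. (1)-(7); units as in the paper.
GWP = {"CO2": 1, "CH4": 25, "N2O": 298}  # IPCC 100-year factors used for CO2-eq weighting

def energy_indicators(output_gj, input_gj, yield_kg):
    net_energy = output_gj - input_gj                # Eq. (1), GJ ha^-1
    energy_rate = output_gj / input_gj               # Eq. (2), dimensionless
    energy_efficiency = yield_kg / (input_gj * 1e3)  # Eq. (3), kg MJ^-1
    return net_energy, energy_rate, energy_efficiency

def economic_indicators(output_cny, input_cny, yield_kg):
    net_return = output_cny - input_cny              # Eq. (4), CNY ha^-1
    benefit_cost = output_cny / input_cny            # Eq. (5), dimensionless
    productivity = yield_kg / input_cny              # Eq. (6), kg CNY^-1
    return net_return, benefit_cost, productivity

def ghg_emissions(quantities, coefficients):
    # Eq. (7): sum over inputs i of Q_i * C_i, in kg CO2-eq ha^-1
    return sum(q * coefficients[i] for i, q in quantities.items())

# Sweet potato means reported in Tables 5-6: energy 167.14/31.59 GJ ha^-1,
# economy 90,470/31,075 CNY ha^-1; yield ~26,200 kg ha^-1 (back-calculated).
print(energy_indicators(167.14, 31.59, 26_200))     # ~ (135.6, 5.29, 0.83)
print(economic_indicators(90_470, 31_075, 26_200))  # ~ (59395, 2.91, 0.84)
```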
Input and Output

The agricultural inputs of root and tuber crops (sweet potato and potato), such as labor, machinery, fertilizers, agricultural chemicals, and seeds, were higher than those of cereal crops (rice and maize) (Table 4). Sweet potato production required the most labor input compared with rice, maize, and potato, reaching 875 h ha−1 in Guangdong. At the same time, the yield of root and tuber crops was higher than that of cereal crops. In particular, potato exhibited the highest crop yield (35.08 t ha−1), followed by sweet potato, maize, and rice.

Table 4. Input and output of different crop production (including sweet potato, rice, maize, and potato) and different plantation sizes of sweet potato production in Guangdong.

The inputs used in sweet potato production with different plantation sizes are also illustrated in Table 4. The difference in agricultural inputs required for sweet potato production under the three plantation sizes was mainly reflected in labor, machinery, and diesel fuel, especially in the process of land preparation. The mean values of needed labor (working hours) were higher on small-size farms, whereas the mechanization time of medium- and large-size farms was longer than that of small-size farms. The mean quantities of fertilizers used for sweet potato production on small-size farms were less than on medium- and large-size farms. Overall, the highest tuberous root yield of sweet potato, 28.70 t ha−1, was found on medium-size farms; it was 6.03% and 21.95% higher than that of small and large farms, respectively.

Energy Balance

The total energy input of sweet potato production was 31.59 GJ ha−1 in Guangdong (Table 5), significantly (p < 0.05) higher than those of rice (20.07 GJ ha−1) and maize (30.40 GJ ha−1) but lower than that of potato (46.63 GJ ha−1). Fertilizers consumed 48.95% of the total energy input, followed by seed (22.39%) and diesel fuel (13.32%), during the cultivation period of sweet potato (Figure 3a). As for fertilizers (15.46 GJ ha−1), the shares of nitrogen (N), phosphorus (P2O5), potassium (K2O), and farmyard manure were 62.79%, 8.16%, 12.28%, and 16.77%, respectively. The total energy output of sweet potato (167.14 GJ ha−1) was significantly lower than that of rice, maize, and potato. This is because the energy equivalence of fresh sweet potato tuberous root is low, even though its yield is high. The energy output of maize was the highest (294.35 GJ ha−1), resulting in significantly higher net energy (263.95 GJ ha−1) and energy rate (9.68) than the other three crops (p < 0.05). However, sweet potato had significantly higher energy efficiency (0.83 kg MJ−1) than the other crops (Table 5).

Table 5. Energy input and output of different crop production (including sweet potato, rice, maize, and potato) and different plantation sizes of sweet potato production in Guangdong.

The energy performance of sweet potato production was also analyzed considering plantation size (Table 5). The total energy input on small-size farms (27.85 GJ ha−1) was significantly lower than on medium-size farms (35.95 GJ ha−1) and large-size farms (33.58 GJ ha−1) because of the lower inputs of machinery and diesel fuel (p < 0.001). In contrast, the energy input of labor on small-size farms was significantly (p < 0.05) higher than in the other two plantation sizes (Table 5, Figure 3b).
In addition, the total energy output and net energy of sweet potato production on medium-size farms were slightly higher (p > 0.05) than on small-size and large-size farms. As a result, significant effects of plantation size were observed on the energy rate and energy efficiency (p < 0.01). These two energy indicators were highest on small-size farms, with values of 6.16 and 0.97 kg MJ−1, respectively.

Economic Benefits

The total economic input of sweet potato production was 31,075 CNY ha−1, significantly (p < 0.05) lower than that of potato (42,582 CNY ha−1) but higher than those of rice and maize, mainly due to the significantly higher inputs of labor and fertilizer (Table 6). The costs of labor and fertilizer occupied 46.51% and 22.45% of sweet potato production costs, respectively (Figure 3a). The total economic output of sweet potato (90,470 CNY ha−1) was significantly higher than those of potato, maize, and rice. As a result, sweet potato was significantly profitable, with the highest net return (59,395 CNY ha−1), benefit/cost ratio (2.91), and economic productivity (0.85 kg CNY−1). In short, the economic benefits of the four crops ranked from highest to lowest were sweet potato > potato > maize > rice.

Table 6. Economic input and output of different crop production (including sweet potato, rice, maize, and potato) and different plantation sizes of sweet potato production in Guangdong.

Plantation size had no significant effect on economic inputs, outputs, or economic benefits (i.e., net return, benefit/cost ratio, and economic productivity) in sweet potato production (Table 6). The total economic input of sweet potato was slightly higher on large-size farms (32,009 CNY ha−1) than on small-size (31,062 CNY ha−1) and medium-size farms (29,919 CNY ha−1). On small-size farms, the economic input of labor was significantly (p < 0.05) higher than on medium-size and large-size farms, by 80.97% and 87.48%, respectively. Conversely, the economic inputs of land rent, machinery, and diesel fuel on small-size farms were significantly lower than on medium-size and large-size farms (Table 6, Figure 3b). The small-size farms had the highest net return (67,539 CNY ha−1) and benefit/cost ratio (3.17), followed by medium-size and large-size farms.
The economic productivity on medium-size farms was the highest (0.96 kg CNY−1).

GHG Emissions

Total GHG emissions from sweet potato production were 1165 kg CO2-eq ha−1, significantly (p < 0.05) higher than those of rice and maize but lower than those of potato (Table 7). GHG emissions from fertilizers, agricultural chemicals, and seeds were significantly different (p < 0.05) among the four crops. Seeds were the key contributor to total GHG emissions in sweet potato production (Figure 3a), accounting for 41.97%, followed by fertilizers (40.77%). Similarly, GHG emissions from reference crop production were also dominated by fertilizers.

Table 7. GHG emissions of different crop production (including sweet potato, rice, maize, and potato) and different plantation sizes of sweet potato production in Guangdong.

GHG emissions from machinery and diesel fuel were positively correlated with plantation size (Table 7, Figure 3b). For instance, GHG emissions from machinery and diesel fuel increased from 49 and 51 kg CO2-eq ha−1 on small-size farms to 118 and 123 kg CO2-eq ha−1 on large-size farms. Overall, GHG emissions from sweet potato production were the lowest on small-size farms (1045 kg CO2-eq ha−1) and the highest on large-size farms (1263 kg CO2-eq ha−1).

Benefits from the Quartering Assessment and Corresponding Key Factors

Significant differences in the energy inputs, economic inputs, and GHG emissions of sweet potato production were found among the four quartiles (Figure 4). The energy inputs, economic inputs, and GHG emissions in the 1st quartile were 25.14 GJ ha−1, 22,073 CNY ha−1, and 784 kg CO2-eq ha−1, respectively; these were 13-33%, 33-37%, and 23-52% lower than those in the 2nd, 3rd, and 4th quartiles, respectively. Specifically, as for energy performance, fertilizers were the main reason for the lowest energy input in the 1st quartile (Figure 4a). The applications of N, P2O5, K2O, and farmyard manure in the 1st quartile were 105, 65, 125, and 1871 kg ha−1, respectively. As for economic performance, labor was the key factor related to economic inputs. The amount of labor used in the 1st quartile was significantly lower than in the other quartiles (Figure 4b). As for environmental performance, the differences in GHG emissions between the 1st quartile and the other three quartiles were mainly related to fertilizers (Figure 4c). Overall, high energy efficiency and economic productivity with low GHG emissions could be achieved by adopting the input scheme of the efficient farms (i.e., the 1st quartile).
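A minimal sketch of the quartering assessment is given below (our illustration, not the authors' code): farms are ranked into quartiles by one indicator, and mean inputs are compared per quartile, as in Figure 4. The column names and the synthetic data, drawn loosely around the reported means, are assumptions; in practice the grouping is repeated for energy efficiency, economic productivity, and GHG emissions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 78  # number of validated questionnaires in the survey

# Synthetic farm records centered on the reported sweet potato means (assumed spreads)
farms = pd.DataFrame({
    "energy_efficiency": rng.normal(0.83, 0.15, n),  # kg MJ^-1, Eq. (3)
    "energy_input": rng.normal(31.6, 5.0, n),        # GJ ha^-1
    "fertilizer_energy": rng.normal(15.5, 4.0, n),   # GJ ha^-1
    "labor_cost": rng.normal(14_500, 3_000, n),      # CNY ha^-1
    "ghg": rng.normal(1165, 200, n),                 # kg CO2-eq ha^-1
})

# Quartiles by energy efficiency: the 1st quartile holds the most efficient farms
farms["quartile"] = pd.qcut(farms["energy_efficiency"], 4,
                            labels=["4th", "3rd", "2nd", "1st"])

# Mean inputs per quartile, analogous to the comparison in Figure 4
print(farms.groupby("quartile", observed=True)
           [["energy_input", "fertilizer_energy", "labor_cost", "ghg"]].mean())
```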
Energy and Environment Perspective of Sweet Potato Production

From the energy point of view, sweet potato and potato production were characterized by lower energy rates but higher energy efficiencies compared with the values for rice and maize. This result for energy efficiency can be explained by the high fresh yield of root and tuber crops compared to cereal crops. Owing to the lack of similar research concerning the energy use of sweet potato production, the results were compared with other reference crops. Energy rates and energy efficiencies are well documented in the literature for crops such as rice (5.30-9.00 and 0.17-0.32 kg MJ−1) [23], potato (1.71 and 0.47 kg MJ−1) [35], and maize (1.68-12.03 and 0.12-0.57 kg MJ−1) [41]. In short, crop species, cultivation regime, and environmental factors (e.g., soil and climate) determine energy inputs and outputs and ultimately affect energy performance [21,42,43].

In this study, we observed that GHG emission mitigation could be achieved by optimizing agricultural inputs during the period of sweet potato cultivation. As demonstrated by the optimized management pattern of sweet potato production (1st quartile), fertilizers (mainly N) were responsible for the major portion of GHG emissions. Hosseinzadeh-Bandbafha et al. [44] also reported that GHG emissions of peanut (Arachis hypogaea L.) farms could be reduced by 58.79 kg CO2-eq ha−1 by optimizing the energy inputs (i.e., diesel fuel and N). These findings align with previous studies in that GHG emissions in crop production are dominated by fertilizer consumption [35,43]. Although fertilizer application provides greater crop yield and income stability [45,46], excessive chemical fertilizer does not produce a higher yield but can pollute the environment [35,47]. As Cui et al. [48] reported, science-based management practices are effective in reducing N use without compromising crop yields. Overall, the present results highlight the finding that preventing the excessive use of chemical fertilizers could maintain and/or increase tuberous root yields of sweet potato with lower GHG emissions, thereby avoiding trade-offs between crop yield and environmental costs [13,44,49]. Plantation size significantly affected the inputs of labor, machinery, and diesel fuel and further affected the energy efficiency and GHG emissions of sweet potato production in this study.
We found that small-size farms (<2.0 ha) were more advantageous than medium- and large-size farms in terms of energy efficiency and GHG emissions in sweet potato production. These advantages of small-size farms in agricultural production may largely depend on the level of field operation, suitable planting density, and timely management, as well as the low energy equivalence of labor [16,50]. Moreover, several technology training and guidance programs on sweet potato cultivation for small farms are organized every year by technicians of the China Agricultural Research System (sweet potato) and the Agricultural Technology Extension Center in Guangdong province. Similar results have been reported by Nassiri and Singh [23] and Taki et al. [51], who concluded that smaller farms (<2.0 ha) had high energy efficiency compared to larger farms. However, Zhang et al. [11] demonstrated that the expansion of plantation size could benefit sustainable maize production when matched with technical innovation and machinery coordination. The authors also emphasized that supporting smallholder farmers to increase their resource use efficiency is necessary because small-size farms are more common in China.

Economic Perspective of Sweet Potato Production

From an economic point of view, sweet potato production is of interest to growers because its economic benefits are higher than those of the reference crops (i.e., rice, maize, and potato). The benefit/cost ratio of sweet potato in this study was superior to findings reported in other previous studies, e.g., 0.86 for cotton (Gossypium spp.) [27], 1.88 for potato [9], 2.76 for maize [22], and 2.31 for Jerusalem artichoke [12]. Although economic benefit may be affected by market price fluctuations related to supply and demand or the prices of production means [10], sweet potato can be regarded as an important cash crop for the central and local governments to implement poverty relief strategies in China. Moreover, in this study, no significant effects were exerted by plantation size on economic benefits. The slightly better economic benefits of small-size farms (<2.0 ha) are mainly due to reasonable and low chemical inputs and partly free land rent (self-owned). Similarly, an inverse plantation area-productivity relationship was revealed for most small-size farms in crop production in Northern China [11]. Wang et al. [52] also indicated that conventional smaller farms (<6.7 ha) had better revenue in terms of yield-based profit in grain production.

Sweet potato cultivation was labor-intensive compared with rice and maize. It is known that a higher labor input increases total cultivation costs [22,50]. This result was similar to those obtained by Fang et al. [12], i.e., that the costs of labor and fertilizer exceeded half of the total cultivation cost per unit of land. Although the efficiency of manual labor is much lower than the efficiency of mechanization, manual labor is irreplaceable in meticulous farming practices, e.g., apical bud removal, vine lifting, and weed control. These management practices adopted by manual laborers ultimately contribute to maximizing the yield of sweet potato.

Practical Implications of This Study

Sustainable development of sweet potato in Guangdong is of great significance to maintaining the supply and demand balance of the sweet potato market in China.
As the amount of rainfall in Guangdong is high, and as chemical fertilizers may be washed away by runoff, the amount of fertilizer and the date of fertilization should be considered to avoid the indiscriminate use of chemical fertilizers. Moreover, nitrate has a greater leaching tendency than ammonium [53], so ammonium N sources may be more efficient for sweet potato production in the tropics and subtropics (such as Guangdong). As indicated in this study, the combination of new high-productivity varieties and science-based cultivation techniques is urgently required for sweet potato production. Future research should focus on decreasing agricultural inputs (especially optimizing fertilizer quantity) while substantially increasing the productivity of sweet potato in an environmentally sustainable way [35,54]. Field experiments are also needed to systematically evaluate the direct GHG emissions during the growth of sweet potato.

A cost-saving cropping pattern will be attractive because labor costs are continually increasing in China. Reduced labor input after the replacement of labor with mechanization can ensure the timeliness of farm-related activities and increase farm output in terms of productivity [12,16]. For the mechanization and large-scale production of sweet potato, favorable policies (e.g., capital subsidies, low-cost financing, and tax incentives) and advanced agronomic management techniques are particularly significant in promoting the intensive development of sweet potato. To further increase economic benefits, it is still necessary to organize technicians to provide scientific guidance to sweet potato farmers in China and other developing countries. Furthermore, sweet potato for bioenergy production or functional products should be embraced in view of its rich carbohydrate content and associated health benefits [55,56].

Conclusions

Economic-strategic and environmental sustainability aspects appear to be important for the sustainable production of sweet potato. In the present study, sweet potato production exhibited significantly higher energy efficiency and economic productivity than the reference crops (i.e., rice, maize, and potato). Small-size farms (<2.0 ha) had an advantage over medium- and large-size farms in terms of energy efficiency and GHG emissions in sweet potato production, whereas there were no significant differences in economic benefits among the plantation sizes. Furthermore, improved energy use and reduced GHG emissions at low cost can be achieved by optimizing agricultural inputs (reducing fertilizer application and labor use) in current sweet potato farming. In short, this study provides valuable information regarding the energy, economic, and environmental aspects of sweet potato production and may help inform high-level decisions on poverty alleviation via the cultivation of sweet potato in other regions of the world.
7,393.4
2022-05-28T00:00:00.000
[ "Environmental Science", "Economics", "Agricultural and Food Sciences" ]
Atypical BSE (BASE) Transmitted from Asymptomatic Aging Cattle to a Primate

Background Human variant Creutzfeldt-Jakob Disease (vCJD) results from foodborne transmission of prions from slaughtered cattle with classical Bovine Spongiform Encephalopathy (cBSE). Atypical forms of BSE, which remain mostly asymptomatic in aging cattle, were recently identified at slaughterhouses throughout Europe and North America, raising a question about human susceptibility to these new prion strains.

Methodology/Principal Findings Brain homogenates from cattle with classical BSE and atypical (BASE) infections were inoculated intracerebrally into cynomolgus monkeys (Macaca fascicularis), a non-human primate model previously demonstrated to be susceptible to the original strain of cBSE. The resulting diseases were compared in terms of clinical signs, histology, and biochemistry of the abnormal prion protein (PrPres). The single monkey infected with BASE had a shorter survival and a different clinical evolution, histopathology, and prion protein (PrPres) pattern than was observed for either classical BSE- or vCJD-inoculated animals. Also, the biochemical signature of PrPres in the BASE-inoculated animal showed a higher proteinase K sensitivity of the octa-repeat region. We found the same biochemical signature in three of four human patients with sporadic CJD and an MM type 2 PrP genotype who lived in the same country as the infected bovine.

Conclusion/Significance Our results point to a possibly higher degree of pathogenicity of BASE than classical BSE in primates and also raise a question about a possible link to one uncommon subset of cases of apparently sporadic CJD. Thus, despite the waning epidemic of classical BSE, the occurrence of atypical strains should temper the urge to relax measures currently in place to protect public health from accidental contamination by BSE-contaminated products.

Introduction

Classical Bovine Spongiform Encephalopathy (cBSE), the first prion disease identified in cattle, was initially reported in 1986 in the UK. Food-borne transmission of cBSE to humans was observed ten years later as a variant form of Creutzfeldt-Jakob Disease (vCJD) [1], leading to a major public health crisis. This strain of cBSE is now rapidly disappearing as a result of appropriate containment measures. However, atypical forms of BSE have recently been identified in Europe and North America as a consequence of cBSE testing performed in these countries [2][3][4]. Because these cases are only found sporadically in older animals (≥8 years) coming to slaughter with few or no signs of disease, it would be plausible to suppose that atypical forms of BSE may have a lower virulence than cBSE and be innocuous to humans. However, recent studies suggest that one of the two main forms of atypical BSE, initially discovered in Italy and referred to as bovine amyloidotic spongiform encephalopathy (BASE), might be at the origin of the cBSE epidemic: inoculation of the BASE strain into transgenic and inbred mice showed an apparent natural evolution towards the typical BSE strain [5,6]. Moreover, a possible link has been suggested between BASE and one subtype (MV2) of human sporadic CJD (sCJD) on the basis of biochemical similarities [2,7]. In contrast to vCJD, sCJD is believed to occur de novo without food-borne transmission.
However, specific contaminating events by ingestion are difficult to rule out because human prion diseases can have silent incubation periods exceeding 50 years, as demonstrated for kuru [8]. One strategy to evaluate the risk of BASE for humans consists in assessing the susceptibility to disease transmission and the degree of pathogenicity in a non-human primate model that has already been shown to have characteristic clinical signs, histopathological lesions, and PrPres profiles following infection with either BSE or vCJD [9,10]. We therefore inoculated cynomolgus macaque monkeys (Macaca fascicularis) intracerebrally with BASE, cBSE, and vCJD prion strains. The BASE strain, prepared from brain extract of a 15-year-old asymptomatic cow, induced a distinctive and more rapidly fatal disease than cBSE, and showed a biochemical signature similar to that of the MM2 cortical subtype of human sCJD.

Cattle and human samples

The BASE inoculum (a mix of brainstem and thalamus) came from an asymptomatic 15-year-old Italian Piemontese cow [2]: 250 µl of a 10% brain homogenate in 5% glucose was inoculated intracerebrally (i.c.) into a single macaque monkey. As controls, we used two macaques inoculated i.c. with cBSE (brainstem from infected UK cattle) and 4 macaques inoculated i.c. with human vCJD [9,11]. (Table 1 note: amounts refer to crude brain in the 10% brain suspension inoculated intracerebrally; the cBSE brain had a 10-fold greater concentration of PrPres than the BASE brain. Animals inoculated with vCJD also received the equivalent of 8 mg of brain by intra-tonsillar injection.) Twenty-one subjects with a diagnosis of definite sCJD were referred to the Medical Center in Verona, Italy, during the period 2000-2004. Tissues were processed 4-18 hours post-mortem according to established guidelines regarding safety and ethics. Brains were cut longitudinally into two halves. Hemi-brains were frozen and stored at −80 °C until biochemical studies were performed. The patient group encompassed all of the different Western blot subtypes of sCJD described by Parchi et al. [7]: MM1 (5 cases), MV1 (2), VV1 (1), MM2 (4), MV2 (6), and VV2 (3).

Non-human primate model

Cynomolgus macaques (Macaca fascicularis), captive-bred at the Centre de Recherche en Primatologie (Mauritius), were checked for the absence of common primate pathogens before importation and handled in accordance with national guidelines. Animals were maintained in biosafety level 3 animal facilities, and clinical examinations were performed regularly. They were humanely euthanized at the terminal stage of the disease, and tissues were either fixed in Carnoy's fluid for histological examination or snap-frozen in liquid nitrogen and stored at −80 °C for biochemical analyses.

Neuropathology and immunochemistry

Neuropathology and immunochemical detection of proteinase-resistant prion protein (PrPres) and glial fibrillary acidic protein (GFAP) were performed on brain sections as previously described [12].

PrPres analysis

Tissues were homogenized to 20% (w/v) final concentration in a 5% sterile glucose solution. PrPres was purified according to a protocol optimized for strain discrimination in ruminants [13,14] (Discriminatory kit ref 3551177, BioRad, Marnes la Coquette, France).
Briefly, brain homogenates were first subjected to proteolysis using either 0.4 mg ('low' concentration) or 4 mg ('high' concentration) of proteinase K per mg of brain (final concentration) in a special buffer that partially protects the N-terminal part of PrPres in order to increase strain discrimination, and the purified PrPres was then concentrated by centrifugation. Purified non-human primate and human samples were processed for Western blot analysis as previously described: briefly, samples were separated by electrophoresis on a 12% SDS polyacrylamide gel, blotted onto a nitrocellulose membrane, and detected with two mouse monoclonal antibodies: the antibody from the BioRad Discriminatory kit, which targets the epitope WGQPHGGX within the N-terminal octarepeat region at positions 57-88, and 3F4, which targets the epitope MKHM in the hydrophobic core at positions 109-112. The protein bands were visualized using a peroxidase-conjugated goat anti-mouse antibody and chemiluminescence.

Transmission characteristics of BASE and BSE

Clinical features. The BASE-inoculated macaque developed clinical signs after a 21-month incubation period. Clinical signs evolved slowly during the first four months, being limited to mild tremor and myoclonus, without impairment of coordination or locomotion, and without anxiety or aggressiveness. In the last month, the clinical picture rapidly worsened with evidence of major spatial disorientation (the animal did not recognize its environment and seemed lost in its cage), cognitive troubles (no recall of food location, and at intervals the animal unaccountably stopped eating), and the appearance of incoordination and disequilibrium; however, appetite and general fitness were maintained. Euthanasia was performed at the terminal stage of illness at 26 months post inoculation (Table 1).

[Figure 2. Histopathology and PrPres immunostaining. Spongiosis, gliosis (GFAP staining), and PrPres deposition in frontal cortex and obex in BASE- and cBSE-infected primates (original magnification ×200 for spongiosis and gliosis, ×400 for PrPres staining). Immunostaining of PrPres was performed with the 3F4 monoclonal anti-PrP antibody after proteinase K treatment as previously described [11]. No staining was observed in the brain of control healthy primates under these conditions (data not shown).]

The two cBSE-inoculated animals had longer incubation periods (37.5 months) and survivals (40 months) despite a presumably larger infecting dose (100 mg containing a 10-fold higher PrPres concentration). Moreover, the clinical presentation was very different: the animals exhibited aggressiveness and anxiety in combination with incoordination, severe ataxic tremor, and loss of appetite to the point of near starvation. The four animals inoculated with human vCJD had a clinical evolution similar to that of animals inoculated with BSE, though with shorter survivals (25 to 37 months).

Histopathology (Figures 1 and 2). In the BASE-inoculated animal, the cortex showed widespread spongiosis and gliosis that were especially prominent in the fourth and fifth layers. Spongiosis was intense in the frontal cortex, with a loss of pyramidal cells in the third and fifth layers. Lesions in the parietal cortex were even more severe, with a complete disappearance of neurons in the fourth layer. In the cBSE-inoculated animals, spongiosis and gliosis were more subtle and mainly affected the occipital cortex.
In the obex and cerebellum, the lesions (spongiosis and loss of Purkinje and granular cells) were less pronounced in BASE- than in cBSE-infected animals.

Immunohistochemistry (Figure 2). In the BASE-infected animal, PrPres was distributed in a diffuse synaptic pattern (either fine and sandy or roughly granular) with laminar enhancement in the parietal cortex but no evidence of plaques, even when stained with thioflavine T (data not shown), whereas cBSE-infected animals had weak diffuse synaptic labeling but multiple intensely stained PrPres aggregates and characteristic plaques [9].

Strain discrimination by proteinase K sensitivity and antibody reactivity
We made use of a technique developed to discriminate and classify prion strains in small ruminants [14], based on the differential sensitivity of the octapeptide and core regions of PrPres proteins to proteinase K (PK) digestion. Controlled conditions of proteolysis allowed a strain-dependent threshold of removal of the octapeptides. This method, illustrated in Figure S1 (supplementary data), was successfully applied for the diagnosis of the first case of cBSE in a goat [15] and has now been validated by the European Commission for routine field use. We adapted this test to primate prion strains, using only the higher PK concentration and substituting the monoclonal antibody 3F4, which recognizes macaque and human PrP, as the anti-core antibody. Banding patterns in Western blots following pre-treatment with a high PK concentration are shown in Figure 3. Both vCJD/cBSE and BASE reacted strongly with the anti-core antibody (Panel A). vCJD/cBSE also reacted, albeit weakly, with the anti-octapeptide antibody (Panel B), whereas BASE reactivity was abolished (Panels B and C), indicating a gradient of resistance to proteolysis of the N-terminal part of PrPres among these strains. In cattle, the signal was abolished for both cBSE and BASE strains (data not shown). The method also revealed notable differences of octapeptide sensitivity to PK in different types of human prion disease (Figures 3 and 4). Comparisons of the relative signals with both anti-core and anti-octapeptide antibodies for each sample indicated that the N-terminal part of PrPres from vCJD and the VV2 subtype of sCJD was far more sensitive than that of either the MM1 or VV1 subtypes (Panel B). The MV2 subtype showed a strong resistance to proteolysis that was clearly different from the BASE-infected primate; however, three of the four MM2 subtype cases exhibited the same signature as BASE, and the fourth case had a significant proportion of PrPres with an intact octapeptide region, as shown in Figure 5, indicating the coexistence of two types of PrPres (the majority being type 2). All four cases had clinical features consistent with the MM2 subtype as described by Gambetti et al. [16]: comparatively long illnesses dominated by cognitive impairment followed by aphasia, and later in the course of illness the appearance of pyramidal and extrapyramidal signs together with myoclonus, but no cerebellar signs. Neuropathology was also typical of the MM2 subtype, with major cortical spongiosis and little or no involvement of the cerebellum (Table 2 summarizes the clinical, laboratory, and neuropathological features of each case).

Discussion
We have shown that BASE, the first identified atypical strain of BSE [2], originating from asymptomatic cattle, is transmissible by i.c. inoculation to a species of non-human primate.
Although this observation concerned only one animal, its survival was substantially shorter than for all the macaques inoculated with classical BSE, as well as for the majority of those inoculated with human vCJD. Moreover, in earlier experiments by others on a total of 6 macaques inoculated i.c. with 50 mg of cBSE brain, none had an incubation period of less than 30 months [17], and humanized transgenic mice have been found to be highly susceptible to infection with BASE yet completely resistant to infection with cBSE [18]. If BASE is more pathogenic than classical BSE for primates, it could indicate a more readily transmissible infection from cattle to humans than previously suspected. A preliminary trial of oral transmission is currently ongoing for alimentary risk assessment: 49 months after oral dosing there is no indication of transmission; however, the incubation period following similar oral challenge with cBSE in an already completed experiment was 60 months. The disease induced by BASE was different in all respects from that induced by classical BSE. The clinical presentation was characterized by mild tremors and myoclonus, progressing to a marked cognitive disorder, including spatial disorientation, but without anxiety, aggressiveness or loss of appetite. In contrast, cBSE-infected animals presented signs of anxiety and aggressiveness together with progressive difficulties in locomotion as well as cerebellar signs (major ataxia), and a severe decrease of appetite with concurrent weight loss. The widespread spongiform lesions and loss of pyramidal cells in the third and fifth layers of the frontal cortex, together with the severe parietal lesions, could explain the prominent cognitive signs and the spatial disorientation seen in the BASE-infected monkey, contrasting with the severity of lesions in the obex and cerebellum consistent with the incoordination seen in animals inoculated with cBSE. Amyloid plaques, the hallmark of BASE in cattle, are not produced in the macaque, and conversely, cBSE does not produce plaques in cattle but does so in the macaque [9], a clear indication that plaque deposition depends as much on the host as on the prion strain. At the molecular level, under conditions of high proteinase pre-treatment and detection using two antibodies reacting with either an epitope in the N-terminal octapeptide repeat region or the core of PrP, BASE and cBSE were clearly distinguishable in the primate. BASE was detectable only by the core antibody, whereas cBSE was detectable by both antibodies. We estimated that the proportion of octapeptide-resistant PrPres molecules in the BASE brain homogenate was only a small fraction (≤1/10) of that of the cBSE brain homogenate. The difference in octapeptide sensitivity to PK between cBSE and vCJD in macaques on the one hand, and type 1 sporadic CJD in humans on the other hand, is similar to what was observed between cBSE and classical scrapie in sheep. This method can now be used to test both ruminant and human samples to identify similarities and differences in their molecular protein signatures, and to support the classification of ruminant and possibly human strains. Although classical epidemiological studies have not found any link between scrapie in sheep and goats and human CJD, newer molecular biological studies now indicate that about half of all cases of scrapie are due to previously undetected atypical strains [19] that are experimentally transmissible to sheep and mice [20].
Their risk for humans is unknown and is the subject of current studies in experimental models, including primates. cBSE has been shown to be responsible for human cases of vCJD, but the comparative risk for humans of BASE and other atypical strains of BSE is still unknown, and its clarification will require many years of epidemiological surveillance and molecular biological testing of both bovine and human populations. The first cases of BASE in cattle had PrPres electrophoretic profiles similar to the MV2 subtype of sporadic CJD patients [2], which, together with the presence of amyloid plaques in both the cattle and the patients, suggested a possible link between BASE and this subtype of sCJD. However, our PrPres typing technique has shown that PrPres of other MV2 sCJD patients exhibited a resistance to proteolysis different from that of the BASE-infected primate, whereas PrPres from vCJD-infected patients and primates behaved similarly. This observation, together with the absence of amyloid plaques in the BASE-infected primate, weakens the likelihood of a direct link between BASE and the MV2 subtype of sCJD. In contrast, the specific signature of PrPres in the BASE-infected primate was similar to that seen in three of four patients with the MM2 cortical subtype of sporadic CJD [7]. It is interesting that an important feature of the clinical-pathological syndrome in this BASE-infected macaque, the absence of cerebellar involvement, is also a common element in patients with the MM2 subtype of human sporadic CJD (Supplementary Figure S2). However, as illustrated by the clinical details of our four tested MM2 cases, there is considerable patient-to-patient variation, just as there can be variation among individual animals experimentally inoculated with a given strain of TSE [21,22]. It is not known whether atypical strains of BSE have been circulating for years or represent new forms of disease, and continuing research is clearly needed to answer both this and the equally important question about a possible relationship to at least certain forms of what are presently regarded as sporadic cases of human disease (sCJD) [4,23]. Moreover, the BASE strain has been described to evolve naturally towards BSE after successive transmissions in inbred mice [6]. The stability and pathogenicity of this strain in humans remain to be determined, and it is worth recalling that the stability of the cBSE/vCJD strain, which retains its specific molecular signature in different infected hosts, is the exception rather than the rule. As has been previously observed [24-26], one patient (Case No. 4, cf. Figure 5, sample MM2#4) exhibited both types of PrP, i.e., type 2 typical of the MM2 subtype and type 1 observed in the MM1 subtype. On the one hand, this demonstrates the value of such a simple biochemical test for refining PrP analysis; on the other hand, it raises a question about the existence of different PrPres signatures in the same patient, i.e., different prion strains linked to multiple infections or to variants selected by the host. In summary, we have transmitted one atypical form of BSE (BASE) to a cynomolgus macaque monkey that had a shorter incubation period than monkeys infected with classical BSE, with distinctive clinical, neuropathological, and biochemical features; and we have shown that the molecular biological signature resembled that seen in a comparatively uncommon subtype of sporadic CJD.
We cannot yet say whether BASE is more pathogenic for primates (including humans) than cBSE, nor can we predict whether its molecular biological features represent a clue to one cause of apparently sporadic human CJD. However, the evidence presented here and by others justifies concern about a potential human health hazard from undetected atypical forms of BSE, and despite the waning epizootic of classical BSE, it would be premature to abandon the precautionary measures that have been so successful in reversing the impact of cBSE. We would instead urge a gradual, staged reduction that takes into account the evolving knowledge about atypical ruminant diseases, together with a permanent ban on the use of bovine central nervous system tissue for either animal or human purposes, and its destruction so as to eliminate any risk of environmental contamination.

Supporting Information
Figure S1. Resistance to proteolysis of different prion strains in sheep. PrPres from brain homogenates of sheep infected with classical scrapie, experimental cBSE, or atypical Nor98 scrapie, and of an uninfected control sheep. Samples were purified using low (odd lanes) or high (even lanes) concentrations of proteinase K, and visualized with monoclonal antibodies that recognize either the core region (Panel A) or the octapeptide region (Panel B) of the protein. With the lower concentration of PK used in the purification step of one widely utilized BSE screening test [13] (in order to maximize test sensitivity), all three strains gave a positive result with both the anti-core and anti-octapeptide antibodies (odd lanes). Using a higher concentration of PK (even lanes) did not alter the positivity with either antibody for classical scrapie, but the cBSE strain no longer reacted with the anti-octapeptide antibody, while Nor98 did not react with either antibody. Thus, by using the higher concentration of PK and two different antibodies, it is possible to discriminate between all three strains. Found at: doi:10.1371/journal.pone.0003017.s001 (2.51 MB TIF)
Figure S2. Lesion profiles in cBSE- and BASE-infected macaques, and in MM2 sporadic CJD patients. The lesions were scored from 0 to 4 (negative, light, mild, moderate, and severe) for the following gray matter regions: frontal (FC), temporal (TC), parietal (PC) and occipital (OC) neocortices, hippocampus (HI), parasubiculum and entorhinal cortex (EC), neostriatum (ST) (nuclei caudatus and putamen), thalamus (TH), substantia nigra (SN), midbrain periventricular gray (PG), locus ceruleus (LC), medulla (ME) (periventricular gray and inferior olive) and cerebellum (CE). Scoring for the MM2 sCJD patients was taken from Parchi et al. [7]. Found at: doi:10.1371/journal.pone.0003017.s002 (1.52 MB TIF)
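The two-antibody readout underlying Figure S1 amounts to a small decision table. The following minimal Python sketch is illustrative only (the function name and strain labels are ours, not part of the published assay), assuming binary band calls at the high PK concentration; note that in primates the cBSE/vCJD versus BASE distinction relied on relative signal intensity rather than an all-or-none call.

# Decision table for sheep prion strain discrimination (Figure S1 logic):
# at high proteinase K concentration, the anti-core antibody reports survival
# of the PrPres core, while the anti-octapeptide antibody reports survival of
# the N-terminal octarepeat region.

def classify_strain(core_positive: bool, octa_positive: bool) -> str:
    """Map the two Western blot band calls to the strain classes of Figure S1."""
    if core_positive and octa_positive:
        return "classical scrapie-like (octarepeat resists PK)"
    if core_positive and not octa_positive:
        return "cBSE-like (octarepeat removed, core resists PK)"
    if not core_positive and not octa_positive:
        return "Nor98-like, or no detectable PrPres"
    return "unexpected pattern (core lost, octarepeat retained): re-test"

# Patterns reported in Figure S1 (even lanes, high PK):
print(classify_strain(True, True))    # classical scrapie
print(classify_strain(True, False))   # experimental cBSE
print(classify_strain(False, False))  # Nor98 or uninfected control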
On the Free Energy of Solvable Lattice Models

We conjecture the inversion relations for thermalized solvable interaction round the face (IRF) two dimensional lattice models. We base ourselves on an ansatz for the Baxterization described by the author in the 90's. We solve these inversion relations in the four main regimes of the models, to give the free energy of the models in these regimes. We use the method of Baxter in the calculation of the free energy of the hard hexagon model. We believe these results to be quite general, shared by most of the known IRF models. Our results apply equally well to solvable vertex models. Using the expression for the free energy we calculate the critical exponent $\alpha$, and from it the dimension of the perturbing (thermal) operator in the fixed point conformal field theory (CFT). We show that it matches either the coset ${\cal O}/{\cal G}$ or ${\cal G}/{\cal O}$, where $\cal O$ is the original CFT used to define the model and $\cal G$ is some unknown CFT, depending on the regime. This agrees with known examples of such models by Huse and Jimbo et al.

Introduction.
Two dimensional solvable lattice models offer a rich ground to study such phenomena as phase transitions, universality and mathematical applications in knot theory. For reviews see [1,2]. These models also enjoy a strong connection with two dimensional conformal quantum field theory (CFT). See, e.g., the reviews [3,4]. Some time ago the author introduced a method to construct solvable interaction round the face (IRF) models from the data of an arbitrary CFT [5]. We call such models IRF$({\cal O}, h, v)$, where ${\cal O}$ is the defining CFT and h and v are two primary fields in the theory. A long-standing question is what the fixed point CFT of the so defined models is, and how it is related to the original CFT ${\cal O}$. We solve this problem here by calculating the free energy of the thermalized models. From this we deduce the critical exponent $\alpha$ and the dimension of the perturbing field in the fixed point CFT. To compute the free energy we first need to thermalize the trigonometric ansatz of [5]. This we do by calculating the two inversion relations for the general IRF models. Then we thermalize the models by replacing the $\sin(u)$ function in the inversion relations with the function $\theta_1(u,q)$, where $\theta_1$ is the standard elliptic theta function. This agrees with all the models where the off-critical Boltzmann weights are known, and we conjecture that it is true in general. Thus we are in a position to solve exactly models for which the Boltzmann weights are not explicitly known. We find that in the four main regimes of the IRF model the fixed point CFT is given by a coset of the original theory. Namely, in regimes III and IV the fixed point CFT is consistent with the coset model ${\cal G}/{\cal O}$, where ${\cal G}$ is some unknown CFT. In regimes I and II, the fixed point CFT is ${\cal O}/{\cal G}$. This fixed point RCFT is known exactly in some cases. For example, in the Andrews-Baxter-Forrester model [6], which is IRF$(SU(2)_k, [1], [1])$, the fixed point field theory was determined to be the unitary minimal models, which are the coset $SU(2)_{k-1} \times SU(2)_1 / SU(2)_k$, in regimes III and IV [7]. In regimes I and II the critical CFT was identified as the parafermionic field theory $SU(2)_k/U(1)$, which is the Fateev-Zamolodchikov model [8], by Jimbo et al. [9]. Indeed, this agrees with our general result. For the case of ${\cal O} = SU(N)_k$ and h = v = fundamental, the fixed point field theory in regime III was shown to be $SU(N)_{k-1} \times SU(N)_1 / SU(N)_k$ by Jimbo et al.
[10], again in agreement with our result for the fixed point CFT. To compute the free energy in all four regimes, we follow the method used by Baxter [1] in the hard hexagon model. Our results for the free energies agree with the hard hexagon case, for ${\cal O} = SU(2)_3$, in the four regimes.

The inversion relations.
We wish to study IRF lattice models based on the braiding matrix of conformal field theory (CFT). We fix a conformal field theory ${\cal O}$ and primary fields h and v in this theory. The IRF model is denoted IRF$({\cal O}, h, v)$, following ref. [5], and is defined on a square lattice. We assume that the boundary conditions are periodic. Let $B_i$ be the braiding matrix in the RCFT which exchanges the field h with the field v [11]. We define the operator $\langle a_1, a_2, \ldots, a_n | B_i | a'_1, a'_2, \ldots, a'_n \rangle$ in terms of the matrix B, the braiding matrix, which obeys the braiding relations. The variables on the lattice, $a_m$ and $a'_m$, are some primary fields in the RCFT ${\cal O}$. From the braiding matrix one can define the projectors, where n is the number of eigenvalues of $B_i$ (called the number of blocks) and $\lambda_a$ are the eigenvalues. The projection operators obey the relations, where $\epsilon_a = \pm 1$ according to whether the product is symmetric or anti-symmetric. We define the fusion products of the field h, where $\bar h$ is the complex conjugate field of h, n is the number of blocks, and the order of the fields is set in a certain way, which allows for the Yang-Baxter equation of the model. The order of the fields appears to be that $\psi_{a+1}$ is contained in the fusion product of $\psi_a$ with the adjoint representation, and similarly for $\bar\psi_a$. (The fact that the number of blocks is the same in both equations is tied to the adjoint representation, assuming some quantum group structure.) We denote the dimension of $\psi_a$ by $\Delta_a$, and similarly the dimension of $\bar\psi_a$ by $\bar\Delta_a$. We define the crossing parameters, where $a = 0, 1, \ldots, n-2$. We note that $\zeta_a, \bar\zeta_a < \pi/2$, which will be important later. In ref. [5] an ansatz for the trigonometric solution of the Yang-Baxter equation (YBE) was given, for $a = 0, 1, \ldots, n-1$. Our ansatz is that $R^{h,h}_i$ solves the Yang-Baxter equation. The two YBE equations (2.12, 2.13) imply that the transfer matrices for $R^{h,h}_i(u)$ commute with each other for different spectral parameters u, and similarly for the other R matrices. The R matrices obey the first inversion relation, which follows from eqs. (2.10, 2.11), where we changed the normalization of $R^{h,h}_i(u)$ by an irrelevant factor. The second inversion relation is crossing. We shall again denote $R_i(u)$ in its matrix form. Then the crossing relation holds (as part of our conjectured ansatz), where $\lambda$ is the crossing parameter and where we used eq. (2.8). The crossing multipliers are expressed in terms of the modular matrix S [4]. We now wish to thermalize the IRF model. We do not know how to thermalize the Boltzmann weights directly, so instead we will thermalize the inversion relations. We define the theta function. (This definition differs from the standard one by a factor of $2q^{1/4}$, which is irrelevant since we will only encounter ratios of theta functions.) Now we conjecture that the thermalization of the first inversion relation, eq. (2.14), is given by replacing the sine by the theta function $\theta_1$. We denote $\theta_1(u, q^2)$ by $\theta_1(u)$. The thermalization of the R matrix then follows. Finally, the crossing relation, eq. (2.18), remains the same for general q except for the crossing multiplier, eq. (2.20), whose explicit expression we will not need here.
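For reference, a standard product form of the theta function consistent with the normalization just described (the conventional definition with the $2q^{1/4}$ prefactor dropped) is
$$\theta_1(u, q) = \sin u \prod_{n=1}^{\infty} \left(1 - 2q^{2n}\cos 2u + q^{4n}\right)\left(1 - q^{2n}\right).$$
The overall factor of $\sin u$ makes the $q \to 0$ limit manifest.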
Note that for $q = 0$ (the critical limit), $\theta_1(u) = \sin u$, so we recover the same inversion relations as before. These conjectures can indeed be verified for many models for which we know the explicit Boltzmann weights, e.g., [1,2]. Next, we wish to define the free energy of the model, where N is the number of lattice sites and Z is the partition function calculated with $R^{h,h}$. The free energy is given as usual, where $k_B$ is the Boltzmann constant and T is the temperature. Now, since the transfer matrices commute for different spectral parameters u, the inversion relations translate into equations for $\kappa(u)$ (fixing some q). In deriving the last equation, we used the fact that the crossing multipliers cancel when calculating the partition function. Actually, the inversion relations, eqs. (2.27, 2.28), remain the same under this substitution. We also find it convenient to change the second inversion relation by a substitution, after which it takes its final form.

Regime III.
Our aim now is to solve the inversion relations, eqs. (2.28, 2.30), and to calculate the free energy. We assume first that the model is in regime III, defined in terms of $d = \min_i \{\zeta_i, \bar\zeta_i\}$ and $q^2 = \exp(-\epsilon)$. It is convenient to use the modular transformation of the theta function, defined in [1]. We find it convenient to redefine $\tilde\kappa(u) = e^{2\delta u/\epsilon}\, e^{2(n-1)u^2/\epsilon}\, \kappa(u)$. Then the first inversion relation, eqs. (2.28, 2.23), becomes an equation for $\tilde\kappa$, where $w = \exp(-4\pi u/\epsilon)$ (3.9), and we write for brevity $f(w)$ for $f(w, \tilde q)$. The second inversion relation, eq. (2.30), likewise becomes an equation for $\tilde\kappa$. We now wish to solve for the free energy using the two inversion relations. For this purpose we assume that $\log \kappa(w)$ is analytic in the annulus containing the point $w = 1$ and the point $w = x$. We analytically continue $\kappa(u)$ to $-\lambda < u < 0$ and to $0 < u < \lambda$. This assumption is justified by considering explicit models, e.g., the hard hexagon model [1]. So we expand in a series whose summation is convergent in the annulus containing 1 and x. In the neighborhood of 1, eqs. (3.7, 3.8) give the coefficients for $m > 0$. Taking the logarithms of eqs. (3.13, 3.7) and equating coefficients, we find two relations valid for $m \ge 0$. The solution of these two equations gives the coefficients $c_m$, with a separate expression for $m = 0$. This completes the calculation of the free energy in regime III. The series indeed converges in an annulus containing the points 1 and x, as we assumed. Note that $|\zeta_r| \le \lambda$ always, for all r, which is needed to show convergence; we checked this in many models, but we do not have a general proof of this fact. We now wish to calculate the critical exponent $\alpha$, defined through the singularity of the free energy at the critical temperature $T_c$. We have that $\tilde q = \exp(-4\pi^2/\epsilon)$. Since $c_m$ is divided by $1 - x^{2m}$, the series becomes a theta function at the modulus $x^2$ when summed back. Since $\lambda = \pi\bar\Delta_0/2$, eq. (2.9), we may write this modulus accordingly. Under a modular transformation $x^2$ is transformed, and since $q^2 \propto |T - T_c|$ this fixes the singularity of the free energy, and hence $\alpha$. The dimension of the perturbing operator at the critical conformal field theory is then given by [3]. Now, since we assume that $\bar\psi_1$ is the adjoint operator in quantum group models, as can be seen by considering various models, the dimension of the perturbing field follows, where $\bar\Delta_0 = \Delta_{\rm adjoint}$. Such a field appears in the coset theory ${\cal C} = {\cal G}/{\cal O}$, where ${\cal O}$ is the original CFT used to define the model and ${\cal G}$ is some CFT, and where we take the currents in ${\cal G}$ and the adjoint representation in ${\cal O}$. Thus, we conjecture that the fixed point of regime III is given by the coset ${\cal C}$.
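The method here is Baxter's: solve the pair of functional equations by Laurent expansion, then read off the exponent from scaling. A schematic single-block sketch (the precise right-hand sides and signs are regime-dependent; in the text's conventions the denominators are $1 - x^{2m}$):
$$\log\tilde\kappa(w) = \sum_{m\in\mathbb{Z}} c_m w^m, \qquad c_m + c_{-m} = f_m, \qquad c_{-m} = \pm\,x^{2m} c_m \;\Longrightarrow\; c_m = \frac{f_m}{1 \pm x^{2m}}.$$
Resumming then produces theta functions of modulus $x^2$. The exponent follows from the standard scaling relations
$$f_{\rm sing} \sim |T - T_c|^{2-\alpha}, \qquad 2 - \alpha = \frac{2}{2 - 2\Delta} = \frac{1}{1 - \Delta},$$
where $y = 2 - 2\Delta$ is the RG eigenvalue of a thermal perturbation with conformal weights $(\Delta, \Delta)$; this is the relation appearing as eq. (6.16) below, with $\Delta = \Delta_{\rm adjoint}$.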
In many cases where the fixed point theory was calculated explicitly, this was indeed shown to be the case. For example, the RCFT ${\cal O}$ of the ABF model [6] is the $SU(2)_k$ WZW model. The fixed point in regime III is the $k+1$ minimal model [7], defined by the coset $SU(2)_{k-1} \times SU(2)_1 / SU(2)_k$.

Regime II.
Let us consider regime II, defined by the corresponding range of the parameters. The first inversion relation remains the same, eqs. (2.28, 2.23). The second inversion relation is modified, where the theta function is invariant under the shift by $\pi$. We define new quantities, and the second inversion relation, eq. (4.2), becomes an equation for them, where $\tilde q$, $x$, $z_r$ and $\bar z_r$ are given by eqs. (3.12, 3.14, 3.10, 3.11). We assume now that $\log[w^\mu \tilde\kappa(w)]$ (4.7) is analytic in an annulus $a < |w| < b$ containing the points $w = 1$ and $w = w_0$. This is an analytic continuation of w to $w < 1$ and $w > w_0$, even though the regime is defined for a subset of these values. The free energy $\log[w^\mu \tilde\kappa(w)]$ is given by a series with coefficients $a_m$. Thus it is given by a theta function whose modulus, under a modular transformation as in eq. (3.3), is expressed in terms of $q^2$, which is proportional to $|T - T_c|$. Thus we find the exponent $\alpha$. For a quantum group model, $\bar\Delta_0$ is conjecturally the dimension of the adjoint representation. Thus the dimension of the perturbing thermal operator follows. An operator with such a dimension appears in the coset model ${\cal C} = {\cal O}/{\cal G}$, where ${\cal O}$ is the RCFT used to define the model and ${\cal G}$ is some CFT. Thus we conjecture that ${\cal C}$ is the fixed point theory in regime II. The perturbing operator is the adjoint representation in ${\cal O}$ and the unit in ${\cal G}$. For example, for the ABF model in regime II, where ${\cal O}$ is the $SU(2)_k$ WZW model, the fixed point theory is the parafermionic theory $SU(2)_k/U(1)$ [9]. Let us now turn to regimes IV and I. Regime IV is defined in terms of $d = \min_i \{\zeta_i, \bar\zeta_i\}$. We use the inverse modulus [1], where, as before, we define the relevant quantities in terms of the crossing parameters of eqs. (2.8, 2.9). In regime IV we write the two inversion relations using the combination $2\pi\zeta_r + 2\zeta_r(\pi - \zeta_r)$ (5.10); then the first inversion relation (for regimes I and IV) follows. The second inversion relation, for regime IV, involves the crossing parameter defined there. We denote for brevity $f(w, \tilde q)$ as $f(w)$. As before, we assume that $\log(w^\mu \tilde\kappa(w))$ is analytic in an annulus $a < |w| < b$ containing $w = 1$ and $w = w_0$. Thus we expand and find the coefficients (in regimes I and IV); similarly we expand for $m = 0$. Thus, as before, eq. (3.27), we find the solution, with a separate expression for $m = 0$. We note that this series indeed converges in the annulus containing $w = 1$ and $w = w_0$. Let us now compute the critical exponent $\alpha$. The series for $c_m$, eq. (5.23), is an expression for a theta function with a certain modulus (we ignore the overall sign). The expression for $\tilde\kappa$ includes factors of $\theta_4(u, -q^2)$. The function $\theta_4$ satisfies the conjugate modulus relation; denoting $p = -q^2$, we find the modulus from the inverse modulus transformation. This gives the same dimension as in regime III, and we conclude that it is the other side of the same fixed point. The RCFT in regime IV is thus conjectured to be the coset model ${\cal G}/{\cal O}$, where ${\cal O}$ is the original CFT used to define the model and ${\cal G}$ is some unknown CFT. Let us turn now to regime I, defined by eq. (6.1). The first inversion relation is the same as in regime IV, eq. (5.7). For the second inversion relation we proceed as in regime II. We define, as in regime IV, $q$, $\bar q$, $x$, $\bar z_r$ and $w$, eqs. (5.4-5.6). We further define $\tilde\kappa(u) = e^{2(n-1)u^2/\epsilon}\, e^{\delta u/[\epsilon(2\lambda - \pi)]}\, \kappa(u)$. Then the second inversion relation, eq. (6.2), becomes an equation for $\tilde\kappa$, where $f(w, \bar q)$ was defined in eq. (5.3).
Now we expand as before, and we find the coefficients, where $d_0$ is given by eq. (5.16), with a separate expression for $m = 0$. So, as before, we obtain the solution, again with a separate expression for $m = 0$. The series, eq. (6.10), indeed converges in the annulus containing $w = 1$ and $w = w_0$. Now we wish to compute the exponent $\alpha$. The expression for $c_m$ is a theta function whose modulus, using the definition of x and $\lambda$, eqs. (5.5, 2.9), can be written as $(q^2)^{2(1-\Delta_{\rm adjoint})}$ (6.15). Now, since the expression for the theta function contains $\theta_4$ at positive modulus, the same discussion as in section 5 shows that
$$2 - \alpha = \frac{1}{1 - \Delta_{\rm adjoint}}, \qquad (6.16)$$
and the dimension of the perturbing field is the same as in regime II. We conclude that it is the other side of the same phase transition, with the critical theory given, as in regime II, by the coset ${\cal O}/{\cal G}$, where ${\cal O}$ is the original theory used to define the model and ${\cal G}$ is some unknown CFT. This concludes the expression for the free energy in all four regimes. We can check our results against the hard hexagon model, which is IRF$(SU(2)_3, [1])$.

Discussion.
Some other two dimensional solvable lattice models are the vertex models. These are defined by some CFT ${\cal O}$ and some representation h in it. The Boltzmann weights are elements of ${\rm End}(V \otimes V)$, where V is the space of weights of the representation h. As was discussed in refs. [12,13], the Baxterization of these models is given by exactly the same formula as for the IRF models, eqs. (2.10, 2.11). For the case of $SU(2)$ this was described in ref. [14]. Thus, our calculation of the free energy holds equally well for these models, with the parameters $\zeta_i$ and $\bar\zeta_i$ given by eqs. (2.8, 2.9). We assume here periodic boundary conditions. The free energy then obeys the same inversion relations, eqs. (2.28, 2.30), and thus the solution for the free energy is the same as described in this paper. Another interesting point is that the theories ${\cal C}$ define integrable models when perturbed by the thermal operator. This may be interesting for the building of new integrable models of massive quantum field theories.
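As a quick numerical check of eq. (6.16) against the hard hexagon identification above, assuming the standard $SU(2)_k$ WZW weight formula $h_j = j(j+1)/(k+2)$ (an input not spelled out in the text): for ${\cal O} = SU(2)_3$, the adjoint (spin-1) primary gives
$$\Delta_{\rm adjoint} = \frac{1\cdot 2}{3+2} = \frac{2}{5}, \qquad 2 - \alpha = \frac{1}{1 - 2/5} = \frac{5}{3}, \qquad \alpha = \frac{1}{3},$$
which reproduces Baxter's hard hexagon exponent $\alpha = 1/3$ (the three-state Potts value).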
Internal Values of Sport and Bio-Technologized Sport

The aim of the paper is to confront the internal or intrinsic values of sport detected by different sport philosophers, such as W. J. Morgan, J. S. Russell, R. L. Simon, N. Dixon, and S. Kretchmar, with today's bio-technologized sports, in order to find ethical guidance for the (non)acceptance of new bio-technologies in sport. Thus, in the first part, I will produce an overview of the internal values of sport in the sports-philosophical literature. In the second part, I will provide my understanding of 'bio-technologized sports', leaning mostly on W. J. Morgan's and S. Loland's previous work in this regard. In the third part, I will show that the key internal value of sport is 'excellence' and that the perfectionist account of sport dominates high-level professional competitive sports. However, I will show that 'excellence' is prone to different interpretations and understandings, which (could) have different implications for 'bio-technologized sport'. Finally, I will propose going back to Aristotle and his account of eudaimonia to build principles for the regulation of the (non)acceptance of bio-technology in sport.

Introduction
In the literature of the philosophy of sport, the internal values of sport have been debated from the late 1980s onward by many scholars, such as W. J. Morgan, J. S. Russell, R. L. Simon, N. Dixon, J. Lopez Frias, and E. Moore. The discussion took two main directions: on one hand, detecting the internal values of sport to answer the question of what sport is per se, or what makes sport such a special human practice; on the other, building a normative conception of sport with the detected internal values as guidance on how to play sports morally. From the early 2000s, the topic of bio-technology in/and sport was introduced and heavily discussed in the sports sciences literature, from the philosophy of sport through psychology, sociology, and medicine to bioethics, by scholars such as A. Miah, M. McNamee, S. Camporesi, S. Loland, C. M. Tamburrini, and T. Tännsjö. In this paper, I will confront the internal values of sport with 'bio-technologized sports' in order to reveal, on one hand, which values are present and reflected in such sport and, on the other, what the role of such values is in its development. Thus, in the first part, I will produce a critical overview of the topic of internal values in sport (IVS) in the literature of the philosophy of sport. IVS were presented and theorized within different normative internalist conceptions of sport, namely: historicistic conventionalist internalism (Morgan), interpretivism and broad internalism (Russell, Simon, Dixon), pluralistic internalism (Kretchmar), and shallow interpretivism (MacRae). In the second part, I will present my understanding of bio-technologized sport and discuss why we should refer to today's sport as such. In doing so, I will lean on Morgan's and Loland's previous work in that regard. In the final part, I will show that the key internal value of sport is 'excellence' and that the perfectionist account of sport dominates high-level professional competitive sports. However, I will show that 'excellence' is prone to different interpretations and understandings, which (could) have different implications for bio-technologized sport. I will finally propose a few basic guidelines, based on Aristotle's account, for regulating the embrace of bio-technological advancements in sport.

Internal Values of Sport
In 1987, the internalist approach in the philosophy of sport was introduced by W. J.
Morgan [1,2] on the grounds of three elements taken from Alasdair MacIntyre's book After Virtue [3], namely: (1) the distinction between internal and external values; (2) the distinction between social practices and social institutions as carriers of the values; (3) the pursuit of excellence. In his 1994 book, Morgan [2] further developed the concept of an internalist account of sport with four central elements: (1) the gratuitous logic of sport, (2) sports practice communities, (3) striving for excellence, and (4) social context and history. Ergo, I call it 'historicistic conventionalist internalism'. Morgan's internalist theory understands sport as a social practice in MacIntyre's terms, and accepts its specificities: inherent and intrinsic goods and values, internal "gratuitous logic" characteristics and principles, non-instrumental "inside rational deliberations of its practice-community" (ibid., 253), and Suits' unnecessary obstacles present in constitutive rules [4] (p. 41), which ensure the permanent "advancement of human excellence" [2] (p. 45). Thus, it seems fair to refer to Morgan as the father of internalism. In my opinion, internal values of sport (IVS) can be defined in a threefold way: (1) values specific and essential for sport in general and for each sport in particular, and thus intrinsic; (2) values that can be recognized/identified and reached only through engaging in playing or practicing particular/specific sports [1-3], where the dominant features are sport's "sweet tension" (Fraleigh), "zero-sum logic" (S. Kretchmar), and "gratuitous logic" (Morgan, Suits); (3) values in opposition to external or instrumental values. Here, while talking about IVS, it is instructive to recall MacIntyre's well-known chess example. Furthermore, if we accept and follow Morgan's division of two kinds of sports practitioners within the specific 'sports-practice communities', IVS are reachable by both primary agents (sportsmen/players playing competitive sports games) and secondary agents (officials, spectators, journalists, scientists, investigators, scholars...) [2] (pp. 236-237). Within the literature on IVS, Martinková stands out as probably the only scholar who has tried to show concretely which kinds of IVS can be reached. Thus, she distinguishes nine groups of IVS: experiential, competition, self-knowledge, ascetic, maturation and proficiency, interpersonal, moral, sport-specific, and meaning values [5] (pp. 61-62). She considers "intrinsic values of sport" in Morgan's terms, stating that IVS are goods, ends in themselves, that arise and are generated within participation in sport through practicing [5] (pp. 28, 59-60). Thus, without active sport practice, IVS cannot be achieved. However, while for Morgan involvement in sport as a part of 'sports practice communities' is the IVS itself, for Martinková IVS are 'side effects' of the active attempt to win [5] (p. 28). Similarly, for Suits, involvement in the autotelic activity of game-playing is not only a precondition of achieving intrinsic values and goods; he also finds game-playing to be an intrinsic value as such, in two different yet connected ways. On the one hand, the intrinsic good is the "difficulty(ies)" that prelusory goals and constitutive rules together pose to game-players; on the other hand, it is the "lusory attitude" of players "loving something good for the property that makes it good" [4] (p. 16).
Internalism started as an answer to the problems of formalism, understood in Suits' terms as presented in the Grasshopper [4]. Suits placed essential importance on the constitutive or 'game-defining' rules for and in sports, where, despite the acknowledged existence of regulative or 'penalty-invoking' rules, fair play is understood as playing by the rules. Three problems are crucial here. First, in the formalist account, winning the game and cheating are not logically compatible. Hence, the very moment one cheats, one also excludes oneself from the game, and therefore one cannot win [4] (p. 24). This is the so-called (logical) incompatibility thesis. Second, formalism is not and cannot be a normative theory, for three reasons: one, formalism does not point out any moral value or principle besides obeying the rules; two, playing by the rules is simply not enough in terms of playing sports morally; and three, playing (only) by the rules does not have the normative force either to produce ethical guidance in sport or to build a normative theory. Finally, rules do not 'cover' all the situations in sport. In other words, we do not have rules for everything that happens in sports competitions. In such cases, argue D'Agostino and Morgan, we lean on conventions or ethos [1]. The problem is that conventions do not provide us with normative guidance either, because their normative force is questionable. Formal rules are only the outer shell of the game. It is the history of the game-its sustaining traditions, lively passions, storied commitments, and evolving standards of excellence-that flesh in that shell, and enliven it as the specific kind of human practice that it is. [2] (p. 18) Unlike formalism (or conventionalism), what makes internalist conceptions normative (enough) in ethical terms is the inclination to detect the internal values of sport so as to preserve them while playing sports, and to bring the actions one takes in sports into accord with that inclination. In that regard, IVS are the core of the conception of internalism, presented in three modes with a normative background, developed by three scholars, and established through four major bibliographical efforts under three names: (1) historicistic conventionalist internalism, introduced by W. J. Morgan [1,2] and leaning on the previous work of Suits and MacIntyre; (2) interpretivism, brought by J. S. Russell [6] and based mostly on R. Dworkin's law theory [7], which was added to Morgan's account; (3) broad internalism or interpretive formalism, presented by R. L. Simon [8], relying heavily on Morgan's and Russell's previous accounts and on Butcher and Schneider's article on fair play in sport [9]. Russell's interpretivism is based on an analogy with R. Dworkin's theory of adjudication in the law [7], specifically Dworkin's model of deep interpretivism in the law. Namely, Russell uses two standpoints of this theory: first, that moral principles are an addition to the rules of law as well as part of them, and second, that moral principles must be applied in resolving hard cases in a coherent and principled way, by virtue of integrity, to achieve the ethical goal of maintaining the principle of 'justice' [6] (p. 34). He applied them to sports for resolving ethical dilemmas in which reference to the rules and ethos is not enough and umpires need to interpret the rules. To be able to do that, Russell replaced 'justice' (in law) with 'excellence' (in sports).
Finally, Russell posed two principles for rule interpretation, which presuppose the value of 'excellence': (1) rules should be interpreted in such a manner that the excellences embodied in achieving the lusory goal of the game are not undermined but are maintained and fostered [6] (p. 35); (2) rules should be interpreted "to generate a coherent and principled account of the point and purpose that underline the game, attempting to show the game in its best light" [10] (p. 55). Simon's broad internalism presupposes the existence of internal principles or norms in the idea of sport that are neither rules nor conventions, and uses 'excellence' as the foundational goal of sport, its purpose or main function [10] (p. 10). Additionally, Dixon [11] introduced a form of moral realism, i.e., the necessity of establishing "rationally grounded principles about the nature and purpose of sport" [11] (p. 106) to resolve the hard cases in sport, which Russell and Simon adopted [12]. Such realist broad internalism or interpretivism became the leading normative account of sport for two decades or so. The table shows the IVS pointed out by Morgan, Russell, and Simon in their conceptions, together with the references they used in building them. The only value common to the three accounts is (striving for) excellence (see Table 1). Thus, the authors jointly promote a perfectionistic account of sport. Also, it is quite obvious how Russell's and Simon's lists complement each other, while, unlike Morgan, they show no interest in contextualizing and historicizing the values within their theories. Russell's and Simon's joint position is that there is an essence of sport that can be rationally detected and extracted as normative guidance, while Morgan insists that we should include history and social context, or ethos, among sport's normative variables. This explains where, over what, and why the ongoing debate, now more than a decade long, between Morgan's conventionalist internalism [9,11] and the defenders of broad internalism, such as Lopez Frias, C. Yorke, W. Fraleigh, and C. Torres, started. Two attempts to correct this and bridge the gap deserve, in my belief, special attention: (1) 'pluralistic (broad) internalism', presented by S. Kretchmar [13,14] and partly adopted by J. S. Russell [11], and (2) 'shallow interpretivism' by S. MacRae [15,16]. On one hand, Kretchmar's sixfold pluralistic internalism was built on the 'testing and contesting' [17] understanding of sport, accompanied by human nature as a biological evolutionary heritage [13] (p. 93). He provides practical explications and normative emphases [13] (p. 86) of broad internalism in sporting practices, to build a "comfortable balance among the influences of socialization, reason, and the bio-psychological wherewithal we bring to the sporting project" [13] (p. 96). He introduced six models of sporting endeavor, each of which characterizes sport in one of its best lights: (1) the achievement and excellence model of sport, and the (2) serendipitous, (3) epistemological, (4) aesthetical, (5) existential-individualist, and (6) communitarian versions of sport. The quest for excellence is certainly defensible and attractive, but so are the quests for drama, narrative unity, knowledge, opportunity or serendipity, individual identity, and solidarity or community. [13] (p. 98) On the other hand, S. A.
MacRae's shallow interpretivism [15,16] aims to offer a new and defensible normative model, because broad internalism fails in three ways: first, in demonstrating that excellence can function as the foundational goal of sport; second, in proving that excellence is an ethical value [15] (p. 292); and third, in providing insight into the species of value at which the goal-directed activities of sports practitioners aim [15] (p. 285). Thus, to introduce the model that suits sport best, MacRae distinguishes four species of values: perfectionist (excellence), prudential (well-being), ethical (fairness), and aesthetical (beauty and the sublime); and three levels of understanding: the first level assumes the role of the ultimate or foundational goal and provides norms and principles; the second realizes and combines specific intrinsic values, which further generate the norms and principles at the first level; while the third requires a state of integrity or internal consistency in endorsing an authentic set of values [15] (p. 286). Shallow interpretivism is a model without a third-level goal, in which "norms at the first level are generated from different second-level values, none of which should decisively settle all conflicts or principles" [15] (p. 292).

Bio-Technologized Sport
Under the term bio-technologized sport (BioTS) I understand the present (historical) state or version of sport that has embraced bio-technological novelties in every aspect in which they can help enhance the competitive excellence of sportsmen, enabling them to win and, if possible, break records. However, as such a broad position encompasses many specific positions and characterizations of BioTS, I will narrow it down to just a few that will help me build an ethical position for regulating the inclusion of bio-technology in sport. Here, I take my lead from W. J. Morgan [18,19], S. Loland [20], and M. Sandel [21]. On one hand, I accept Morgan's threefold historical development of modern sports, in which he distinguishes amateur, professional, and scientific conceptions of sport, each tied to a specific historical period; I lean on the scientific one. In the late 19th-century period of 'gentleman-amateur sport' in Britain, sport was "pursued principally for the love of the game" [18] (p. 82), anti-strategically, and guided by an "aesthetic ideal of balance and proportion borrowed from the ancient Greeks" [18] (p. 84). In the first part of the 20th century the rise of professional sport took place in the United States. Here, the emphasis was put almost exclusively on winning, and thus on strategic thinking and conduct, as well as on efficiency and specialization in every aspect of the sport [18] (p. 86). Scientific, present-day sport is "all about surpassing previous limitations on athletic performance by incorporating the best that science and technology have to offer" [19] (p. 30). Scientific sport is after making sport-engaged humans the best competitive athletic versions of themselves, with fulfilled physical potential. The end of such a sport is enhancing humans to the 'Gattaca' level (after the movie Gattaca). Furthermore, Morgan sees the line separating the opposing pro et contra sides. On one side, there are those, like W. M. Brown [22], who claim that sport should accept all the latest scientific advances that can help competitors in striving for competitive ends. On the other side are those, like M. Sandel and R.
Simon, who resist the free usage of pharmacological and genetic aids in sport, stating, on one hand, that such aids "corrupt athletic competition as a human activity that honours the cultivation and display of natural talents" [21] and, on the other, that drug usage is not at all in accordance with the 'purpose of athletic contest' [23]. In Sandel's words: The problem with [performance-enhancing] drugs is that they provide a shortcut, a way to win without striving. But striving is not the point of sports; excellence is. And excellence consists at least partly in the display of natural talents and gifts that are no doing of the athlete who possesses them [...] The real problem with genetically altered athletes is that they corrupt athletic competition as a human activity that honors the cultivation and display of natural talents. [21] (pp. 28-29) On the other hand, I accept Loland's threefold distinction between the (1) non-theory, (2) thin theory, and (3) thick theory of sport regarding the usage and embracing of technology in sport. The 'non-theory' is in fact an 'external' or 'instrumental' theory, because it accepts "any kind of sports technology as long as it serves the purpose of reaching the desired external goals", such as prestige and profit. The 'thin theory' is an 'equality' theory, with a positive attitude towards securing equality of opportunity and objective, optimal conditions for all participants in the testing of human sporting limits. Thus, according to the thin theory, every technological advancement is acceptable if it is available to all competitors involved, under supposedly equal terms and conditions. Finally, the 'thick' or 'regulative' theory aims to differentiate between acceptable and non-acceptable technology in sport [20] (p. 171). Here, Loland speaks through Aristotle's lenses: "sport ought to be an arena for human development and flourishing and one among many elements of the good life" [20] (p. 167). He finds two footholds for regulating sports performances through the 'norm of relevance', which states that "we should not treat people differently in significant matters based on inequalities upon which they cannot influence in any significant way" [20] (p. 169): (1) the interaction between genes, or "genetic predispositions to develop abilities and skills of relevance to sport", and (2) the interaction between genes and the environment, which comprises many elements, such as: the organism of the [athlete's biological] mother, via the first nurture and family upbringing and the general material, social-psychological, social and cultural influences, and to sport specific influences in terms of training and access to relevant material, financial, and human resources. [20] (p. 168) Accepting Morgan's description of the scientific historical phase of sport and Loland's accounts, I will lean most on the thick or regulative theory, for several reasons. Firstly, it follows Aristotle's account of eudaimonia, or human flourishing and well-being [24]; secondly, it supports the view that sports performance should be all about natural talent combined with environmental influence, where athletes' efforts are crucial [20] (p. 7); and thirdly, it provides clear directions towards the (non)welcoming of bio-technology in sport.

Internal Values and Bio-Technologized Sports
The analysis of internal values in/of sport within internalist normative theories presented above has been mostly mono-dimensional, considering (almost only) the value of excellence as central, ultimate, and essential.
Here, I will produce an analysis of the value of excellence and show that, despite being a single value, it appears in many forms and alterations depending on different factors, such as the social context, understandings of sport in general and of 'sport kinds' in particular, and the specifics of sport practice. Furthermore, two notions seem to be crucial and central: excellence and enhancement. It seems that, for many members of sports practice communities, BioTS aims for excellence in its absolute extreme: constant record-breaking and competition-winning. For them, the acceptance and immediate usage of everything that biotechnology can offer to enhance competitive sports performance seems to be the only paradigm in sports. Such views bring concern and the need for regulation. This paper hopes to be a small step in that regard. However, besides excellence there are other internal values that can provide normative strength in sports and can shape ethical norms and principles. As noted before, Kretchmar points to serendipitous, epistemological, aesthetical, existential-individualist, and communitarian values [13], MacRae to prudential, aesthetical, and ethical ones [15], while Martinková also added experiential, competition, self-knowledge, ascetic, meaning, and sport-specific internal/intrinsic values of sport [5].

Value of Excellence
Excellence seems to be the key term as well as the key normative position in the literature thus far. Two questions regarding excellence seem to me the most striking: first, what is excellence, and second, can it serve as normative guidance? I consider excellence a perfectionist value, pointed out by sports internalists as an answer to the question of the nature, essence, and purpose of sport. Their answer is "to develop and exhibit excellence at overcoming the sport-specific obstacles created by the rules" [8] (p. 10). Here, it seems important "not [to] confuse excellence, which is attainable, with perfection, which is best seen as a goal that we lucidly see as unattainable yet desirable" [25] (p. 17). It is also worth noticing that, like internalism itself, the key value of excellence was introduced and pointed out already by Morgan, through MacIntyre's lenses [1]. Russell later added the value of "integrity", which is implicit in applying the value of excellence to resolve the "hard cases" of sport "in a coherent and principled way" [3]. Additionally, contrary to internalism, in my view excellence is not a monistic value but a pluralistic one: there are many (versions of) excellence, not just one. So, it seems to me that one should not look upon excellence as one value but as many. Therewithal, excellence is a value that has to be effectuated, and it is always excellence in/of something specific and actual, not excellence per se. On one hand, depending on and in relation to the social context and history, the same type of excellence can appear in many different forms. On such a view, being excellent in physical endurance in football/soccer in the 1960s is in many respects disparate from being so in the 2000s and today. Also, excellence in the intensity of playing and training can be utterly dissimilar in a (social) context of poverty and lack of life opportunities and in the context of a highly developed Western society. On the other hand, in terms of the different specified criteria related directly to the diversity of sports and their disciplines, the value of excellence can also be interpreted and considered in quite different ways.
For instance, the value of excellence in speed is in many respects dissimilar in the athletic discipline of 100 m running and in basketball. Moreover, within the same sport, different excellences appear in several ways. For instance, in basketball excellence can appear as excellence in playing defense or offense, in dribbling, rebounding, or assisting. More so, there are different excellences for different positions in a basketball team: thus, the excellence of being a playmaker (point guard) is quite unlike the excellence of being a power forward. More precisely, to acquire excellence, the point guard should be excellent at dribbling, passing, anticipating and 'reading' the opposing team's offense, decision-making, and so on. Finally, the same excellence can be practiced or actualized in several ways within the same sport. Thus, to stick with the basketball examples, defense can be played excellently as a zone defense in many variations: as a match-up zone, 2-3 zone, 1-2-2 zone, point-zone, buzz-twilight zone, or circle defense. However, the described examples indicate that sometimes a particular value of excellence (being an excellent point guard) necessitates the previous attainment of several other excellences (in dribbling, passing, etc.), which in turn connotes that there are miscellaneous other excellences of this type. And that further connotes the presence of diverse inner types of excellence that can be (or should be) classified and hierarchized in some order, according to their complexity, comprehensive or universalistic character, type of sport and/or discipline, gender group reference, and so on. Additionally, the value of excellence was quite different in certain periods of sports history; it changes and develops through that history together with sport itself [20]. Furthermore, in different social contexts, the unique ethos within sport practice communities gives rise to quite dissimilar understandings and roles of the value of excellence [26] (pp. 241-242). Also, excellence in team sports like basketball or football is not the same as in individual sports such as figure skating or the high jump. Something similar holds, as MacRae explains, for 'test-based' sports (like athletics and gymnastics) and 'oppositional' sports (like soccer and tennis) [16], which Suits distinguishes as "judged or performance-sports" and "refereed or games-sports" [27]. Thus, "the excellences embodied in achieving the lusory goal of the game" are distinct: 'excellence simpliciter' in the former and 'comparative excellence' in the latter [16] (p. 7). According to MacRae, in oppositional sports: 'the excellences embodied in achieving the lusory goal' of the game serve conflicting goals and these conflicts are foundational to the enterprise. The whole point of such sports is that the competitors try to exercise their skills and abilities to frustrate the success of their opponent or opponents. But in acting in accordance with the demands of their oppositional sport they also thereby violate broad internalism's internal principle [of excellence and integrity]. [16] (p. 8) Another problem with excellence is that it is not always ethical, although it often is [15] (p. 293). The examples of boxing and different martial arts, where excellence requires profound ways of harming others, just emphasize the problem. More so, many training methods and disparate ways of playing (professional) sports to achieve excellence are harmful and morally questionable and/or unacceptable.
Probably the best example in this regard is the recent brain concussion cases in the American NFL, which have been discussed in the literature by scholars such as J. Lopez Frias, M. McNamee, J. Hardes, and J. Fry. All of the above leads to one simple and clear conclusion: excellence cannot provide universal ethical normative guidance in all the (hard) cases in sport, nor can it be regarded as the sole standard for regulating the acceptance and involvement of bio-technology in sport. Secondly, despite its undoubtedly important role in sport, excellence should be considered a pluralistic value that appears in different concrete forms. In that regard, excellence should be precisely and particularly determined with respect to the relevant factors mentioned above, including historical and social context. Finally, other values should be considered and taken into ethical normative account.

Five Criteria Model for Regulating Acceptance of Bio-Technology in Sport

My standpoint is that sport needs regulations, as precise as possible, for accepting and embracing new technologies, based on criteria as strong as possible. Once established, these criteria should be subject to the constant critical oversight of the sport practice community, accompanied by a permanent debate about them: a kind of deliberation that leads to their continual improvement. Thus, in my view, the central question is which biotechnological advancements we accept and why. The answer depends on three distinct understandings: of the notion of excellence (in sport) we accept; of the kind of normative model we implement or are guided by; and of the intrinsic values we find fundamental in that regard. Here I lean on Aristotle, who seems to give us all the basic principles we need, once they are adapted to today's BioTS. Aristotle's normative ethical theory of eudaimonia is optimal for sport for several reasons. Firstly, his theory is a perfectionist one, which means that the goal, hence the moral obligation, is to develop human nature and naturally given capacities. Hurka distinguishes three forms of perfection in Aristotle's theory: physical or bodily perfection, and theoretical and practical perfection, the latter two being perfections of the human soul and rationality [28] (p. 37). In terms of sport, bodily perfection is primary, despite the fact that it is the lowest for Aristotle. However, "Aristotelian perfectionism finds the highest physical good in great athletic feats." [28] (p. 39) Secondly, even though physical perfection by itself secures only health, reproduction, and survival, through sport a combination of the bodily with theoretical and practical rationality can be accomplished. As Hurka states: "Athletes such as Wayne Gretzky solve sophisticated tactical problems during their games, as do scientific researchers and politicians." [28] (p. 123) Thirdly, by mixing physical with rational goods, so that athletes "solve strategic problems at the same time as exercising the body", sport is suitable for "a well-rounded life" [29] (p. 90). The obvious choice here is being involved in sports that develop physical as well as practical and theoretical skills [29] (p. 96). Finally, the five criteria model that I propose here could help determine which bio-technology to include in sport and why: (1) NATURAL TALENT. The first criterion provides respect for the natural human talents that come with our genes and ensures human development in that regard.
To be precise, Aristotle talks about perfecting or realizing our nature, which is distinct and unique among all other creatures, not just our natural talents, and especially not only bodily or physical talents. Still, it is our purpose or function (as with other living beings) to develop our nature, especially because it is the way of reaching eudaimonia. However, from the point of view of sport, the first criterion points toward not interfering with the genetically received biological package and the predisposition of athletes "to develop abilities and skills that [are] of relevance to sport" [20] (p. 6). Moreover, it excludes all technologies that enhance athletes without significant personal physical effort in training and competition. It also excludes every possible 'Gattaca' future of sport and the realization of a genetically designed 'homo athleticus'.

The real problem with genetically altered athletes is that they corrupt athletic competition as a human activity that honors the cultivation and display of natural talents. From this standpoint, enhancement can be seen as the ultimate expression of the ethic of effort and willfulness-a kind of high-tech striving. [21] (p. 25)

(2) PERSONAL EFFORT. The second criterion intends to exclude bio-technologies that undermine the fundamental importance of human (sporting) development through effort and training. According to this criterion, bio-technologies that lack respect for athletic training, effort, and skill, and that provide enhancing 'shortcuts' for athletes, have no place within sporting endeavors. "[Athletes'] own efforts in this respect are of crucial importance. Sport performances, then, become matters of our abilities to cultivate our talent through training, and our efforts in competitions. This interpretation of athletic performance enables the realization of talent prescribed by the Aristotelian principle". [20] (p. 7) Although Aristotle does not use the terms 'effort' or 'training' as such, it is obvious that the person her/himself should realize her/his own nature-given potential through striving and undertaking significant or even extreme efforts. In terms of sport, that means committed training and attempting to win, which brings development and self-realization of our own human nature to the highest possible level.

(3) HUMAN FLOURISHING. The third criterion promotes the realization of human talents as a part of human nature. In his study on perfectionism, Hurka "advocates that humans have an obligation to develop human nature, the central human talents and capacities, to the highest possible level." [28] (p. 89) Most applicable here is Aristotle's comprehensive (or secondary) account of eudaimonia [24] (1178a, 9-20), which includes, besides the excellence of areté or moral virtue and practical wisdom, a series of human concerns and actions, including bodily ones, which lead to the well-rounded life of the well-balanced person. Therefore, the third criterion promotes personal growth and flourishing through self-realization as a human being, and even finding life purpose in and through sport. Here, sporting strivings can be considered a human good per se, while sport can be an integral part of human existence.
That means we should allow bio-technologies that respect athletes as unique human beings and support the "choice of the technology that seems to promote human values and respect the individual athlete, and rejects technologies that do not" [20] (p. 8).

Martha Nussbaum argued, following Aristotle, that our perspective on morality is conditioned profoundly by our understanding of what it is for beings like us to flourish. Our quest for the good life takes place against a background of our natural limitations. We can shift and alter those limits in the process of seeking greater human achievements, just as the athlete surpasses what was thought possible by straining against the existing limitation of the human body. But we need the broad idea of such a limitation to make sense of the surpassing performance. [30] (p. 178)

(4) INTRINSIC VALUES. The fourth criterion favors bio-technologies that contribute to the values of sport. In the internalist view, excellence and (its) integrity are sufficient criteria in this regard. I find this (broad) view too narrow and insufficient, and would like to make a few points here. First, I believe we should consider the internal values in/of sport as intrinsic, because they are not just internal but essential for sport (though they are not the essence of sport). Second, the single value of excellence, even on its pluralistic reading, is not enough for the purpose broad internalists intend it for; more intrinsic/internal values should be taken into account here. Finally, a new and broader model of the intrinsic values of sport is needed, in which values are seen in their social and historical sport-practicing context. In this regard, I welcome previous strivings in that direction by scholars like Morgan [20,21], Martinková [5], Kretchmar [13], Berg [26], and MacRae [15,16]. In this area also lies my own interest and recent work [31].

(5) EQUALITY of opportunity to use new bio-technological advancements is the fifth criterion. It demands that all sport practitioners involved in a joint competition have equal conditions and access to all bio-technologies relevant to the competition. Here I disagree with Breivik's claim that such equality of conditions is needed only among children and youngsters, while "more inequality at higher levels is needed if the highest perfection is to be reached." [29] (p. 103) In my view, we have diverse categorizations in sport where most of what Breivik demands is already in motion. On the other hand, for every sporting competition it is essential that all competitors have an equal starting point, which means equal conditions and chances for competitive success.

It should be noted at the end that the five criteria sketched here need further development and adjustment to the (un)predictable concrete situations of today's BioTS. However, it seems to me that they can provide ethical guidance and help in deciding whether to accept and use new bio-technologies in sport.
8,128.4
2020-10-03T00:00:00.000
[ "Philosophy" ]
Expanding automated gene summaries for Caenorhabditis and parasitic nematode species in WormBase

WormBase and the Alliance of Genome Resources provide several types of gene data, including annotations to ontology terms and controlled vocabularies. These are used to automatically generate text summaries that give users a cogent view of gene function. However, automated summaries are not available for genes that lack curated annotations. To increase the genome coverage of the summaries in WormBase, we developed a new software module that generates additional gene summaries for C. elegans and new gene summaries for nine other nematode species: four Caenorhabditis species (C. brenneri, C. briggsae, C. japonica, C. remanei), P. pacificus, and four parasitic species (B. malayi, O. volvulus, S. ratti, and T. muris). The three strategies used to generate summaries for genes that lack curated functional annotations are shown in steps 1, 2, and 3.

Description
Short textual gene summaries that describe gene function are valued for the ease with which they convey information about a gene and its biological role. The main advantage of gene summaries is that they require no specialized knowledge of database vocabularies and annotations. For several years, WormBase (Sternberg et al., 2024) has provided manually written gene summaries, and it later developed an algorithm in collaboration with the Alliance of Genome Resources (Alliance of Genome Resources Consortium, 2024) to generate automated summaries (Kishore et al., 2020). These automated summaries are based on structured, curated gene annotations to ontologies, including the Gene Ontology (GO; The Gene Ontology Consortium, 2023) and the Disease Ontology (DO; Baron et al., 2024), and gene expression annotations to the WormBase Anatomy Ontology (AO; Lee and Sternberg, 2003). We have recently developed a new WormBase-specific software module, based on the algorithm developed at the Alliance, to provide additional summaries for genes from C. elegans and other nematodes that lack curated functional annotations. This module uses large-scale data from high-throughput experiments to generate summaries related to gene expression and to gene-gene and gene-chemical interactions. Further, the module uses orthology to transfer gene function statements from related species to the gene of interest in order to build a summary. These strategies resulted in several thousand additional summaries for C. elegans genes and new gene summaries for nine other WormBase nematode species (Howe et al., 2012; Howe et al., 2016). See Table 1 for the full list of species, the numbers of statements per data type, and the total number of generated gene summaries.

The software module implements the following strategies (depicted in Figure 1) to generate a gene summary:

1. Data transfer from orthologous genes. (i) For each C. elegans gene, the human orthologs supported by the largest number of prediction methods reported by WormBase were selected, and the associated molecular activities and disease implications were included in the C. elegans gene summary. These statements are transferred to the gene summary only when GO data are not present. Example C. elegans act-3 gene summary:
Expressed in gonad and head. Human ortholog(s) of this gene implicated in several diseases, including Baraitser-Winter syndrome 1; Baraitser-Winter syndrome 2; and autosomal dominant nonsyndromic deafness 20. Human ACTB contributes to nucleosomal DNA binding activity. Human ACTB enables several functions, including Tat protein binding activity; enzyme binding activity; and kinesin binding activity. A structural constituent of postsynaptic actin cytoskeleton. Is predicted to encode a protein with the following domains: Phosphorylation site; Actin; Actin family; and ATPase, nucleotide binding domain. Is an ortholog of human ACTB (actin beta).

(ii) For nematodes other than C. elegans, the best orthologs were selected from related nematode species based on the number of prediction methods and the number of GO annotations in WormBase, and the associated biological processes were included in the summaries. Example C. briggsae fem-2 gene summary: Predicted to enable protein serine/threonine phosphatase activity. Is an ortholog of C. elegans fem-2. In C. elegans, fem-2 is involved in male sex determination; masculinization of hermaphroditic germ-line; and nematode male tail tip morphogenesis.

2. Large-scale data. Large-scale data such as microarray, tiling array, and RNA-seq studies that have been collated and summarized in WormBase (Grove et al., 2018) were used to generate statements related to gene expression and its regulation by chemicals and other genes. These statements are included in the gene summary only when GO and expression data are not present. Example gene summary for C. elegans abt-3: Enriched in male based on RNA-seq studies. Is affected by several genes including eat-2; sir-2.1; and npr-1 based on RNA-seq; tiling array; and microarray studies. Is affected by seven chemicals including Tunicamycin; manganese chloride; and multi-walled carbon nanotube based on microarray and RNA-seq studies.

Figure 1. Workflow diagram representing the gene summary generation process.

3. Protein domain data. Protein domain data from InterPro (Paysan-Lafosse et al., 2022) in WormBase were used to build additional statements for gene summaries. These statements are included in the gene summary only when GO and expression data are not present. Example gene summary for C. japonica, Cjp-gid-1:
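The priority logic shared by these three strategies (curated GO and expression statements first, ortholog transfer when GO is absent, and large-scale and domain statements only when both GO and expression data are absent) is simple to express in code. The sketch below is a hypothetical reimplementation for illustration only; the function name and dictionary layout are invented here and are not WormBase's actual data model or module API.

```python
def build_summary(gene):
    """Assemble a gene summary from available annotation buckets,
    following the fallback rules described above. `gene` is a dict of
    optional statement lists; this layout is hypothetical."""
    parts = []
    has_go = bool(gene.get("go_statements"))
    has_expr = bool(gene.get("expression_statements"))

    # Curated annotations are always used when present.
    parts += gene.get("go_statements", [])
    parts += gene.get("expression_statements", [])

    # Strategy 1: ortholog-derived statements, only when GO data are absent.
    if not has_go:
        parts += gene.get("ortholog_statements", [])

    # Strategies 2 and 3: large-scale expression/interaction data and
    # InterPro domains, only when GO and expression data are both absent.
    if not (has_go or has_expr):
        parts += gene.get("large_scale_statements", [])
        parts += gene.get("domain_statements", [])

    return " ".join(parts) or None

# Example, loosely modeled on the C. briggsae fem-2 summary above:
gene = {"ortholog_statements": [
    "Predicted to enable protein serine/threonine phosphatase activity.",
    "Is an ortholog of C. elegans fem-2."]}
print(build_summary(gene))
```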
1,120
2024-07-16T00:00:00.000
[ "Biology", "Computer Science" ]
A Degradable Inverse Vulcanized Copolymer as a Coating Material for Urea Produced under Optimized Conditions

Global enhancement of crop yield is achieved using chemical fertilizers; however, the agro-economy suffers from poor nutrient use efficiency (NUE), which also causes environmental pollution. Encapsulating urea granules with a hydrophobic material can be one solution. Inverse vulcanized copolymers obtained from vegetable oils are a new class of green, sulfur-enriched polymers with good biodegradation and better sulfur oxidation potential, but they contain unreacted sulfur, which leads to void generation. In this study, the inverse vulcanization reaction conditions are optimized through response surface methodology (RSM) to minimize the amount of unreacted sulfur. The copolymer obtained was characterized using Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC). FTIR confirmed the formation of the copolymer, TGA demonstrated that the copolymer is thermally stable up to 200 °C, and DSC revealed a sulfur conversion of 82.2% (predicted conversion of 82.37%), which shows the goodness of the model developed to predict the sulfur conversion. To further maximize the sulfur conversion, 5 wt% diisopropenyl benzene (DIB) was added as a crosslinker during synthesis to produce a terpolymer. Urea granules were then coated with the terpolymer, and the nutrient release longevity of the coated urea was tested in distilled water, which revealed that only 65% of its total nutrient is released after 40 days of incubation. Soil burial of the terpolymer demonstrated its biodegradability, with a 26% weight loss over 52 days of incubation. Thus, the inverse vulcanized terpolymer as a coating material for urea demonstrated far better nutrient release longevity than other biopolymers, with improved biodegradation; moreover, these copolymers also have the potential to improve sulfur oxidation.

Introduction
The global population, 7.9 billion today [1,2], is projected to grow to 10 billion by 2050. Hence, for the survival of humanity and for food security, crop production must be enhanced while environmental pollution is reduced and soil health preserved, which will be a challenge. To boost crop yields, the agricultural sector consumes huge amounts of nitrogen fertilizers, which add up to adverse consequences [3-5]. Urea is the most essential nitrogen fertilizer; however, it is vulnerable to losses through surface run-off, leaching, and ammonia volatilization, thus disturbing neighboring ecosystems [6,7]. It has been estimated that almost 70% of the total urea applied to crops dissipates into the environment, causing low nutrient use efficiency (NUE) and high production costs [8-10]. To address this mounting problem and achieve agronomic and environmental benefits, agricultural researchers and industries have been working to develop novel slow-release fertilizers (SRFs). Slow-release fertilizers are deliberately engineered fertilizers that delay the release of nutrients in synchrony with the nutrient requirements of the crop, hence increasing crop yield and NUE [11]. To date, various materials have been utilized to develop SRFs, including synthetic and natural polymers and inorganic materials.
Although synthetic polymers have demonstrated promising results in terms of nutrient release longevity, the harmful solvents involved in coating urea with synthetic polymers, together with their non-biodegradability, lead to environmental and soil pollution [12-15]. Natural polymers, on the other hand, suffer from their hydrophilic nature, which leads to an abrupt release of the nutrients at unpredictable times [5]. The brittle nature of inorganic materials such as sulfur promotes the generation of micropores on the coating surface, causing failure to halt the nutrient release [5,16-18]. These formidable drawbacks create a need for other coating materials that are green, sustainable, and have better physicochemical properties. Sulfur polymers are a new class of green and sustainable polymers produced via a recently developed method called inverse vulcanization. It is a green polymerization process since it requires no initiators or solvents and is highly atom-economical [19,20]. Further, it utilizes cheap, readily available elemental sulfur as the main comonomer, which piles up in the open as a byproduct of gas and petroleum refineries, causing many environmental problems [21,22]. Inverse vulcanization was first reported in 2013 by Pyun et al. as a polymerization technique that uses the same principles as rubber vulcanization, except that sulfur serves as the main comonomer [19,23]. Three different classes of comonomers, i.e., petro-based monomers, bio-based monomers, and vegetable oils, are utilized in the production of sulfur-based polymers. Vegetable oils consist of an unsaturated portion and a saturated portion, of which the unsaturated portion can act as a comonomer to produce sulfur-based polymers; nevertheless, the complex structure of vegetable oils and their impurity (the saturated portion) make it more difficult to produce controlled sulfur-based polymers with vegetable oils as monomers [21,23,24]. Oils of different plants, including canola [25-28], castor [29], rubber seed [30,31], palm [32], linseed [33], corn [34], olive [33], sunflower [33], rice bran [29], soybean [35], and cottonseed [36], have been employed as monomers in the production of sulfur-enriched polymers. Because of the unreacted sulfur that remains, copolymerization of vegetable oils with sulfur results in composite structures, whose morphological properties depend strongly on the composition of the vegetable oil used [32,34]. These polymers have been investigated in several applications, such as Li-S battery cathodes, mercury removal, hydrocarbon removal, and fertilizers [21]. Despite the fact that vegetable-oil-based copolymers have demonstrated promising results in many applications, they still face some challenges. For example, the presence of unreacted sulfur adversely affects their performance in Li-S batteries, as it contributes to the capacity fading of the battery [24]. Sulfur is a secondary yet indispensable nutrient required for plant growth; Stella F. Valle et al. reported that inverse vulcanized copolymers have the potential to improve sulfur oxidation, hence providing SO₄²⁻ in a more convenient way than elemental sulfur [35]. The high sulfur content, better sulfur oxidation, and biodegradable nature of these copolymers planted the seed for this research.
However, the presence of unreacted sulfur particles can promote the generation of micropores on the surface of the copolymers, which could cause a sudden release of the nutrient. In our laboratory investigation, it was observed that the amount of unreacted sulfur can be controlled by optimizing the reaction conditions. Herein, the synthesis of the inverse vulcanized copolymer under optimized conditions is reported. The reaction conditions are optimized using RSM through a central composite design (CCD). The produced copolymer is then characterized using Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC). A terpolymer is produced to further reduce the amount of unreacted sulfur and is utilized to coat urea to produce a slow-release fertilizer (SRF). The morphology of the coated urea is studied using scanning electron microscopy (SEM), and its nutrient-release longevity is investigated in distilled water. A soil burial test is conducted to assess the biodegradability of the copolymer. A schematic figure representing the research work is given in Figure 1.

Materials
Elemental sulfur (reagent grade) and jatropha oil (JO) were purchased from PC Laboratory Reagents, Malaysia, and Kinetics Chemicals Sdn Bhd, Malaysia, respectively. Diisopropenyl benzene, diacetyl monoxime, thiosemicarbazide (TSC), phosphoric acid, sulfuric acid, and tetrahydrofuran were purchased from Sigma-Aldrich. Urea (AR grade) was procured from PETRONAS Fertilizer Kedah Sdn Bhd, Malaysia. All materials were used as received without further purification.

Optimization of Inverse Vulcanization Reaction Conditions
Design of Experiment
The design of the experiment was carried out using Design-Expert software (Version 12.0.12.0, Stat-Ease, MN 55413, USA) for the optimization of the synthesis of the inverse vulcanized copolymers using RSM with a full factorial CCD. This type of design involves a two-level factorial design (+1, −1) overlaid with center points (0) and star points (+α, −α) at a distance of α = 1.682 from the design center along the axis of each design variable. Initial sulfur composition, reaction temperature, and reaction time are the three independent variables selected for optimizing the reaction conditions to maximize the sulfur conversion in the final structure of the copolymer. Preliminary experiments were carried out to set the ranges of these independent variables by monitoring whether the copolymer formed a single phase and whether hydrogen sulfide (H2S) gas was released, both of which directly affect the structure of the copolymer. The ranges of these factors, along with their levels, are presented in Table 1. As an example, the reaction between sulfur and jatropha oil (JO) below 170 °C results in a two-phase product, indicating an incomplete reaction, while reaction above 185 °C promotes the release of H2S gas, which generates a porous copolymer. The release of H2S gas was detected by the blackening of filter paper wetted with lead acetate solution. The response of the experiments, namely the conversion of elemental sulfur to polymeric sulfur chains, was calculated from the DSC thermograms of the resulting copolymers. The thermogram of elemental sulfur shows endotherms between 102 and 120 °C that represent its phase transitions; the linearly integrated areas of these endotherms depend strongly on the weight of sulfur. The DSC thermogram of an inverse vulcanized copolymer also shows an endotherm in this range, which represents the presence of unreacted sulfur in the copolymer. Since the intensity of these endotherms increases with sulfur weight, we ran DSC analyses of sulfur at different weights and built a calibration graph to obtain its equation. To calculate the conversion of the sulfur, linear integration was carried out on the copolymer endotherms appearing between 102 and 120 °C in the DSC thermograms, and the result was compared with the data obtained through the calibration equation. The linear integration was carried out with the help of TA Instruments software.

Regression Model
The data obtained through the CCD were analyzed using response surface regression and were found to best fit the quadratic model given in Equation (1):

$$Y = b_0 + \sum_{i} b_i X_i + \sum_{i} b_{ii} X_i^2 + \sum_{i<j} b_{ij} X_i X_j \quad (1)$$

where Y is the conversion (%) of the sulfur; b₀, bᵢ, bᵢᵢ, and bᵢⱼ are the constant, linear, squared, and interaction effect coefficients, respectively; and Xᵢ and Xⱼ are the coded values of the variables i and j, respectively. Statistical procedures were followed to analyze the goodness of fit and the significance of the parameters of the regression model.

Synthesis of Copolymer
A 25-mL glass vial was filled with the designed weight of elemental sulfur and placed in a thermostated oil bath preheated to the required reaction temperature under vigorous stirring to initiate the formation of thiyl radicals.
First, the elemental sulfur melts upon heating; when the temperature exceeds 159 °C, the eight-membered ring (S8) structure of sulfur starts to open to form thiyl radicals, accompanied by a color change from a yellow to an orange liquid. At this point, the designed amount of jatropha oil is added dropwise to avoid a sudden decrease in temperature [30-32]. After the jatropha oil is added to the glass vial, a plaque-like mixture forms, which is allowed to react under vigorous stirring for the designed time. The designed time, temperature, and sulfur/jatropha oil amounts refer to the values required to run each experiment as designed for the optimization of the reaction conditions. The design of experiments is presented in Table 2, which combines 2³ factorial points, 10 center points, and 2 axial points, summing to 20 runs. After the reaction mixture has reacted for the desired time, the glass vial is removed from the thermostated oil bath and placed under a fume hood to allow the product to cool to room temperature. It is highly recommended to carry out the reaction in a fume hood, because toxic gases such as H2S may be released.

Fourier Transform Infrared Spectroscopy (FTIR)
FTIR analysis of the copolymer produced under optimized conditions was carried out to investigate the chemical composition and confirm the successful reaction of the thiyl radicals with the unsaturated part of the jatropha oil. Spectra were recorded over 500-4000 cm⁻¹ at 4 cm⁻¹ resolution, averaging 8 scans, using a PerkinElmer Frontier spectrometer (PerkinElmer, Waltham, MA, USA) in attenuated total reflectance (ATR) mode.

Thermogravimetric Analysis (TGA)
The thermal stability of the produced copolymer was evaluated over a 25-800 °C temperature range at a 10 °C/min heating rate using a PerkinElmer STA 6000 simultaneous thermal analyzer (PerkinElmer, Waltham, MA, USA) under a nitrogen atmosphere.

Differential Scanning Calorimetry (DSC)
To evaluate the thermal properties and estimate the unreacted sulfur in the produced copolymer, a TA Instruments Q2000 thermal analyzer (TA Instruments, 159 Lukens Dr, New Castle, DE, USA) was used to obtain DSC thermograms over a −80 to 200 °C temperature range at a 20 °C/min heating rate under a nitrogen atmosphere.

Synthesis of Terpolymer
To further reduce the amount of unreacted sulfur in the final copolymer, 5 wt% diisopropenyl benzene was used as a crosslinker. The terpolymer was synthesized using the same procedure as explained in Section 2.2.2, under the optimized conditions.

Coating of the Urea
To coat the urea granules, the terpolymer was dissolved in tetrahydrofuran (THF) to produce a coating solution, followed by coating of the urea using the dip-coating method. The coating solution was prepared by dissolving 5 g of terpolymer in 6 mL of THF and leaving it overnight in an incubator shaker to obtain a homogeneous mixture. After mixing, 10 g of urea with a size range of 2 to 2.5 mm was added to the polymer solution and gently stirred with a glass rod to obtain a uniform coating on the urea, followed by drying in an oven at 60 °C for 24 h.

Morphology of the Coated Urea
The morphology of the coated urea was studied by scanning electron microscopy (SEM) using a Zeiss EVO LS 15 microscope equipped with an Oxford Instruments INCAx-act EDX spectrometer (Carl Zeiss, Jena, Germany).
To obtain the cross-section of the coated urea and estimate the thickness of the coating, coated urea was cut in half using a sharp knife and sputter-coated with gold (Emitech K550X) for SEM analysis.

Nitrogen Release in Distilled Water
The total nitrogen content of the coated urea was estimated using the Kjeldahl method [37] before the leaching test. Then, 2.0 g of coated urea was placed in an Erlenmeyer flask filled with 200 mL of distilled water and sealed with cling wrap to avoid water loss through evaporation. To measure the amount of nitrogen leached into the water, every 24 h a 2.5 mL aliquot of the gently stirred solution was taken out, and the water was replaced with 200 mL of fresh distilled water. The urea concentration in the aliquot was determined using the diacetyl monoxime (DAM) colorimetric method, which is based on the formation of a red-colored solution. To develop the color, the aliquot was combined with 7.5 mL of the color reagent in a 60 mL glass vial and placed in a water bath at 85 °C for 30 min; the amount of urea in the sample determines the intensity of the color. The glass vials containing the solution were then placed in ordinary tap water at a temperature of 20 °C for 20 min to cool down. The total release was determined in triplicate using the standard curve technique. To make the DAM solution, 2.5 g of DAM was dissolved in 100 mL of distilled water; to make the thiosemicarbazide (TSC) solution, 0.25 g of TSC was dissolved in 100 mL of distilled water; and to make the acid reagent, 250 mL of phosphoric acid was combined with 240 mL of distilled water and 10 mL of sulfuric acid. Finally, the color reagent solution was made by carefully mixing 25 mL of DAM solution, 15 mL of TSC solution, and 460 mL of acid reagent.

Soil Burial Test
A soil (sand 20.5%, silt 39.3%, and clay 40.2%) burial test was conducted to investigate the biodegradability of the copolymer. For this purpose, 2 g of the copolymer was enclosed in a woven mesh bag (similar to a teabag) and buried in soil in a polymer container at a depth of 10 cm; the soil was kept moist throughout the experiment. At regular intervals, the buried bag was taken out, washed with distilled water to remove attached soil, and dried in an oven to constant weight, and the weight loss of the copolymer was recorded using the method in [38].

Results and Discussion
The experimental design and the measured responses are presented in Table 2 (Section 2.2.2). Analysis of variance (ANOVA) is crucial in determining the adequacy of models; thus, ANOVA was used to analyze the fitness of all regression models, which revealed the highest validity for the quadratic model. No transformation of the data is required, as the ratio of the maximum to the minimum response is 3.58 (81.06/22.62), which is less than 10. A Fisher F test conducted on the quadratic model demonstrated a low sequential p-value (<0.0001) and high coefficients of determination (R² = 0.9838, adjusted R² = 0.9692, and predicted R² = 0.9158) [39], indicating the significance of the model. The full ANOVA of the quadratic model is presented in Table 3. The signal-to-noise ratio (which should be greater than 4) is found to be 28.5325, revealing the adequate precision of the model [40] and indicating that it can be used to navigate the design space. There is only a 0.01% chance that a model F-value of 67.44 could occur due to noise, further indicating the significance of the model.
The significance of individual model terms is indicated by their p-values, which should be less than 0.05; Table 3 shows that some terms, such as AC, BC, and B², do not meet this threshold, which means that model reduction is required. The significance of terms such as A, B, and C shows that the selection of the parameters for optimization was appropriate, as the ANOVA revealed their significant influence on the sulfur conversion. The model was reduced by discarding the insignificant terms, and the ANOVA for the reduced model is shown in Table 4. The reduced model has a high F-value of 119.01 with a low p-value of <0.0001 and high coefficients of determination (R² = 0.9821, adjusted R² = 0.9739, and predicted R² = 0.9518), indicating that the significance of the model increased by ignoring the insignificant terms. After removing the insignificant terms from the quadratic model, the final equation in terms of actual factors for predicting the response is shown as Equation (2), where Y is the conversion (%) of sulfur to polymeric sulfur, A is the initial sulfur content (wt%), B is the reaction temperature (°C), and C is the reaction time (min). Figure 2a shows the normal dispersion of the error, indicating the adequacy of the model for predicting the response within the experimental range. Figure 2b also demonstrates the good fit of the model, as the points of the actual-versus-predicted response plot cluster around the straight line [39-41].

Optimization of the Reaction Conditions
Figure 3 depicts the effect of the reaction temperature and initial sulfur loading, at different reaction times, on the sulfur conversion. As can be seen, increasing the reaction temperature increases the sulfur conversion, whereas higher sulfur loading increases the amount of unreacted sulfur. Reaction time also has a positive impact on the sulfur conversion: at the lower reaction times the maximum achievable conversion is around 71%, and increasing the time raises the conversion to around 80%. To optimize these conditions, constraints were set in the Design-Expert software to keep all reaction parameters within their limits while maximizing the sulfur conversion. The software suggested 100 solutions, and the one with the highest sulfur conversion was chosen: 82.37% conversion of sulfur can be achieved if 51.94 wt% S is allowed to react with jatropha oil for 74.21 min at 169.9 °C.
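The fitting and optimization machinery used here is standard enough to sketch. The code below fits a full quadratic response surface (Equation (1)) to CCD-style runs by ordinary least squares and then grid-searches the coded design space for the conversion-maximizing settings, mirroring what Design-Expert does internally. The run matrix and responses are placeholders, not the paper's Table 2, so the fitted coefficients and optimum are illustrative only.

```python
import numpy as np

def quad_terms(A, B, C):
    """Full quadratic model terms for three coded factors A, B, C:
    intercept, linear, squared, and two-factor interaction terms."""
    return np.stack([np.ones_like(A), A, B, C,
                     A*A, B*B, C*C, A*B, A*C, B*C], axis=-1)

# Placeholder CCD runs in coded units (2^3 factorial corners, center
# points, two axial points) and responses; NOT the paper's data.
X = np.array([[-1,-1,-1],[ 1,-1,-1],[-1, 1,-1],[ 1, 1,-1],
              [-1,-1, 1],[ 1,-1, 1],[-1, 1, 1],[ 1, 1, 1],
              [ 0, 0, 0],[ 0, 0, 0],[-1.682, 0, 0],[ 1.682, 0, 0]], float)
Y = np.array([44.0, 30.5, 62.0, 48.5, 52.0, 38.5, 70.0, 56.5,
              55.0, 55.5, 68.0, 40.0])

D = quad_terms(*X.T)
b, *_ = np.linalg.lstsq(D, Y, rcond=None)      # least-squares coefficients
r2 = 1 - ((Y - D @ b)**2).sum() / ((Y - Y.mean())**2).sum()

# Grid search the coded cube for the maximum predicted conversion.
g = np.linspace(-1.682, 1.682, 35)
A, B, C = np.meshgrid(g, g, g, indexing="ij")
pred = quad_terms(A, B, C) @ b
i = np.unravel_index(pred.argmax(), pred.shape)
print(f"R^2 = {r2:.3f}; max predicted Y = {pred[i]:.1f} "
      f"at coded (A,B,C) = ({A[i]:.2f}, {B[i]:.2f}, {C[i]:.2f})")
```

In practice the coded optimum would be mapped back to actual units (wt% S, °C, min) using the factor ranges in Table 1, which is how a point such as 51.94 wt% S, 169.9 °C, 74.21 min is obtained.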
Fourier Transform Infrared Spectroscopy (FTIR)
The copolymer produced under optimized conditions was analyzed using FTIR to confirm its formation. The FTIR-ATR spectra of the copolymer and jatropha oil are depicted in Figure 4. The spectrum of jatropha oil contains characteristic cis-alkene peaks at 1660 and 3009 cm⁻¹, representing the stretching of C=C and C=C-H, which arise from the unsaturated part of the oil [24,25,34]. These peaks disappear in the spectrum of the copolymer, and a new peak appears at 804 cm⁻¹, representing the vibration of C-H in the vicinity of a C-S bond, thus confirming that the C=C bonds were consumed to form C-S bonds and that a copolymer was successfully formed [34,35].

The thermal stability of the copolymer was investigated using thermogravimetric analysis. TGA thermograms of the copolymer, jatropha oil, and elemental sulfur are presented in Figure 5. Elemental sulfur starts to decompose at 200 °C and fully decomposes at 320 °C. Jatropha oil starts to degrade at 289 °C in a two-step manner: the significant loss in the first step is due to the degradation of polyunsaturated fatty acids, followed by the decomposition of monounsaturated acids and the remaining polyunsaturated acids, and it completely decomposes at 600 °C [42,43]. Meanwhile, the copolymer degrades in three steps; in the first step, loosely bonded and unreacted sulfur starts to degrade, with an onset at 205 °C, followed by the degradation of the oil part of the copolymer [24,29,34,44]. The copolymer yielded 18% char at 800 °C, which reflects its thermal stability.

To evaluate the thermal properties and estimate the sulfur conversion, DSC analysis was carried out. DSC thermograms of the copolymer and elemental sulfur are shown in Figure 6. Two endotherms appear in the thermogram of sulfur at 103 and 119 °C, representing its crystalline nature [24,25,35]. However, only one endotherm is observed in the DSC thermogram of the copolymer formed, which represents the presence of unreacted sulfur in the copolymer. Integration of the copolymer endotherm gives 17.8% unreacted sulfur. The model predicted the conversion to be 82.37%, while the actual conversion is 82.2%, which shows the goodness of the model.
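The conversion estimate above rests on a linear calibration between the integrated 102-120 °C endotherm area and the amount of crystalline sulfur, as described in the Design of Experiment section. A minimal sketch of that arithmetic follows; the calibration points and the example area are invented for illustration and are not the paper's measured values.

```python
import numpy as np

# Invented calibration: integrated 102-120 C endotherm area (J/g of
# sample) for neat sulfur at known mass fractions; the real curve
# comes from DSC runs of sulfur at several weights.
sulfur_frac = np.array([0.10, 0.25, 0.50, 0.75, 1.00])
area_jg     = np.array([4.5, 11.0, 22.5, 33.5, 45.0])

slope, intercept = np.polyfit(area_jg, sulfur_frac, 1)  # linear calibration

def sulfur_conversion(copolymer_area_jg, s_feed=0.5194):
    """Conversion = 1 - (unreacted sulfur)/(sulfur fed); s_feed is the
    initial sulfur loading as a mass fraction (51.94 wt% at the optimum)."""
    unreacted = slope * copolymer_area_jg + intercept   # wt fraction of sample
    return 1.0 - unreacted / s_feed

# With these toy numbers, an endotherm area of ~4.2 J/g corresponds to
# roughly 17.8% of the fed sulfur remaining unreacted, i.e. ~82% conversion.
print(f"{sulfur_conversion(4.16):.1%}")
```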
Coating of Urea with Terpolymer
As revealed by the DSC thermogram, unreacted sulfur is still present in the copolymer. To minimize it, a terpolymer was synthesized under the same optimized conditions except that 5 wt% DIB was used as a crosslinker, following our earlier investigation reporting that the addition of a crosslinker reduces the unreacted sulfur [30,31]. After synthesis of the terpolymer, urea granules were coated with a solution of the terpolymer in THF using the dip-coating method and then placed in an oven for 24 h to dry. The morphology of the obtained coated urea was then investigated by taking SEM images. Cross-sectional SEM images of the coated urea are shown in Figure 7, which clearly differentiate the urea (marked by a yellow circle) from the coating (marked by a red circle). The SEM images reveal a nonuniform coating, caused by the sticky nature of the copolymer, which promotes the adherence of coated urea granules to each other. The SEM images also reveal that no unreacted sulfur is present, as no isolated particles appear on the surface of the coating. The thickness of the coating is found to be 206.31 µm.

Nitrogen Release in Distilled Water
Nitrogen release from the coated urea was tested in distilled water using the DAM colorimetric method. The nitrogen release profiles of urea and the coated urea are shown in Figure 8.
The initial nutrient release rate from the coated urea reflects the integrity of the coating: the stronger and more complete the coating, the slower the nitrogen release. Pristine urea releases almost 99.9% of its total nutrients within 24 h of incubation, whereas the coated urea delayed the release and gave up only 65% of its total nutrients after 40 days of incubation, far better than urea coated with biopolymers, which releases its nutrients in less than 5 days of incubation [38]. These promising results demonstrate the potential of these copolymers as coating materials for urea, with performance comparable to synthetic, petroleum-based polymers. The initial release rate is very slow until the 10th day of incubation; this period is regarded as the lag period. The coated urea follows the European Standard (EN 13266, 2001), as it does not release 15% of its nutrients within 24 h of incubation, which reflects the integrity of the coating film. The release of urea is characterized by a tendency to auto-acceleration (Figure 8), which is possibly associated with an increase over time in the cross-sectional area of the pores of the polymer shell. If we assume that the release of urea (the nitrogen source) obeys first-order kinetics, which is characteristic of highly soluble substances, and that the pores are initially clogged with urea, which creates a temporary diffusion barrier, then Equation (3) can be written as

$$\frac{dN}{dt} = k_0\, s\, (1 - N) \quad (3)$$

where N is the conversion of nitrogen (urea) release, s is the total cross-sectional area of the pores of the polymer shell, k₀ is the true rate constant of urea release, and t is time. As the pore-clogging urea dissolves, a linear increase in the total pore cross-sectional area of the polymer shell can be expected, in accordance with Equation (4):

$$s = \alpha t \quad (4)$$

where α is the proportionality coefficient. Substituting (4) into (3), separating the variables, and integrating from 0 to N and from 0 to t, we obtain Equation (5):

$$-\ln(1 - N) = k t^2 \quad (5)$$

where k = αk₀/2 is the effective rate constant of urea release. Equation (5) is linear in the coordinates −ln(1 − N) vs. t², which allows the value of k to be determined from the slope of the straight line (Figure 8a). The effective rate constant of urea release is 6.2 × 10⁻⁴ day⁻², which allows the theoretical kinetic release curve to be calculated according to Equation (6):

$$N = 1 - e^{-k t^2} \quad (6)$$

and compared with the experimental data (Figure 8b). As can be seen (Figure 8b), Equation (6) is in satisfactory agreement with the experimental data.
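Under the stated assumptions, the whole kinetic analysis reduces to one linear regression. The sketch below fits k by regressing −ln(1 − N) on t² through the origin and regenerates the Equation (6) curve; the release data points are illustrative placeholders shaped like the reported profile, not the paper's measurements.

```python
import numpy as np

# Hypothetical release data (day, fraction of nitrogen released);
# placeholder values shaped like the reported profile, not raw data.
t = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
N = np.array([0.01, 0.05, 0.13, 0.22, 0.32, 0.43, 0.54, 0.65])

# Equation (5): -ln(1 - N) = k * t^2, so regressing -ln(1 - N) on t^2
# with no intercept gives the effective rate constant k.
x = t**2
y = -np.log(1.0 - N)
k = float(x @ y / (x @ x))          # least-squares slope through the origin

# Equation (6): theoretical release curve for comparison with experiment.
N_theory = 1.0 - np.exp(-k * t**2)

print(f"k = {k:.2e} day^-2")        # paper reports 6.2e-4 day^-2
print(np.round(N_theory, 3))
```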
Soil Burial Test
Figure 9 shows the weight loss of the copolymer in soil; the weight loss increases with the time of soil burial, reaching 26% on the 52nd day of incubation, which demonstrates that the copolymer degrades slowly in soil and will take longer to fully decompose. The degradation kinetics formally correspond to a zero-order equation with a rate constant of 0.465% day⁻¹ (Figure 9). The degradation starts with oxidation of the sulfur in loosely bonded S-S linkages and of the unreacted sulfur present in the copolymer, as the fungus A. niger present in the soil helps the sulfur to oxidize. This test confirms the biodegradable nature of the copolymer, which is an additional benefit of using this material as a coating for urea, since it mitigates the problem of pollution caused by coating shells left in the soil after nutrient release. The degradation of these copolymers will also aid plant growth, as the oxidation of sulfur produces sulfate, an accessible form of the secondary nutrient required by plants; the weight loss grows steadily as the soil burial incubation period is extended [35].
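Taken at face value, the zero-order fit also gives a back-of-the-envelope lifetime for the coating shell. The snippet below is a naive extrapolation that assumes the 0.465% day⁻¹ rate holds all the way to completion, which real soil degradation need not do.

```python
# Zero-order degradation: weight_loss(t) = k0 * t
k0 = 0.465                       # %/day, from the soil burial fit (Figure 9)

loss_52d = k0 * 52               # ~24.2%, close to the measured 26%
t_full = 100.0 / k0              # naive extrapolation to 100% weight loss
print(f"{loss_52d:.1f}% at day 52; full degradation in ~{t_full:.0f} days")
```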
Conclusions
RSM was utilized to optimize the inverse vulcanization reaction conditions so as to minimize the amount of unreacted sulfur in the final copolymer. A quadratic model was developed to predict the sulfur conversion, and it was found that 82.37% conversion of sulfur can be achieved if 51.94 wt% S is allowed to react with jatropha oil for 74.21 min at 169.9 °C. DSC revealed the actual conversion to be 82.2%, which shows the goodness of the developed model. To further maximize the sulfur conversion, 5 wt% DIB was used as a crosslinker, and the obtained terpolymer was utilized as a coating material to develop a novel slow-release coated urea that delays nutrient release. The nutrient release test revealed that only 65% of the total nutrient was released after 40 days of incubation, compared with pristine urea, which released 99% in just one day. The biodegradability of the terpolymer was revealed by a soil incubation test, which showed a 26% weight loss after 52 days of incubating the terpolymer in soil.
9,109.8
2021-11-01T00:00:00.000
[ "Materials Science" ]
Secure and Efficient Access Control Scheme for Wireless Sensor Networks in the Cross-Domain Context of the IoT

Nowadays, wireless sensor networks (WSNs) are increasingly being used in the Internet of Things (IoT) for data collection, and the design of an access control scheme that allows an Internet user, as part of the IoT, to access the WSN has become a hot topic. A lot of access control schemes have been proposed for WSNs in the context of the IoT. Nevertheless, almost all of these schemes assume that communication nodes in different network domains share common system parameters, which is not suitable for cross-domain IoT environments in practical situations. To overcome this shortcoming, we propose a more secure and efficient access control scheme for wireless sensor networks in the cross-domain context of the Internet of Things, which allows an Internet user in a certificateless cryptography (CLC) environment to communicate with a sensor node in an identity-based cryptography (IBC) environment under different system parameters. Moreover, our proposed scheme achieves known session-specific temporary information security (KSSTIS), which most access control schemes cannot satisfy. A performance analysis is given to show that our scheme is well suited for wireless sensor networks in the cross-domain context of the IoT.

Introduction
A wireless sensor network (WSN) is a distributed network which contains a large number of sensor nodes. We can collect target data through the sensor nodes to obtain valuable information. Due to the flexibility and convenience of data capture, WSNs have been integrated into the IoT. The integration of WSN applications and low-power sensing nodes with the Internet may be accomplished with various approaches and strategies [1]; popular integration solutions include cloud-based integration approaches [2,3], front-end proxy integration approaches [4], architecture frameworks [5], and integration via standard Internet communication protocols [6,7]. In the cloud-based integration solution, some important security requirements, including privacy, trust, and anonymity, cannot be addressed; this approach also does not support secure integration with data sources from other sensing devices or heterogeneous WSN domains. In the front-end proxy integration solution, the wireless sensor nodes communicate with Internet hosts through a proxy server; thus this approach does not support direct communication between WSN nodes and Internet hosts, and its shortcoming is that the proxy server is vulnerable to cyberattacks and may become a bottleneck. In the integration solution via standard Internet communication protocols, most approaches employ specialized middleware layers instead of supporting generic Internet communication mechanisms that can implement heterogeneous applications. The solutions developed in the context of architecture frameworks currently do not support Internet communications in WSN environments. For the integration via standard Internet communication protocols, a large number of access control schemes using public key infrastructure (PKI) have been proposed. PKI, however, has a serious problem of certificate management. Subsequently, a series of access control schemes using identity-based cryptography (IBC) or certificateless cryptography (CLC) were designed, and the idea of integrating IBC with CLC in one access control scheme was introduced. In particular, some access control schemes using heterogeneous
signcryption schemes have been proposed, in which an Internet sender, as part of the IoT, belongs to the CLC environment and a wireless sensor receiver is in the IBC environment. However, almost all of these access control schemes assume that communication nodes share common system parameters across different network domains, which is not suitable for cross-domain IoT environments in practical situations. Moreover, we find that most of these schemes cannot satisfy known session-specific temporary information security (KSSTIS, which means that the attacker cannot obtain the plaintext message even when the ephemeral key and the access request message are leaked). Thus, it is necessary to design a more secure and efficient access control scheme that is better suited to wireless sensor networks in the cross-domain context of the IoT. 1.1. Related Work. Zhou et al. proposed an access control scheme for WSNs using elliptic curve (EC) cryptography [8], which is more efficient than the PKI-based schemes. However, to authenticate a sensor node, the scheme of Zhou et al. incurs high computational and communication costs. Next, Huang [9] proposed an efficient access control protocol (EACP) based on the EC, which is quite adequate for low-powered sensor nodes. Subsequently, Kim and Lee [10] pointed out that the EACP scheme is susceptible to a message replay attack, and they proposed an enhanced access control protocol (ENCP). However, Lee et al. [11] showed that ENCP is subject to a new-node masquerade attack and a message forgery attack, and then proposed a practical access control protocol (PACP). In 2015, Chen et al. [12] claimed that PACP is susceptible to adversary attacks and needs huge key storage resources. Recently, Kumar et al. [13] proposed a more secure and efficient scheme for WSNs, which provides robust security and achieves access control while taking care of identity privacy. However, the schemes above cannot provide message confidentiality and unforgeability at the same time. In order to simultaneously authenticate the sensor node and protect the confidentiality of messages at low cost, Yu et al. [14] and Ma et al. [15] proposed access control schemes using the signcryption approach (ACSC). Signcryption performs the signature and the encryption in one logical step; compared with the signature-then-encryption method, signcryption has a lower cost. But these ACSC schemes are based on the public key infrastructure (PKI). In PKI, the certificate authority (CA) generates a digital certificate for each user, which triggers PKI's certificate-management problem. In order to avoid this problem and reduce the burden on traditional PKI, identity-based public key cryptography (IBC) and certificateless public key cryptography (CLC) were proposed, in which the certificates used in PKI are not needed. Recently, many security mechanisms for WSNs using IBC [16,17] or CLC [18,19] have been developed. All the above schemes are homogeneous, meaning that the sender and receiver must belong to the same security domain (a PKI, IBC, or CLC environment). Heterogeneous signcryption allows the sender to send a message to a receiver in a different security domain. Huang et al. [20] proposed a heterogeneous signcryption scheme in which the sender is in the IBC environment and the receiver belongs to the PKI environment. In 2016, Li et al.
[21] proposed a novel access control scheme (NACS) for sensor networks in the context of the IoT. The NACS uses heterogeneous signcryption (HSC), in which an Internet sender, as part of the IoT, belongs to the CLC environment and a wireless sensor receiver is in the IBC environment, which conforms to the characteristics of WSNs in the context of the IoT. Our Contribution. In this paper, we propose an access control scheme for WSNs in the cross-domain context of the IoT using heterogeneous signcryption. We define the generic model and the security model of cross-domain heterogeneous signcryption (CDHSC), and then propose a CDHSC scheme that is provably secure under the Bilinear Inverse Diffie-Hellman (BIDH) and Computational Diffie-Hellman (CDHP) problem assumptions in the random oracle model. Compared with the NACS scheme [21] through performance analysis, our scheme has the following merits: (1) our scheme allows an Internet user in a certificateless cryptography (CLC) environment to communicate with a sensor node in an identity-based cryptography (IBC) environment under different system parameters, so it can be used for WSNs in the cross-domain context of the IoT; (2) our scheme has a lower computation cost (not including precomputation cost). For the Signcryption algorithm, our scheme has the same computation cost as the NACS scheme, but for the Unsigncryption algorithm our scheme needs only three bilinear pairing computations, while the NACS scheme requires four. As is well known, the bilinear pairing computation is the most expensive operation in a signcryption scheme built from bilinear pairings; (3) our scheme satisfies the known session-specific temporary information security attribute. Organization. The remainder of our paper is organized as follows. The preliminaries covering the network model, bilinear pairings, and the hard mathematical problems are given in the next section. The third section elaborates on the definition of cross-domain heterogeneous signcryption (CDHSC), proposes a specific CDHSC scheme, and gives the security analysis of the proposed scheme. In the fourth section we propose a secure and efficient access control scheme for wireless sensor networks in the cross-domain context of the IoT and perform an efficiency analysis on it. In the last section, we make a summary. Preliminaries In this part, we give the basic network model of the access control scheme, some background on bilinear pairings, and the hard mathematical problems. Network Model.
In the network model of access control for wireless sensor networks in the cross-domain context of the IoT, there are five types of communication entities: the Internet user; a trusted third party called the key generation center (KGC) in the CLC environment; the WSN node; another trusted third party named the private key generator (PKG) in the IBC environment; and a gateway used to connect the CLC domain with the IBC domain. The PKG and KGC are used to complete the registration of WSN nodes and Internet users, respectively. The PKG calculates the public key and the private key for each WSN node. The KGC is responsible for producing one part of the private key of each Internet user, and the other part of the private key is generated by the user himself. In the network model, each PKG and KGC has different system parameters. In the KGC environment, when an Internet user wants to access the information collected by the sensor nodes of a WSN, he needs to signcrypt the query message and submit it to the gateway. The gateway belonging to this WSN first authenticates the access request message from the Internet user. If the verification passes, the gateway forwards the query message to the WSN, and the WSN then transmits the collected data to the Internet user, who holds the unsigncryption key. Otherwise, the gateway refuses to provide the service. In the network model of access control, the access request message generated by the Internet user should simultaneously satisfy confidentiality, integrity, authentication, nonrepudiation, and known session-specific temporary information security (KSSTIS) when it is transmitted to the gateway. Figure 1 shows the overview of the network model. The security of our scheme relies on the following two hard mathematical problems.
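Stated in standard form (a hedged sketch: the symbols P, a, b, q, G1, G2 and the pairing ê, as well as the BIDH exponent convention a·b⁻¹, are our assumptions, chosen to match the reduction steps quoted in Section 3.5):

```latex
% Hedged restatement of the two hardness assumptions used in the proofs.
% The exponent convention a b^{-1} (vs. a^{-1} b) is an assumption.
\textbf{Bilinear Inverse Diffie--Hellman (BIDH) problem.} Given a triple
$(P,\, aP,\, bP)$ with $P \in \mathbb{G}_1$ and unknown $a, b \in \mathbb{Z}_q^{*}$,
compute $\hat{e}(P, P)^{a b^{-1}} \in \mathbb{G}_2$.

\textbf{Computational Diffie--Hellman problem (CDHP).} Given a triple
$(P,\, aP,\, bP)$ with $P \in \mathbb{G}_1$ and unknown $a, b \in \mathbb{Z}_q^{*}$,
compute $abP \in \mathbb{G}_1$.
```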
Cross-Domain Heterogeneous Signcryption For the cross-domain heterogeneous signcryption (CDHSC) which can be used in access control for WSNs in the context of the IoT, we first define the generic model and the security model. Then we present the specific CDHSC scheme. Finally, we show the correctness analysis and the security proof of the proposed scheme. 3.1. Generic Model. Our CDHSC scheme consists of nine algorithms, as follows. Setup. The trusted third parties PKG and KGC execute this probabilistic algorithm to produce a series of system parameters: they first input a security parameter and then output their master secret key and the corresponding system parameters; different PKGs and KGCs use different security parameters and output different system parameters. Note that the ciphertext should meet the need of public verifiability with confidentiality; that is to say, the ciphertext verification process of the unsigncryption algorithm can be performed by any verifier (generally the WSN gateway) without knowledge of the plaintext message. Security Model. The standard security notions for a CDHSC scheme are confidentiality and unforgeability. In the following definitions, Definition 3 describes confidentiality and unforgeability is depicted in Definition 5. Challenge. After Phase 1, the adversary outputs two plaintexts of the same length, together with a sender's identity and a receiver's identity on which he wants to be challenged. Note that the receiver's identity cannot be an identity that has been used for a KE query in Phase 1. The challenger randomly picks a bit in {0, 1}, signcrypts the corresponding plaintext under the challenge identities with the SC algorithm, and forwards the resulting challenge ciphertext to the adversary. Phase 2. The adversary can make a polynomially bounded number of queries, just as in Phase 1; however, it cannot make a KE query on the receiver's challenge identity, and it cannot perform an Unsigncryption query on the challenge ciphertext under the challenge identities to obtain the plaintext, unless the sender's public key is replaced after the challenge phase. For unforgeability, there are two types of adversaries, named Type I and Type II, since the signcryption is generated in the CLC environment. A Type I adversary does not know the KGC's master key, but he is able to replace the public keys of arbitrary identities with other public keys of his choice. In contrast, a Type II adversary possesses the KGC's master secret key, but he cannot replace the public key of any user during the game. Initial. The challenger runs the Setup algorithm defined in the generic model and gives the resulting system parameters to the adversary. For a Type II adversary, the challenger additionally sends him the master secret keys of the PKG and KGC. Probing. The challenger is probed by the adversary, who executes a polynomially bounded number of queries, just like Phase 1 of the confidentiality game. Note that a Type II adversary does not need to perform PKR, PPKE, and KE queries. Forge. The adversary returns a ciphertext, a sender's identity, and a receiver's identity. Let the resulting tuple be the output of the unsigncryption algorithm under the private key corresponding to the receiver's identity. The adversary wins the game if the tuple satisfies the following requirements: (1) the ciphertext is valid, i.e., the result of the unsigncryption algorithm is not the symbol ⊥ but a plaintext; (2) the adversary has never asked for the secret value of the user with the sender's identity; (3) the adversary has never asked a Signcryption query on the returned message under the two challenge identities. 3.3. CDHSC Scheme. In this section, we propose a CDHSC scheme based on bilinear pairings. We follow the generic model of a general CDHSC scheme as presented in Section 3.1, and we add the KSSTIS property to it. The scheme is described below. CL-PPUKE. This algorithm accepts the identity ID of an Internet user and generates the partial public key = ( + 0 ) 0 for the user, where = 1−0 (ID). Then the KGC runs the CL-PPKE algorithm. CL-PPKE. After executing the CL-PPUKE algorithm, the KGC calculates the partial private key = ( + 0 ) −1 0 for the user. Finally, the KGC sends ( , ) securely to the user. CL-SVS. This algorithm accepts the identity ID of an Internet user and randomly chooses a secret value ∈ * 0 for the user. Then the user runs the CL-PKG algorithm. CL-PKG. After executing the CL-SVS algorithm, the user calculates his public key PK = from the secret value. IB-PKE. This algorithm accepts the identity ID of a WSN node and generates the public key = ( + 1 ) 1 for the node, where = 1−1 (ID). Then the PKG runs the IB-KE algorithm. IB-KE. After executing the IB-PKE algorithm, the PKG calculates the private key = ( + 1 ) −1 1 for the node. Finally, the PKG sends ( , ) securely to the WSN node. SC. To signcrypt a message using the partial private key, the secret value, and the receiver's identity, a sender with identity ID performs the following steps: (1) selecting ∈ * 0 randomly. USC. To unsigncrypt the ciphertext = (, , ) using the private key, the sender's partial public key, and the main public key PK, the receiver with identity ID performs the following steps: (1) calculating = 3−0 (, , PK , ID ).
Note that any user can verify the ciphertext = (, , ) by computing = 3−0 (, , PK , ID ) and verifying whether ê(, ) = ê( 0 , 0 ) ê(PK , ) holds; nothing about the plaintext message is revealed in this check. Thus, in the cross-domain context of the IoT, we can shift the computational cost of signcryption verification to the WSN gateway, which only needs to obtain the public parameters and the ciphertext. Correctness. The consistency of the CDHSC scheme is easy to verify, (1) in the signcryption verification stage and (2) in the signcryption decryption stage. 3.5. Security Proof. In this section, we use hard mathematical problems to prove the confidentiality and the unforgeability of the CDHSC scheme in the random oracle model. In addition, we demonstrate that our scheme satisfies known session-specific temporary information security (KSSTIS). In our scheme, the generation algorithms of the public key and private key for a node in the IBC environment are the same as the generation algorithms of the partial public key and partial private key for a user in the CLC environment, so the KGC can act in the roles of both KGC and PKG simultaneously in a small wireless sensor network in a single-domain context of the IoT. Moreover, in the following security proofs of our proposed scheme, for brevity, we assume that the KGC plays the roles of the KGC and the PKG at the same time in a single domain. Signcryption Queries. When the adversary asks for a query on a message with a sender's identity ID and a receiver's identity ID , if the sender's identity is the targeted identity, the challenger aborts the simulation and returns ⊥. Otherwise, the challenger first executes a Corruption query and a PPKE query with the sender's identity to obtain the secret value and the partial private key, performs a PPUKE query to obtain the partial public key, and then executes the SC algorithm to produce the signcryption = (, , ). If the sender's public key has been replaced, the sender's secret value is provided by the adversary. Finally, the challenger returns the ciphertext to the adversary. Unsigncryption Queries. When the adversary asks for this query on a signcryption = (, , ) with a sender's identity ID and a receiver's identity ID , if the receiver's identity is the targeted identity, the challenger aborts the simulation and returns ⊥. Otherwise, it computes = 3−0 (, , PK ID , ID ) and checks whether ê(, ID ) = ê( 0 , 0 ) ê(PK ID , ) holds. If not, it aborts the simulation and returns ⊥. Otherwise, it executes the KE query to get the receiver's private key, calculates = ê(, ID ), executes an 2 query to obtain = 2 (, , ID ), and finally calculates and returns = ⊕ . Challenge. The adversary outputs two plaintexts of the same length and picks a sender's identity ID and a receiver's identity ID on which he wishes to be challenged. Note that the simulation fails if the adversary has asked a KE query on the receiver's identity during the first stage. If the receiver's identity is not the targeted identity, the challenger aborts the simulation and returns ⊥. Otherwise, it selects a random bit in {0, 1} and generates the challenge ciphertext * = ( * , * , * ) as follows: it first chooses a value * ∈ G1, then sets * = and computes * = 2 ( * , * , ID ) ( being the candidate answer for the BIDH problem). Finally, it forwards the ciphertext * = ( * , * , * ) to the adversary. Phase 2. The adversary then performs a second series of queries, and the challenger handles them as in the first stage; however, the adversary cannot make a KE query on the receiver's challenge identity and cannot perform an Unsigncryption query on the challenge ciphertext * under the challenge identities to obtain the plaintext, unless the sender's public key is replaced after the challenge phase. Guess. The adversary produces a bit. If it equals the challenger's hidden bit, the challenger answers 1 as the result of the BIDH problem, since the adversary has generated a valid guess using knowledge of * ; otherwise, the challenger answers 0.
Thus, if the adversary could defeat the signcryption scheme by analyzing the ciphertext, the challenger could at the same time solve the BIDH problem with non-negligible advantage. Since no probabilistic polynomial-time algorithm is known to solve the BIDH problem, our scheme achieves indistinguishability against adaptive chosen-ciphertext attacks. (2) Unforgeability. Theorem 8. Our CDHSC scheme is existentially unforgeable against any EUF-CDHSC-CMA adversary (of Type I or Type II) in the random oracle model, assuming that the CDH problem in G1 is intractable. Proof. Let the challenger be a CDH problem attacker, and let the adversary interact with the challenger following Game 2. The challenger is given (, , ) as an input to the CDH problem and aims to compute the CDH value , where , ∈ * and ∈ G1. Initial. The challenger runs the Setup algorithm defined in the generic model and gives the resulting system parameters to the adversary. For a Type II adversary, the challenger additionally sends him the master secret keys of the KGC. Probing. We show that the challenger can use the adversary to solve the CDH problem. The challenger needs to maintain three lists to compute the new partial public key = ( + 0 + ) 0 and partial private key = ( + 0 + ) −1 0 with the same identity ID ; the KGC then returns ( , ) to the user. Performance Evaluation. We compare the performance of our scheme with that of NACS [21]; the comparative results are shown in Table 1. In the table, the abbreviations denote point multiplications in G1, exponentiations in G2, and pairing operations, respectively; moreover, the notation KSSTIS indicates whether a scheme achieves known session-specific temporary information security. From the computational point of view, the signcryption operation of our CDHSC scheme needs three point multiplications in G1 and one exponentiation in G2, the same as NACS [21]. The unsigncryption operation of our CDHSC scheme needs one exponentiation and three pairings, whereas NACS requires four pairings. As is well known, the pairing operation is several times more expensive than the exponentiation, so the computational cost of our CDHSC scheme is lower than that of NACS. In addition to the efficiency improvement, our CDHSC scheme also enhances security, since it achieves the KSSTIS attribute. Most importantly, our scheme allows an Internet user in the CLC environment to communicate with a sensor node in the IBC environment under different system parameters. For energy consumption, according to [21], a point multiplication in G1 (or an exponentiation in G2) consumes 19.44 mJ and a pairing operation consumes 45.6 mJ. Therefore, the computational energy costs of NACS and of our scheme are 4 × (19.44 + 45.6) = 260.16 mJ and 5 × 19.44 + 3 × 45.6 = 234 mJ, respectively, and the communication energy costs of NACS and of our scheme are the same for the sensor node (1.03 mJ [21]). Hence, considering wireless sensor networks in the single-domain or cross-domain context of the IoT, our access control scheme may be the more applicable one.
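The energy comparison above is a straight linear combination of the per-operation costs from [21]. A minimal Python check of the quoted numbers (the function and constant names are ours, not the paper's):

```python
# Per-operation energy costs on the sensor node, in millijoules, as quoted
# from reference [21] in the performance discussion above.
E_POINT_MUL = 19.44  # point multiplication in G1 (an exponentiation in G2 costs the same)
E_PAIRING = 45.60    # bilinear pairing evaluation

def energy_mj(point_muls_and_exps: int, pairings: int) -> float:
    """Total computational energy for one signcryption + unsigncryption pass."""
    return point_muls_and_exps * E_POINT_MUL + pairings * E_PAIRING

# Operation counts as stated in the text:
# NACS: 3 point multiplications + 1 exponentiation, and 4 pairings.
# Ours: 3 point multiplications + 2 exponentiations, and 3 pairings.
nacs = energy_mj(point_muls_and_exps=4, pairings=4)  # -> 260.16 mJ
ours = energy_mj(point_muls_and_exps=5, pairings=3)  # -> 234.00 mJ

print(f"NACS: {nacs:.2f} mJ, ours: {ours:.2f} mJ, saving: {nacs - ours:.2f} mJ")
```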
Conclusion In this paper, we proposed a cross-domain heterogeneous signcryption (CDHSC) scheme that allows a sender in the CLC environment to send a signcrypted request message to a recipient in the IBC environment under different system parameters, and we proved that it provides confidentiality under the BIDH problem and unforgeability under the CDH problem in the random oracle model. Based on the CDHSC scheme, we designed a secure and efficient access control scheme for wireless sensor networks in the cross-domain context of the IoT. Compared with NACS, our scheme not only requires less computation but also provides stronger security, since it achieves the KSSTIS attribute. We believe that the proposed access control scheme can be feasible in many practical single-domain or cross-domain WSN applications. IB-KE. The key extraction algorithm is performed by the PKG in the IBC environment. Taking as inputs a PKG's master secret key and the user's identity ID, the algorithm outputs the user's private key. SC. The signcryption algorithm is performed by an Internet user in the CLC environment. Taking as inputs the plaintext message, the sender's partial private key, the sender's secret value, the sender's identity, and the receiver's identity, the algorithm outputs the ciphertext. USC. The unsigncryption algorithm is performed by the receiver in the IBC environment. Taking as inputs the ciphertext, the sender's identity, main public key, and partial public key, and the receiver's private key and identity, the algorithm outputs the plaintext if the ciphertext is valid; otherwise the output is the symbol ⊥. CL-PPUKE. The partial public key extraction algorithm is executed by the KGC in the CLC environment; it takes as inputs a KGC's master secret key and an Internet user's identity ID, and outputs the user's partial public key. CL-PPKE. The partial private key extraction algorithm is executed by the KGC in the CLC environment; it takes as inputs a KGC's master secret key and an Internet user's identity ID, and generates the user's partial private key. CL-SVS. The secret value setup algorithm is performed by the Internet user in the CLC environment; taking as input a user's identity ID, the algorithm outputs the user's secret value. CL-PKG. The main public key generation algorithm is executed by the user in the CLC environment; it takes as input the secret value and outputs the user's main public key. IB-PKE. The public key extraction algorithm is performed by the PKG in the IBC environment; taking as inputs a PKG's master secret key and the user's identity ID, the algorithm outputs the user's public key. The challenger runs the Setup algorithm with a security parameter and returns the system parameters to the adversary. Partial Public Key Extraction (PPUKE) Queries. The adversary chooses an identity ID and forwards it to the challenger; the challenger executes the CL-PPUKE algorithm and forwards the corresponding partial public key to the adversary. The challenger also executes the CL-SVS and CL-PKG algorithms to compute the user's secret value and main public key, adds the tuple (ID, secret value, public key) to its list, and finally returns the public key to the adversary. Public Key Replacement (PKR) Queries. The adversary can replace a main public key with a value selected by himself. Corruption Query. On a corruption query, the challenger checks the list and returns the secret value. Partial Private Key Extraction (PPKE) Queries. The adversary chooses an identity ID and forwards it to the challenger; the challenger executes the CL-PPKE algorithm and forwards the corresponding partial private key to the adversary. Public Key Extraction (PKE) Queries. When receiving an identity ID from the adversary, the challenger executes the IB-PKE algorithm and forwards the corresponding public key to the adversary. Key Extraction (KE) Queries. When receiving an identity ID from the adversary, the challenger executes the IB-KE algorithm and forwards the corresponding private key to the adversary. Signcryption Queries.
The adversary submits a plaintext, a sender's identity ID , and a receiver's identity ID . Firstly, the challenger performs a Corruption query and a PPKE query with the sender's identity to obtain the secret value and the partial private key, performs a PKE query with the receiver's identity to obtain the public key, and then executes the SC algorithm to get the signcryption = SC(, ID , ID , ID , ID ). If the sender's public key has been replaced, the sender's secret value is provided by the adversary. Finally, the challenger returns the ciphertext to the adversary. Unsigncryption Queries. The adversary submits a signcryption, a sender's identity ID , and a receiver's identity ID . Firstly, the challenger performs a PPUKE query and a PK query with the sender's identity to obtain the partial public key and the main public key, performs a KE query with the receiver's identity to obtain the private key, and then executes the USC algorithm to check the validity of the ciphertext. If the ciphertext is valid, the challenger sends the plaintext = USC(, PK ID , ID , ID , ID ) to the adversary; otherwise it outputs the symbol ⊥. The challenger is given (, , ) as an input to the BIDH problem and aims to compute ê(, ) −1 , where , ∈ * and ∈ G1. Initial. The challenger sets pub = ; the value is the master key of the KGC, which is unknown to the challenger, and the challenger gives the system parameters to the adversary. We show that the challenger can use the adversary to solve the BIDH problem. The challenger needs to maintain three lists 1 , 2 , and 3 , initially empty, which are used to keep track of the answers to the queries asked by the adversary to the oracles 1 , 2 , and 3 , respectively. What is more, the challenger maintains two further lists of the queries made to the PK and PPUKE (or PKE) oracles, and chooses an index in {1, 2, . . . , } randomly. When the adversary makes a PPUKE (or PKE) query on an identity ID , if the index matches the chosen one (we let ID denote this targeted identity), the challenger returns ID = and adds (ID , ID = , ID = −1 , ⊥) to its list (the challenger cannot compute ID = −1 ; it simply regards ID as −1 ). Otherwise the challenger picks a random ∈ * , returns ID = , and adds (ID , ID = , ID = −1 , ) to the list. 1 Queries. When the adversary makes this query on ID , the challenger forwards the recorded value if 1 has the entry (ID , ). If the list 1 does not contain (ID , ), the challenger randomly picks ∈ * , returns it, and adds (ID , ) to 1 . 2 Queries. When the adversary makes an 2 query on ( , , ID ), if the list 2 has the entry ( , , ID , ), the challenger answers to the adversary; otherwise, it randomly picks ∈ {0, 1} as the output and inserts ( , , ID , ) into the list 2 . 3 Queries. For an 3 query on ( , , PK ID , ID ), if the list 3 has the entry ( , , PK ID , ID , ), the challenger answers to the adversary; otherwise, it randomly picks ∈ * as the output and inserts ( , , PK ID , ID , ) into the list 3 . Partial Private Key Extraction (PPKE) or Key Extraction (KE) Queries. When the adversary makes this query on ID , if ID is the targeted identity, the challenger aborts the simulation and returns ⊥. Otherwise, the list has the entry (ID , ID = , ID = −1 , ), and the challenger returns ID to the adversary. Public Key (PK) Queries. When the adversary makes this query on ID , if the list has the entry (ID , PK ID , ID ), the challenger answers PK ID to the adversary; otherwise, it selects ID ∈ * randomly, computes PK ID = ID ID , returns PK ID , and adds (ID , PK ID , ID ) to the list. Corruption Queries. We assume that the adversary has made a PK query on ID before this query, so the list has the entry (ID , PK ID , ID ), and the challenger returns ID to the adversary. Public Key Replacement (PKR) Queries. The adversary can replace the main public key PK ID of a user ID with a value he selects. When the adversary executes a public key replacement query with the entry (ID , PK ID ), the challenger updates the list with the entry (ID , PK ID , ⊥). (1) Confidentiality. Theorem 7. Our CDHSC scheme is indistinguishable against any IND-CDHSC-CCA2 adversary in the random oracle model, assuming that the BIDH problem in G2 is intractable. Proof. Let the challenger be a BIDH problem attacker; the adversary then interacts with the challenger following Game 1.
The challenger maintains two lists recording the queries made by the adversary to the PK and PPUKE (or PKE) oracles, respectively, and then simulates the challenger's role, playing the game described in Definition 3 with the adversary as follows. Partial Public Key Extraction (PPUKE) or Public Key Extraction (PKE) Queries. Suppose that the adversary makes at most a polynomial number of queries to this oracle. First, the challenger maintains three lists 1 , 2 , and 3 , initially empty, which are used to keep track of the answers to the queries asked by the adversary to the oracles 1 , 2 , and 3 , respectively. What is more, it maintains two lists recording the queries made by the adversary to the PK and PPUKE (or PKE) oracles, respectively. Subsequently, it simulates the challenger's role and plays the game described in Definition 5 with the adversary. The adversary performs a polynomially bounded number of the following queries, and a Type II adversary does not need to perform PKR, PPKE, and KE queries. When the adversary makes this query on ID , if the list has the entry (ID , ID = , ID = −1 , ), the challenger answers ID to the adversary; otherwise, it selects ∈ * randomly, computes ID = and ID = −1 , returns ID , and adds (ID , ID = , ID = −1 , ) to the list. Partial Private Key Extraction (PPKE) or Key Extraction (KE) Queries. We assume that the adversary has made a PPUKE (or PKE) query on ID before this query, so the list has the entry (ID , ID = , ID = −1 , ), and the challenger returns ID to the adversary. Public Key (PK) Queries. We assume that the adversary makes at most a polynomial number of queries to this oracle and has made a PPUKE (or PKE) query on ID before this query. First, the challenger chooses an index in {1, 2, ..., } randomly. When the adversary makes this query on ID , if the index matches (we let ID denote this targeted identity), the challenger returns PK ID = and adds (ID , PK ID , ⊥) to the list; otherwise it selects ID ∈ * randomly, computes PK ID = ID ID , returns PK ID , and adds (ID , PK ID , ID ) to the list. Corruption Queries. When the adversary makes this query on ID , if ID is the targeted identity, the challenger aborts the simulation and returns ⊥; otherwise, the list has the entry (ID , PK ID , ID ), and the challenger returns ID to the adversary. 1 , 2 , 3 , PKR, and Signcryption Queries. These are handled exactly as in the proof of Theorem 7. Unsigncryption Queries. When the adversary asks for this query on a signcryption = (, , ) with a sender's identity ID and a receiver's identity ID , the challenger computes = 3−0 (, , PK ID , ID ) and checks whether ê(, ID ) = ê( 0 , 0 ) ê(PK ID , ) holds. If not, it aborts the simulation and returns ⊥; otherwise, it executes the KE query to get the receiver's private key, calculates = ê(, ID ), executes an 2 query to obtain = 2 (, , ID ), and finally calculates and returns = ⊕ . Forgery. The adversary outputs a ciphertext * = ( * , * = , * ) and picks a sender's identity ID and a receiver's identity ID on which he wishes to be challenged. If the sender's identity is not the targeted identity, the challenger aborts the simulation and returns ⊥. Otherwise, it runs the 3 simulation algorithm to obtain * = 3 ( * , * , PK ID , ID ); we then have * = * ID + and = * − * ID ( being a candidate for the CDH problem), and the challenger finally checks the validity of the forgery. Table 1: Comparisons of performance. In the registration phase, an Internet user submits his information to the KGC. The KGC examines whether the user's information (e.g., the user's IP address) is reasonable. If the information is incorrect, the KGC rejects the user's request; otherwise, the KGC executes the CL-PPUKE and CL-PPKE algorithms to get the partial public key = ( + 0 + ) 0 and the partial private key = ( + 0 + ) −1 0 , where = 1−0 (ID ) and ∈ * is selected randomly by the KGC. The KGC returns ( , ) to the user. After receiving ( , ), the Internet user executes the CL-SVS and CL-PKG algorithms to obtain his secret value and public key PK . The user can store the public parameters in a plaintext file and ( , ) in a ciphertext file. 4.3. Authentication with Key Establishment Phase. When an Internet user ID wants to access the information collected by a sensor node ID of a WSN, the user first acquires the current time stamp 1 in order to detect replay attacks, then generates the query request
message, and performs a signcryption operation on it. The ciphertext is = (, , , 1 ), where = + and = 3−0 (, , PK , ID ⊕ 1 ). He then sends the ciphertext to the gateway belonging to the destination WSN. The gateway first calculates = 3−0 (, , PK , ID ⊕ 1 ) and examines whether ê(, ) = ê( 0 , 0 ) ê(PK , ) holds and whether 1 is fresh. If not, the gateway denies access to the WSN. Otherwise, the user passes the authentication, and the gateway sends (, ) to the WSN node. The WSN node calculates = ê(, ) and = 2 (, , ID ) and recovers the query request message = ⊕ . After that, the WSN node can encrypt the response data using a symmetric encryption algorithm with the session key. In this process, confidentiality, nonrepudiation, and KSSTIS are all achieved, according to the security proof of Section 3.5; message integrity is ensured by using the hash value, and the functions of authentication and session key establishment are implemented by verifying the signature and calculating the session key, respectively. 4.4. Leaked Key Revocation Phase. Assume that the partial private key of an Internet user with identity ID has been leaked; the user should then send a key revocation request message to the KGC for a new key. The user submits his identity ID to the KGC. The KGC randomly chooses another value ∈ *
8,283.8
2018-02-26T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
The Complex Adaptive Region-Assemblage and Local Economies: New Perspectives for Tackling Regional Inequalities Even before Covid-19, it was not news to say that many peripheral and rural regions have been struggling economically. This is exacerbated by what Peter De Souza observes in his 2017 book The Rural and Peripheral in Regional Development: that rural and peripheral spaces are poorly understood in academic and policy terms. Indeed, largely rural regions are often imagined as having little to contribute beyond the amenity value of the landscape. In the UK, the Brexit referendum posed the obvious question of why voters in regions that had benefitted most from EU Structural Funding could have felt such disaffection toward the EU. Research exploring this question found that Leave-voting interviewees did not feel that the funding had addressed the issues that concerned them, or that it had made manifest improvements to their lives. This raises the important observation that local satisfaction does not tend to be part of the metrics designed to show whether and how development assistance has supported a region. The resultant learning is that we need to consider the subjective question of whether and how local people feel that it has improved their lives. Neither was this pattern confined to the UK. If voting for Donald Trump in the 2016 US Presidential election is taken as a proxy for what we might call 'anti-establishment' behaviour, recipient regions for assistance have also taken the 'populist' turn. For example, many parts of the South West and South of the State of Virginia voted overwhelmingly Republican, despite benefitting from a $1.1 bn fund from the Virginia Tobacco Revitalization Commission (2020). Here too, funding has not been able to ameliorate a sense of disaffection, and this raises the ontological question of whether regional development is about improving the economies of spaces or the lives of local people. This starts to highlight a gap between the people delivering regional development and the broader population of recipient regions. In order to address this, we need to find a new way of looking at regions if development is to have genuinely positive effects for local inhabitants. In this article, I draw on the philosophers Gilles Deleuze and Felix Guattari (2004) to make a case for the complex adaptive region-assemblage as a way to imagine regions, one which situates the general population as an important part of that assemblage. Later, I look at what becomes visible if we imagine regions in this way, using case studies from the South West of Virginia, USA, and Cornwall in the UK. Why the Complex, Adaptive Region-Assemblage? In the framing of this article, we set out the problem in terms of a disconnect between decision-makers and the public with regard to development. Part of the issue is that we can end up imagining the various aspects of development in fragmented and fractured silos. For example, targets for the number of (well-paying) jobs to bring to regional economies are not necessarily connected to training provision for these types of jobs (to ensure that new and emerging sectors are able to recruit from a local talent pool), or to information that ensures locals understand what kinds of marketable skills to invest in. Consequently, we need a way of imagining the region which can capture the complex interconnections, interactions and knowledge flows that occur within it, and examine the interrelationships between a region and other regions.
We also need a theoretical model that can incorporate the stories and insights of regular people, and that can acknowledge flows between disparate regions, if we are to be able to break down the binary separations that we often draw between different parts of the system: between core and periphery, rural and urban, rich and poor, elite and popular. The assemblage has been growing in popularity across various aspects of regional studies over the past decade. Part of its appeal lies in its ability to explore processes of fluidity amongst the assembled and re-assembled elements of a system (Calzada, 2018), and in its examination of the complex inter-connections between things, parts and wholes (Jones, Heley and Woods, 2019). Yet there is much more that the concept is able to do. For the first part, through an intellectual trajectory running from the 1985 collaboration between the Nobel Prize-winning chemist Ilya Prigogine and the philosopher Isabelle Stengers, via Stuart Kauffman (1995), Bruno Latour (2005), Manuel DeLanda (2011), William Connolly (2011) and Jane Bennett (2010), the assemblage becomes a way of imagining ideas, institutions and regions as complex, adaptive, evolutionary organisms (Willett, 2019; 2020). The complex adaptive region-assemblage is a temporal eco-system which pulls together all of the assembled actors within the region, and connects it and them to different or overlapping assemblages outside of the region. To illustrate, with regard to the economies of wool, Jones et al. (2019) observe that whilst aspects of the wool industry are located in one region, the animal husbandry, harvesting, processing, and marketing of the wool and its products connect the region nationally and globally through complex and dynamic flows of goods and knowledge. If we flip this around the other way, we see that through (in this example) the wool industry, the region-assemblage both contains this industry and is connected to many other regions throughout the world. It is temporal because the knowledge and know-how required to undertake the industries and practices that occur within the region have accumulated and been disseminated over an extended period of time. It is an organism because, just as biological organisms need to adapt to their environments in order to survive and thrive, the economies which support the region-assemblage also need to be able to evolve (see Boulding, 1981). From the overlay between Boulding's evolutionary economics, Deleuze and Guattari, and DeLanda, we see that it is essential that knowledges flow freely between the various inter-connected parts of the complex, evolutionary region-assemblage if it is to be able to adapt to contemporary challenges (see Willett 2020 for further discussion). For regional analysts and policy-makers, this means that it is imperative that we consider the complex interactions between all parts of the region-assemblage. It also means that, if we take ourselves back to the situation outlined at the beginning, a lack of satisfaction among the general population with regard to development indicates a fundamental breakdown of knowledge flows amongst participants in the region-assemblage (see also Willett 2020). Given this situation, we now turn to what the peripheral complex adaptive region-assemblage looks like when viewed from the perspective of the public.
The case study regions were the South West of the State of Virginia, USA, and Cornwall, in the Southwest of the UK. Alongside both being highly rural regions a five-hour drive from the national capital, both have local incomes and productivity well below the national average (Census 2019; Nomis 2019), and both have experienced significant changes to their economies, with a severe decline of the extractive industries and the re-shaping of other traditional industries associated with contemporary globalisation. A primary difference is that Cornwall has an extremely well developed tourism industry, although this contributes to what Bürk et al. (2012) describe as a 'stigmatising' perception of people within the locality, one which both regions share. The fieldwork for this ethnographic project was conducted over a one-year period between April 2019 and April 2020, supported by an RSA grant. Over 50 persons were interviewed (either one-to-one or in groups), including over 40 members of the public and 10 'decision makers' (5 in each case study area). Elite interviews were conducted towards the end of the research process in both locations, in order to better understand the institutional architecture within which interviewees were situated and discussed their lives. Interview transcripts were analysed using Grounded Theory, and emerging themes were explored using the complex adaptive region-assemblage outlined above. What does the peripheral rural region look like if we start with regular people? For the first part, we see that although we can imagine these rural peripheries as being 'different' in some way, in actual fact they are located within the assembled stories of the broader nation-state of which they are a part. Although experiencing active discrimination from more metropolitan parts of the US, many people in SW VA (Southwest Virginia) were at pains to point out the extent to which their region had contributed to wider American histories. From the young men who fought in the Revolutionary War and the coal that was removed from the mountains to fuel America's energy demand, to the country music which the region played a seminal role in recording and popularising, people were keen to emphasise that their place had played a major (if neglected) role in the American story. Other histories which play a significant part in the popular imagination (such as quilting and other practices associated with rural subsistence farming) are also ones that are shared throughout contemporary USA. Even the stories around the closure of many local factories are embedded in a collective American experience (Macy, 2015). Many people in Cornwall are more resistant to an assertion of similarity with a broader UK, often proclaiming more in common with fellow Celts in Wales, Scotland, Ireland and even Brittany. This assertion of being a separate assemblage is punctuated with markers of difference such as the Cornish language, the ancient flag, myths and legends, and the maintenance of a unique Cornish culture. However, even here, the assemblages around Cornwall and the rest of the UK are bound together by a complex entanglement of past and present relationships, institutions and power structures. Cornwall might contribute less to the UK economy than more economically thriving regions, but in this extremely centralised nation-state its fortunes are inseparable.
This might be through the people who have returned or newly migrated, who gained their skills and networks in 'the city' before moving 'home', or it may be through the particular and central role that the Cornish leisure industry plays in the British imagination (and the various responses to this). Regardless, we see that the rural and peripheral do not exist on a binary scale in the popular imagination; when we explore them more deeply, we find that they are actually assemblages that are 'plugged in' to much bigger assemblages. We also find that people in both areas might grumble about the places in which they find themselves, but in fact people showed an extraordinary attachment to the communities in which they live. Sometimes this is rooted in the histories that their families have, but more usually it is that the place and the people in it matter to them. For example, we hear about how downtown Bristol, VA, 20 years ago was 'just tumbleweeds', but now boasts a thriving community and a Smithsonian-affiliated national museum commemorating the contribution of the city to the story of country music. The energy behind these attachments plays an extremely important role in fuelling the assembled region-organism and helping its economies to evolve and move forward. However, despite the clear appreciation that many people held for the places in which they live, the precariousness of living in a poor region, and the uncertainty that this creates, inhibited some of the creativity and ingenuity that could be encouraged and fostered to grow the region-assemblage from the ground up. This might be about not being able to get and keep a job that paid the bills, Cornwall's difficulties over access to secure housing, or SW VA's problems in accessing healthcare. Although decision-makers provided frequent assurances that there are well-paying and secure opportunities in both case studies, and that there are support mechanisms for the small businesses which have the capacity and adaptability to offer so much in both regions, there are a number of key connectivity blockages in the ways that knowledge flows around the region-assemblages. For example, many people in SW VA who do not live inside city or town limits do not have access to fibre-optic broadband or a 4G mobile signal. If Covid-19 has highlighted that the Internet is a utility, not a luxury, this is a utility that many people are unable to access. This also dramatically reduces the ability of much of the population to keep up to date with how the wider communities in their travel-to-work areas are changing, the kinds of opportunities that are coming on board, and the marketable skills that they might want to invest in. However, Cornwall shows us that even with this infrastructure, many people remain unaware of how their economies are changing. Despite over 20 years of high-level EU structural funding investment, many participants still imagined the local economy to be dominated by the loss or decline of traditional agricultural, fishing, or extractive industries. When asked where people that they know get work, they told stories of tradespeople or shop workers, rather than participants in the growing digital, marine, or creative industries. This problematizes the adaptive capacity of the region-organism, because it means that many local people are unable to participate. Moreover, blockages extended beyond knowledge to the physical flows of people around a geographic space.
Although the public transportation system is far better developed in Cornwall than in SW VA, many people told stories about the extreme difficulties that they, or people they knew, had in getting to work because of non-existent, infrequent, or too-expensive bus services. Consequently, people found themselves spatially limited in the places where they could look for work, and often had to take any job rather than one to which they were better suited or qualified. Clearly, taken together, this adds to the precarity with which people experience their lives, and reduces the amount of energy that they have to contribute to the region-organism. So what do we learn if we try to understand complex adaptive region-assemblages by starting with the experiences of ordinary people? Firstly, and importantly, we learn why people find it hard to access the new opportunities that funding brings. This makes visible to researchers and policy-makers the places where flows of information are not getting through, and where (a lack of) physical mobility hinders community members from taking advantage of opportunities even if they know about them. It also highlights the spaces of precarity, which sap the energy, or what Henri Bergson (1944) calls the elan vital or life force, out of communities, rendering them less productive than they might be. Why does this matter? It matters because the region-assemblages of struggling regions are not different from powerful metropolitan cores: they are all connected to, and part of, the same system.
3,414.6
2020-01-01T00:00:00.000
[ "Economics" ]
Two-Photon Lensless Micro-endoscopy with in-situ Wavefront Correction Multi-core fiber-bundle endoscopes provide a minimally-invasive solution for deep-tissue imaging and optogenetic stimulation at depths beyond the reach of conventional microscopes. Recently, wavefront-shaping has enabled lensless bundle-based micro-endoscopy by correcting the wavefront distortions induced by core-to-core inhomogeneities. However, current wavefront-shaping solutions require access to the fiber distal end for determining the bend-sensitive wavefront correction. Here, we show that it is possible to determine the wavefront correction in-situ, without any distal access. Exploiting the nonlinearity of two-photon excited fluorescence, we adaptively determine the wavefront correction in-situ, using only proximal detection of epi-detected fluorescence. We experimentally demonstrate diffraction-limited, three-dimensional, two-photon lensless micro-endoscopy with commercially available ordered and disordered multi-core fiber bundles. Introduction Flexible optical endoscopes are an important tool for a variety of applications, from clinical procedures to biomedical investigations. A common use of such endoscopes is macroscopic imaging inside hollow organs. Another important application is micro-endoscopy, where micron-scale structures such as single neurons are imaged or optically excited at depths beyond the reach of conventional microscopes; the latter are limited in their penetration depth by tissue scattering and absorption [1]. In recent years, various solutions for small-diameter micro-endoscopes have been developed [2,3]. Micro-endoscopes that are based on single-mode fibers are bend-insensitive, but require distal optical elements such as scanners and lenses [2], or spectral dispersers [4,5], to produce an image. Such distal elements may significantly enlarge the endoscope's diameter, increasing tissue damage and consequently limiting its use for deep-tissue imaging. Developing a flexible, lensless micro-endoscope with a minimal diameter is thus a sought-after goal for minimally-invasive deep-tissue imaging. Currently, the solutions for constructing lensless endoscopes are based either on imaging fiber bundles [2] or on wavefront-shaping [6-10]. Imaging fiber bundles consist of thousands of single-mode cores packed together, where each core functions as a single pixel. While common and straightforward to use, conventional lensless bundle-based endoscopes suffer from limited resolution, pixelation, poor axial sectioning, and a small, fixed working distance. While axial sectioning can be obtained in fiber bundles by the addition of confocal scanning [2] or structured illumination [11], the working distance is fixed to the fiber facet or its image, and three-dimensional (3D) imaging is not possible. In recent years, a number of works have demonstrated the use of wavefront-shaping for micro-endoscopy [6,8-10,12-15]. In wavefront shaping, a computer-controlled spatial light modulator (SLM) is used to compensate for the phase randomization and mode-mixing in fiber bundles or multi-mode fibers, on a principle similar to the decades-old works in holography [16,17]. However, since the phase distortions in long fibers are sensitive to fiber bending, even with state-of-the-art digital wavefront control, direct feedback from the fiber distal end or precise knowledge of the fiber shape [15] is still required to determine the wavefront correction.
These requirements make the application of wavefront-shaping techniques to flexible endoscopes very challenging in most practical imaging scenarios. Recently, speckle correlations in the optical transmission matrices of fiber bundles, known as 'memory-effect' correlations, have been exploited for computationally reconstructing distal images from proximal measurements, without wavefront correction [18,19]. However, these approaches currently allow only two-dimensional imaging, and provide high-fidelity reconstruction only for simple objects. Here, we present a lensless two-photon micro-endoscope based on wavefront-shaping which does not require any a-priori knowledge of the fiber transmission matrix or access to the distal end, even after fiber bending. In our approach, the wavefront correction is performed in-situ, by iterative optimization of two-photon fluorescence (2PF) signals detected at the proximal end. Our approach is based on the fact that iterative optimization of a nonlinear signal leads to diffraction-limited focusing [20], even when the signal is collected by a spatially integrating detector having no spatial resolution. This principle has recently been exploited for focusing light through scattering samples [20] or multi-mode fibers [21], and here we show its use for endoscopic imaging. Our imaging technique is composed of two steps: the first is focusing of the excitation beam on the target object by iterative wavefront optimization. After the wavefront correction has been found, two-photon imaging is performed by scanning the formed focus in three dimensions using the single wavefront correction, exploiting the 'memory effect' of imaging fiber bundles [7,18,19]. Methods The experimental setup is illustrated in Fig. 1. A beam from a Ti:Sapphire laser oscillator (Mai-Tai DeepSee eHP, Spectra-Physics) producing 85 fs pulses at a 785 nm wavelength is reflected off a galvanometric mirror (GVS012, Thorlabs). The galvanometric mirror is imaged on a phase-only SLM (X13138-02, Hamamatsu Photonics) by a 4-f telescope (L1 = 35 mm, L2 = 500 mm). The SLM is imaged on the proximal facet of a fiber bundle (either Schott 1533385, in Figs. 2-4, or Fujikura FIGH-03-215S, in Fig. 5d,i) using an additional telescope (L3 = 250 mm; Obj1: 20X Plan Achromat, Olympus). Two-photon fluorescent (2PF) target objects were placed simultaneously at several distances of 0.8 mm to 3 mm from the fiber distal end. The excited 2PF signals are collected by the same bundle, propagate back to the proximal end, are separated from the laser wavelength by a dichroic mirror (FF605-Di02-25x36, Semrock), and are focused on a detector (either an sCMOS camera (Zyla 4.2 Plus, Andor) or a photomultiplier tube (PMT, Hamamatsu)) after additional filtering (FF01-510/84-25 bandpass and FF01-650/SP-25 short-pass filters, Semrock). 2PF imaging is performed in-situ in two steps: first, the SLM wavefront correction for focusing is found by iteratively optimizing the total 2PF signal collected by the detector. This step produces a sharply focused beam on the target object [20]. After the wavefront correction is found, the focused spot is raster-scanned in 3D by adding a parabolic phase on the SLM for axial scanning, and a wavefront tilt with the SLM and/or galvo for lateral scanning. The 2PF signal detected at the proximal end is used to generate a 3D image of the target objects, as in conventional 2PF microscopy [20].
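The scanning step described above only adds simple analytic phase terms on top of the fixed wavefront correction. The following Python sketch illustrates how such SLM patterns could be composed; it is an illustration only, with placeholder parameter values and our own names, and the paraxial thin-lens defocus term is our assumption rather than the authors' exact calibration:

```python
import numpy as np

# Illustrative SLM/optics parameters (placeholders, not the paper's values)
WAVELENGTH = 785e-9    # excitation wavelength [m]
PIXEL_PITCH = 12.5e-6  # SLM pixel pitch [m]
NY, NX = 128, 160      # SLM segments used

y, x = np.meshgrid(
    (np.arange(NY) - NY / 2) * PIXEL_PITCH,
    (np.arange(NX) - NX / 2) * PIXEL_PITCH,
    indexing="ij",
)

def tilt_phase(theta_x: float, theta_y: float) -> np.ndarray:
    """Linear phase ramp steering the focus laterally by angles theta_x/y [rad]."""
    k = 2 * np.pi / WAVELENGTH
    return k * (x * np.sin(theta_x) + y * np.sin(theta_y))

def defocus_phase(dz: float) -> np.ndarray:
    """Parabolic (thin-lens) phase shifting the focus axially by dz [m],
    in the paraxial approximation: phi = -k * r^2 / (2 * dz)."""
    k = 2 * np.pi / WAVELENGTH
    return -k * (x**2 + y**2) / (2 * dz)

# 'correction' stands for the wavefront found by the iterative 2PF
# optimization; a flat placeholder is used here.
correction = np.zeros((NY, NX))
# Compose the pattern for one scan position and wrap to [0, 2*pi)
slm_pattern = np.mod(correction + tilt_phase(1e-3, 0.0) + defocus_phase(0.2), 2 * np.pi)
```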
For inspecting the focusing performance, a reference camera (Mako U-130B, Allied Vision) is used to directly image the object plane through a microscope objective (Obj2: 10X Plan Achromat, Olympus). Importantly, the reference camera is used only for inspecting the results, and is not required for successful focusing or imaging. Fig. 1: Experimental setup. The SLM is imaged on the proximal facet of a fiber bundle; two-photon fluorescent (2PF) targets are placed at short distances from the distal end of the fiber; the excited 2PF is collected by the same fiber and detected at the proximal end using a PMT or an sCMOS camera; the detected 2PF signal serves as feedback for an iterative wavefront-shaping optimization aimed at maximizing the total detected 2PF, forming a sharp focus on the target; 3D two-photon imaging is achieved by scanning the formed focus with the SLM and galvanometric mirror; a reference camera is used only for results inspection. In-situ Focusing As a first demonstration, 2PF particles of Coumarin 307 were placed at distances of 2 mm to 3 mm from the bundle's distal facet, at two axial planes. Initially, the light at the object plane, as recorded by the reference camera, was a random speckle pattern (Fig. 2a). After 2,500 iterations of an optimization algorithm maximizing the total collected 2PF signal, I2PF, a sharp focal spot was formed on the object closest to the fiber facet (Fig. 2c). The obtained focus size is of the speckle-grain (diffraction-limited) dimensions (see Fig. 4), with a peak-to-background ratio (PBR) of ∼310 (Fig. 2c). For optimizing the 2PF signal we used an iterative partitioning algorithm [22,23]. In this algorithm, the SLM was divided into 128×160 equally sized square segments. In each iteration, a phase φn of zero to 2π is added to a random subset of these segments in N = 4 steps. The 2PF signal as a function of the added phase is fitted to a cosine, I2PF(φn) = A + B·cos(φn − φ), using a fast Fourier transform, and the phase φ that maximizes the 2PF signal is added to the chosen segments. The obtained 2PF signal as a function of the iteration number is plotted in Fig. 2d. The total optimization time was limited by the refresh rate of the liquid-crystal SLM (∼5 Hz) to tens of minutes. However, the optimization time could be significantly reduced, since the integration times were of the order of 0.1 ms even in the first iterations of the optimization (see Discussion).
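One iteration of this partitioning optimization can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: the `measure_2pf` feedback function stands in for the proximal PMT/camera reading and must be supplied by the user, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
NY, NX = 128, 160   # SLM segment grid, as in the text
N_STEPS = 4         # phase steps per iteration, as in the text

def measure_2pf(slm_phases: np.ndarray) -> float:
    """Placeholder for the proximally detected total 2PF signal.
    In the experiment this is the PMT/camera reading; hardware I/O
    is abstracted away here."""
    raise NotImplementedError

def partitioning_iteration(slm_phases: np.ndarray) -> np.ndarray:
    """One iteration: step a random half of the segments through N_STEPS
    phases, fit the 2PF response to A + B*cos(phi_n - phi) via FFT, and
    apply the phase that maximizes the signal."""
    subset = rng.random((NY, NX)) < 0.5  # random subset of segments
    test_phases = 2 * np.pi * np.arange(N_STEPS) / N_STEPS
    signals = []
    for phi_n in test_phases:
        trial = slm_phases + np.where(subset, phi_n, 0.0)
        signals.append(measure_2pf(np.mod(trial, 2 * np.pi)))
    # The first FFT component of I(phi_n) = A + B*cos(phi_n - phi) equals
    # (N*B/2)*exp(-i*phi), so the signal-maximizing phase is -angle(F[1]).
    phi_opt = -np.angle(np.fft.fft(signals)[1])
    return np.mod(slm_phases + np.where(subset, phi_opt, 0.0), 2 * np.pi)
```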
3D two-photon imaging

Following the adaptive focusing, 3D imaging was performed by raster-scanning the focus. Fast scanning, with a pixel dwell-time significantly shorter than the SLM refresh rate, was achieved by using a galvanometric mirror for fast scanning in one lateral dimension. Scanning in the other lateral dimension was implemented by adding linear phase ramps to the SLM wavefront correction. Scanning in the axial dimension was achieved by adding parabolic phase patterns to the SLM wavefront correction, effectively transforming the fiber bundle into an adaptive lens. Fig. 3(a-d) shows the 2PF images obtained with the focus scanning for the two axial planes where the fluorescent objects were present. Thanks to the inherent axial sectioning of 2PF focused excitation, at each plane only the objects residing in that plane are visible (Fig. 3b,d), whereas the parts of the object residing at other axial planes do not contribute the substantial background halo that is visible in the conventional bright-field imaging performed with the reference camera (Fig. 3a,c). Fig. 3(e) shows additional two-photon images obtained at axial planes close to the plane of Fig. 3(a-b).

Characterization of the formed focus

In wavefront shaping, the focusing resolution is expected to be diffraction-limited, as dictated by the dimensions of a single speckle grain [24]. To characterize the axial resolution, the 2PF signal in the image stack presented in Fig. 3(e) was plotted as a function of the axial distance. Fig. 4a displays the trace of the maximal 2PF intensity obtained at each depth. The axial resolution, defined by the FWHM of a fit to a Gaussian, is δz = 165 ± 10 µm. The axial resolution δz in 2PF microscopy is expected to be 0.64 times the FWHM of the axial intensity profile of the focused beam. To characterize the transverse resolution, the object was removed and the focus was imaged by the reference camera. The transverse (x-y) focal spot was measured to have a FWHM of 5.4 ± 0.2 µm (Fig. 4(b)), a value which is in accordance with the measured axial resolution, and also with the expected diffraction-limited spot size dictated by the measurement geometry [18]: δx ≈ λZ/D_bundle, where D_bundle ≈ 0.45 mm is the fiber bundle's diameter, Z ≈ 2.2 mm is the distance between the fiber facet and the object in this experiment, and NA ≈ 0.35 is the fiber's numerical aperture.
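As a quick numerical check of the geometry-dictated resolution discussed above, the following snippet evaluates the speckle-grain relation δx ≈ λZ/D_bundle (our reading of the relation in ref. [18]) together with the side-lobe-limited FoV estimate x ≈ Z·λ/d; the core spacing used here is a hypothetical value, not a measured one.

```python
wavelength = 785e-9   # illumination wavelength [m]
D_bundle = 0.45e-3    # fiber bundle diameter [m]
Z = 2.2e-3            # facet-to-object distance [m]
d_core = 3.2e-6       # hypothetical core-to-core spacing [m]

dx = wavelength * Z / D_bundle    # transverse speckle-grain (spot) size
fov = Z * wavelength / d_core     # side-lobe-limited field of view
print(f"spot size ~ {dx*1e6:.1f} um, FoV ~ {fov*1e6:.0f} um")
```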
Comparison of ordered vs. disordered bundles for wavefront-shaping based endoscopy

As is visible in the focusing results of Fig. 2(c), the obtained focal spot is surrounded by six visible side-lobes, positioned in a hexagonal pattern. These side-lobes are the result of the hexagonal periodicity of the cores in the fiber bundle. Fig. 5a shows an image of the fiber facet used in the above experiments, displaying the hexagonal lattice order. The side-lobes in the formed focus are predicted by the Fourier transform of the core arrangement (Fig. 5b,c) [25,26]. To suppress the side-lobes, we tested a commercially-available fiber with a less ordered arrangement of cores (Fujikura FIGH-03-215S). Indeed, using such a disordered fiber the side-lobes are effectively suppressed (Fig. 5d-f). The main advantage of using non-ordered fiber bundles is the absence of diffraction side-lobes. When used for focus-scanning-based imaging, such side-lobes produce replicas of the object parts that are scanned by them. Thus, the side-lobes limit the transverse extent, i.e. the field of view (FoV), of the imaged objects to be smaller than x ≈ z·λ/d, where d is the core-to-core spacing [25]. Fig. 5h,i displays a 2PF image obtained with our approach using the disordered fiber, showing an accurate diffraction-limited 2PF image of the target objects (fluorescent beads). While we expected the disordered fiber to provide a considerably larger FoV, we noticed that this was not the case, as quantified by measuring the reduction in focus intensity as a function of the scan angle (Fig. 5g). This narrower 'memory-effect' angular range of this specific fiber is not an inherent limitation of disordered fibers. We attribute the smaller FoV to light propagation between the cores of this fiber, which can be observed in the lower core-to-background contrast in Fig. 5d compared to Fig. 5a. Another effect of this imperfect light guidance is a narrower spectral speckle-correlation bandwidth, which we measured for this fiber (not shown). This narrower spectral correlation bandwidth leads to a lower speckle contrast [27] when the 12 nm-wide femtosecond pulsed illumination is used, which in turn lowers the focus intensity enhancement obtained by wavefront shaping [24]. To increase the initial speckle contrast and focusing PBR using this fiber, a narrow bandpass filter (LL01-785-25, Semrock, 3 nm FWHM) was used in the illumination path in the experiments involving this fiber.

Discussion

We have demonstrated an in-situ wavefront-correction approach for two-photon micro-endoscopy. Since the wavefront correction is sensitive to the bending of the fiber, a different correction needs to be found for each fiber orientation. However, this can be performed continuously during in-vivo experiments. In our experimental implementation, the limiting factor on the time required for determining the wavefront correction was the refresh rate of the specific liquid-crystal SLM used (<5 Hz). This yielded optimization times of tens of minutes in our experiments. This can be significantly shortened by using faster SLMs, since fundamentally the optimization time is limited by the 2PF signal level. In our experiments, integration times on the order of 0.1 ms were sufficient even in the first iterations of the optimization process, and significantly lower integration times are required in the following iterations, when the signal grows. Thus, finding the wavefront correction using SLMs with higher refresh rates, such as digital micromirror devices (DMDs) [28], or galvanometric-mirror-based approaches [30], can decrease the optimization time by more than three orders of magnitude. Using more advanced algorithms, such as genetic algorithms [29], can also reduce the number of iterations required for optimization. These are expected to yield optimization times of the order of seconds or less even using fluorescent markers, as was recently demonstrated using galvanometric mirrors for 2PF microscopy through scattering tissue [30]. In general, any similar approach for wavefront correction using nonlinear feedback that was originally developed for scattering media is directly applicable to bundle-based endoscopy, since a fiber bundle can be considered a thin scattering layer [18]. After the initial focusing, image acquisition speed is limited by the galvanometric scanner speed. Faster scanners based on acousto-optic deflectors, or resonant galvanometric mirrors, can be utilized for faster scanning. An inherent limitation of wavefront-shaping-based correction is the sensitivity to fiber bending. While our proximal-only detection approach does not require initial measurements prior to the insertion of the endoscope, or any knowledge of the fiber parameters or shape, any movement of the fiber after the optimization process will hamper the correction and decrease the focus intensity. For a static fiber left untouched, we experimentally measured decorrelation times of more than 12 hours. However, small bendings and shifts of a few millimeters caused complete decorrelation of the speckle patterns and focusing intensity in both fibers used. To overcome this restriction, continuous adaptive focusing may be used to compensate for fiber movement during imaging, as small movements of the bundle translate to small shifts and small decorrelations of the focus [8]. Another possible approach is to deploy the endoscope inside a rigid cannula [31], which would significantly improve stability at the price of an increased overall endoscope diameter and a loss of mechanical flexibility. The field of view using fiber bundles with ordered cores is limited by the diffraction side-lobes of the periodic arrangement of cores [18], and by the fiber 'memory-effect' angular range.
The maximal angular FoV will be attained by using a disordered bundle having single-mode cores, with no light propagation between the cores. Unlike the memory effect in scattering media, the memory-effect range in fiber bundles is not limited by the fiber length; for an ideal bundle having single-mode cores with no coupling, the memory-effect FoV spans the entire NA of the fiber [18].

Conclusion

We have demonstrated a minimally-invasive, lensless, two-photon micro-endoscope with in-situ wavefront correction. In contrast to prior works, our technique does not require any distal access or prior characterization, making it an interesting potential solution for imaging or optogenetic stimulation [32]. We used the nonlinearity of the two-photon excitation process to generate a focal spot; other nonlinear mechanisms, such as three-photon fluorescence, second-harmonic generation, or stimulated Raman scattering, may also be used. Using an SLM with a faster refresh rate may allow our approach to be used in studies of freely behaving animals.
Perovskite Thin Single Crystal for a High Performance and Long Endurance Memristor

Metal halide perovskites (MHPs) exhibit electronic and ionic characteristics suitable for memristors. However, polycrystalline thin-film perovskite memristors often suffer from reliability issues due to grain boundaries, while bulk single-crystal perovskite memristors struggle to achieve high LRS/HRS ratios. In this study, a single-crystal memristive device utilizing a wide-bandgap perovskite, MAPbBr3, in a high surface-to-thickness configuration is introduced. This thin single crystal overcomes these challenges, exhibiting a remarkable LRS/HRS ratio of up to 50 and an endurance of 10³ cycles, representing one of the highest reported values to date. This exceptional stability enables analysis of the electroforming process and the LRS through impedance spectroscopy, providing insights into the underlying operational mechanism. To the best of our knowledge, this is the first reported thin single-crystal MHP memristor device and the first time that the electroforming process has been recorded through impedance spectroscopy. The device's outstanding stability and performance position it as a promising candidate for high-density data storage and neuromorphic applications.

Introduction

MHPs form a new family of semiconductors that offer energetically favorable fabrication methods and cost-effective performance. MHPs found their first niche application in the field of optoelectronics, becoming a promising candidate for new-generation photovoltaic technology due to their high absorption [3]. These properties have enabled MHP solar cells to reach power conversion efficiencies (PCE) >25% on a lab scale [4-7]. As a direct result, the presence of hysteresis in current-voltage (jV) curves has been a recurring fingerprint of MHP solar cells, leading to substantial differences between the forward scan (FS, from the short-circuit current density J_sc to the open-circuit voltage V_oc) and the reverse scan (RS, from V_oc to J_sc) performance [10]. Along with this ionic influence on the resistance, capacitive effects have also been widely reported in the perovskite literature, using techniques in the frequency domain, such as impedance spectroscopy (IS) [11-13], or in the time domain [14,17-19]. Memristors are devices based on the resistive switching phenomenon governing the hysteresis response: the transition from a high-resistance state (HRS) to a low-resistance one (LRS) when a specific voltage is applied (ON), which can be restored to the initial state by applying a specific voltage of the opposite sign (OFF). This switchable internal resistance is the feature that allows information to be encoded [22]. The rich variety of charge dynamics resulting from combining charge carriers and ionic migration allows MHPs to meet the demanding requirements of a superior memristor device [25,26]. Although research on MHP-based memristors shows their potential as an alternative to the conventional oxide industry [27], there are still many challenges to be solved, such as endurance and reliability. Most of the work published so far is based on polycrystalline thin films [28,29], bulk halide perovskite crystals [30,31], or perovskite nanocrystals [32].
In most of these devices, the unstable response of the steady states limits access to the working mechanisms. As a result, the exact nature of their working principles is still under debate, and a precise characterization of their electroforming process (the activation of a pristine device into a memristive one) has not been well detailed. In the case of polycrystalline thin films, the challenge remains to obtain high uniformity and a smooth surface, due to the solution processing, making it difficult to achieve identical electrical characteristics for each cell. The presence of grain boundaries and frequent pinholes in polycrystalline thin films is the origin of parasitic leakage currents, favoring degradation pathways that compromise the stability and reliability of perovskite materials as commercial memristors. In addition, the nanometric thickness of polycrystalline films favors a large intrinsic electronic current, which diminishes the ionic migration needed for the memory behavior. To partially overcome these issues, MHP single-crystal memristors have been developed. Single crystals can provide considerable ionic transport and low operational current due to the absence of grain boundaries and a lower density of defect states. This can be used to design memristive devices with fast switching speed, low energy consumption, and higher endurance. However, the usual thickness of bulk crystals, on the millimetre scale, can also hamper some optoelectronic properties of the material [35]. Table 1 shows an overview of different MHP memristive devices. For simple structures (contacts and one buffer layer or less), the devices are characterized by a high LRS/HRS ratio but with high set/reset potentials and low endurance times. There is only one exceptional device with outstanding memristive properties and a simple device architecture, ref. [46]. Thus, devices with a simple structure need to improve their endurance times while keeping high LRS/HRS ratios. This bottleneck is solved by adding several layers of different materials, such as Pt, Pd, polymethyl methacrylate (PMMA), SiO₂, etc. This method improves the endurance time while maintaining a decent LRS/HRS ratio, but manufacturing costs increase significantly, adding the implicit costs of fabricating the devices in glove boxes plus the encapsulation requirement. It should be noted that devices whose LRS/HRS ratio is <10 have not been added to this table. In this work, we unify the advantages of perovskite polycrystalline thin films with the stability of a single-crystal material. We have developed an extremely stable perovskite memristor, with a simple architecture and extraordinary endurance, while keeping a reasonable LRS/HRS ratio. The device fabrication process is performed under ambient conditions without encapsulation. A wide-bandgap perovskite, MAPbBr₃, is selected to favor ionic transport in a thin single crystal (TSC) for the memristor device. Exceptionally long operation times are achieved: the current stability reaches 10⁴ s, with an endurance (set/reset switching) on the order of 10³ cycles. This value, measured under ambient conditions, is one of the highest endurance values reported to date. The stability of the device enables the use of impedance spectroscopy (IS) as a tool to unveil the nature of the electroforming process governing the resistive switching. We analyze the impedance spectra corresponding to both the electroforming process and the LRS, confirming the role of ionic motion in the operation mechanism.
Crystallization Method and Design of the Thin Single Crystal Perovskite Memristor

TSCs are monocrystals grown in a confined space where the thickness, and therefore the associated properties, can be easily controlled [47,48]. Thinner crystals are typically more flexible than thicker ones and can be bent or stretched without cracking, which is a great advantage for potential flexible memristors. In this confined method, the crystal is grown from a 1.8 M solution of PbBr₂ and MABr in dimethylformamide (DMF):dimethyl sulfoxide (DMSO) 10:1 vol:vol, inserted between two poly(triaryl amine) (PTAA)-covered indium tin oxide (ITO) substrates. The sandwiched structure is then heated on a hot plate under ambient conditions, following the inverse temperature crystallization (ITC) method (a growth method based on the retrograde solubility behavior of perovskite materials), reaching 60 °C with a temperature ramp of 15 °C h⁻¹ to precisely control the nucleation and growth of single crystals (see Figure S1, Supporting Information). Reproducible single crystals are achieved with thicknesses between 20 and 30 μm and an area of ≈6 mm² (see Figure S2, Supporting Information). This micrometre-scale single crystal, in contrast to nanometer-scale polycrystalline thin films, could provide a decreased defect density, thus reducing the surface recombination and leading to a higher charge-carrier lifetime than in thick single crystals (Figure S3, Supporting Information) [34]. The perovskite composition selected for this work is MAPbBr₃. It is chosen for its intrinsic properties, such as high ion mobility and both ambient and operational stability, which are beneficial for memristor applications. After the crystallization process, the sandwiched configuration is carefully opened with a blade, leaving the crystal attached to just one of the PTAA surfaces. To perform the electronic characterization, a graphite spray was coated on top of the crystal and on the ITO contact to complete the device (Figure 1a). This carbon-based electrode presents several advantages in terms of sustainability, efficiency, cost, and stability compared with metal electrodes. Its scalability and compatibility with flexible substrates complete the benefits of this selection [49,50]. The electrical measurements were carried out using a customized sample holder specifically designed for this purpose (see Figure S4, Supporting Information). To study the structural properties of the TSC, X-ray diffraction (XRD) analysis is performed to corroborate its monocrystalline nature. Sharp and intense XRD reflections along the (100) and (200) planes confirm the formation of a highly crystalline perovskite cubic phase (Figure 1b). Representative scanning electron microscopy (SEM) images of the device are shown in Figure 1c. The SEM images confirm a thickness of ≈30 μm for the perovskite crystal; the graphite top contact is ≈15 μm thick. PTAA, with a thickness of 20 nm, and ITO are shown in the zoomed SEM image. To verify the smoothness of the TSC, we measured photoluminescence mapping to corroborate the surface optical properties of the TSC (Figure 1d). The scan was performed within an area of 60 μm², as a representation of the complete TSC surface of 6-7 mm².
Memristor Performance of the Thin Single Crystal Perovskite Device

The structure of a typical resistive switching memristor consists of a top electrode, a semiconducting layer, and a bottom electrode, forming a two-terminal system. In our case, the anode contact is connected to the carbon-based electrode placed on top of the perovskite, while the cathode is fixed to the carbon electrode deposited on the ITO substrate. Most memristors need a first activation step known as electroformation. This process refers to the formation of a conducting path that allows the transition from HRS to LRS. Once the conducting path is formed under the external bias (electroforming process), the device turns into the LRS. When the opposite bias is applied, the RESET process occurs due to the breaking of the conducting path, and the HRS is recovered. If the ions do not return to their original places, the conduction paths built during the electroforming process are only partially broken, and then, after the first RESET, turning the HRS into the LRS requires less power. After this, the set/reset voltages of the device stabilize. The mechanism of this process is still under debate due to the difficulties in studying the in-situ formation of this abrupt and dynamic process. However, we provide a more in-depth view of this mechanism with the IS analysis proposed below. After the device has been activated through the electroformation process (see Figure S5, Supporting Information), we perform cyclic voltammetry, applying a voltage sweep of 0 V → +2 V → 0 V → −2.7 V → 0 V to evaluate the initial performance of our memristor device (Figure 2a). Along with charge-carrier injection and transport, the applied voltage also redistributes the ionic species or vacancies along the crystal thickness. This ionic movement is key to triggering the different resistive states of the memristor, with a mechanistic origin that is currently under debate and falls beyond the scope of this manuscript (different proposed interpretations include filament formation or interfacial adjustments [43,51], among others [52]). In our case, as the voltage sweeps from 0 to 2 V, a sudden current increase arises at 0.6 V. At this set voltage the device exhibits the transition from HRS to LRS, reaching current values on the order of milliamperes. To fulfill the non-volatile nature of the device, the memristor must maintain its resistance state within a voltage range. To test this, we measure a reverse sweep to negative voltages (from 0 to −2.7 V). At −2.3 V the transition from LRS to HRS occurs; this becomes the reset voltage, accompanied by a current drop down to the μA range (Figure 2a). To confirm the stability of the device under stressful conditions, 100 jV cycles are performed (Figure 2a). Cycles 1, 30, 50, 90, and 100 are displayed in solid color, while the remaining cycles are shown as translucent lines for a clearer overview of the graph. The similarity between the first and last cycles (black and yellow lines, respectively) proves the operational reliability of the device. Figure S6 (Supporting Information) displays 10 jV curves of different devices, proving the reproducibility of the device. Most of the devices show an abrupt transition from LRS to HRS. This transition occurs mainly at −2.3 V, after a previous reduction of the reset voltage. This reduction takes at most three cycles, with the voltage decreasing from −4 to −2.3 V before finally stabilizing. The set voltage is much more stable and occurs at 0.6 V in each device.
To compare the cyclic voltammetry performance of our TSC with a polycrystalline device, we fabricated a MAPbBr₃ polycrystalline thin-film memristor using the same architecture. As seen in Figure S7 (Supporting Information), the monocrystal configuration shows a superior LRS/HRS ratio and higher current values compared with the polycrystalline device. One of the most critical challenges in perovskite memristors is to achieve high endurance while keeping the other outstanding properties in a simple-architecture device. Endurance refers to the ability of a memristor to sustain repeated switching cycles without significant degradation or loss of performance. It is particularly important because repeated switching cycles can introduce various physical phenomena, such as electromigration, thermal effects, material fatigue, and degradation of the memristor [53,54], leading to changes in the memristor's resistance or conductance and resulting in variations in its behavior and potential failure. Figure 2b shows the endurance test based on a sequence of voltage steps of +0.1 V → +1 V → +0.1 V → −2.5 V, where 0.1 V is the reading voltage, 1 V is the set voltage, and −2.5 V acts as the reset voltage. The current is measured at 0.1 V, alternating between the HRS and LRS. During the first 250 s, the HRS reading increased from 0.1 to 0.3 mA. Subsequently, the HRS current decreases and its dispersion increases, reaching currents between 0.01 and 0.1 mA. The LRS/HRS ratio oscillated between 10 (during the first 250 s) and 80 (for the rest of the measurement). The LRS presents high current stability, keeping the same order of magnitude after 10³ cycles. Such results are comparable with those of some perovskite devices with more sophisticated architectures [55]. Figure 2c displays the retention-time test, measuring the current continuously at 0.1 V for both states: first the LRS (blue line) and subsequently the HRS (orange line), over 10⁴ s. The HRS current transits from 50 to 80 μA during the first 500 s and subsequently stabilizes at 80 μA for the rest of the measurement. Meanwhile, the current in the LRS is highly stable over the 10⁴ s, with an LRS/HRS ratio of 25, which is another indicator of optimal performance for a ReRAM device [56]. Finally, the resistance transition is shown in Figure 2d, evidencing the abrupt change from HRS to LRS at 0.5 V and the reverse transition at −2.0 V. The slope close to unity translates into a quick evolution between both states, a result expected for memory devices [57]. Additionally, the time response of the HRS-to-LRS transition is shown in Figure S8 (Supporting Information): the transition time is 15 ms. Unfortunately, to the best of our knowledge, the time response is not usually reported for MHP memristors, so no comparison can be made.
Impedance Characterization of the Electroforming Process

As mentioned above, memristors undergo resistive switching via the electroformation process, whereby a conductive path is established within the perovskite material, resulting in the LRS. Understanding this process is key to investigating the factors affecting the switching kinetics, endurance, and variability of perovskite memristors, needed to realize their full potential in computing and memory applications [60]. Two general possibilities can be considered, as reported in the literature: i) an electrochemical metallization mechanism (ECM) through metal cations, and ii) a valence change mechanism (VCM) based on halide vacancies. In halide perovskite-based memristors, the route can involve either metal cations for the ECM or halide vacancies for the VCM [28,61,62]. IS consists of the electrical measurement of the current-voltage response of the device at a steady-state potential (V_dc) overlapped with a small perturbation (V_ac) over a range of frequencies. The data obtained are reflected in the complex impedance that the device presents against that perturbation, depending on the frequency region. These results can be interpreted by fitting to an equivalent circuit. The capacitance in the IS response of metal halide perovskite devices [63], particularly in the low-frequency range [64], is linked to the type of current-voltage hysteresis, which can be classified as normal (NH) or inverted (IH). The latter, resulting in a higher current in FS than in RS, has been associated with the negative capacitance feature, a distinctive feature also found in neuronal models [65,66]. Consequently, IS analysis can be of great help in determining the specific processes taking place during the HRS and LRS. IS can be used to analyze the behavior of the device during the electroforming process by measuring at different V_dc, reproducing a current-voltage curve from low to high applied bias. The pristine device, with no previous treatment, is gradually subjected to rising voltages, from 0 to 4.5 V. All the Nyquist plots present one arc in the high/intermediate-frequency (HF, IF) region, which is attributed to bulk processes, and a characteristic loop toward the fourth quadrant in the low-frequency (LF) domain. This reduction of Im(Z) to values <0 at low frequencies can be observed in different configurations of halide perovskite-based devices, both experimentally and in drift-diffusion simulations [67,68]. It is commonly referred to as an "inductive loop" or "negative capacitance" [69] and can be modeled by a chemical inductor parameter [70,71].
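To make the role of this inductive branch concrete, below is a minimal sketch of the impedance of one plausible equivalent circuit containing a chemical-inductor (R_L-L) branch, together with the associated kinetic time constant τ_kin = L/R_L. The circuit topology and all component values here are illustrative assumptions; the exact EC used in this work is given in the Supporting Information.

```python
import numpy as np

def Z_ec(omega, Rs, Rb, Cb, RL, L):
    """Impedance of a sketch equivalent circuit: series Rs, followed by the
    parallel combination of Rb, Cb, and a chemical-inductor branch (RL + jwL)."""
    jw = 1j * omega
    y = 1.0 / Rb + jw * Cb + 1.0 / (RL + jw * L)   # admittance of the parallel block
    return Rs + 1.0 / y

def tau_kin(RL, L):
    """Kinetic time constant of the inductive (low-frequency) branch."""
    return L / RL

omega = 2 * np.pi * np.logspace(-2, 6, 400)        # 10 mHz .. 1 MHz
Z = Z_ec(omega, Rs=50, Rb=1e4, Cb=1e-9, RL=5e3, L=500)   # illustrative values
print(f"tau_kin = {tau_kin(5e3, 500):.2f} s")      # 0.10 s, within the reported range
```

With these illustrative values the fourth-quadrant loop appears at low frequencies (Im(Z) < 0), and τ_kin falls in the 10⁻¹-10⁻² s window discussed below.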
It is worth remarking that, in classical semiconductor theory, a conventional electromagnetic induction effect in series would instead define the shape of the high-frequency part of the spectra [72]. A clear evolution of the IS is observed (Figure 3a) for each applied V_dc: as the voltage increases, both the high-frequency arcs and the inductive behavior become smaller, leading to a quite distorted spectrum when reaching 4.5 V. The I_dc current produced during the IS is recorded to ensure the stability of the sample during the measurement. Figure 3b shows currents of 10⁻⁶ A at low V_dc, increasing up to 10⁻³ A as the bias approaches 4.5 V. This demonstrates that the electroforming process is occurring during the impedance measurement (orange points in Figure 3b). The rise of the current by more than two orders of magnitude, together with the change in the impedance spectra, clearly indicates the appearance of a process that leads to an ON state, or LRS. The applied bias is then decreased from 4.5 to 0 V. The I_dc in this case shows that the LRS is maintained, as it remains in the mA range at every V_dc (blue points in Figure 3b). The presence of the inductive element leads to a slow time constant τ_kin (see Supporting Information for details) that provides information about the ionic motion occurring in the LF domain. We then proceed to fit the IS with the equivalent circuit (EC) (inset of Figure 3d) including the R_L-L branch, through which τ_kin is defined. Recently, several publications have confirmed the usefulness of this method for interpreting the behavior of MHP memristors, introducing a general "neuron-style" model that includes a slow time constant that can remain nearly independent of the voltage [71,73,74]. The calculated kinetic time decreases with increasing bias, indicating that the electroforming process is correlated with a change in the time domain from 10⁻¹ s to 10⁻² s when the LRS is reached. These times can be attributed to ionic accumulation and halide-vacancy diffusion, respectively, as previously reported in the literature [75]. Therefore, the electroforming process is initiated by an accumulation of ions that gradually evolves to stages determined by the faster processes of ion diffusion. A similar sequence has recently been reported for memristors based on metal oxides, suggesting a similar mechanistic basis, as demonstrated by IS [76]. This finding confirms the ionic role in the memristor mechanism and marks the first time that the electroforming process has been recorded in situ through IS.

Conclusion

We have successfully fabricated the first thin single-crystal perovskite memristor, which combines the advantageous properties of monocrystalline materials and thin-film perovskites. Through the confined inverse temperature crystallization (ITC) method, using ITO and PTAA as the bottom electrode and a graphite spray as the metal contact, we achieved a highly stable device with exceptional performance. Our memristor exhibited a remarkable ON/OFF ratio of 10 and an endurance of 10³ cycles, representing one of the highest values reported in the literature. The device's stability under ambient conditions, without the need for encapsulation, allowed us to perform a comprehensive impedance analysis. Notably, we successfully recorded the electroforming process in situ, shedding light on the mechanism governing perovskite memristor operation.
By quantifying the gradual kinetic changes (τ_kin) related to the low-frequency response, we uncovered the modification of ionic motion during the electroforming process that leads to a three-order-of-magnitude variation in the current response. The calculated kinetic times show a decreasing trend: the electroforming goes from long times to shorter times, which could be related to different ion dynamics (from accumulation to diffusion). These findings significantly contribute to our understanding of perovskite memristor behavior. Moreover, our fabrication approach offers a simple and efficient method for producing efficient and reliable perovskite memristors. Overall, this work represents a significant step forward in advancing perovskite memristor technology, opening new possibilities for high-performance, stable devices for future applications.

Experimental Section

Materials: Lead bromide (PbBr₂) was purchased from TCI. Methylammonium bromide (CH₃NH₃Br) was purchased from GreatCell. Polytriarylamine (PTAA) was purchased from Ossila. N,N-dimethylformamide (DMF) and dimethyl sulfoxide (DMSO) were purchased from Thermo Scientific. Toluene was purchased from VWR. Graphite spray was purchased from RS. All reagents were used directly without further purification.

Fabrication Process: Monocrystal devices: The 1.8 M MAPbBr₃ precursor solution was prepared by dissolving a 1:1 molar ratio of PbBr₂ and CH₃NH₃Br in a 10:1 ratio of DMF:DMSO in a vial with vigorous shaking. PTAA was dissolved in toluene at 2 mg mL⁻¹. All solutions were filtered using a 0.2 μm PTFE syringe filter before deposition. Glass/ITO substrates were cleaned with acetone, detergent, deionized water, and absolute ethanol in an ultrasonic bath, for 10 min per step. After drying with a strong air flow, the substrates were treated in a UV-ozone cleaner for 20 min. Subsequently, 60 μL of the prepared PTAA solution was deposited onto the ITO substrate by spin-coating at 4000 rpm for 30 s under room conditions. Immediately afterwards, the substrates were annealed at 100 °C for 10 min. Then, 40 μL of the MAPbBr₃ solution was dropped onto the as-prepared substrate and enclosed by another PTAA-coated substrate. The space-confined inverse temperature crystallization method was used for growing the thin single crystals [77]. The temperature was increased from 25 to 60 °C with a ramp of 15 °C h⁻¹, and the substrates were held at 60 °C for 48 h. After cooling to room temperature, the substrates were separated with a blade. Finally, the graphite electrodes were added on the MAPbBr₃ and the ITO by depositing 10 μL of the solution with a micropipette. The solution evaporates ≈60 s after deposition, forming solid graphite that acts as an electrode. It is important to underline that the device is fabricated outside a glovebox.

Polycrystal devices: A 1.4 M solution of MAPbBr₃ dissolved in DMF:DMSO (1:4, vol:vol) was added onto the ITO substrates treated with PTAA, as previously described. Subsequently, the solution was spun with a ramp of 1000 rpm for 10 s and 4000 rpm for 40 s, adding 1 mL of toluene 12 s into the second ramp. The device was then annealed for 30 min. Finally, the graphite electrodes were deposited on the MAPbBr₃ and the ITO. The entire polycrystalline device fabrication was performed in a glovebox.
Characterization: The crystalline structure was assessed by XRD collected on a Bruker D8 Advanced X-ray diffractometer with copper Kα radiation (λ = 1.5418 Å) at a scan rate of 5° min⁻¹ for 2θ angles ranging from 12° to 40°. Scanning electron microscope (SEM) images were captured by an S-4800 instrument from HITACHI (Tokyo, Japan) operating at 2 kV. Photoluminescence spectra were acquired using a confocal micro-photoluminescence (micro-PL) spectroscopy system, with the samples placed on the cold finger of a vibration-free closed-cycle cryostat (AttoDRY800 from Attocube AG). Excitation and detection were carried out using a 50X long-working-distance microscope objective with a numerical aperture of NA = 0.42, placed outside the cryostat. The sample's emission was long-pass filtered, dispersed by a double 0.3 m focal-length grating spectrograph (Acton SP-300i from Princeton Instruments), and detected with a cooled Si CCD camera (Newton EM-CCD from ANDOR) for recording micro-PL spectra, and with a silicon single-photon avalanche photodiode detector (from Micro Photon Devices) connected to a time-correlated single-photon counting electronic board (TCC900 from Edinburgh Instruments) for micro-TRPL measurements. Impedance spectroscopy (IS) and electrical characterization were measured using a Gamry Interface 1010E.

Figure 1. a) Scheme of the complete device for memristor application, using graphite electrodes for the electrical measurements. The inset shows a photograph of the TSC grown onto the PTAA layer through the confined ITC method. b) X-ray diffraction showing the peak positions corresponding to the cubic phase: (100) at 15° and (200) at 30°. c) Cross-section SEM image of one device. d) Photoluminescence mapping of the TSC surface showing high homogeneity.

Figure 2. a) Cyclic voltammetry curves in log scale showing SET and RESET processes during 100 cycles at a scan rate of 100 mV s⁻¹. b) Endurance performance test upon applying cyclic sweep voltages (+0.1 V → +1 V → +0.1 V → −2.5 V → +0.1 V). c) Evolution of the current stability performed at LRS (blue line) and HRS (orange line) at 0.1 V read voltage. d) Resistance measurement during the transition from HRS to LRS and vice versa.

Figure 3. a) Impedance spectra showing the evolution of the response with the applied bias leading to the electroforming process. The fitting is shown only for 0.3 V, for clarity. b) DC current corresponding to the electroforming process (orange dots) and the LRS (blue dots). c) R_L and L elements quantified through the fitting using the EC (inset of Figure 3d) (see Supporting Information). d) Kinetic times (τ_kin) corresponding to the negative-capacitance features present during the electroforming process.

Table 1. Overview of different memristor devices based on polycrystalline MHPs.
An epilepsy classification based on FFT and fully convolutional neural network nested LSTM

Background and objective: Epilepsy, which is associated with neuronal damage and functional decline, typically presents patients with numerous challenges in their daily lives. An early diagnosis plays a crucial role in managing the condition and alleviating the patients' suffering. Electroencephalogram (EEG)-based approaches are commonly employed for diagnosing epilepsy due to their effectiveness and non-invasiveness. In this study, a classification method is proposed that uses fast Fourier transform (FFT) feature extraction in conjunction with convolutional neural network (CNN) and long short-term memory (LSTM) models.

Methods: Whereas most methods classify epilepsy with traditional frameworks, we propose a new approach to this problem: extracting features from the source data and then feeding them into a network for training and recognition. The method preprocesses the source data into training and validation sets and then uses a CNN and LSTM to classify the data.

Results: Upon analyzing a public test dataset, the top-performing features among the three types evaluated in the fully convolutional nested LSTM model for epilepsy classification are the FFT features. Notably, all conducted experiments yielded high accuracy rates, with values exceeding 96% for accuracy, 93% for sensitivity, and 96% for specificity. These results are further benchmarked against current methodologies, showcasing consistent and robust performance across all trials. Our approach consistently achieves an accuracy rate surpassing 97.00%, with values ranging from 97.95 to 99.83% in individual experiments. Particularly noteworthy is the superior accuracy of our method in the AB versus (vs.) CD comparison, registering at 99.06%.

Conclusion: Our method exhibits precise classification abilities distinguishing between epileptic and non-epileptic individuals, irrespective of whether the participant's eyes are closed or open. Furthermore, our technique shows remarkable performance in effectively categorizing epilepsy type, distinguishing epileptic ictal and interictal states from non-epileptic conditions. An inherent advantage of our automated classification approach is its ability to disregard whether EEG data were acquired with eyes closed or open. Such innovation holds promise for real-world applications, potentially aiding medical professionals in diagnosing epilepsy more efficiently.
Introduction

Epilepsy is a very common neurological disorder that affects roughly 50 million people worldwide (Tuncer et al., 2021; World Health Organization, 2021). It is characterized by abnormal electrical activity in the nerve cells of the brain, resulting in recurrent seizures, unusual behavior, and possibly loss of consciousness (Fisher et al., 2014; Ozdemir et al., 2021). In the worst-case scenario, it can result in permanent harm to the patient's life. Up to 70% of individuals with epilepsy could live seizure-free if properly diagnosed and treated. Therefore, a timely and accurate diagnosis method for epilepsy is essential for all patients and doctors. In clinical practice, doctors diagnose epilepsy by using patients' medical records, conducting neurological examinations, and employing various clinical tools such as neuroimaging and EEG recording. However, this analysis is considered complex due to the presence of patterns in the EEG that can be challenging to interpret, even for experienced experts. This complexity can lead to different opinions among experts regarding EEG findings, necessitating complementary examinations (Oliva and Rosa, 2019; Oliva and Rosa, 2021). To address the time-consuming nature of visual analysis and the errors caused by visual fatigue when reviewing ever-longer continuous video-EEG recordings, numerous automatic methods have been developed.

Various methods have been proposed in the past three decades for the automatic identification of epileptic EEG signals (Ghosh-Dastidar and Adeli, 2009; Sharma et al., 2014; Shanir et al., 2018; Truong et al., 2018). Machine learning (ML) methods can be used to build effective classifiers for automatic epilepsy detection. These automatic seizure detection methods mainly comprise two steps: feature extraction and classifier construction. Feature extraction covers the time domain (T) (Jaiswal and Banka, 2017; Gao et al., 2020; Wijayanto et al., 2020), frequency domain (F) (Altaf and Yoo, 2015; Kaleem et al., 2018; Singh et al., 2020), time-frequency domain (TF) (Tzallas et al., 2007; Abualsaud et al., 2015; Feng et al., 2017; Shen et al., 2017; Goksu, 2018; Sikdar et al., 2018; Yavuz et al., 2018), and a combination of nonlinear approaches (Zeng et al., 2016; Ren and Han, 2019; Sayeed et al., 2019; Wu et al., 2019). In addition, various types of entropy, such as fuzzy entropy (Xiang et al., 2015), approximate entropy, sample entropy, and phase entropy (Acharya et al., 2012), have been calculated from EEG signals to distinguish different epileptic EEG segments. Automatic seizure classifiers include the support vector machine (SVM) (Subasi and Ismail Gursoy, 2010; Das et al., 2016; Şengür et al., 2016; Li and Chen, 2021), convolutional neural network (CNN) (Feng et al., 2017; Wijayanto et al., 2020; Ozdemir et al., 2021), extreme learning machine (Yuan et al., 2014), K-nearest neighbor (Guo et al., 2011; Tuncer et al., 2021), deep neural network (Sayeed et al., 2019), and recurrent neural network (Yavuz et al., 2018). Gotman (1982) proposed the first widely used method, based on decomposing the EEG into elementary waves and detecting paroxysmal bursts of rhythmic activity with a frequency between 3 and 20 cycles per second. This method was further improved by the same group, who broke EEG signals down into half-waves and then extracted features such as peak amplitude, duration, slope, and sharpness to detect seizure activity (Gotman, 1990). Jaiswal and Banka (2017) primarily used time-domain features such as local neighborhood descriptive
patterns and one-dimensional local gradient patterns for epilepsy detection. Gao et al. (2020) and Wijayanto et al. (2020) extracted approximate entropy as a feature and combined it with recurrence quantification analysis to detect epilepsy; this approach achieved an accuracy of 91.75% on the Bonn dataset (Andrzejak et al., 2001). Wijayanto et al. (2020) used the Higuchi fractal dimension (HFD) to differentiate between ictal and interictal conditions in EEG signals. Many researchers focused on time-domain features, while others concentrated on frequency-domain, time-frequency-domain, and nonlinear approaches. Altaf and Yoo (2015) combined feature extraction with classification engines, implementing multiplexed bandpass filter coefficients for feature extraction; a nonlinear SVM was subsequently used, achieving a sensitivity of 95.1%. Kaleem et al. (2018) developed a method based on a signal-derived empirical mode decomposition (EMD) dictionary approach. The integrated time-frequency method has been widely used for feature extraction in various approaches. For instance, Abualsaud et al. (2015) successfully detected epilepsy from compressed and noisy EEG signals using the discrete wavelet transform (DWT), achieving an accuracy of 80% at SNR = 1 dB. Feng et al. (2017) extracted features from a three-level Daubechies discrete wavelet transform. Shen et al. (2017) employed a genetic algorithm to select a subset of 980 features and used six SVMs to classify EEG data into four types: normal, spike, sharp wave, and seizure. Sikdar et al. (2018) proposed a multifractal detrended fluctuation analysis (MFDFA) to address the multifractal behaviors in healthy (Group B), interictal (Group D), and ictal (Group E) patterns. Yavuz et al. (2018) extracted mel-frequency cepstral coefficients (MFCCs) as features and applied them in a regression neural network. Goksu (2018) extracted log energy entropy, norm entropy, and energy from wavelet packet analysis (WPA) as features and used a multilayer perceptron (MLP) as a classifier, achieving commendable performance. Some researchers have used nonlinear or mixed features as classification criteria. Zeng et al. (2016) extracted sample entropy, permutation entropy, and the Hurst index from EEG segments selected through an ANOVA test, and classified them with four classifiers (decision tree, K-nearest neighbor, discriminant analysis, and SVM). Ren and Han (2019) extracted both linear and nonlinear features and classified them using an extreme learning machine. Sayeed et al. (2019) employed DWT, Hjorth parameters, statistical features, and a machine learning classifier to differentiate between ictal and interictal EEG patterns. These feature-extraction-based methods are influenced by intrinsic characteristics of the EEG, such as muscle activity and eye movements, which may introduce noise into the original EEG data, potentially altering its actual characteristics (Hussein et al., 2019; Li et al., 2020). To address these challenges, many deep learning models have been developed for automatic epileptic seizure detection.
While other approaches have been proposed in the literature for epilepsy classification (Joshi et al., 2014; Zhu et al., 2014; Hassan et al., 2016; Indira and Krishna, 2021; Qaisar and Hussain, 2021), the prevailing trend involves the application of deep learning techniques (Yuan et al., 2017; Acharya et al., 2018; Tsiouris et al., 2018; Ullah et al., 2018; Covert et al., 2019; Li et al., 2020; Ozdemir et al., 2021). The structure of this paper is as follows: Section 2 gives a brief overview of the dataset, outlines the proposed method, and introduces the classifier used. Section 3 presents the results and compares them with other methods. Section 4 discusses the proposed approach, while Section 5 highlights the main conclusions, contributions, and potential future directions.

2 Materials and methods

Epilepsy dataset

The EEG dataset used to evaluate the epilepsy classification performance is from the University of Bonn (Andrzejak et al., 2001). This comprehensive dataset includes EEG signals from both healthy individuals and those with epilepsy, with recordings taken under various conditions, such as eyes open and closed, intracranial and extracranial potentials, and interictal and ictal states. The dataset is divided into five subsets labeled A, B, C, D, and E, each containing 100 single-channel EEG signal segments. Each signal segment is 23.6 s long and sampled at a rate of 173.61 Hz. Subsets A and B were recorded as surface EEG from five healthy volunteers with eyes open and closed, respectively, following the standard electrode placement scheme of the international 10-20 system. Subsets C, D, and E consist of intracranial recordings from five epileptic patients, with set D representing recordings from the epileptogenic zone, set C from the hippocampal formation of the opposite hemisphere, and set E exclusively containing seizure recordings. Subsets C and D correspond to epileptic interictal states, while set E captures ictal activity. Further details can be found in Table 1. Each EEG set in the dataset contains 100 segments, each comprising 4,096 points. However, since the classifier uses a CNN, having more segments in the dataset is crucial for the algorithm's performance. To address this issue, we divide each EEG segment into four epochs, each comprising 1,024 points. As a result, the original dataset is transformed into one containing five classes (A, B, C, D, and E), with 400 segments each having 1,024 sampling points (Pachori and Patidar, 2014; Figure 1), as sketched in the code below.
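A minimal sketch of this segmentation step, with random data standing in for one Bonn subset:

```python
import numpy as np

def split_segments(eeg_set, n_epochs=4):
    """Split each 4,096-point Bonn segment into four 1,024-point epochs.

    eeg_set : array of shape (100, 4096), one subset (A-E) of the Bonn data.
    Returns an array of shape (400, 1024).
    """
    n_seg, n_pts = eeg_set.shape          # (100, 4096)
    epoch_len = n_pts // n_epochs         # 1024
    return eeg_set.reshape(n_seg * n_epochs, epoch_len)

# Example with random data standing in for subset A:
a = np.random.randn(100, 4096)
print(split_segments(a).shape)            # (400, 1024)
```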
To determine the performance and accuracy of the epilepsy classification algorithm, nine binary classification tasks are designed as follows: A vs. E, B vs. E, AB vs. E, C vs. E, D vs. E, CD vs. E, AB vs. CD, AB vs. CDE, and ABCD vs. E. A vs. E and B vs. E can determine whether eye closure or opening influences epilepsy detection. AB vs. E, together with A vs. E and B vs. E, can assess the impact of additional EEG data on epilepsy detection. C vs. E evaluates the method's performance in distinguishing interictal from ictal patterns. D vs. E examines the method's effectiveness in classifying interictal versus ictal patterns and explores the relationship between brain activity and the hippocampal formation of the opposite hemisphere. C vs. E and D vs. E together can identify which EEG component (epileptogenic zone or opposite hemisphere) is more effective for classifying interictal and ictal patterns. C vs. E, D vs. E, and CD vs. E investigate the influence of additional EEG data on interictal-ictal detection. AB vs. CD tests the method's ability to differentiate healthy volunteers from epileptic patients in the interictal state. AB vs. CDE assesses the method's capability to distinguish healthy volunteers from epileptic patients. ABCD vs. E evaluates the method's capacity to differentiate seizure-free recordings from those containing seizures. All of these binary classification tasks are designed to enhance the effectiveness of the experiments.

Methods

The proposed automatic system for epilepsy classification is based on FFT feature extraction, a CNN, and an LSTM.

FFT

Three approaches are selected for comparison to determine an optimal feature for binary classification: FFT, wavelet transform (WT), and EMD features. The discussion section compares the proposed methods with other approaches to assess their performance. The widely used convolution theorem asserts that circular convolutions in the spatial domain are equivalent to pointwise products in the Fourier domain. Matrix generation plays a crucial role in the proposed framework as a means of quantitatively describing EEG records. The information contained in the EEG record matrix is shaped by the fast Fourier transform (FFT) during classification tasks. The classical FFT comprehensively describes and analyzes EEG traces in the frequency domain (Samiee et al., 2015). To effectively extract valuable features from epileptic EEG signals, this FFT-based method is employed to convert an EEG signal into a matrix. The steps involved are outlined below.

Step 1: obtain the Fourier coefficients for a given signal x(n) in the frequency range [0, 2π] using the discrete Fourier transform algorithm. The discrete Fourier transform is defined as equation (1):

X(k) = Σ_{n=0}^{M−1} x(n) e^{−j2πnk/M}, k = 0, 1, …, M−1,   (1)

where X(k) are the discrete Fourier transform coefficients and M is the length of the input EEG.

Step 2: calculate the absolute values of the coefficients, A_k = |X(k)|.

Step 3: arrange the A_k into an m × n matrix according to the sequential order of the sample points. The resulting matrix is then expressed as equation (2):

F = [A_1 … A_n; A_{n+1} … A_{2n}; …; A_{(m−1)n+1} … A_{mn}],   (2)

where m and n are the numbers of matrix rows and columns, respectively. Extracting the FFT features is a crucial step, followed by utilizing these features as training data to train the classifier; a sketch is given below.
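The three steps above can be sketched as follows; the matrix dimensions m = n = 32 are an illustrative choice that happens to tile a 1,024-point epoch exactly, not a value prescribed by the method.

```python
import numpy as np

def fft_feature_matrix(x, m=32, n=32):
    """FFT feature extraction following steps 1-3 above (a sketch).

    Step 1: DFT coefficients X(k) of the input epoch x(n).
    Step 2: magnitudes A_k = |X(k)|.
    Step 3: reshape the first m*n magnitudes, in sequential order, into
            an m x n matrix (row-wise).
    """
    X = np.fft.fft(x)                # step 1
    A = np.abs(X)                    # step 2
    return A[: m * n].reshape(m, n)  # step 3

epoch = np.random.randn(1024)        # one 1,024-point EEG epoch
F = fft_feature_matrix(epoch)
print(F.shape)                       # (32, 32)
```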
DWT

Wavelets can be defined as small waves with limited duration and an average value of zero. They are mathematical functions that can localize a function or data set in both time and frequency. The concept of wavelets can be traced back to Haar's thesis of 1909 (Daubechies, 1992; Adeli et al., 2003). The wavelet transform is a powerful tool in signal processing, known for its advantageous properties, such as time-frequency localization (capturing a signal at specific time and frequency points, or extracting features at different spatial locations and scales) and multi-rate filtering (distinguishing signals with varying frequencies). By leveraging these properties, one can extract specific features from an input signal that exhibit distinct local characteristics in both time and space. In the continuous wavelet transform (CWT), the signal to be analyzed is matched and convolved with the wavelet basis function over a continuous sequence of time and frequency increments. Even in the CWT, the data must be digitized; continuous time and frequency increments mean that data at each digitized point or increment are used. Consequently, the original signal is represented as a weighted integral of the continuous basis wavelet function. In the DWT, the inner product of the original signal and the basis wavelet function is taken at discrete points (usually dyadic to ensure orthogonality), and the result is a weighted sum of a series of basis functions. The wavelet transform is based on the wavelet function, a family of functions that satisfy certain conditions, such as continuity, zero mean amplitude, and finite or near-finite duration. The CWT of a square-integrable function of time, f(t), is defined by Chui (1992) as equation (3):

CWT(a, b) = |a|^{−1/2} ∫ f(t) ψ*((t − b)/a) dt,   (3)

where a, b ∈ R, a ≠ 0, R is the set of real numbers, and the star symbol '*' denotes complex conjugation. In the CWT, the parameters a and b vary continuously and can take an infinite number of values, but such a computation cannot be completed in finite time on modern computers. We therefore discretize a and b according to certain rules, which yields the DWT. If a expands exponentially, we define a as a = a_0^m, with a_0 fixed and m an integer. Since for wide wavelets we want to translate in larger steps, we can define b as b = n b_0 a_0^m, where b_0 is fixed and n is an integer. The wavelet function and the transform equation are then given by equations (4) and (5), respectively:

ψ_{m,n}(t) = a_0^{−m/2} ψ(a_0^{−m} t − n b_0),   (4)

DWT(m, n) = ∫ f(t) ψ*_{m,n}(t) dt.   (5)

EMD

The principle of the EMD technique is to automatically decompose a signal into a set of band-limited functions called intrinsic mode functions (IMFs). Each IMF must satisfy two fundamental conditions (Huang et al., 1998; Bajaj and Pachori, 2012): (1) the number of extreme points and zero crossings in the entire dataset must either be equal or differ by at most one, and (2) the mean value of the envelopes defined by the local maxima and minima must be zero at every point (Li et al., 2013). The EMD decomposes a segment of EEG signal x(n) into a set of IMFs and a residue signal r(n). Therefore, x(n) can be reconstructed as a linear combination, equation (6):

x(n) = Σ_{i=1}^{K} IMF_i(n) + r(n),   (6)

where K is the number of extracted IMFs. The following describes a systematic method for extracting IMFs. Given an input signal x(n), initialize r_0(n) = x(n).

Step 1: determine the local maxima and local minima of x(n).

Step 2: determine the upper envelope e_max(n) by connecting all local maxima through cubic spline functions. Repeat the same procedure for the local minima to produce the lower envelope e_min(n).

Step 3: calculate the mean value at each point of the envelopes, m(n) = (e_max(n) + e_min(n))/2, and subtract it from the signal. If the result satisfies the IMF conditions, record it as an IMF and compute the residue r(n); if the residue is monotonic, end the sifting process, else set x(n) = r(n) and go back to Step 1.

The residue contains the lowest frequency. The main features of the ictal EEG are closely related to the first five IMFs, so IMF1-IMF5 of each EEG segment are used to extract the EEG features; a simplified sifting sketch follows.
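The sifting procedure can be sketched as follows. This is a simplified illustration that omits the boundary handling and stricter stopping criteria used in production EMD implementations.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, max_sift=50, tol=0.05):
    """Extract one IMF from x by the sifting procedure described above."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(max_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break                                    # too few extrema to continue
        e_max = CubicSpline(maxima, h[maxima])(t)    # upper envelope
        e_min = CubicSpline(minima, h[minima])(t)    # lower envelope
        m = (e_max + e_min) / 2.0                    # mean envelope
        if np.mean(m**2) / np.mean(h**2) < tol:      # envelope mean ~ 0 -> IMF
            break
        h = h - m                                    # continue sifting
    return h

def emd(x, n_imfs=5):
    """Decompose x into the first n_imfs IMFs plus a residue (IMF1-IMF5)."""
    imfs, r = [], x.copy()
    for _ in range(n_imfs):
        imf = sift_imf(r)
        imfs.append(imf)
        r = r - imf
    return np.array(imfs), r
```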
CNN + NLSTM

Figure 2 displays the proposed automatic system for epilepsy detection, which is based on the fully convolutional nested long short-term memory (FC-NLSTM) model. Each EEG signal is initially segmented into a series of EEG segments, each containing M sampling points, by applying a fixed-length window that slides through the entire signal. The EEG signals are then filtered using a Chebyshev bandpass filter with a passband of 3-40 Hz. These EEG segments are fed into a fully convolutional network (FCN) with three convolutional blocks to learn the distinctive seizure characteristics present in the EEG data. The FCN serves as a feature extractor, effectively capturing the hierarchical features and internal structure of EEG signals. Subsequently, the features learned by the FCN are fed into the NLSTM model to uncover the inherent temporal dependencies within the EEG signals. To exploit the output characteristics of all NLSTM time steps, a time-distributed fully connected (FC) layer takes the outputs of all NLSTM time steps as inputs, rather than just the output of the last time step. Considering that all EEG segments should contribute equally to the label classification, a one-dimensional average pooling layer is added after the time-distributed fully connected layer. Finally, an FC layer is used for classification, and a softmax layer is employed to compute the probability that the EEG segment belongs to each class and predict the class of the input EEG segment (Li et al., 2020). Temporal convolutional networks are widely used to analyze time-series signals, enabling the capture of how EEG signals evolve and the automatic learning of EEG structure from data. The raw EEG signal comprises low-frequency characteristics with long periods and high-frequency characteristics with short periods (Adeli et al., 2003). The temporal convolution serves as the feature extraction module of the FCN and has been demonstrated to be an effective method for time-series analysis problems (Wang et al., 2017). To prevent the model from overfitting to noise in the training data, this study keeps the FCN model simple and shallow, with three stacked convolutional blocks. Each of the three basic convolutional blocks consists of a convolution layer and a rectified linear unit activation function. By consulting EEG recordings close to, or even distant from, the current EEG epoch, neurologists can determine whether the epoch is part of a seizure. Recurrent neural networks with a memory mechanism (Hochreiter and Schmidhuber, 1997) allow the model to retain previous information from the EEG recordings in an analogous way. In this study, the FC-NLSTM is used to capture the temporal dependencies in the EEG signals from the output of the feature extraction module.

Figure 2. Flowchart of the proposed method.
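A compact Keras sketch of this pipeline is given below. Since the nested LSTM is not part of stock Keras, a standard LSTM layer stands in for it, and all layer sizes are illustrative assumptions rather than the paper's exact hyperparameters.

```python
from tensorflow.keras import layers, models

def build_fc_nlstm(input_len=1024, n_classes=2):
    """Sketch of the FC-NLSTM pipeline: FCN feature extractor, recurrent
    module, time-distributed FC layer, 1D average pooling, softmax output."""
    inp = layers.Input(shape=(input_len, 1))
    x = inp
    # Three convolutional blocks (Conv1D + ReLU) act as the FCN feature extractor.
    for filters, kernel in [(64, 8), (128, 5), (64, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)
    # Recurrent module capturing temporal dependencies (NLSTM in the paper).
    x = layers.LSTM(64, return_sequences=True)(x)
    # Time-distributed FC layer over all time steps, then 1D average pooling.
    x = layers.TimeDistributed(layers.Dense(32, activation="relu"))(x)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Returning sequences from the recurrent layer, rather than only the last state, is what lets the time-distributed layer and pooling weight all time steps equally, as described above.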
Classification
In this step the test data are input into the classification model. The 10-fold cross-validation method splits the data into 10 parts, using 9 parts to train the model and reserving 1 part as the test set to evaluate the model's performance. This process is repeated 10 times to calculate average sensitivity, specificity, and accuracy values. FFT, DWT, and EMD are chosen as features for training and testing, with the results compared in part 3; the best-performing feature is then selected as the method's feature and compared against the performance of existing methods.

Classifier result estimation
All experimental results are based on the Bonn University database. The 10-fold cross-validation is used to reduce potential systematic errors, as well as to assess the stability and reliability of the proposed model. The EEG data are evenly split into 10 subsets: nine subsets are designated as training sets, while the remaining one is used to test the model. This process is repeated 10 times, and the values averaged across these runs are reported. The performance assessment of the proposed method involves the statistical evaluation measures of sensitivity, specificity, and recognition accuracy. Before delving into these measures, let us define four fundamental concepts:
True positive (TP): the number of positive (abnormal) examples classified as positive.
False negative (FN): the number of positive examples classified as negative (normal).
True negative (TN): the number of negative examples classified as negative.
False positive (FP): the number of negative examples classified as positive.
Sensitivity (Sen) is calculated by dividing TP by the total number of seizure epochs identified by the experts, where TP represents the seizure epochs marked as positive by both the classifier and the EEG experts: Sen = TP/(TP + FN). Specificity (Spe) is computed by dividing TN by the total number of non-seizure epochs identified by the experts, where TN is the count of non-seizure epochs identified correctly: Spe = TN/(TN + FP). Accuracy (Acc) is the number of correctly marked epochs divided by the total number of epochs: Acc = (TP + TN)/(TP + TN + FP + FN).
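A minimal Python sketch of this evaluation protocol follows; it is not the authors' code. The stratified folds, the epoch count, and the reuse of build_model from the earlier sketch, together with arrays X (segments, shaped n x M x 1) and y (integer labels), are illustrative assumptions.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    def sen_spe_acc(tp, fn, tn, fp):
        sen = tp / (tp + fn)                    # Sen = TP/(TP + FN)
        spe = tn / (tn + fp)                    # Spe = TN/(TN + FP)
        acc = (tp + tn) / (tp + tn + fp + fn)   # Acc over all epochs
        return sen, spe, acc

    scores = []
    for tr, te in StratifiedKFold(n_splits=10, shuffle=True).split(X, y):
        model = build_model(segment_len=X.shape[1], num_classes=2)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        model.fit(X[tr], y[tr], epochs=20, verbose=0)
        pred = model.predict(X[te]).argmax(axis=1)
        truth = y[te]
        tp = np.sum((pred == 1) & (truth == 1))
        fn = np.sum((pred == 0) & (truth == 1))
        tn = np.sum((pred == 0) & (truth == 0))
        fp = np.sum((pred == 1) & (truth == 0))
        scores.append(sen_spe_acc(tp, fn, tn, fp))
    print(np.mean(scores, axis=0))  # mean Sen, Spe, Acc over the 10 folds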
Results
All experiments are performed in Python using Keras with a TensorFlow backend and are run on an NVIDIA GeForce GTX 1080 Ti GPU machine. To fully evaluate the performance of the proposed method in ideal and real situations, the University of Bonn database is used in this study.

All 9 tasks are tested with the three feature types. Table 2 shows that FFT with FC-NLSTM obtained the best accuracy in all tasks except ABCD vs. E, while EMD performed poorly in every task except ABCD vs. E. Therefore, FFT is selected as the optimal feature for comparison with other methods in the subsequent sections. All classification results exhibit an accuracy rate above 97.63%, demonstrating the robustness of our method across the various classification tasks; among these experiments, the highest mean accuracy of 99.83% is observed in AB vs. E.

Normal or interictal or non-ictal vs. ictal classification
Three types of comparison are used in the experiment: non-ictal vs. ictal (A vs. E, B vs. E, AB vs. E, C vs. E, D vs. E, CD vs. E, AB vs. CDE, ABCD vs. E) and normal vs. interictal (AB vs. CD). The first three experiments compare non-ictal with ictal conditions (A vs. E, B vs. E, and AB vs. E); the second set of three experiments does the same for C vs. E, D vs. E, and CD vs. E; the final experiment distinguishes non-ictal from ictal states, classifying ABCD as seizure-free and E as seizure. These experiments are conducted to validate the effectiveness and reliability of the proposed method.

Table 3 presents the results of the two-class seizure detection problem. As shown in this table, the proposed method demonstrates excellent classification performance across all normal vs. ictal scenarios, achieving nearly 100% sensitivity, specificity, and accuracy in some instances. Although not every fold of the 10-fold cross-validation reaches 100%, the mean sensitivity, specificity, and accuracy values exceed 99%; notably, the specificity for A vs. E reaches 100%. In the interictal vs. ictal comparison, the proposed method also performs well, achieving 100% sensitivity, specificity, and accuracy in half of the folds of the 10-fold cross-validation. The highest sensitivity of 100% is achieved in the C vs. E experiment, with nearly 100% performance in terms of sensitivity, specificity, and accuracy in multiple folds for C vs. E, D vs. E, and CD vs. E.

Data imbalance is evident in the non-ictal vs. ictal experiments, with the sensitivity, specificity, and accuracy in ABCD vs. E being lower than in the other experiments: the amount of non-ictal data in ABCD vs. E is four times that in A vs. E, B vs. E, C vs. E, and D vs. E, and twice that in AB vs. E and CD vs. E. In such cases, traditional machine learning approaches may struggle to predict the minority class (Kundu et al., 2013; Hussein et al., 2019). However, our method continues to perform well under these conditions, without additional operations in our experiment. The 10-fold cross-validation thoroughly validates the method and mitigates the randomness of these experiments.

Normal vs. epileptic classification
In this section, we discuss two types of epilepsy classification problems to demonstrate the effectiveness and robustness of our proposed method: normal vs. interictal (AB vs. CD) and normal vs. interictal and ictal (AB vs. CDE). Table 4 presents the classification results for sensitivity, specificity, and accuracy obtained through 10-fold cross-validation. In the experiment comparing normal vs. interictal (AB vs. CD), our method achieves mean accuracy, sensitivity, and specificity of 99.06, 98.87, and 99.25%, respectively. The comparison between normal and interictal-and-ictal cases yields a mean accuracy of 97.95%, a mean sensitivity of 97.58%, and a mean specificity of 98.50%. Every aspect of the AB vs. CD comparison is superior to the AB vs. CDE comparison; the key to this difference lies in the inclusion of ictal and interictal segments together in the CDE set, as analyzed below.

Discussion
In this study, the deep learning model NLSTM uses FFT as a feature to classify epileptic segments from normal or interictal segments, or a combination of both. The model demonstrates excellent accuracy, sensitivity, and specificity on the Bonn University database. The effectiveness of our approach is validated through the 9 experiments presented in Table 2. FFT is employed as a feature within the model and integrated with fully convolutional deep learning and long short-term memory to differentiate between ictal and non-ictal segments; the method uses the FFT features derived from the original EEG data.

The deep learning framework can effectively learn overall features. The low-level layers of an FCN capture the internal structure of the EEG segments and transmit it to the higher-level layers of the model for further processing. These EEG features are subsequently passed to the NLSTM to extract the temporal information. The NLSTM differs from the standard LSTM and stacked LSTM models in that it deepens the LSTM by nesting, in order to select pertinent information from the EEG segments. In the traditional stacked LSTM architecture, several standard LSTM units are combined into a whole, with the processing outcome of one step serving as the input for the subsequent units. Conversely, the NLSTM structure employs external memory cells to select and process EEG segments, while internal memory cells are responsible for storing and processing them; the two modules are interdependent, with the internal module using the output of the external module as its input. This configuration demonstrates strong performance in capturing the long-term dependencies present in EEG signals.
Most epilepsy detection methods involve features extracted or designed by humans to characterize the epileptic EEG; selection algorithms are then applied to identify the most representative features for classification with various classifiers. However, these methods are often complex and time-consuming because of the search for suitable features. In contrast, deep learning frameworks such as ours streamline the process by bypassing or automating feature extraction, eliminating the manual feature selection common in traditional methods. This enables the extraction of EEG segment features without human intervention and facilitates the classification of segments into ictal or non-ictal categories. Implementing this method in medical settings alleviates the workload of neurologists by simplifying EEG interpretation, thereby lowering the expertise threshold and saving time for healthcare professionals.

The length of the EEG segments significantly affects the accuracy of the normal vs. interictal vs. ictal problem: Li et al. (2020) demonstrated that an EEG segment length of 1,024 allows the method to achieve optimal accuracy, a result verified on three databases, namely the Bonn University database, the Freiburg Hospital database, and the CHB-MIT database.

Many methods have shown good performance on two-class seizure recognition problems, so it is necessary and important to compare accuracy with other research results. The results are compared in Table 5, which consists of three columns containing the tasks, the methods, and the accuracy of the classification experiments; it covers the 9 experiments conducted on the Bonn University database. Our method demonstrates higher accuracy than many other methods across all experiments. Bhattacharyya et al. (2017) used the tunable-Q wavelet transform (TQWT) to extract EEG features, which were then processed using a wrapper-based feature selection method and input into an SVM for the identification of ictal EEGs; they achieved 100% accuracy in A vs. E and B vs. E, and 99.5% accuracy in C vs. E. From Table 5, we can see that our method performs well in all 9 experiments. Kaya and Ertuğrul (2018) achieved 100% accuracy in A vs. E but did not perform well in the other tasks. Li et al. (2020) achieved 100% accuracy in A vs. E, B vs. E, and CD vs. E. Sharma et al. (2017) and Tuncer et al. (2021) both achieved 100% accuracy in B vs. E, and Sharma et al. (2017) also achieved the same accuracy in AB vs. E. Our method demonstrates good performance across all nine classification tasks and achieves a classification accuracy of 99.06% in AB vs. CD.

Table 6 presents the comparative results of the statistical differences found in the classification tasks for the various subsets of the Bonn dataset. The performance in A vs. E, AB vs. E, C vs. E, and AB vs. CD is better, while D vs. E and AB vs. CDE show poorer results. The variation in differentiation among these subsets is influenced by the nature of their data, with some showing greater differentiation and others slightly weaker differentiation.
Conclusion
To promote the application of epilepsy detection in medical practice, the integration of FFT and a fully convolutional NLSTM is used for classification. The EEG signal is transformed from the time domain into the frequency domain using the FFT. The data are then divided into training and testing parts: the former is fed into the NLSTM to train the classification model, and the latter is fed into the trained model to be classified into the normal, interictal, and ictal categories. Additionally, EMD, WT, and FFT are employed as data processing methods to determine the most suitable type for the NLSTM, with accuracy, sensitivity, and specificity serving as evaluation metrics. Among the 9 experiments conducted, the FFT method yields the best results, confirming the approach as FFT with FC-NLSTM.

Most traditional methods in this domain have focused on specific or local features, including time-domain, frequency-domain, time-frequency-domain, and nonlinear features, resulting in information loss. Deep learning methods have demonstrated strong performance across various fields and have shown promise in epilepsy classification, which is why we proposed combining FFT feature extraction with a deep learning algorithm.

In the discussion section, we compared the results with other methods. Our method achieves an accuracy rate exceeding 97.00% across all experiments: accuracies of 99.62, 99.00, 99.83, 99.13, 97.63, 98.67, 99.06, 98.15, and 97.95% are obtained for A vs. E, B vs. E, AB vs. E, C vs. E, D vs. E, CD vs. E, AB vs. CD, ABCD vs. E, and AB vs. CDE, respectively, and the accuracy of 6 of these experiments exceeds 99.00%. These comparative results demonstrate the effectiveness of our method and indicate its potential for automated epilepsy detection. Furthermore, this model and its framework can be used for EEG signal classification in general, which offers practical benefits in epilepsy detection: its performance allows not only the classification of normal vs. ictal states, but also normal vs. interictal and interictal vs. ictal states.

In future work, it is advisable to consider large datasets, such as the Freiburg Hospital database and the CHB-MIT scalp EEG database, to improve the generalizability of the method and facilitate the development of a successful model. The integration of real-time applications has the potential to greatly impact clinical practice. In addition, it is recognized that deep learning approaches have difficulty providing explanations for their decisions; novel and explainable methods may therefore need to be proposed to effectively address the epilepsy classification problem.

TABLE 2 Accuracy of the three methods in the nine different tasks.
TABLE 3 Results of 10-fold cross-validation for non-ictal vs. ictal based on the Bonn University database.
The combination of ictal and interictal segments reduces the accuracy, sensitivity, and specificity. Conversely, AB vs. E (in Table 2) achieves better results than AB vs. CDE across all evaluation metrics, with accuracy at 99.67%, sensitivity at 99.27%, and specificity at 100.00%. Ictal segments are easier to detect than interictal segments, as evidenced by the superior classification results of AB vs. E compared to AB vs. CD. These three experiments (AB vs. E, AB vs. CD, AB vs. CDE) demonstrate that ictal segments have greater discriminative power than interictal segments, and that combining both types makes it more challenging to separate them from normal segments. The experimental results indicate that the proposed method performs well in distinguishing non-ictal from ictal segments and also excels in classifying interictal vs. ictal and normal vs. interictal-and-ictal segments.

TABLE 4 Results of 10-fold cross-validation for normal vs. interictal and for normal vs. interictal and ictal based on the Bonn University database.
TABLE 5 Comparison results for A vs. E, B vs. E, AB vs. E, C vs. E, D vs. E, CD vs. E, AB vs. CDE, ABCD vs. E, and AB vs. CD class recognition. Bold indicates our accuracy.
TABLE 6 Comparison of differentiation under different datasets.
7,882
2024-07-30T00:00:00.000
[ "Medicine", "Computer Science" ]
Random-Walk Graph Embeddings and the Influence of Edge Weighting Strategies in Community Detection Tasks
Graph embedding methods have been developed over recent years with the goal of mapping graph data structures into low dimensional vector spaces so that conventional machine learning tasks can be efficiently evaluated. In particular, random walk based methods sample the graph using random walk sequences that capture a graph's structural properties. In this work, we study the influence of edge weighting strategies that bias the random walk process and we are able to demonstrate that under several settings the biased random walks enhance downstream community detection tasks.

INTRODUCTION
Over the past few years, there has been a notable increase in the volume of data produced and exploited by applications and services that handle various types of networks. Most of these networks, such as citation networks, sensor networks and, most notably, social networks, can be naturally modelled through graph data structures, with the networks' entities and relationships being represented by a graph's nodes and edges respectively. Subsequently, by performing graph analytics tasks, such as node classification [2], link prediction [15], and community detection [9], we can discover inherent characteristics of the network's nature and gain additional insight regarding the relationships of its entities. For instance, community detection tasks in social networks [11] can be used to enhance the targeting of marketing campaigns, recommendation systems, the identification of criminal groups, and more [19].

Recently, graph embedding methods that provide a latent representation of the graph data in a low-dimensional vector space have been developed. These methods employ the graph's components (nodes, edges, and features or attributes) and produce a mapping into an embedding space that aims to preserve the graph's topology and overall structural properties (such as the pairwise distance between nodes). The resultant graph embeddings can then be utilized for analytics tasks that are based on conventional machine learning mechanisms (e.g. executing the k-means algorithm to obtain a partition of the graph's nodes).

Graph embedding methods that map graph nodes to vector spaces can be categorized into three types [7]: (i) matrix factorization methods, (ii) deep learning methods, and (iii) methods based on random walks. Factorization methods attempt to decompose the graph's adjacency matrix into eigenvectors and eigenvalues, while deep learning methods employ multi-layer architectures to capture structural similarity between nodes. Finally, random walk methods sample node sequences by executing random walks among the graph's nodes, adopting the intuition that similar nodes will tend to coexist in several of the sampled sequences. The two most prominent random walk based methods are DeepWalk [20] and node2vec [8].
The DeepWalk method samples a number of fixed-length random walks from each graph node, which are then supplied as input to the skip-gram model of the word2vec word embedding technique [17, 18]. The skip-gram model learns vector representations such that words with a similar meaning in a corpus end up closer in the embedding space, while less similar words end up further apart. DeepWalk intuitively uses a "corpus" of sampled sequences so that nodes that frequently appear together in a random walk (given a context window of a user-defined size) are characterized by a small distance in the final embedding. Node2vec [8] builds upon the core idea of DeepWalk with the main difference being the induction of bias in the random walk process. In particular, in each transition during a random walk, node2vec adds bias to the transition probabilities of the node's neighbors according to two user-defined parameters, p and q: one parameter defines the tendency of a random walk to follow a Breadth-First-Search approach, while the other enables a Depth-First-Search approach to the random walk.

In this work, we focus on random walk methods and study the utilization of edge weighting strategies as a means of inducing bias in the random walk generation phase. Edge weighting strategies recalibrate and modify the edge weights of a graph with the end goal of enhancing a particular downstream analytics task. To the best of our knowledge, this work constitutes the first attempt at enhancing specifically the community detection downstream task by utilizing edge weighting strategies that attempt to guide the random walks into having predominantly members that belong in the same community. The experimental evaluation showcases that, for a variety of configurations, our approach yields more accurate and coherent results than those executed on graph embeddings derived from state-of-the-art random walk embedding methods.

FRAMEWORK
We begin by providing an outline of the broad framework and the proposed methodology before discussing the individual edge weighting strategies and their overall rationale.

Outline
Given an unweighted graph G = (V, E), where V and E correspond to the graph's node and edge sets, respectively, the objective is to provide a graph embedding that enhances community detection tasks performed by typical machine learning techniques. Thus, we employ weighting strategies that reweight edges between nodes according to a perceived likelihood of the nodes belonging to the same community.

Algorithm 1 DetectCommunities(G, S, k)
Input: unweighted graph G, weighting strategy S, number of communities k
Output: community designations CD for all nodes in G
1: G' <- reweight the edges of G according to S
2: E' <- node2vec(G')
3: CD <- k-means(E', k)
4: return CD

The outline of our framework can be seen in Algorithm 1. Initially, we reweight the graph according to a weighting strategy S and obtain the weighted graph G'. After obtaining the embedding E' using the node2vec algorithm, we execute the k-means algorithm on the embedding to obtain community designations for each node in G. Note that Algorithm 1 is an indicative description of the overall framework, and implementation details such as the graph embedding technique (e.g. DeepWalk, node2vec, etc.) or the community detection algorithm (e.g. k-means, GMM [3], etc.) may vary depending on the graph domain or the application requirements. In this work, we opted for the combination of node2vec and k-means on account of their well-established practicality and applicability.
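A minimal Python sketch of this pipeline follows, assuming the widely used node2vec reference implementation and scikit-learn; it is illustrative rather than the authors' code, and the weighting strategy can be any function that returns a copy of the graph with a "weight" attribute set on every edge.

    import networkx as nx
    from node2vec import Node2Vec
    from sklearn.cluster import KMeans

    def detect_communities(G, weighting_strategy, k):
        G_w = weighting_strategy(G)                 # step 1: reweight
        n2v = Node2Vec(G_w, dimensions=128, walk_length=80, num_walks=10,
                       p=1, q=1, weight_key="weight")
        model = n2v.fit(window=10)                  # step 2: embed
        nodes = list(G_w.nodes())
        X = [model.wv[str(v)] for v in nodes]       # node vectors
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)  # step 3
        return dict(zip(nodes, labels))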
Edge Weighting Strategies
Most of the edge weighting strategies presented in this work focus on enhancing algorithms based on community detection through modularity maximization; additionally, they attempt to handle the resolution limit problem [6] that exists in modularity maximization approaches. In the remainder of this section, we use an edge e_uv between two nodes u and v as a running example. The four well-established and effective methods presented in this work are:

EBC_CNR
The "EBC_CNR" method [12] weights a graph's edges according to two measures: their edge betweenness centrality (EBC) and common neighbor ratio (CNR). The EBC of e_uv corresponds to the number of shortest paths that pass through e_uv, while the CNR reflects the percentage of common neighbors shared between nodes u and v, computed from the graph's adjacency matrix A. The exact weight of the edge is contributed by both EBC and CNR through two parameters α and β that are defined either in a manner that attempts to maximize the variance of the weight distribution or through heuristics. Schematically, the weight of e_uv is

w_uv = α B_uv + β C_uv, with α, β > 0,

where B_uv is the normalized EBC of e_uv and C_uv is the CNR between nodes u and v; the precise combination follows [12].

SimRank
The "SimRank" approach is based on the SimRank similarity measure [10], which states that "two objects are similar if they are related to similar objects". SimRank scores each node pair based on the structural functionality or purpose they exhibit in the whole graph. Conceptually, in its iterative form the SimRank score s_k(u, v) between two nodes u and v in the k-th iteration of computation is equal to

s_k(u, v) = (C / (|N(u)| |N(v)|)) Σ_{i ∈ N(u)} Σ_{j ∈ N(v)} s_{k-1}(i, j),

where N(u) corresponds to the neighbor set of u, i and j refer to particular neighbors of u and v, and C signifies a decay constant. Additionally, s_0(u, v) = 1 if u = v and s_0(u, v) = 0 otherwise. The weight of an edge in the graph is set equal to the SimRank score between the edge's two endpoints.

k-path
The "k-path" method is based on the calculation of the k-path edge centrality measure [5] along with additional operations [4]. The k-path edge centrality measure assigns weights to the edges according to their centrality and is defined as

L^k(e) = Σ_{s ∈ V} σ_s^k(e) / σ_s^k,

with σ_s^k(e) being the number of simple random paths of at most k nodes initiating from s that pass through e, and σ_s^k being the number of simple random paths of at most k nodes that originate from s. Finally, the weight between two nodes u and v is set equal to the Euclidean distance of their k-path centrality measures,

w_uv = sqrt((c(u) - c(v))^2), with c(u) = (1/d(u)) Σ_{e incident to u} L^k(e),

where d(u) is the degree of node u; the node-level measure follows the operations of [4].

AdaptiveMM
Finally, the "AdaptiveMM" approach [16] follows a three-step approach to generating weights for an unweighted graph. At first, an artificial network is generated with topological characteristics that resemble the original graph. This artificial graph is equipped with generated ground-truth communities and is then used as a basis for extracting a selection of local topological features from each edge, such as the difference in clustering coefficients of the edge's endpoints or the Adamic-Adar index [1]. In the last step, the edge features are supplied as input to a regression model that weights the edges in such a way that a modularity maximization approach would be able to efficiently detect the ground-truth communities of the artificial network.

Even though our approach is not related to the problem of modularity maximization in its general form, the methods presented above can be used as intuitive heuristics for the purpose of assigning significant weights to edges between nodes that could potentially exist in the same community.
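As one concrete illustration, the EBC_CNR reweighting can be sketched with networkx as below; this is not the authors' code, and both the linear combination and the definition of the neighbor ratio as common neighbors over the neighbor union are simplifying assumptions (the exact forms follow [12]).

    import networkx as nx

    def ebc_cnr_weights(G, alpha=1.0, beta=1.0):
        Gw = G.copy()
        ebc = nx.edge_betweenness_centrality(Gw, normalized=True)
        for u, v in Gw.edges():
            cn = len(list(nx.common_neighbors(Gw, u, v)))
            union = len(set(Gw[u]) | set(Gw[v]))       # neighbor union size
            cnr = cn / union if union else 0.0         # common neighbor ratio
            b = ebc.get((u, v), ebc.get((v, u), 0.0))  # normalized EBC
            Gw[u][v]["weight"] = alpha * b + beta * cnr
        return Gw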
EXPERIMENTAL EVALUATION
In this section, we conduct an experimental evaluation of the framework presented in Section 2 against the baselines of DeepWalk and node2vec. Since node2vec performed better than DeepWalk in all experiments, we regard node2vec as the highest-performing baseline. We begin by discussing implementation details before presenting the results on both synthetic and real-world datasets.

Implementation Details
Initially, we begin with an unweighted graph that is assigned weights through an edge weighting strategy. Following that, the graph embedding is obtained using node2vec and the k-means algorithm is executed to obtain the final communities. During the weighting process some existing edges may end up with a zero weight, and this may affect the random walk sampling process in two ways. In the first case, edges with zero weight are assigned a weight equal to the smallest weight among the edges of the node being traversed divided by their total count. In the second case, if all the edges of a node's neighbors have zero weight, then they are all equally probable to be selected during a random walk. The parameters used in the node2vec technique are dimensions = 128, walk_length = 80, num_walks = 10 (number of walks from each node), and window_size = 10. The parameters p and q were evaluated for each dataset using a grid search over [0.25, 0.5, 1, 2, 4], as per the suggestions in [8]. In the "k-path" method we set k to 20, and in the "SimRank" method we set the decay constant C to 0.8. In the case of "EBC_CNR" the parameters α and β were evaluated explicitly for each dataset using the heuristics described in [12]. All values selected above follow the suggestions of the authors in their respective original work.

Synthetic Datasets
We generated a selection of LFR networks [13] with varying node counts and community sizes, and tested the performance of our framework for different values of the mixing parameter μ. Table 1 details the synthetic datasets used, where n is the number of nodes, k_avg and k_max are the average and maximum vertex degree, c_min and c_max are the minimum and maximum community sizes, and μ ∈ {0.25, 0.35, 0.45, 0.55}. The exponent for the degree power-law sequence was 2, while for the community size sequence it was 3. In each experiment we measure the Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI) measures [21], along with the graph's modularity on the final partition. The ARI and NMI measures are estimated over ten instances of the k-means algorithm with different centroid seeds and with k equal to the respective dataset's ground-truth community count.

Figure 1 presents our results, from which several observations can be made. "AdaptiveMM" consistently outperforms the rest of the methods and the baselines, while "DeepWalk" and "SimRank" achieve similar effectiveness but are outperformed by the rest of the methods in the majority of the experiments across all measures. The effectiveness of our framework in the ARI measure increases for graphs with a higher node count. Finally, all methods except "SimRank" achieve higher modularity than the baselines for μ < 0.5 (i.e., communities with strong connections, where a node has more neighbors inside its community than in the rest of the graph), while "k-path" achieves the highest modularity for μ = 0.55 among all methods. Table 2 summarizes the best results depicted in Figure 1 for each metric on each dataset.
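For reference, a minimal sketch of one synthetic run follows, reusing detect_communities and ebc_cnr_weights from the sketches above; the LFR parameters are illustrative and only loosely mirror Table 1, and LFR generation may require retries for some parameter choices.

    import networkx as nx
    from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

    G = nx.LFR_benchmark_graph(n=1000, tau1=2, tau2=3, mu=0.25,
                               average_degree=20, max_degree=50,
                               min_community=20, max_community=100, seed=0)
    truth = {v: min(G.nodes[v]["community"]) for v in G}   # one label per node
    k = len({frozenset(G.nodes[v]["community"]) for v in G})
    pred = detect_communities(G, ebc_cnr_weights, k)
    nodes = list(G)
    t = [truth[v] for v in nodes]
    p = [pred[v] for v in nodes]
    print(adjusted_rand_score(t, p), normalized_mutual_info_score(t, p))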
Real-world Datasets
Complementary to the experiments on synthetic datasets, we also performed experiments on real-world datasets equipped with ground-truth communities: a product network and two social networks from the SNAP Dataset Collection [14]. The "ego-Facebook" dataset represents a set of social circles in the Facebook social network; nodes and edges in this network represent users and the friendship relationships between them, respectively. The "Amazon" dataset consists of products found on the Amazon website that are linked if they are frequently bought together; products belong to the same ground-truth community if they share the same product category defined by Amazon. Similarly to "ego-Facebook", the "Youtube" dataset contains friendship links between users of the video-sharing website Youtube, with ground-truth communities corresponding to user-formed groups. Note that in all three datasets a node may belong to more than one ground-truth community, so for the purposes of this experimental evaluation we restrict each node to one ground-truth community assignment and disregard the rest of the assignments. Similarly to the synthetic experiments, we set k equal to the respective dataset's ground-truth community count. In the "ego-Facebook" dataset we omitted nodes without a community assignment and nodes without any edges. In the "Amazon" and "Youtube" datasets we focused on the top 5000 communities with the highest quality [23], discarding nodes and edges that were not members of any of the top 5000 communities, while also removing duplicate communities with completely identical members. Table 3 shows the resulting real-world datasets used in the evaluation.

Table 4 details the results of the experimental evaluation on the real-world datasets. The two best performing approaches across all datasets and metrics are "EBC_CNR" and "AdaptiveMM", and in each dataset the best values for each metric are achieved by the same approach. With the exception of the modularity metric in the "Amazon" dataset, each metric is increased in a statistically significant (p < 0.05) manner by at least one edge weighting strategy. The highest difference is on the "Youtube" dataset, where "EBC_CNR" achieves +8.4% higher modularity than node2vec, while the lowest difference is on the "Amazon" dataset, where "AdaptiveMM" and node2vec have nearly identical modularity.

The key observations from the experimental evaluation on both synthetic and real-world datasets are threefold: i) the use of edge weighting strategies generally enhances community detection tasks that are performed on embeddings generated by random-walk graph embedding methods; ii) with the exception of "SimRank", each strategy offers the best performance for at least one dataset and metric combination; and iii) in each dataset the best performances for both the ARI and NMI measures are achieved by the same strategy.

Table 4: Experimental evaluation on real-world datasets. The first number in each cell refers to the mean metric value (over ten iterations), while the second number refers to two standard deviations. The best performance in each metric for each dataset is denoted in bold. Results marked with "*" provide a statistically significant (p < 0.05) increase over the results of node2vec.

CONCLUSIONS
The ubiquitous nature of networks and their ease of representation as graphs have led to several graph analytics tasks that seek to discern information about the characteristics of the network.
By mapping graphs into vector spaces, classical machine learning algorithms can be efficiently applied to gain additional insight about a network's features. In this work, we studied random walk graph embedding methods and the influence of edge weighting strategies in community detection. We used four intuitive state-of-the-art strategies and experimentally demonstrated that under several settings the utilization of edge weighting strategies can lead to improved performance according to the ARI, NMI and modularity measures. Future work could focus on exploring the influence of weighting strategies while using community detection approaches other than k-means. Alternatively, the influence of weighting strategies in other analytics domains (such as node classification or link prediction) constitutes another interesting future work direction.
3,957.8
2021-08-30T00:00:00.000
[ "Computer Science" ]
Internal Factors Influencing the Profitability of Commercial Banks in Bangladesh
The profitability of commercial banks is influenced by a number of internal and external factors. This paper attempts to identify the internal factors which significantly influence the profitability of commercial banks in Bangladesh. In this study, profitability is measured by ROA and ROE, which may be significantly influenced by internal factors such as IRS, NIM, CAR, CR, DG, LD, CTI and the SIZE of the bank. Data are collected from the published annual reports of 23 commercial banks during 2014-2018. Using a simple regression model, it is found that CR has a significant effect on profitability and that CAR has a significant influence on ROA only. In addition, DG has a significant effect on PCBs' profitability (ROE only), whereas IRS and CTI have a significant influence on the profitability (ROA only) of ICBs. Further, none of these variables has a significant effect on the profitability of SCBs, although CAR and CR are correlated with profitability (ROA only); the cause may be the nature of the services provided by SCBs to their clients. Internal policy makers should manage the influential internal factors of the banks in order to increase their profitability so that they can meet stakeholders' expectations.

Introduction
The banking industry is a vital part of any economy because it plays an important role in mobilizing savings from surplus units to deficit units, channeling them into the economic activities that propel the country's economic growth. A stable, healthy and competitive banking industry can contribute significantly to the economic growth and development of a country (Bawumia et al., 2005). Further, Mujeri and Younus (2009) asserted that an important prerequisite for enhancing economic growth is to ensure the required flow of saving into productive investments, which depends on the development of appropriate financial institutions, particularly banks, capable of generating an adequate quantity and quality of investment. To provide financial services to the economy, formal financial institutions, specifically banks, are established; they offer various financial services to their clients, including deposit collection and credit disbursement, in order to achieve their primary objective, i.e., profitability. Obidike et al. (2015) asserted that financial institutions are established to provide financial services with a view to making a profit. The banking industry is managed by the central bank of the country, which monitors all the activities of the commercial banks (Kalsoom et al., 2016); in this regard, Bangladesh Bank (BB) monitors, regulates, promotes, directs and controls the activities of commercial banks in Bangladesh. Commercial banks started to provide banking services in Bangladesh through the nationalization of twelve pre-independence commercial banks in 1972. To make the industry effective and efficient, and to provide better financial services to citizens, a number of commercial banks have been licensed from time to time, operating according to the Bank Company Act 1991. Hossain and Ahamed (2015) stated that increased competition due to frequent entrants ultimately affects banking profitability. At present, the industry arguably has an excessive number of banks, which may affect profitability and render the industry over-competitive and even inefficient; by comparison, Mexico had only 47 commercial banks in 2016 despite a GDP 7.4 times larger and a surface area 13.2 times larger than Bangladesh's (Khatun et al., 2018).
In Bangladesh, the banking industry comprises sixty scheduled commercial banks, of which six are state-owned commercial banks (SCBs), three are specialized banks, thirty-one are private commercial banks (PCBs), eleven are Islamic shariah-based commercial banks (ICBs), and nine are foreign commercial banks (FCBs). Generally, a bank acts primarily as an intermediary: it collects money from depositors and lends it to borrowers, and the difference between the lending and borrowing price contributes to its profitability. Profitability is the ability of a company to use its resources to generate revenues in excess of its expenses, i.e., the company's capability of generating profits from its operations. It is influenced by various factors, such as internal, industry-specific and economy-specific factors. Olweny and Shipho (2011) concluded that bank-specific factors were more significant in influencing the profitability of commercial banks in Kenya than market factors. Their study also revealed that profitable commercial banks strove to improve their capital bases, reduce operational costs, improve asset quality by reducing the rate of non-performing loans, employ revenue diversification strategies as opposed to focused strategies, and keep the right amount of liquid assets. Further, Ramadan et al. (2011) investigated the nature of the relationship between the profitability of banks and the characteristics of internal and external factors for 10 banks in Jordan; they found that profitability tends to be associated with well-capitalized banks, high lending activities, low credit risk, and the efficiency of cost management. San and Heng (2013) investigated the impact of bank-specific characteristics and macroeconomic conditions on the financial performance of Malaysian commercial banks. They found that the equity-to-assets ratio and the liquidity ratio had a significant positive relationship with return on assets, that bank size had a significant positive relationship with return on equity, and that the loan loss reserves to gross loans ratio had a significant negative relationship with return on assets and net interest margin. In order to assess this specific area of the industry, this study deals with the internal factors of banks that usually contribute to banks' profitability. The specific objectives of this study are therefore to identify bank-specific internal factors that significantly influence the profitability of scheduled commercial banks and to assess whether these influential factors vary among different segments of commercial banks. Further, changes in the time dimension since earlier studies may change which factors influence profitability. The outcome of this study will help stakeholders make appropriate policies, or pay close attention to managing internal factors efficiently, to improve the profitability of the organization and its commitment to society.

Literature Review
There are numerous studies on the profitability of this highly competitive industry in every country, and most of these studies deal with profitability. The factors influencing profitability vary from country to country and from time to time, and the influential factors are considered from wide-ranging areas. Dietrich and Wanzenried (2011) concluded that the equity to total assets ratio, cost-to-income ratio, deposit growth rate, funding cost, interest income, effective tax rate and ownership structure negatively affect banking profitability in Switzerland. Khan et al.
(2011) studied the determinants of bank profitability in Pakistan and found that bank size, loan growth, the deposits-to-assets ratio and the deposit-to-loan ratio had a significant positive relationship with profitability, whereas net interest margin, tax and overhead expenses had a significant negative relationship. Oladele et al. (2012) found that operating expenses, the relationship between cost and income, and equity to total assets significantly affected the performance of banks in Nigeria. Ongore and Kusa (2013) found that bank-specific factors (capital adequacy, management efficiency, liquidity management) significantly affect the performance of commercial banks in Kenya, except for the liquidity variable. Further, Poudel (2012) concluded that the default rate (DR) and the capital adequacy ratio (CAR) have a negative association with ROA, and that cost per loan asset (CLA) also has an inverse relationship with banks' profitability measured by return on assets (ROA) in Nepal. Chavarin (2014) analyzed the determinants of profitability for 45 commercial banks in Mexico and found that the profitability of commercial banking persists through the control of operating expenses, the charging of commissions and fees, and the level of capital, and also identified market entry barriers and obstacles to competition as sources of the relatively high persistence of profitability. A number of bank-specific studies have also been conducted in Bangladesh. For example, Samad (2015) identified a few bank-specific factors, such as the loan-deposit ratio, loan-loss provision to total assets, equity capital to total assets, and operating expenses to total assets, and found that they significantly impact the performance of commercial banks. Mahmud et al. (2016) incorporated several bank-specific factors in determining the profitability of commercial banks in Bangladesh; their study indicated that the capital adequacy ratio, bank size, and total debt to total equity have a significant impact on bank performance. Another study found that the capital ratio, total loans as a percentage of total assets, and staff expenditure as a percentage of total assets are highly correlated with profitability, whereas total expenditure as a percentage of total assets and the cost-income ratio are highly negatively correlated with profitability; it also suggests that bank size, operating efficiency, savings deposits as a percentage of total assets, branches, the liquidity ratio, and assets management have no significant relationship with profitability. A number of recent studies in Bangladesh relating to this study are chronologically presented in Table 1. Among their findings, CRAR and cost-to-income are negatively correlated, and liquidity positively correlated, with bank profitability; estimation shows a negative correlation between bank size and profitability; and NPI is found to be positively correlated with ROA. From the above discussion, Return on Assets, Return on Equity, Interest Rate Spread, Net Interest Margin, Capital Adequacy Ratio, Non-performing Loan to Total Loan, Deposit Growth, Lending Deposit Ratio, Cost to Income Ratio and Bank Size are used as variables in order to achieve the objectives of this study, because these are the variables most commonly identified in earlier studies as significantly influencing the performance of the industry.
Population
The population of this study is the 60 scheduled commercial banks, which are divided into state-owned commercial banks (SCBs), specialized banks (SBs), private commercial banks (PCBs), Islamic shariah-based commercial banks (ICBs), and foreign commercial banks (FCBs).

Sampling and Sample
A quota sampling procedure is used to select four SCBs, fifteen PCBs and four ICBs for this study (Appendix A). SBs and FCBs are excluded from this study due to the special nature of their service provision and the complexity of the available structural data, respectively.

Variables
The variables of this study are divided into dependent and independent variables, described as follows:

Dependent Variables
Bank performance can be explained in different ways. One traditional approach is to look at the profit and loss account of banks, which can be considered a microeconomic approach; alternatively, performance can be considered through the commercial banks' aggregate total assets and liability statement in an economy, which can be regarded as a macroeconomic approach. Beyond these, Return on Assets (ROA) and Return on Equity (ROE) are two important accounting measures of bank profitability. They are considered the dependent variables in this study and are explained as follows:
Return on assets (ROA): It is a broad measure of overall bank performance which explains management's ability to produce income by using assets, where a high ROA indicates better performance in using assets. Alternatively, it measures the efficiency of using resources to earn income (Ally, 2013; Zopounidis and Kosmidou, 2008). It is measured as net income before tax divided by total assets.
Return on equity (ROE): One of the central measures of banking performance, used for allocating capital among divisions, is ROE, the ratio of pre-tax profit to equity. A high ROE indicates high managerial performance (Moussu and Petit-Romec, 2014). Here it is measured as net income after tax divided by total equity.

Independent Variables
The performance of the banks is influenced by numerous internal factors, which are considered the independent variables of the study. A brief description of the independent variables used in this study is given below:
Interest Rate Spread (IRS): The difference between commercial banks' interest rates on deposits and lending is called the interest rate spread. These rates may vary due to bank-specific factors, industry/market-specific factors, as well as macroeconomic factors. Generally, banks apply different lending and deposit rates to their different products, and the average overall lending rate minus the average borrowing rate is treated as the interest spread (Mustafa & Sayera, 2009). It is measured as (interest received divided by all interest-bearing assets) minus (interest paid divided by interest-bearing liabilities).
Net interest margin (NIM): NIM is the ratio of net interest income to total earning assets. Aboagye et al. (2008) stated that it is the best measure to represent the bank interest rate spread, which is supported by Amidu and Wolfe (2013), Ongore and Kusa (2013), and San and Heng (2013). It is measured as the bank's interest income minus its interest expenses, divided by total assets.
Capital Adequacy (CAR): It is the ratio of total assets financed by equity. If the ratio is higher, the bank has lower external borrowings, which contributes positively to profitability.
Credit Risk (CR): It is the ratio of non-performing loans to total loans (earning assets).
This variable measures the quality of lending, because competition may force banks to lend high volumes without maintaining the quality of the client, which ultimately reduces the profitability of the bank.
Deposit Growth (DG): Deposits are the prime source of banks' funds at the lowest cost. If a bank's deposits grow year on year, its lowest-cost funding increases, which may contribute to the bank's profitability. It is measured as deposits in the current year minus deposits in the previous year, divided by deposits in the previous year.
Loan to Deposit Ratio (LD): It is one of the determinants of liquidity, being the amount of lending against the amount of deposits. Scheduled commercial banks convert deposits into lending so that they can increase profit. A high ratio is positively related to profitability while increasing liquidity risk; conversely, a low ratio secures liquidity but reduces profitability.
Cost to Income Ratio (CTI): It is a measure of operating expenses as a percentage of operating income, and a popular and critical measure of a bank's efficiency. A lower ratio generally indicates higher efficiency and vice versa.
Bank Size (SIZE): It is the bank's total asset size; this study takes the logarithm of total assets as a proxy for size (Samad, 2015). Asset size influences clients' confidence as well as the profitability of the bank through operating efficiency, and it may have a positive relationship with profitability (San and Heng, 2013; Zeitun, 2012).

Data
Data collection is a systematic process of gathering data for a particular purpose from various sources. In this study, data have been collected from the published annual reports of 23 commercial banks covering the period from 2014 to 2018, resulting in 230 sample observations. The data are reliable as they are drawn from the audited financial statements included in the annual reports, and they are prepared using the measurement techniques stated in Table 2.

Data Analysis
The study adopts a quantitative approach and analyzes the data using regression, because the study seeks to establish the relationship between the dependent and independent variables. The analyses are performed using SPSS 23.0.

Hypothesis of the Study
The hypotheses of this study are formed according to the expected effects of the independent variables on the dependent variables. It is expected that IRS, NIM, CAR, DG, LD and SIZE have positive effects on profitability, whereas CR and CTI may have negative effects. Therefore, the declared hypotheses of this study are as follows:
H1: IRS has a significant positive effect on profitability.
H2: NIM has a significant positive effect on profitability.
H3: CAR has a significant positive effect on profitability.
H4: CR has a significant negative effect on profitability.
H5: DG has a significant positive effect on profitability.
H6: LD has a significant positive effect on profitability.
H7: CTI has a significant negative effect on profitability.
H8: SIZE has a significant positive effect on profitability.
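For clarity, the estimation the study describes amounts to two linear regressions of ROA and ROE on the eight internal factors. A minimal Python sketch follows, with statsmodels standing in for SPSS 23.0; the DataFrame df and its column names are hypothetical, with one row per bank-year.

    import statsmodels.formula.api as smf

    predictors = "IRS + NIM + CAR + CR + DG + LD + CTI + SIZE"
    roa_model = smf.ols("ROA ~ " + predictors, data=df).fit()
    roe_model = smf.ols("ROE ~ " + predictors, data=df).fit()
    print(roa_model.summary())  # coefficients, p-values, adjusted R-squared
    print(roe_model.summary())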
Descriptive Statistics
The overall and segmented descriptive statistics are presented in Table 2. The overall ROA is 0.78%, which is typical of the private commercial banks (PCBs) (0.83%), whereas the state-owned banks (SCBs) earned around half (0.34%) of the overall figure and the ICBs earned 30.77% more than the overall figure (Islam and Rana, 2017). A similar pattern is found in ROE, NIM, LD and CTI, whereas IRS, CAR and SIZE are comparatively stable among the segments; the standard deviations follow the same pattern (Rahaman and Akhter, 2015). In addition, the SCBs have double, and the ICBs half, the overall CR, whereas the ICBs have three times the industry DG and the SCBs are in the opposite position. Among the segments, the PCBs most closely represent the industry.

Regression
In Table 3, it is found that only CAR and CR have significant effects on ROA, whereas ROE is significantly influenced by CR and CTI. The adjusted R² values for ROA and ROE are 48.1% and 39.6%, respectively, which indicates that the independent variables explain ROA better than ROE (Hossain and Khalid, 2018). The models are well fitted because the probability of the test statistic is significant for both dependent variables; the detailed results are presented in Table 3. For the PCBs, the independent variables explain half of the variation in the dependent variables, and only CAR and CR have a significant influence on ROA (Noman et al., 2015) and ROE, respectively (Table 4). In Table 5, the independent variables jointly explain 87.4% of the variation in ROA, which is significantly influenced by IRS, CAR and CTI of the ICBs; this model is also well fitted because the probability of the test statistic is significant. By contrast, none of the independent variables has a significant influence on ROE of the ICBs, as only 12.5% of its variation is explained by the selected independent variables. In all cases, the independent variables explain ROA better than ROE. Only ROA of the PCBs and ICBs is significantly and positively influenced by CAR (Khatun and Siddiqui, 2016), whereas only ROE of the PCBs is negatively influenced by CR. The industry is represented and dominated by the PCBs because they capture the majority of the industry. On the other hand, the independent variables of the SCBs do not significantly influence their profitability (ROA and ROE); they explain only 12.4% and 1.9% of the variation in ROA and ROE, respectively, and the models are not well fitted because the probability of the test statistic is not significant (Appendix C). This indicates that SCB profitability may be significantly influenced by other factors such as ownership structure and agency services to the government; for example, these banks are owned by the government, collect government fees, charges and taxes, and also provide general banking services to their clients.

Summary of Findings
It is found that all the hypotheses are rejected for the SCBs, which means that none of these factors significantly influences the profitability of SCBs, because this group provides client services mostly related to the government; SCB profitability may instead be significantly influenced by the fees, charges and commissions received as non-operating revenue. H1 is rejected except for ROA of the ICBs, which indicates that ROA of the ICBs will increase if IRS increases. NIM does not influence the profitability of commercial banks in Bangladesh (H2 is rejected). Profitability (ROA) of the PCBs and ICBs is influenced by CAR (H3 partially accepted), and profitability (ROE of the PCBs and ROA of the ICBs) is influenced by CR (H4 partially accepted). DG influences the profitability (ROE) of the PCBs, and ROA of the ICBs is influenced by CTI (H5 and H7 partially accepted). Finally, LD and SIZE have no significant influence on the profitability of the scheduled commercial banks of Bangladesh. The profitability of the different segments of banks is influenced by different variables, but CAR significantly influences the profitability of the industry.

Conclusion
The banking sector of Bangladesh is dominated by the PCBs because they capture most of the industry. The SCBs mostly provide services related to government activities, whereas the motto of the ICBs is to serve clients on an Islamic shariah basis. The study importantly assesses how the influence of internal factors on profitability varies across the different segments of banks. It is found that CAR and CR have a significant relationship with ROA, and only CR with ROE.
None of these variables has a significant influence on the profitability of the SCBs, whereas the profitability of the industry is influenced by CAR. The study therefore suggests that policy makers concentrate on the influential internal factors; efficient management of these factors will further contribute to the profitability of the banks.
4,550
2020-07-20T00:00:00.000
[ "Economics", "Business" ]
Visual analysis of commognitive conflict in collaborative problem solving in classrooms
In today's knowledge-intensive and digital society, collaborative problem-solving (CPS) is considered a critical skill for students to develop. Moreover, international education research has embraced a new paradigm of communication-focused inquiry, and commognitive theory helps enhance the understanding of CPS work. This paper aims to enhance CPS skills by identifying, diagnosing, and visualizing commognitive conflicts during the CPS process, thereby fostering a learning-oriented innovative approach and even providing the script for technology-assisted feedback practices. Specifically, we utilized open-ended mathematical tasks and multi-camera video recordings to analyze the commognitive conflicts in CPS among 32 pairs, comprising 64 Year 7 students. After selecting high-quality, medium-quality, and low-quality student pairs based on SOLO theory, further investigations were made into the discourse diagnosis and visual analysis of the knowledge dimensions of commognitive conflict. Finally, it was discovered that there is a need to encourage students to focus on and resolve commognitive conflicts while providing timely feedback. Visual studies of commognitive conflict can empower AI-assisted teaching, and the intelligent diagnosis and visual analysis of CPS provide innovative solutions for teaching feedback.

Introduction
In today's knowledge-intensive and digital society, collaborative problem solving (CPS) has attracted increasing attention and is considered a critical skill for students to develop. It is defined as the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution, pooling their knowledge, skills and efforts to reach that solution (OECD, 2015). Based on the PISA 2015 results, Chinese learners were indeed found to have relatively lower performance in CPS compared with students from some other countries: students from Beijing, Shanghai, Guangdong, Jiangsu and other places in China performed significantly worse in CPS than in mathematics, science, and reading, ranking only tied for 25th place among the 51 countries or regions that participated in the measure. After evaluating the CPS tasks among the worldwide assessment items, researchers discovered that CPS tasks are less approachable for Chinese students (Zhou and Lu, 2017), as the Chinese education system often focuses on standardized tests, which may limit exposure to real-world problem solving and result in less development of CPS skills. However, it is worth noting that efforts have been made to promote students' CPS skills in China; for example, Chinese researchers had already positioned a CPS measuring framework to evaluate students' development as soon as it was introduced in PISA 2015 (Wang, 2016).
The international assessment of CPS skills was initiated by the Assessment and Teaching of 21st Century Skills (ATC21S) project in 2008 (Yuan and Liu, 2016). After it, the Program for International Student Assessment (PISA), administered by the OECD, introduced CPS as a component in its assessments. PISA 2015 marked the first large-scale assessment of CPS skills conducted within individual countries. Following PISA 2015, Australia started a significant national-level evaluation (Li, 2017). The emphasis on CPS abilities in international assessments is due, on the one hand, to the significance of CPS skills and, on the other hand, to the social interaction view of individual mental development developed in recent years by sociocultural theorists, which provides theoretical and research case support for this aspect of the assessment. In the study of CPS between pairs or groups, some scholars promote the development of new communication-oriented research paradigms based on the perspective of individual mental development and social interaction (Xu, 2018). Through communication, individuals in pairs or groups can share information, exchange ideas, negotiate solutions, and coordinate their efforts toward solving a problem. To better understand how social connections contribute to the development of personal mindfulness, communication-oriented research examines in-depth micro-behaviors within social interactions and communicative conversations. Professor Anna Sfard is a representative researcher in the new communication-oriented research paradigm. She put forward the idea of commognition, a theoretical presumption about how social interaction and individual cognition relate to one another: interpersonal communication and cognitive processes are essentially two sides of the same phenomenon (Sfard, 2007). Commognitive conflict refers to the cognitive conflicts that arise during CPS interactions among individuals. It occurs when participants in a collaborative setting encounter different perspectives, interpretations, or strategies while working together to solve a problem. In order to promote CPS skills, it is effective to analyze and diagnose commognitive conflict by observing students working in pairs or groups. Visualizing the analysis of commognitive conflict during CPS allows educators to provide targeted feedback and better teaching interventions to students, thus promoting cooperative learning behavior. In the meantime, the development of artificial intelligence (AI) has made it possible to apply advanced statistical measures (e.g., RSM theory) to the practice of an online intelligent cognitive diagnostic system based on a test bank without difficulty. However, a theory of commognitive conflict analysis grounded in real classrooms is still needed as the basis for more complex commognitive conflict diagnosis and visualization.
To better develop students' CPS skills, our research studied the intelligent diagnosis and visual analysis of commognitive conflict. The research was conducted based on video feedback data obtained from a Sino-Australian collaborative project team. We analyzed the observed commognitive conflicts within the knowledge dimension and classified them into conceptual, procedural, and contextual conflicts, following the cognitive conflict structure proposed by Lee and Yi (2013). For conceptual knowledge, the sub-components include facts, conceptions, relations, and conceptual structure. These aspects pertain to understanding the fundamental principles, ideas, and relationships within a given subject area. Procedural knowledge encompasses thinking skills, ranging from simple to complex. These skills include description, selection, representation, inference, synthesis, and verification. Contextual knowledge focuses on specific contexts such as school, everyday life, and social/cultural/historical contexts. Understanding how knowledge is situated within different real-life situations allows for a more comprehensive and meaningful application of knowledge. By identifying, diagnosing, and visualizing the commognitive conflict within knowledge dimensions during CPS, we can learn about students' collaborative learning behaviors. This understanding promotes a learning-oriented innovative approach and even facilitates the creation of technology-assisted feedback practices. Moreover, a script for technology-assisted feedback practice can provide insights into the communication process, enable speech recognition for efficient feedback, and facilitate discourse diagnosis for improved instruction and learning outcomes. Based on the background and purpose outlined above, the study focused on the identification, diagnosis, and visualization of commognitive conflicts that arise during collaborative problem solving (CPS) among student pairs. Firstly, we categorize the knowledge dimensions of commognitive conflict as conceptual, procedural, and contextual, so as to observe and analyze the commognitive conflicts in student pairs. Secondly, three typical cases of high quality, medium quality, and low quality were selected through SOLO theory from 32 pairs of student peers for further case analysis. Finally, diagnosis and visual analysis of these cases are conducted to assist in cultivating students' CPS abilities. We mainly study the following questions: Q1: What is the profile and visual diagnostic for the knowledge dimensions of commognitive conflict among student pairs? Q2: How can commognitive conflict be diagnosed in the discourse of student pairs? Q3: How can commognitive conflict be visualized with a 3D block diagram? By studying students' performance in commognitive conflict during CPS, it is possible to provide teachers with a theoretical framework and a visual case reference that enables them to provide innovative learning-oriented assessment and feedback practice in the classroom. It also gives guidelines and script materials for future speech recognition supported by artificial intelligence and for commognitive conflict discourse diagnosis.
Literature review The study of commognitive conflict In recent years, research on commognitive conflict has tended to extend in a broad sense, viewing commognitive conflict as a state produced by discrepancies between an individual's cognitive structure and the environment, or between various components within that structure (Lee et al., 2003). Commognitive conflicts, which are cognitive conflicts that arise during communication, exist in the communication of different vocabulary usage, rules of evidence, etc. (Sfard, 2008). From a cognitive perspective, the heterogeneity of a team's knowledge gives rise to diverse cognitive conflicts, which, in turn, facilitates the activation of more flexible cognitive mechanisms. These mechanisms enable the fusion of divergent cognitive schemata, ultimately leading to the creation of new cognitive constructions (Zhang and Ni, 2006), stimulating various types of information exchange and the discovery of new solutions. Research by sociocultural theorists on the social interaction view of individual mental development has offered a theoretical and empirical basis for the assessment of commognitive conflict. Vygotsky (1962) originally proposed that learners acquire knowledge most effectively through interaction, dialog, and negotiation in social, authentic learning situations that promote holistic development. This not only improves the competency and learning performance of students, but also stimulates the cognitive growth of the group through cooperation and interaction. When students are confronted with socially authentic problem situations, they participate in the CPS process through interaction, dialog, negotiation, and other learning styles. At this time, members' heterogeneous knowledge structures communicate with each other, and while they build complementary knowledge within the team, they also generate varying degrees of commognitive conflict. Moreover, the proportion of commognitive conflicts in CPS was found to be significantly higher than in traditional cooperative learning (Liang et al., 2017). Sfard (2007) created the theory of commognition and categorized its levels and components based on various student-teacher and student-student communication dialogs. She developed the commognitive vision of mathematics as a type of discourse, a defined form of communication made distinct by its vocabulary, visual mediators, routines, and the narratives it produces. However, the theory has not yet codified the level of discourse or developed a more detailed description of the forms of conflicts, which is a challenging and innovative aspect of the study. Although commognitive theory has areas that need refinement, its applications are very broad; it can serve as an effective research lens for different fields, and its potential has not yet been fully explored (Presmeg, 2016). Due to the widespread application of commognitive theory, which has also received considerable attention from academics, the theory has been refined in practice. Regarding knowledge constructs for commognitive conflict, Gyoungho (2007) proposed a structural map of knowledge and beliefs that points to the analysis of students' cognitive conflict. This serves as a reference for the classification of knowledge content for commognitive conflicts.
Therefore, this paper argues that commognitive conflicts in discourse can be classified into conceptual, procedural, and contextual knowledge dimensions. In this way, we are able to observe in the classroom how students collaborate to solve problems and to record the commognitive conflicts that arise during the learning process of interaction, dialog, and negotiation among members. If students can effectively manage commognitive conflicts, they will be able to foster cognitive development at both the individual and group levels. Moreover, this ability will also enhance their critical thinking skills and creative abilities. The study of commognitive conflict in CPS In terms of problem-solving research and practice, scholars have constructed mature models to study and understand the process of problem solving. These models provide frameworks and guidelines for approaching problems effectively. Polya (1973) proposed a problem-solving model consisting of a four-step process, which emphasizes the importance of understanding the problem thoroughly, strategic thinking, and critical reflection, and helps develop effective problem-solving skills. Subsequent scholars have adapted and expanded upon Polya's problem-solving model to cater to various needs and situations (Yu, 2008; Cao et al., 2016; Wei, 2019). For example, Schoenfeld (1985) divided the paradigm of problem solving into six phases: preparation, exploration, strategy formulation, execution, evaluation and inquiry. In the "Inquiry" module, commognitive conflict refers to the cognitive conflicts that students may encounter while engaging in an inquiry-based learning process. When learners encounter these conflicts, they are presented with opportunities for deeper understanding and critical thinking. However, these studies have primarily focused on individual students solving closed problems as the primary case study, and none of the models directly addresses commognitive conflict. When student pairs or groups solve mathematical problems in open environments, the cognitive model of CPS becomes more sophisticated, and major commognitive conflicts will occur. In a study involving students' commognitive conflict processes, Lewis and Mayer (1987) constructed a model of the process of comparison problem comprehension, arguing that students have a preference for the order of information provided in a problem and prefer problems that are in the same order. When students do not agree on the relational terms in solving comparison problems and the required arithmetic operations, comprehension errors occur and commognitive conflict arises. This type of conflict, due to students' preference for the order of problems, will most likely present explicit commognitive conflict during CPS. In terms of discourse analysis of commognitive conflict in CPS, Barron (2000), on the other hand, focuses on group-level characteristics of CPS, providing targeted strategies for examining cooperative group learning and providing explanations for the variability in the outcomes of collaborative activities. By recording and coding the quality of communication in the study groups, the characteristics of group interaction and problem-solving goal congruence are analyzed, and the groups are classified into high-quality and low-quality problem solving. Iiskala et al.
(2011) explore how metacognition becomes a socially shared phenomenon in their study of conversational episodes and characteristics during collaborative mathematical problem solving among high-achieving student pairs. These studies provide a powerful reference for the discourse analysis of commognitive conflict in CPS. In the context of commognitive conflict visualization, Ding (2009) visualized knowledge refinement in CPS work. The study used a behavioral sequential approach to map student pairs' and individuals' knowledge refinement curves in CPS. This visualization study of commognitive conflict in CPS offers valuable insights and ideas. Overall, the majority of research on commognitive conflict in CPS has focused on the cognitive processes of individuals in problem solving and discourse conflict, whereas the visualization of commognitive conflict content classification and discourse is lacking. Therefore, our research utilized SOLO theory to select student pairs of high, medium, and low quality, and conducted a comprehensive investigation into identifying, visualizing, and diagnosing the commognitive conflicts during CPS. The statistical profile, discourse diagnosis, and visual analysis of commognitive conflict in CPS enable teachers to gain a deeper understanding of how students encounter commognitive conflicts and how student pairs of different quality approach problem solving. As a result, this provides teachers with timely and targeted guidance to support their students effectively, thereby enhancing students' CPS skills. Moreover, the process of discourse analysis and visualization also offers a learning-oriented and innovative approach, providing a script for technology-assisted feedback practices. Research participants and cases Year 7, which is the present focus of the international CPS assessment, was selected prior to Year 8 in consideration of the exploratory nature of the project. The research segment, problem tasks, and research environment are all mostly consistent with the Australian partner side. 32 student pairs, consisting of a total of 64 seventh-grade students from the LH middle school in an urban area of the TZ district in BJ city, with moderate educational quality, were selected as the sample for the CPS recordings. At the same time, the student outcomes were evaluated using the SOLO (Structure of the Observed Learning Outcome) five-level classification evaluation method (Biggs and Collis, 1982), which took into account the characteristics of the open-ended mathematical and contextual problems used in the project. SOLO theory classifies observable learning outcomes into five levels: prestructural, unistructural, multistructural, relational and extended abstract structure. This resulted in the selection of typical cases with high-quality, medium-quality and low-quality outcomes, as shown in Table 1.
Research task The study utilized open-ended contextualized mathematical problems, which better provoke commognitive conflict in communication (Clarke and Helme, 1998), from the Sino-Australian SEL project. The mathematical problem task is "Households and Age", in which student pairs collaborate to solve the problem, calculate the age of each person, and work out the social relationships of five people, as shown in Table 2. The student competencies examined in this task include the pedagogical problem-solving cycle involved in the International Institute for Frontier Mathematics Education, which promotes individual and group reflection on a dialectical cycle (Lu, 2017). Additionally, the study categorized the knowledge dimensions of commognitive conflict as conceptual, procedural, and contextual, and then examined and visually presented the features of the knowledge dimensions of the student pairs. Research environment Conforming to the specifications of the data collection classroom environment for the Sino-Australian Student Collaborative Mathematics Problem Solving Project, the study environment was built in a filming classroom at the school, with which the children were quite familiar. Each group of 4-6 students in the videotaped classroom sat around tables arranged together. The participants performed three tasks: individually, in pairs and in groups. In this paper, we only analyze the task conducted in pairs. A video camera was set up to capture the entire activity, and wireless microphones (left and right channels) were used to gather sound. Each group of students received pens, task sheets, rough draft paper, and other tools so they could complete the mathematical tasks. The study utilized 12 min of data from the problem-solving session involving pair participation. Figure 1 depicts the information in detail. Data analysis During mathematical problem solving, student performance was videotaped, and the discourse was coded and analyzed. In this paper, we present a visual presentation and qualitative analysis of students' performance in commognitive conflict during CPS. We analyze the differences in knowledge dimensions, the time and frequency of occurrences, as well as the diagnosis and visualization of commognitive conflict. The study coded the knowledge dimensions of commognitive conflict, identifying and classifying the types of conflict segments and recording the length of conflict for each segment. This resulted in statistics on the number, type, and average duration of conflict for the 32 pairs of commognitive conflict groups. Two coders were utilized to confirm the validity of the coding results, and the consistency coefficient of the results was 0.913. Inconsistencies in coding were reviewed by the coders until agreement was reached. To further analyze the commognitive conflicts of student pairs of different quality, the study used Nvivo12 software and 3D visualization block diagrams to visualize the data and thus present a more visual representation of commognitive conflict in CPS.
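The consistency coefficient of 0.913 reported above is an inter-coder agreement measure; the paper does not name the exact statistic, so, as one plausible reading, the sketch below computes Cohen's kappa between two coders' conflict-type labels (the example labels are invented):

```python
# Hypothetical sketch: inter-coder agreement on conflict-type labels.
# Cohen's kappa is assumed here; the paper does not name its statistic.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of segments coded identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of the two coders' marginal frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: each conflict segment labeled by both coders.
coder1 = ["procedural", "contextual", "procedural", "conceptual", "contextual"]
coder2 = ["procedural", "contextual", "procedural", "conceptual", "procedural"]
print(round(cohens_kappa(coder1, coder2), 3))  # 1.0 would be perfect agreement
```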
Visual diagnostic of commognitive conflict knowledge dimensions The study first encoded the commognitive conflicts of the 32 student pairs and conducted an overall statistical analysis of the number of conflicts, average conflict duration, proportion of different types of conflicts, and resolution rate among student pairs. By coding and counting the student pairs' commognitive discourse, it was found that students' commognitive conflicts were mainly concentrated in procedural and contextual knowledge, accounting for 47.5% and 46.5%, respectively. Conceptual knowledge accounted for only 6%. In terms of problem-solving percentage, the problem-solving percentage of procedural knowledge was 31%, with a resolution rate of 65.3%; that of contextual knowledge was 26%, with a resolution rate of 55.9%; and that of conceptual knowledge was 6%, with a resolution rate of 100%. In terms of average conflict duration, the longest time was required to resolve procedural knowledge conflicts, with an average of 52.77 s per conflict. In contextual knowledge, the time for unresolved conflicts was 49.61 s, in which students were aware of the differences in their respective mathematical contexts and therefore chose to postpone the conflicts. The details are shown in Table 3. As can be seen in Table 3, the conceptual knowledge conflicts have the lowest percentage and the highest resolution rate, which indicates that students have a good grasp of the basic concepts of such problems. To further analyze the commognitive conflicts among student pairs in CPS, the study selected high-quality, medium-quality, and low-quality case pairs using SOLO theory. Visual analysis was then conducted on the quantity, occurrence, duration, and resolution status of commognitive conflicts in the student pairs, as shown in Table 4. Similarities and differences were discovered in the total number of occurrences, duration, and resolution status of commognitive conflicts among the student pairs. The similarity lies in the total number of commognitive conflicts, with 8-9 conflicts occurring within a 12-min period. The difference is that each pair has its own characteristics in terms of the occurrence, duration, and resolution status of commognitive conflicts. The commognitive conflicts in the high-quality student pair emerged early and were resolved relatively quickly, with 7 out of 8 conflicts being resolved, while the medium-quality pair resolved 5 out of 8 and the low-quality pair resolved only 3 out of 9, with the first two conflict periods taking longer. The information presented in the visualization diagram in Table 4 can also be expressed as the information shown in Table 5.
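Statistics of this kind (counts, shares, resolution rates and mean durations per knowledge dimension, as in Tables 3-5) follow mechanically from the coded conflict segments. A minimal sketch with an invented record format:

```python
# Hypothetical sketch: aggregate coded conflict segments into per-dimension
# statistics. The record format and the sample values are invented.
from dataclasses import dataclass

@dataclass
class Conflict:
    pair: str          # e.g. "P14"
    dimension: str     # "conceptual" / "procedural" / "contextual"
    duration_s: float  # seconds from onset to end of the episode
    resolved: bool

segments = [
    Conflict("P14", "procedural", 52.8, True),
    Conflict("P14", "contextual", 49.6, False),
    Conflict("P4",  "contextual", 31.0, True),
    Conflict("P2",  "procedural", 60.2, False),
]

total = len(segments)
for dim in ("conceptual", "procedural", "contextual"):
    subset = [c for c in segments if c.dimension == dim]
    if not subset:
        continue  # no conflicts coded in this dimension
    share = 100 * len(subset) / total
    resolved = 100 * sum(c.resolved for c in subset) / len(subset)
    mean_dur = sum(c.duration_s for c in subset) / len(subset)
    print(f"{dim}: share={share:.1f}%  resolution rate={resolved:.1f}%  "
          f"mean duration={mean_dur:.1f}s")
```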
In terms of the categories of commognitive conflicts, the different student pairs have fewer conflicts in the conceptual knowledge dimension. This is because the conceptual knowledge content of this mathematical task is not difficult, such as "the total age of five people, the age of a seventh grader, and the ages of the remaining four people". When such conflicts arise, students are more likely to resolve them. The commognitive conflicts mainly focus on the procedural and contextual dimensions, which is consistent with the conclusions obtained in Table 3.

TABLE 1 Selection of different student pairs' CPS cases through SOLO.
Case / Case demonstration / Case interpretation:
High quality / P14 / The conceptual and contextual knowledge was well presented, and several sets of correct and completed data were presented, both at the third level of the SOLO five-level evaluation. The presentation of hypothetical descriptions and family relationships for the five household roles, such as "possible" and "example", is at the fourth level, i.e., the higher SOLO Level 4.
Medium quality / P4 / The conceptual and contextual knowledge was well presented, and a complete set of data was presented, reaching the second level of SOLO's five-level evaluation. There are multiple assumptions about the internal relationships of the five resident roles, but the results of the thinking are incomplete and need to be supplemented.
Low quality / P2 / Only at the first level of the five levels of SOLO evaluation, and no accurate results were shown.

In the high-quality student pair, all procedural dimension conflicts were resolved, while unresolved conflicts still exist in the medium- and low-quality pairs. Therefore, the study performs further discourse diagnosis and 3D block diagram visualization analysis on the procedural and contextual knowledge dimension conflicts, to reveal the characteristics and patterns associated with these conflicts. Discourse diagnosis of commognitive conflict After selecting procedural-dimension commognitive conflict fragments, such as high quality P14-4S, medium quality P4-3S and low quality P2-2S, and performing discourse visualization diagnosis, it was discovered that procedural commognitive conflict is primarily manifested in the discourse about mathematical results after calculation errors by the pairs. Table 6 takes the commognitive conflict fragments of the procedural knowledge dimension as an example and performs discourse diagnosis from the beginning to the end of the conflict. This approach is also applicable throughout the entire study. Discourse diagnosis helps the research to identify and analyze the linguistic, semantic, and interactional features of discourse, and to reveal the underlying patterns and dynamics of how student pairs resolve procedural commognitive conflict. For example, in the high-quality student pair, the discourse diagnosis reveals that Girl 14B had a commognitive conflict with Girl 14A's calculation of 34 and further questioned the follow-up operation, which was diagnosed as the cause and detection of the conflict. Girl 14B corrected the commognitive conflict after Girl 14A pointed to the draft document and made her explanations clear. She also suggested using 125 minus 34 and continuing to move
Figure 1: Classroom setup for data collection for the pairs' math problem-solving project.
Commognitive conflict visual diagnosis using 3D block diagram The study selected commognitive conflict fragments of the contextual knowledge dimension from cases of different quality and conducted 3D block diagram visualization analysis and diagnosis. This 3D block diagram is adapted from Lee's structure of cognitive conflict. In the research tasks, contextual knowledge is mainly divided into school mathematics knowledge and life experience. The procedural knowledge dimension is divided into arithmetic, mathematical quantitative certainty, and interval uncertainty. The conceptual knowledge dimension involves basic mathematical concepts and other knowledge, consistent with the previous text. The path characteristics of commognitive conflicts can be seen intuitively in the 3D block diagram, as shown in Table 7. The 3D diagram allows for an intuitive visualization of the paths taken by student pairs in commognitive conflicts. Taking the high-quality P14-5S fragment as an example, the conflict path is as follows: 1-BA → 2-B → 3-A → 4-BA → 5-A. The specific process is described as follows: 1-BA represents student 14B, who based their judgment on life experience and considered the possibility that the child is 12 years old, while student 14A revised this to 13 years old. 2-B indicates that student 14B, while considering the reasonableness of the gap between the parents' ages and the child's age, concluded that the other two children should be siblings. 3-A indicates that student 14A experienced a commognitive conflict in response to the 2-B judgment made by student 14B. This conflict arose due to student 14A's narrow focus on mathematical quantification in school mathematics. 4-BA represents student 14B's explanation of their 2-B judgment. After student 14A confirmed the explanation, they performed calculations to determine its validity. 5-A indicates that when student 14A was recording the results, they considered the uncertain nature of mathematics and added descriptive elements such as "examples" in the column. Research has revealed distinct path characteristics for commognitive conflicts among student pairs of different quality levels. In the case of the high-quality student pair (P14-5S), the process path for the emergence, negotiation, and resolution of commognitive conflicts follows a path of "life experience → school mathematics → life experience". They also consider both the certainty and uncertainty aspects of mathematical problems. Similarly, the medium-quality student pair (P4-8S) follows a path of "life experience → school mathematics → life experience". They take into account the age range in real life, which is then transformed into mathematical certainty, leading them to conclude that the five individuals have a shared rental agreement. Although their thinking is somewhat biased toward the life context, their logic remains reasonable. On the other hand, the low-quality student pair (P2-8S) only experiences the path of "life experience → school mathematics". They fail to fully convert the discussed problems into the realm of school mathematics. Despite considering the interval uncertainty, their analysis lacks accuracy and depth.
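The 3D block diagrams in Table 7 trace a conflict's trajectory through the three knowledge dimensions. As a hypothetical sketch of such a plot (the numeric encoding of each utterance is invented purely for illustration), matplotlib's 3D axes can draw the path of a fragment such as P14-5S:

```python
# Hypothetical sketch: a commognitive-conflict path in 3D, one axis per
# knowledge dimension. The coordinate encoding of each step is invented.
import matplotlib.pyplot as plt

# Fragment P14-5S: one (contextual, procedural, conceptual) triple per step.
# Assumed encoding: contextual 0 = life experience, 1 = school mathematics;
# procedural 0 = arithmetic, 1 = quantitative certainty, 2 = interval
# uncertainty; conceptual 0 = basic concepts.
steps = ["1-BA", "2-B", "3-A", "4-BA", "5-A"]
coords = [(0, 2, 0), (0, 2, 0), (1, 1, 0), (1, 1, 0), (0, 2, 0)]

xs, ys, zs = zip(*coords)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(xs, ys, zs, marker="o")           # the conflict path
for label, (x, y, z) in zip(steps, coords):
    ax.text(x, y, z, label)               # annotate each utterance
ax.set_xlabel("contextual")
ax.set_ylabel("procedural")
ax.set_zlabel("conceptual")
plt.show()
```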
In the 3D visualization diagnosis of commognitive conflict, the study mainly draws on Gyoungho Lee's three-dimensional viewable framework (Lee and Yi, 2013); however, Lee's related research mainly reveals static cognitive conflict and has not yet used the theory to analyze dynamic cooperative problem solving. This research uses the 3D analysis framework to present a dynamic path for commognitive conflicts and also refines the use of the analysis framework. Table 7 shows that commognitive conflicts of different quality in CPS have distinct pathways, with the high- and medium-quality student pairs showing a "life experience → school mathematics → life experience" problem-solving process. In this process, students make full use of their existing life experiences and mathematical knowledge, improve their own mathematical knowledge during the commognitive conflict with their partners, and internalize and recreate based on their existing mathematical knowledge. This also coincides with Freudenthal's theory of mathematics education, namely the ideas of mathematical reality, mathematization and recreation (Freudenthal, 1973). It demonstrates that students may advance their mathematics learning and build their own mathematical and life experiences by effectively resolving commognitive conflicts in CPS. Discussion International assessments recognize CPS skills as critical for student growth, and commognitive conflict theory provides a theoretical foundation for individual knowledge creation and social engagement in collaborative challenges. Although established commognitive theories give a comprehensive description of conflict levels and elements, visualization studies of content classification and discourse levels are lacking. This research provides a visual representation of the knowledge dimension classification of commognitive conflict in CPS, as well as a discourse analysis of student pairs of different quality. Nvivo12 software was utilized in the study to visualize commognitive conflict with sound waves, as well as to present three-dimensional routes in a 3D analytic framework. This innovation presents commognitive conflict in a more concrete and visual manner. Specifically, intelligent diagnosis and visualization of student pairs' CPS behavior can enable teachers to provide timely feedback, which provides new solutions for the observation of students' engagement during CPS. In this research, we chose seventh-grade student pairs, who have already learned addition, subtraction, multiplication, and division, to participate in the study. Open-ended contextualized mathematical life situations are added, so students focus on both arithmetic and character relationships, as well as each person's age and the total age. Table 1 presents the overall statistics of commognitive conflicts for the 32 student pairs. The researchers discovered a low percentage of conceptual knowledge conflicts, while procedural and contextual knowledge commognitive conflicts were the primary challenges. Thus, instructional interventions in these areas should be strengthened. Therefore, we conducted further investigations into commognitive conflicts in the procedural and contextual knowledge dimensions through discourse diagnosis and visual analysis.
In the discourse diagnosis of commognitive conflict (Table 3), the study identified the origination, detection, interpretation, modification, and response phases of commognitive conflict. These specific linguistic features help us better understand when and where a commognitive conflict begins and ends, similar to the work done by Zhao et al. (2022). This analysis allows for a deeper understanding of the processes involved in commognitive conflict, enabling researchers and educators to effectively address and intervene in these conflicts to enhance student learning outcomes. For visual analysis, Table 4 presents the visualization of commognitive conflicts using sound waves, displaying the occurrence, quantity, duration, and resolution status of the conflicts. Additionally, Table 7 utilizes 3D block diagrams to illustrate the path of commognitive conflict in the contextual knowledge dimension. This dynamic visualization shows how the conflicts occurred and their possible resolutions. For example, in the case of the low-quality student pair (P2-8S) mentioned in Table 7, if the teacher uses the visual diagram and identifies that this student pair is experiencing difficulty in translating life experiences into school mathematics, timely intervention can be implemented. Recognizing this specific challenge allows the teacher to address it directly and provide targeted support or guidance to help these students overcome this hurdle. To summarize, we diagnose and visualize the commognitive conflicts in student pairs' CPS, which innovates the evaluation of learning-oriented feedback practices. Discourse diagnosis and visual analysis play a crucial role in enhancing the impact of feedback on student learning. As highlighted by Er et al. (2021), peer feedback can be particularly effective in providing valuable solutions. As problem solving is a tool, a skill, and a process, effective identification of commognitive conflicts is needed to improve CPS skills and can even lead to creative solutions. However, due to limitations in research time and effort, further research is needed to analyze group work, employ additional visualization diagnoses, and explore teachers' feedback on student cooperation issues, among other aspects. Addressing these shortcomings will be the focus of future research in this study. Conclusion Based on the above studies, the following conclusions can be drawn: there is a need to encourage students to focus on and resolve commognitive conflicts and to provide timely feedback; visualization studies of commognitive conflict can empower AI-assisted teaching; and the intelligent diagnosis and visual analysis of CPS provide innovative solutions for teaching feedback. Encourage students to focus on and resolve commognitive conflicts and provide timely feedback Existing commognitive theories give a detailed classification of commognitive conflict levels and elements, but research on content classification and visualization of discourse levels is lacking. In this study, we classify and visualize the knowledge dimensions of commognitive conflict in order to provide a theoretical and practical foundation for broader application. Future advancements can be made in the precise classification of commognitive conflict, the refinement of discourse analysis, and the development of visualization tools.
The visual analysis of the commognitive conflicts of student pairs of different quality revealed significant differences in the duration, number, and resolution of conflicts, especially in the procedural and contextual knowledge dimensions. Thus, teachers need to encourage students to focus on and resolve commognitive conflicts and to make appropriate instructional interventions, which facilitate the development of students' CPS from low to high quality. The commognitive conflict in CPS, when coupled with timely and targeted feedback, empowers teachers by fostering student engagement, deepening understanding, enabling personalized instruction, and so on. When students receive immediate responses to their contributions, it reinforces their engagement and participation. Furthermore, timely and personalized feedback provides guidance and clarifies misconceptions, helping students refine their understanding and address their unique challenges. Therefore, the dynamic coding and visualization of commognitive conflict in CPS can more effectively identify the types and manifestations of commognitive conflict in problem solving and thus provide a basis for teachers to deliver targeted instructional interventions. Visual studies of commognitive conflict can empower AI-assisted teaching During the research, we investigated explicit indicators of CPS, such as commognitive conflict categories, occurrence, and average conflict duration, which are conducive to automatic identification and data analysis in the context of the rapid development of artificial intelligence (AI) for speech recognition. As AI technologies have the potential to reduce the workload of teachers and test developers (Yunjiu et al., 2022), their further development in the application of commognitive theory can provide directions and scripts for AI-assisted teaching, particularly in commognitive conflict discourse recognition and diagnosis in the future. At the same time, as AI capabilities such as speech recognition become more and more mature, computerized automated evaluation has begun to perform intelligent automated diagnostic analysis. Therefore, a more in-depth visualization study of commognitive conflict in CPS can provide ideas and references for future AI-empowered teaching and learning.
Intelligent diagnosis and visual analysis of CPS provide innovative solutions for teaching feedback In this paper, intelligent diagnosis and visual analysis are carried out on the commognitive conflicts of student pairs in CPS, which provides an innovative solution for the visual presentation of teaching feedback. Visual studies of commognitive conflict can provide teachers with rich data that informs their decision-making. For example, when the visual diagram shows a low-quality student pair, by analyzing the visual data teachers gain insights into the quality of students' interactions and determine when and how to provide targeted instructional interventions. Its further development can provide an analysis framework and case reference for teachers, or even for automated computer systems, to evaluate student pairs' problem-solving level. This solution of intelligent diagnosis and visual analysis is intended to provide a deeper understanding of how students respond to feedback practices on commognitive conflicts in CPS in future teaching, and to shift toward more applicable results. This, in turn, promotes the development of individuals in social interaction and communication. Therefore, in the future, it is necessary to strengthen research on the diagnosis and visualization of commognitive conflict in CPS, which will provide scripts for future artificial intelligence and offer data support for targeted, timely, and personalized assistance.

TABLE 2 Analysis of the "Household and Age" open-ended contextualized math problem. Task: "... in a house; their total age is 25, and one of them is in the seventh grade. How old could each of the remaining four individuals be? What are the possible relationships between the five residents? Provide an explanation in a paragraph." Conceptual knowledge: commognitive conflicts caused by factual, conceptual, relational, and conceptual-structure disputes; e.g., the overall age; the ages of each of the five persons; etc. Procedural knowledge: commognitive conflicts arising from thinking processes such as description, selection, expression, reasoning, integration, and verification; e.g., the calculation procedure for the ages of the five people; etc. Contextual knowledge: commognitive conflicts brought on by conditions in school, daily life, society, culture, and history; e.g., social relationships among the 5 people; roles of the 5 people; etc.
TABLE 1 Selection of different student pairs' CPS cases through SOLO.
TABLE 3 Visual diagnostic profile of commognitive conflict.
TABLE 4 Diagnosis and visualization cases of commognitive conflict. S in the table indicates "commognitive conflict was finally resolved", while N indicates "commognitive conflict was finally not resolved". The red line indicates conceptual knowledge conflict, the yellow line procedural knowledge conflict, and the blue line contextual knowledge conflict. The timeline and commognitive conflict segment codes in the table were generated by Nvivo12 software.
TABLE 5 Statistical comparison of different student pairs' commognitive conflict.
TABLE 6 Discourse diagnosis of commognitive conflict in the procedural knowledge dimension. Excerpt: "Add it up, the others add up to 107, minus that 13 year old, there has to be an old man. If you go by the fact that his dad and his mom are 40."
TABLE 7 3D block diagram of commognitive conflict in the contextual knowledge dimension.
8,313.2
2023-12-20T00:00:00.000
[ "Education", "Computer Science" ]
CPMU Development at Diamond Light Source Over the last three years (2020-2022) Diamond Light Source has installed four in-house designed, built, and measured Cryogenic Permanent Magnet Undulators (CPMUs). All four are 2 m long with a 17.6 mm period and have a minimum operating gap of 4 mm. These have replaced existing 2 m long in-vacuum Pure Permanent Magnet (PPM) devices to improve the flux to several of Diamond's MX (Macromolecular Crystallography) beamlines by a factor of 2-4. In this paper we present the mechanical and cryogenic design considerations, and the shimming procedures and tools developed to produce these devices. The performance of the CPMUs compared to their PPM counterparts is also reviewed. Introduction An undulator creates a periodic static magnetic field to stimulate x-ray emission from electron beams in synchrotron radiation facilities. Cryogenic Permanent Magnet Undulators (CPMUs), in which the permanent magnets are cooled to cryogenic temperature, provide a sufficiently large magnetic field with a shorter period length compared to a PPM undulator, resulting in higher flux and brightness over a higher photon energy range [1]. The Macromolecular Crystallography Insertion Device (ID) upgrade project at Diamond Light Source aims to increase brightness and flux to aid in-situ data collection. The primary energy of interest is ~12.6 keV and the full operating photon energy range is 5-30 keV. Undulator design 2.1. Magnetic design CPMU magnetic design calculations are performed using RADIA, as shown in Fig. 1. Table 1 gives the main parameters of the cryogenic undulators. The magnets for CPMU-4 are thinner due to the initial requirement of a 16.5 mm period length, which was later changed to 17.6 mm to improve the tuning across the full energy range in Diamond I and Diamond II. The gaps between harmonics would have been exacerbated for Diamond II with the previously chosen period of 16.5 mm. Figure 2 shows the magnetic field for CPMU-3 at room and cryogenic temperatures. The field is increased by ~13% at minimum ID gap from room temperature to cryogenic temperature. For CPMU-4 the measured field enhancement is ~8% at a 4 mm ID gap due to the thinner magnets. 2.2. Mechanical design Mechanical design calculations consider the total length of the insertion device and the minimum operating gap, which result in a very high magnetic force between the two girders, and estimate the impact of girder deformation on the magnetic field and thus on the performance of the undulator as a radiation source. The force calculated between the upper and lower girders over the full length of the ID is ~20.5 kN at cryogenic temperature; the gap variation due to girder deformation is < 1 µm. The force between poles is ~180 N. At Diamond, the CPMU's mechanical frame is a two-pillar structure, which supports the magnetic forces in a C configuration, open at one side for out-of-vacuum measurement and correction (see Fig. 3). The gap drive system has four motors which allow the girders to be moved independently to tune the gap and the longitudinal taper. Both girders are maintained at cryogenic temperatures by circulating liquid nitrogen through them. The girders are fixed to the out-of-vacuum frame with support columns passing through to the vacuum chamber. These support columns are equipped with bellows and are used to tune the position of each girder and to tune the magnetic gap along the full length of the ID by using mechanical shims.
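To relate these design parameters to the photon output, the standard on-axis undulator equation can be used. The sketch below computes the deflection parameter K and the odd-harmonic photon energies for a 17.6 mm period device in the 3 GeV Diamond ring; the peak field value is an illustrative assumption, not a measured Diamond number:

```python
# Illustrative sketch: on-axis harmonic energies from the undulator equation.
# B0 below is an assumed peak field, not a measured value.
PERIOD_MM = 17.6   # CPMU period length
E_GEV = 3.0        # Diamond storage ring energy
B0_T = 0.85        # assumed peak field at minimum gap (illustrative)

# Deflection parameter: K = 0.0934 * lambda_u[mm] * B0[T]
K = 0.0934 * PERIOD_MM * B0_T

for n in (1, 3, 5, 7):  # odd harmonics dominate on axis
    # E_n [keV] = 0.9496 * n * E^2 [GeV^2] / (lambda_u [cm] * (1 + K^2/2))
    e_kev = 0.9496 * n * E_GEV**2 / ((PERIOD_MM / 10) * (1 + K**2 / 2))
    print(f"harmonic {n}: {e_kev:.2f} keV")
```

With these assumed numbers the 5th harmonic lands near the ~12.6 keV primary energy of interest; opening the gap lowers B0 and K, sweeping each harmonic upward in energy so that the 5-30 keV range can be covered.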
Control system The control system for the CPMUs is based on earlier designs [2], with adjustments driven by obsolescence. As two examples, the VME-based controller was changed to Linux-based servers using remote I/O [3], and for motion control, the Delta Tau Brick Controller became obsolete during the construction cycle of the CPMUs and was substituted by the Omron CK3M motor controller. Undulator assembly CPMUs are built by initially mounting all the horizontal magnets without poles. An in-house sorting code, Opt-ID, based on an artificial immune system, uses Helmholtz-coil measurement data to sort the magnet blocks for an optimum magnetic field distribution [4]. Following magnet assembly, magnetic measurements are performed, and the trajectories and RMS phase error are corrected using magnet swapping and flipping suggestions from Opt-ID in the 'magnet-only' configuration. For the first two CPMUs, magnet height adjustment was also used to reduce the RMS phase error in the 'magnet-only' configuration. In the second step of assembly, all the poles were inserted and then finally shimmed with the pole height adjustments. The idea was that the initial assembly in the 'magnet-only' configuration allows initial build errors to be corrected at an earlier stage, thereby saving time and effort later. However, trajectory and phase error correction using magnet swaps and flips is limited by the availability of appropriate magnets. The magnetic errors were found to be predominantly caused by pole height error. Also, correcting the field integrals with pole height adjustment is faster than magnet swaps and flips. CPMU-5 is therefore being built by mounting all the magnets and poles at the same time; trajectory and phase error corrections will be made with pole height adjustment only. Room temperature After full assembly, the magnetic field is measured with a Hall probe and a flipping coil bench. Magnetic field corrections are applied to optimize the radiation properties. Coarse corrections are made first, for example mechanical shimming on the support columns to correct the gap, or applying a longitudinal taper to the girder to increase or decrease the magnetic field linearly with longitudinal position. To fine-tune the trajectories, poles are either shifted vertically or tilted. To correct phase error, the heights of the poles are adjusted in pairs instead of moving the magnet heights, as the pole height can be continuously fine-tuned in microns due to the holder design (see Figs. 4 and 5). Magnet height adjustment can only be done in discrete steps of ~15 µm with this holder design. To correct integrated multipoles, small cylindrical magnets called magic fingers are used at each end of the girders. Figs. 6-8 show the measurement results as an illustration of before and after shimming at room temperature. Figure 8: First field integrals for CPMU-3 before (dashed) and after (solid) magic finger shimming for a 5 mm ID gap. Cryogenic temperature An in-vacuum Hall probe and wire measurement system was developed and used to measure the CPMUs at cryogenic temperature [5]. At cryogenic temperature, there is a longitudinal thermal contraction of the girder (~8 mm over the 2 m long ID) and a vertical thermal contraction of the support columns (~1 mm). The magnetic gap therefore varies along the length of the ID, and consequently the RMS phase error increases. For CPMU-3, the RMS phase error increases from 1.8° to 6.6°
(see Fig. 9), which is reduced to 3.2° after two iterations using mechanical shims on the support columns, based on the measured magnetic field signature. Table 2 lists the CPMUs installed on different beamlines over the last few years. Flux is improved by a factor of ~2 at the primary energy of interest, i.e. 12.6 keV, and at higher photon energies the flux gain is higher still. On beamline I24, the ID gap was restricted to 6.5 mm during operation, and on investigation both the upper and lower beam foils were found to bunch into the gap close to the centre of the ID. This issue was resolved following thermal cycling, and the I24 CPMU-1 then worked within the normal operating gap range. CPMU-1 was replaced with CPMU-3 in Dec 2021 in order to remove CPMU-1 for foil replacement; it is currently undergoing remeasurement. For CPMU-1 and CPMU-2, all poles were set 0.1 mm above the magnet top surface. Based on the temporary foil buckling problem experienced with CPMU-1, the offset between the heights of magnets and poles for CPMU-3 and CPMU-4 was eliminated to avoid ripples in the foil and to achieve maximum contact between the foil and the magnetic array. The front-end aperture is not compatible with smaller ID gaps and will be upgraded for Diamond-II, so that different harmonics can be used in this range of energies to achieve more flux gain. Conclusion Several CPMUs have been successfully developed and installed at various MX beamlines at Diamond Light Source, providing an increase in flux and brightness at or above specification.
Figure 1: Magnetic design using RADIA. 6-period CPMU (top), regular magnet and pole in RADIA (bottom left), end design in RADIA (bottom right).
Figure 2: Peak magnetic field for CPMU-3 at room temperature and cold temperature.
Figure 4 shows the magnetic holder design of the CPMUs. Each holder consists of one magnet and one pole. Magnets can be shimmed by using mechanical shims. Pole heights can be adjusted continuously in micron steps, with the help of grub screws fixed on the top of the pole holder and an in-situ pole height measurement tool, as shown in Fig. 5.
Figure 5: Photo of the pole height measurement tool.
Figure 6: Horizontal and vertical trajectory after shimming at room temperature, ID gap 5 mm.
Figure 7: RMS phase error for CPMU-3 before and after shimming at room temperature for a 5 mm ID gap.
Figure 9: Phase error RMS before and after shimming at cold temperature, ID gap 5 mm.
Figure 11: Flux measured with U23 and CPMU-4 on the I04 beamline. CPMU-4 was recently installed on beamline I04, as shown in Fig. 10. Figure 11 compares the flux measured at the sample position with a 32 µm × 20 µm beam for the previous in-vacuum PPM U23 and for CPMU-4. There is a significant gain in flux except in the 7-9 keV and 11-13 keV photon energy ranges, partly due to the restriction imposed by the front-end custom aperture at a 4.5 mm ID gap.
Table 1: Cryogenic undulator main parameters.
Table 2: Installation status of CPMUs.
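The RMS phase error quoted throughout the shimming campaign above can be estimated directly from an on-axis Hall-probe field map. The sketch below shows one simplified recipe; the synthetic field and the assumed machine and ID parameters are illustrative, not Diamond's actual measurement code:

```python
# Simplified sketch: RMS phase error from an on-axis field map B(z).
# The synthetic field and assumed parameters are illustrative only.
import numpy as np

E_GEV, PERIOD_M, B0 = 3.0, 0.0176, 0.85      # assumed machine/ID values
gamma = E_GEV / 0.000511                     # Lorentz factor

z = np.linspace(0.0, 2.0, 20001)             # 2 m device, fine grid [m]
dz = z[1] - z[0]
B = B0 * np.sin(2 * np.pi * z / PERIOD_M)    # ideal field ...
B += 0.002 * np.random.default_rng(0).standard_normal(z.size)  # ... + errors

# Transverse angle x'(z) [rad] = (0.2998 / E[GeV]) * integral of B dz [T m]
xp = 0.2998 / E_GEV * np.cumsum(B) * dz

# Slippage phase relative to the fundamental wavelength lambda_1
K = 0.0934 * PERIOD_M * 1e3 * B0
lam1 = PERIOD_M / (2 * gamma**2) * (1 + K**2 / 2)
phase = 2 * np.pi / lam1 * (z / (2 * gamma**2) + 0.5 * np.cumsum(xp**2) * dz)

# Sample the phase at the poles (every half period), remove the ideal
# linear advance with a straight-line fit, and quote the residual spread.
poles = np.arange(PERIOD_M / 4, z[-1], PERIOD_M / 2)
ph = np.interp(poles, z, phase)
resid = ph - np.polyval(np.polyfit(poles, ph, 1), poles)
print(f"RMS phase error: {np.degrees(resid.std()):.2f} deg")
```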
2,188.2
2024-01-01T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Synthesis of ErBa2Cu3O7−δ Superconductor Solder for the Fabrication of Superconducting Joints between GdBa2Cu3O7−δ Coated Conductors The ErBa2Cu3O7−δ (Er123) superconductor is one of the best candidate superconducting solders for the fabrication of superconducting joints between GdBa2Cu3O7−δ (Gd123) coated conductors, due to its high Tc value (93 K) and the highest optimized oxygen annealing temperature among the RE123 compounds. In this paper, we systematically investigate the effect of sintering parameters on the phase formation, microstructure and superconducting properties of Er123 powder. The optimized synthesis route to acquire high-purity Er123 powder with superconducting properties as good as those of Gd123 has been uncovered. The melting temperatures of Er123 with different dopants, compared to Gd123, are also investigated, and the feasible operating temperature range of the Er123 superconductor solder is discussed. This work provides a very important starting point for fabricating high-quality superconducting joints between commercial Gd123 coated conductors, which can further advance the development of the persistent operating mode in ultra-high-field nuclear magnetic resonance and magnetic resonance imaging. Introduction REBa2Cu3O7−δ (REBCO) has been considered one of the most promising superconductors for the insert coil of ultra-high-field nuclear magnetic resonance and magnetic resonance imaging [1-5]. RIKEN has successfully fabricated an NMR magnet from REBCO operated in the driven mode and demonstrated high-resolution NMR spectra [6]. In real applications, the persistent mode is preferred, to reduce the heat leak and obtain a more stable magnetic field. One of the key requirements of persistent mode is a superconducting joint between REBCO coated conductors with a resistance of less than 10^-12 Ω [7]. However, most joints are currently at the level of 10^-8 to 10^-9 Ω [8-10], which indicates that a feasible superconducting joint fabrication process is still required. The first persistent-current joint between REBCO conductors was invented by Park et al. in 2014 [11]. The GdBa2Cu3O7−δ (Gd123) coated conductors were directly connected by a long heat treatment to interdiffuse the Gd123 layers. After over 350 h of oxygen annealing, the final joint had a critical current of 84 A and a resistance of less than 10^-17 Ω. Although these properties meet the application criterion for the persistent mode, such a long annealing time is not feasible in real applications. In 2015, Jin et al. at RIKEN established a novel method called crystalline joint by a melted bulk (CJMB). Two untouched Gd123 coated conductors were placed on a RE123 bulk with a low melting temperature, such as YBa2Cu3O7−δ (Y123) or YbBa2Cu3O7−δ (Yb123) [12,13]. With an ingeniously designed heat treatment, the Y123 melts and regrows to form a superconducting joint between the Gd123 coated conductors. The annealing time was successfully reduced to only 72 h, with a final resistance of 8 × 10^-13 Ω. The core idea of this method is to use low-melting-temperature REBCO materials as the superconducting solder. Low-melting-temperature REBCO superconducting solders have also been used in joining Y123 single-domain bulks [14-18]. In all these studies, appropriate oxygen annealing is essential to achieve high supercurrent capability [19].
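As context for the solid-state synthesis described next, the precursor masses follow directly from the target stoichiometry: per mole of Er123, 0.5 mol of Er2O3, 2 mol of BaCO3 and 3 mol of CuO are required. A small sketch of that batching arithmetic (the batch size is an arbitrary example):

```python
# Sketch: precursor batching for ErBa2Cu3O7-d by solid-state reaction.
# Per mole of Er123: 0.5 Er2O3 + 2 BaCO3 + 3 CuO (CO2 is released on firing;
# the exact oxygen content is set later by annealing).
M = {"Er": 167.259, "Ba": 137.327, "Cu": 63.546, "O": 15.999, "C": 12.011}

m_er2o3 = 2 * M["Er"] + 3 * M["O"]
m_baco3 = M["Ba"] + M["C"] + 3 * M["O"]
m_cuo = M["Cu"] + M["O"]
# Nominal Er123 with 7 oxygens per formula unit (delta ~ 0 after annealing)
m_er123 = M["Er"] + 2 * M["Ba"] + 3 * M["Cu"] + 7 * M["O"]

target_g = 10.0  # arbitrary example batch of Er123
mol = target_g / m_er123
for name, grams in [("Er2O3", 0.5 * mol * m_er2o3),
                    ("BaCO3", 2.0 * mol * m_baco3),
                    ("CuO", 3.0 * mol * m_cuo)]:
    print(f"{name}: {grams:.3f} g")
```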
In particular, ErBa2Cu3O7−δ (Er123) has the highest optimized annealing temperature among the RE123 compounds, which strongly suggests a drastic decrease in the annealing time needed to reach an optimally carrier-doped condition in Er123 compounds, due to the large diffusion coefficient of oxygen [20]. There are only a few reports on Er123 powder [21-26], and none of them systematically investigates the synthesis of this compound. The reported synthesis temperatures even differ by some tens of degrees, which causes considerable confusion for other researchers. In this paper, we systematically investigate the effect of synthesis parameters on the phase formation, microstructure and superconducting properties. High-purity Er123 powder with high supercurrent capability is acquired. The melting temperatures of Er123 and Gd123 with different dopants are also investigated. Materials and Methods Powder Preparation. The Er123 powders were synthesized by the solid-state reaction method. Er2O3 (99.9%, Aladdin), BaCO3 (99.9%, Aladdin) and CuO (99.99%, Aladdin) powders were mixed in the stoichiometric ratio to form Er123. The powders were mixed, ground together, and pressed into a pellet that was put into a tube furnace and heat-treated at different temperatures for 24 h under flowing O2. For the multiply sintered samples, the product powders were ground, pressed and heat-treated again under the same process. The Er211 powders were synthesized by a similar process with sintering parameters of 1000 °C for 24 h under flowing Ar. The oxygen annealing process used the Er123 sample with the highest purity: the as-synthesized powders were ground and pressed into pellets, and each pellet was put into a tube furnace and heat-treated at 500 °C for 24 h or 48 h under flowing O2 at a large flow rate. The commercial Gd123 powder came from Shanghai Superconductor Technology Co., Ltd. Sample Characterization. The phase composition of the samples was characterized by Cu Kα x-ray diffraction (XRD, Bruker D8), and the grain sizes were calculated using the Debye-Scherrer formula. Microstructures were observed by scanning electron microscopy (SEM, FEI Quanta 450) with energy-dispersive x-ray spectroscopy (EDX). The melting temperature was measured by differential thermal analysis (DTA-TGA, TA Instruments 2960) at a heating rate of 10 °C/min in air. Magnetic measurements were performed at 5 K and 77 K in a vibrating sample magnetometer (VSM) using a Physical Property Measurement System (PPMS) from Quantum Design Ltd. under applied fields of up to 7 T. Field-cooled (FC) and zero-field-cooled (ZFC) curves were measured in an applied field of 10 mT. Magnetization values were determined from the measured magnetic moment using the sample mass and nominal density (Gd123: 6.384 g/cm3, Er123: 7.152 g/cm3) to calculate the actual volume of material present. The critical current densities, Jc (in A/m^2), of the samples were calculated by applying a standard Bean model expression for plate-like grains to the magnetic hysteresis loops through the formula Jc = 2ΔM / [w(1 − w/(3l))], where ΔM (in A/m) is the vertical width of the magnetization loop and l ≥ w >> t (in m) are the dimensions of the individual plate-like crystallites in the samples.
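A short numerical sketch of the two Bean model expressions used here (all numbers are invented for illustration; SI units throughout):

```python
# Sketch: Bean-model Jc estimates from the magnetization loop width Delta-M.
# The loop width and grain dimensions below are invented examples.
def jc_platelet(delta_m, l, w):
    """ab-plane Jc for a thin plate (l >= w >> t): Jc = 2*dM / (w*(1 - w/(3l)))."""
    return 2.0 * delta_m / (w * (1.0 - w / (3.0 * l)))

def jc_sphere(delta_m, d):
    """Jc for spherical grains of diameter d, from dM = (d/3)*Jc."""
    return 3.0 * delta_m / d

dM = 2.0e5  # example loop width [A/m]
print(f"Er123 platelet: {jc_platelet(dM, 5e-6, 5e-6):.3e} A/m^2")
print(f"Gd123 sphere:   {jc_sphere(dM, 5e-6):.3e} A/m^2")
```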
This formula was derived in Reference [27] for a collection of randomly-oriented thin platelets of an anisotropic superconductor such as REBCO, and yields an estimate of the ab-plane J c for fields applied parallel to c typically accessed in transport measurements while incorporating considerations relating to the anisotropy of the superconductor, the geometry of the crystallites, their respective orientations to the applied field and demagnetization effects. In the case of the commercial Gd123 powder, the standard Bean model expression for spherical grains ∆M = d/3 J c has been used, where d is the grain diameter (taken as 5 µm). Figure 1a shows the XRD results of the sample sintered at different temperatures for 24 h. All the samples consist of Er123 as a domain phase with impurities of Er 2 BaCuO 5 (Er211) and BaCuO 2 . It shows that the sintering temperature has an apparent effect on the phase formation. As shown in Figure 1b, along with the temperature raising to 920 • C, the peak intensity of Er123 kept increasing, at the same time the Er211 peak became weaker. When the temperature reaches 930 • C, such a trend reversed. The grain size calculated from XRD results shows the same evolution trend with the Er123 peak intensity. Figure 1a shows the XRD results of the sample sintered at different temperatures for 24 h. All the samples consist of Er123 as a domain phase with impurities of Er2BaCuO5 (Er211) and BaCuO2. It shows that the sintering temperature has an apparent effect on the phase formation. As shown in Figure 1b, along with the temperature raising to 920 °C, the peak intensity of Er123 kept increasing, at the same time the Er211 peak became weaker. When the temperature reaches 930 °C, such a trend reversed. The grain size calculated from XRD results shows the same evolution trend with the Er123 peak intensity. Results Multiple heat treatment is the conventional method to improve the reaction in the solid reaction method. The multiple heat treatment under 920 °C has been attempted, as shown in Figure 1c,d. The impurity phase such as Er211 and BaCuO2 nearly disappeared after 4 times sintering. Additionally, the grain size does not have significant changing along with the multiple heat treatment. The purpose of oxygen annealing on the REBCO powder is to compensate for the oxygen deficiency brought by the high temperature heat treatment, the annealing at 500 °C for 24 and 48 h was also tried. As shown in Figure 1e,f, the change of phase formation and grain size before and after oxygen annealing is negligible, which both have high purity Er123 as the dominant phase. Multiple heat treatment is the conventional method to improve the reaction in the solid reaction method. The multiple heat treatment under 920 • C has been attempted, as shown in Figure 1c,d. The impurity phase such as Er211 and BaCuO 2 nearly disappeared after 4 times sintering. Additionally, the grain size does not have significant changing along with the multiple heat treatment. The purpose of oxygen annealing on the REBCO powder is to compensate for the oxygen deficiency brought by the high temperature heat treatment, the annealing at 500 • C for 24 and 48 h was also tried. As shown in Figure 1e,f, the change of phase formation and grain size before and after oxygen annealing is negligible, which both have high purity Er123 as the dominant phase. Figure 2 shows the SEM images of the sample sintered at different temperatures for 24 h. 
Figure 2 shows the SEM images of the samples sintered at different temperatures for 24 h. When the temperature was 890 °C, plenty of small round particles were embedded inside the big Er123 particles, as indicated by the red arrow; the EDX results showed that these small particles are the Er211 phase. As the temperature rose, these small round particles gradually disappeared, and in the range of 910~920 °C only the morphology of big plate-like particles was found. However, the big plate-like particles started to decompose at 930 °C, which is attributed to the reaction between Er123 and CuO to form Er211 and a liquid phase [28]. Combined with the XRD results, we attribute this to the decomposition of Er123 into Er211 at excessive temperature, although we did not find small round Er211 particles like those in the 890 °C sample. We randomly chose 50 particles and measured the particle size of the plate-like Er123: the average particle size in the 900~920 °C samples is 5 µm × 5 µm × 1 µm. The particle size varies slightly in the 890 °C and 930 °C samples owing to the Er211 impurities; nevertheless, the same value is used in the Bean-model calculation.

We carried out multiple heat treatments on the 920 °C sample. Although the phase purity was further improved, there was no apparent change in the microstructure, as shown in Figure 2e,f, and the grain size did not change after multiple heat treatments. The microstructure and particle size of the sample sintered four times at 920 °C for 24 h and oxygen annealed at 500 °C for 48 h are nearly the same as those of the sample sintered only once at 920 °C for 24 h. This indicates that the Er123 particles, with a size of around 5 µm, are very stable and hard to sinter together under further heat treatment.

Figure 3 shows the superconducting properties of the samples sintered with different parameters. As shown in Figure 3a, the Tc onset values at the different sintering temperatures are nearly the same (around 91 K), but there is an apparent difference in the magnetization values. In both the ZFC and FC curves, the magnetization value kept increasing and reached its maximum at 920 °C. The ZFC value reflects the amount of superconducting phase in the sample, which is why the ZFC magnetization follows exactly the same trend as the Er123 peak intensity. The FC magnetization value is associated with the Meissner fraction and is much lower in magnitude for all samples, indicating flux trapping within the grains.
The increase of the FC magnetization value indicates poor pinning in all the samples. All the Jc curves of the samples sintered at different temperatures are parallel to each other at 5 K up to 7 T, as shown in Figure 3b. The 920 °C sample has the highest Jc value, owing to its highest Er123 phase content. The 211 phase is the most conventional pinning center in REBCO bulks; however, although there is a visible Er211 phase in both the 890 °C and 930 °C samples, we did not find any better in-field performance in these two samples.

As shown above, multiple heat treatment at 920 °C can further improve the phase purity. The enhancement of the ZFC magnetization and Jc values confirms this, as shown in Figure 3c,d. Moreover, the Jc value at 5 K is higher than that of the commercial Gd123 powder over the entire 0~7 T range. Since this Gd123 powder is the exact one that Shanghai Superconductor Technology Co., Ltd. uses to fabricate Gd123 coated conductor, this result indicates that Er123 has an intra-grain supercurrent similar to that of the Gd123 used in commercial coated conductor. If an excellent Er123 texture can be obtained between Gd123 commercial coated conductors by carefully designing the heat treatment process, we also expect the inter-grain supercurrent of Er123 to reach the same level as that of Gd123, which would finally make it possible to fabricate a superconducting joint with high current capability using Er123 as a superconducting solder.

Although the phase formation and microstructure did not change after oxygen annealing, the superconducting properties showed some differences, as shown in Figure 3e,f. After annealing at 500 °C under O2, the ZFC magnetization values showed no further improvement, indicating that although sintering at 920 °C can cause oxygen deficiency, cooling under flowing oxygen basically compensates for it. However, the FC magnetization values slightly decreased as the annealing time at 500 °C was prolonged, and the Jc curves tell a different story: the high-field performance was mildly improved in the oxygen-annealed samples. Both results show that oxygen annealing improves the pinning in the Er123 powder.

Discussion

From the results above, it can be found that there is a reversible reaction between the Er123 and Er211 phases.
At lower temperatures, Er211 reacts with BaCuO2 to form Er123; at higher temperatures, Er123 decomposes back into Er211. This is why the Er123 intensity has a convex shape with increasing temperature, and why the optimal temperature window for Er123 formation is quite narrow. The Er211 appears to agglomerate into particles of a few micrometers embedded in the large Er123 particles. Judging from the superconducting properties, especially the high-field Jc performance, such embedded Er211 particles cannot act as pinning centers but only degrade the Tc value. After multiple heat treatments, the impurity phases such as Er211 can be eliminated; however, the poor-pinning problem then emerges. Therefore, controllably sized Er211 particles that can act as pinning centers are essential for Er123 as a superconducting solder. We successfully synthesized a high-purity Er211 phase with a particle size of around 10 micrometers, as shown in Supplementary Materials Figure S1. However, simply mixing in such Er211 will not provide pinning centers, and further refinement work is necessary.

In recent decades of research on the melt-textured growth of single-domain REBCO bulks, various additives have proved beneficial to the superconducting properties of the final products. Typically, Ag or Ag2O is added to improve the connectivity and decrease the melt temperature of REBCO [29][30][31][32], which allows a lower heat treatment temperature. Pt is also a common additive that can inhibit the growth of the 211 phase [33,34], since large 211 particles can lead to an inhomogeneous reaction. All these beneficial dopants will be introduced in the fabrication of a superconducting joint using Er123 as a superconducting solder. Therefore, the effect of such additives on the Er123 melt temperature is critical, as it directly determines the heat treatment process for fabricating the superconducting joint.

After obtaining Er123 powder with high purity and high supercurrent capability, the feasibility of Er123 as a superconducting solder is discussed here. Since REBCO needs biaxial texture to carry a high supercurrent, the only method of fabricating a superconducting joint between REBCO coated conductors is to use REBCO materials to form a textured joint. In the most conventional method, a REBCO powder with a low melt temperature is added as a superconducting solder between the REBCO coated conductors. During the heat treatment, the solder REBCO melts while the REBCO coated conductor remains inert; after cooling at a slow rate, the solder REBCO grows into textured form templated on the REBCO coated conductor. Therefore, a low melting temperature of the superconducting solder is critical. Figure 4 shows the DTA results, measured in air, of the Er123 and Gd123 powders with different dopants. The melt temperature of Er123 decreased by about 30 °C on adding Ag or Ag2O, but no such effect is found for Pt addition. This decrease in melt temperature is very useful, as it may provide a larger safety margin when designing the heat treatment process of the superconducting joint.
However, according to our previous study, Ag can decrease the melt temperature of REBCO by mere contact rather than homogeneous mixing [35]: we found that contacting Ag can lower the melt temperature of Yb123 to the same level as Yb123 powder homogeneously mixed with Ag. So if Ag is added to Er123 during the fabrication of a superconducting joint between Gd123 coated conductors, the effect of Ag on the melt temperature of Gd123 must also be considered. Figure 4b shows the DTA results of the Er123 and Gd123 powders with and without Ag addition. The melt temperatures of both Gd123 and Er123 were decreased by the same amount (about 29 °C) with Ag addition, so the heat treatment temperature range moves from 994~1042 °C down to 965~1013 °C without any expansion. For safety reasons, the maximum heat treatment temperature should be set below 1013 °C instead of 1042 °C, in case the Gd123 melts under the effect of Ag.

Conclusions

The synthesis parameters of Er123 powder were systematically optimized. The optimal sintering temperature was 920 °C; both higher and lower temperatures produced small Er211 particles embedded in the plate-like Er123 particles. The Er211 phase acts as a barrier to the flow of superconducting current but cannot act as a pinning center. After multiple sintering processes, high-purity Er123 powder with superconducting properties as good as those of Gd123 coated conductor was acquired. Extra oxygen annealing at 500 °C is not necessary for this synthesis method. The melt temperatures of Er123 and Gd123 with different dopants were also investigated. After adding Ag or Ag2O, a feasible operating temperature range (965~1013 °C) is uncovered in which the superconducting solder Er123 melts while the Gd123 in the coated conductor remains inert. This is an important first step towards a superconducting joint with high current capability.

Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: The XRD results of Er211 powders.

Author Contributions: Z.Z. contributed to the main part of this article including investigation, analysis and original writing. L.W., J.L. and Q.W. contributed to the supervision of the investigation and writing-review and editing.
Magnetically responsive layer-by-layer microcapsules can be retained in cells and under flow conditions to promote local drug release without triggering ROS production†

Nanoengineered vehicles have the potential to deliver cargo drugs directly to disease sites, but can potentially be cleared by immune system cells or lymphatic drainage. In this study we explore the use of magnetism to hold responsive particles at a delivery site, by incorporation of superparamagnetic iron oxide nanoparticles (SPIONs) into layer-by-layer (LbL) microcapsules. Microcapsules with SPIONs were rapidly phagocytosed by cells but did not trigger cellular ROS synthesis within 24 hours of delivery nor affect cell viability. In a non-directional cell migration assay, SPION containing microcapsules significantly inhibited movement of phagocytosing cells when placed in a magnetic field. Similarly, under flow conditions, a magnetic field retained SPION containing microcapsules at a physiologic wall shear stress of 0.751 dyne cm−2. Even when the SPION content was reduced to 20%, the majority of microcapsules were still retained. Dexamethasone microcrystals were synthesised by solvent evaporation and underwent LbL encapsulation with inclusion of a SPION layer. Despite a lower iron to volume content of these structures compared to microcapsules, they were also retained under shear stress conditions and displayed prolonged release of active drug, beyond 30 hours, measured using a glucocorticoid sensitive reporter cell line generated in this study. Our observations suggest that the use of SPIONs for magnetic retention of LbL structures is both feasible and biocompatible and has potential application for improved local drug delivery.

Introduction

Local treatment of disease is an important research goal because it can potentially increase the efficacy of drugs whilst reducing side effects. To some extent this can be achieved by direct delivery of therapeutics into disease sites, which also promotes local effects. However, with small molecule drugs and biologics, we know that they can be rapidly cleared from joints and other sites via the blood stream and lymphatics.1 An alternative is to use nano or micron sized particles as drug carriers from which prolonged release can be achieved. However, the persistence of vehicles at a desired site will depend on a range of criteria including cell phagocytosis and degradation, particle size, inflammation status of the site and lymphatic drainage. Work by Horisawa et al. (2002) showed that nanosized PLGA particles are readily phagocytosed by macrophages, whilst larger 26 µm particles remained extracellular in healthy rat joints.2 When joints are inflamed, both nanosized (300 nm) vehicles and larger micron-sized particles are removed by leakage and lymphatic drainage following intraarticular delivery.3 Glucocorticoids have potent anti-inflammatory and analgesic effects and are widely used in the treatment of rheumatoid and osteoarthritis patients by delivery directly to joints as micronized crystalline suspensions that slowly dissolve and have prolonged local effects.4 Nonetheless, following local delivery, burst release results in elevated blood levels of steroids,5 which can cause systemic side effects.6
In addition there are reports of steroid crystals entering the lymphatics and causing hypopigmentation of the overlying skin,[7][8][9] and injections can cause a flare in disease resulting from ingestion of crystals by phagocytic cells,10,11 which can potentially migrate away from the delivery site. There are now a number of nanomedicines that are approved for clinical use;12 generally they aim to prolong drug half-life or achieve a degree of passive targeting to disease sites. Future developments in nanomedicine will include active targeting to disease sites, triggered release in response to environmental or physical cues13,14 and theranostic capacity.15 We are interested in nanoengineering delivery vehicles so that they are better retained at disease sites, to improve local treatment of disease. One nanoengineering approach, layer-by-layer (LbL) assembly, first described by Decher et al. (1992),16 is a simple but flexible method to engineer nanoscale layers incorporating responsive particles and biological molecules into complex arrangements with functions ranging from sensors to drug delivery.[17][18][19] LbL assembly applied to microparticles was first reported in ref. 20, by the sequential addition of layers of alternately charged polymers of approximately 2-3 nm in thickness21 on a template core. These microcapsules are ideally suited to the delivery of macromolecules that can be trapped within the structure of the capsule, whereas small molecule drugs readily diffuse out unless they have affinity for a capsule component,22 layers are crosslinked to improve retention,23 or high drug loading is achieved by use of crystalline drug as the capsule core.24 One of their interesting attributes is their potential for functionalisation through incorporation of nanocomponents which can permit responsiveness to physical stimuli. Inclusion of superparamagnetic iron oxide nanoparticles (SPIONs) enables responsiveness to magnetism, which can be utilised to target vehicles25 and control release of cargo molecules26 through the use of permanent and alternating electromagnetic fields respectively. We know that microcapsules are readily phagocytosed by cells27 and it is feasible that magnetism could be used to retain microcapsules at a delivered site and to prevent removal by cells or flow conditions. Indeed, a recent report has shown magnetic retention of SPION containing microcapsules in the microvascular blood supply.28 When SPIONs are used to provide magnetic responsiveness, there is however the concern that detrimental effects on cells will be caused by the production of reactive oxygen species (ROS).29 Iron is known to catalyse the Fenton reaction that converts hydrogen peroxide, a product of lysosomes or mitochondrial oxidative respiration, into a highly toxic hydroxyl free radical (OH•). Despite this concern, in previous studies we have shown that SPION containing microcapsules are well tolerated by cells.25 In this study we demonstrate that SPIONs incorporated into the microcapsule structure do not promote ROS production in cells. We also demonstrate that SPION containing microcapsules can be magnetically retained in a cell migration assay and under flow conditions. Furthermore, similar properties are seen with microcapsules formed from crystals of the glucocorticoid dexamethasone coated with polymer layers that incorporate SPIONs.

Chemicals and reagents

All materials were supplied by Sigma-Aldrich unless otherwise stated.
Fabrication of empty LbL microcapsules

Empty LbL microcapsules (Empty-LbL) were constructed on a sacrificial calcium carbonate (CaCO3) template using the LbL self-assembly technique (Fig. 1).30 In brief, 2.5 ml of 0.33 M calcium chloride (CaCl2) and 2.5 ml of 0.33 M sodium carbonate (Na2CO3) were combined in a beaker on a magnetic stirrer at 800 RPM for 30 seconds, 400 RPM for 30 seconds, then rested for 1 minute before centrifugation at 9000 RPM and collection. Poly-L-arginine (PLA) and dextran sulphate (DS) were used for biodegradable shells. All polymer solutions were used at 2 mg mL−1 in 0.15 M sodium chloride (NaCl). Six alternating layers, i.e. three of each polymer, were assembled in total. PLA was assembled as the first layer by suspension of cores and shaking at room temperature for 12 minutes. Between polyelectrolyte layers, microcapsules were washed twice in deionised water. For fluorescent visualisation, PLA-tetramethylrhodamine (PLA-TRITC) was added as the fifth polyelectrolyte layer. For magnetic microcapsules (Empty-LbL-Mag), SPIONs were synthesised as previously described,31 stabilised with citric acid, and added in place of the fourth layer, followed by addition of DS to adsorb any remaining positive charge. For 100% SPION coverage, a stock suspension of nanoparticles in water at 45.8 µg ml−1 was used; this suspension was further diluted 1:2 and 1:5 for reduced coverage. Following multilayer assembly, the CaCO3 cores were dissolved in ethylenediaminetetraacetic acid (EDTA) solution. Initially a 0.165 M solution of EDTA was added and the microcapsules were shaken for 7 minutes at room temperature. The concentration was increased to 0.2475 M and the shaking step repeated. Finally, three 0.33 M EDTA washes were conducted until the cores were completely dissolved, after which the microcapsules were washed in deionised water and stored at 4 °C until use. A summary of the structure of Empty-LbL and Empty-LbL-Mag microcapsules is given in Table 1.

Fabrication of dexamethasone containing LbL microcapsules

Dexamethasone crystals were produced by dissolution of dexamethasone powder in acetone at a concentration of 10 mg ml−1. 500 µL of the dexamethasone-acetone solution was added to 2 mL of 2% Tween-80 in H2O and sonicated using a Piezon Master 400 dental sonication probe for 3 minutes at maximum power. The dexamethasone solution was stir-evaporated at room temperature for 30 minutes, forming crystals which were collected by centrifugation at 9000 RPM for 2 minutes and then washed twice with deionised water. LbL encapsulation of the dexamethasone crystals (Dex-LbL) was carried out immediately. Poly(allylamine hydrochloride) (PAH) and poly(styrenesulfonate) (PSS) were used for synthetic shells. These polymer solutions were again prepared at 2 mg mL−1 in 0.15 M NaCl. Eight polymer layers were assembled in total, with PAH assembled as the first layer. For magnetic dexamethasone microcapsules (Dex-LbL-Mag), SPIONs were added in place of the fourth layer, followed by addition of PSS to neutralise any remaining positive charge. A summary of the structure of Dex-LbL and Dex-LbL-Mag microcapsules is given in Table 1. Percentage encapsulation of dexamethasone was calculated by application of layer washes to 293T.GRE.Luc+ cells at a dilution of 1:100 in Dulbecco's Modified Eagle Medium (DMEM) cell culture media. For confirmation of stable polymer layers, 20 μL of Dex-LbL microcapsules were spun onto a glass slide by centrifugation at 2000 RPM for 3 minutes.
Dex-LbL crystals were viewed under phase contrast and red fluorescence to image the drug crystal and the PAH-TRITC polyelectrolyte layer respectively, and the images were overlaid. A 10 μL drop of acetonitrile was added to the structures to dissolve the dexamethasone and the field of view was immediately imaged again (Fig. S1†). For all other steroid crystals, alterations were made to the crystallisation method. For dissolution of prednisolone and prednisolone acetate, chloroform:methanol (1:1) was used as the solvent; methylprednisolone acetate was dissolved in acetone. Prednisolone crystals were then produced by the above sonication method. Prednisolone acetate and methylprednisolone acetate crystals required a homogenisation method, in which 1 mL of steroid solution was added to 5 mL of 2% Tween-80 in H2O and homogenised using an IKA Ultra Turrax T8 Homogenizer (Janke & Kunkel GmbH & Co. KG) for 3 minutes at speed setting 4. Stir evaporation was carried out as above and the crystals were stored at 4 °C until use.

Scanning electron microscopy

Microcapsule and crystal appearance was assessed by imaging using an FEI Inspect-F scanning electron microscope (SEM). Following production, microcapsule samples were suspended in 1 ml of deionised water before further dilution 1:10 in deionised water. Three small drops were distributed on carbon tape upon a metal stub and allowed to dry completely. For acetonitrile dissolution of LbL dexamethasone crystals, 10 μL of acetonitrile was subsequently added to the stubs and allowed to evaporate completely (Fig. S1†). Before imaging, samples were sputter coated with gold using a Quorum SC7620 sputter coater for 30 seconds. Coated samples were imaged using the FEI Inspect-F SEM with FEI xT microscope control software, at varying magnifications up to 20 000×.

Generation of the enhanced green fluorescent protein (EGFP) expressing HeLa cell line

For visualisation of cell movement, a HeLa cell line stably expressing EGFP was generated by transduction of HeLa cells (ATCC® CCL-2™) with the EGFP encoding lentiviral construct pHRSIN-CSGW-dlNotI (kindly provided by Dr Y. Ikeda, Mayo Clinic, Rochester, MN). To produce lentivirus, 6.16 µg of pHRSIN-CSGW-dlNotI construct was packaged via co-transfection with 1.5 µg pCMV-VSV-G, 6.13 µg pCMV-Δ8.2 and 68 µg polyethylenimine (PEI) into HEK 293T cells (ATCC® CRL-3216™) seeded at 1 million per well in a 6-well plate. Cell growth media was changed 5 hours post-transfection and cells grown for 48 hours before harvesting of virus containing media. Media was applied to HeLa cells seeded at 10 000 per well in a 6-well plate, with addition of 6 µg ml−1 polybrene to the virus containing media. Cells were grown and monitored for 48 hours before transfer into a T25 tissue culture flask. HeLa-EGFP cells were subjected to FACS sorting to isolate and enrich the EGFP expressing cells.

Fig. 1 Production of polyelectrolyte layer-by-layer microcapsules. Shown is an overview of the production of LbL microcapsules. The process begins with a core, upon which oppositely charged polyelectrolyte layers are added. The process ends with dissolution of the core (if sacrificial). Cargo molecules can be encapsulated in the core, adsorbed in place of polyelectrolyte layers, or adsorbed to the microcapsule following core dissolution. A fluorescently labelled polyelectrolyte layer can be added for visualisation and a SPION layer can be added in place of a negatively charged polyelectrolyte layer.
The number of layers, polyelectrolytes used and biological molecule content can all be altered to tailor LbL microcapsules to the desired specifications.

Table 1 Structure of LbL microcapsules produced in this study. Shown are the four types of microcapsules constructed and the order of polyelectrolyte layers applied in their production (columns: Microcapsule; Composition of polyelectrolyte layers).

Generation of the glucocorticoid responsive 293T.GRE.Luc+ cell line

Oligonucleotides were designed that harbour the glucocorticoid responsive element (GRE); the sequence of the forward primer was 5′ CTAGCACCTCACGGTACATTTTGTTCTGTGCCTCG 3′ and that of the reverse primer was 5′ CTAGCGAGGCACAGAACAAAATGTACCGTGAGGTG 3′. These phosphorylated oligonucleotides were annealed and repeats cloned between the Nhe I and Xho I sites of the previously described plasmid pCpGmCMVLuc+.32 Sequence analysis confirmed the construction of a synthetic promoter consisting of 4 repeats of the GRE upstream of the mCMV promoter. The expression cassette was then transferred to the lentiviral vector LV.mCMV.Luc+32 by PCR cloning, forming the vector pLV.GRE.Luc+. Lentivirus was generated using the method described above and virus containing media was applied to HEK 293T cells seeded at 10 000 per well in a 6-well plate, with addition of 6 µg ml−1 polybrene to the virus containing media. Cells were grown and monitored for 48 hours before transfer into a T25 tissue culture flask. 293T.GRE.Luc+ cells were tested for steroid responsiveness using dilutions of dexamethasone, with glucocorticoid responsiveness monitored by measurement of luciferase production. Cells were maintained in complete media (Dulbecco's Modified Eagle Medium (DMEM, Gibco) supplemented with 10% fetal calf serum (Gibco), 1% penicillin-streptomycin and 1% L-glutamine). Cells were passaged 1:10, using trypsin-EDTA, once a confluence of 100% was reached and were used within 5 passages.

Confocal microscopy

To assess cell uptake of Empty-LbL-Mag microcapsules, HeLa cells were plated on coverslips at a density of 10 000 per well in a 6-well plate. 24 hours post plating, TRITC labelled Empty-LbL-Mag microcapsules were added to the cells at a ratio of 10:1. Cells were incubated for 30 minutes, 1 hour and 2 hours. Cells were stained post treatment by washing twice in ice cold PBS and subsequent addition of a 1× dilution of CellMask™ Green Plasma Membrane Stain (ThermoFisher Scientific) in cell culture media for 10 minutes. To fix cells, staining media was removed and 4% paraformaldehyde was applied for 20 minutes at room temperature. Cells were washed twice in PBS before mounting using VECTASHIELD® mounting media with 4′,6-diamidino-2-phenylindole (DAPI, Vector Laboratories, Peterborough, UK). Cells were immediately imaged using an LSM 880 confocal microscope with Airyscan (Zeiss Microscopy, Cambridge, UK) and Zen 2.3 software (Zeiss), using the DAPI, 488 nm and 568 nm laser channels and a 40× objective. All image analysis and processing was carried out using Zen 2.3 Lite software (Zeiss).

Cell migration assay

HeLa-EGFP cells were seeded at a density of 20 000 cells per well in 6-well plates in a central circle, defined using parafilm cut with a 6 mm biopsy punch. On the underside of the well a grid was drawn with fluorescent marker, as shown in Fig. 3A. After 6 hours of incubation at 37 °C, cells had attached and the media was replaced with fresh complete media. Microcapsules were applied at a 1:1 ratio to the plated cells and incubated for 24 hours.
At time point 0 hours the parafilm was removed, cells were washed twice with complete media and fresh 2 ml of media applied to each well. Circular 5 mm diameter × 5 mm thick N42 neodymium magnets (Magnet Expert Ltd, Nottinghamshire, UK) were used, which had a maximum field strength of 191 mT as measured with a HT201 gaussmeter (EMF, UK). Magnets were applied to the underside of relevant wells directly below the circle of cells and remained in place for the duration of the experiment, aside from imaging. Areas of interest adjoining the central circle (Fig. 3A) were imaged at 0, 96, 120, 144 and 168 hours using an EVOS™ digital colour fluorescence microscope (Thermo Fisher Scientific UK) in the four defined areas of interest, under the DAPI and EGFP fluorescence channels. The DAPI and EGFP images were overlaid into a composite image. Image analysis was performed with a macro written in Image Pro software. Thresholds were set for the EGFP colour channel so that the cells but not the background were detected. No measurements were carried out with DAPI as it was used only for the purpose of lining up the images on the microscope for imaging. For EGFP, the area of interest was selected as the whole field of view and the data recorded as the % area filled with EGFP, corresponding to the area filled with cells.

ROS assay

293T.GRE.Luc+ and HeLa cells plated at a density of 20 000 cells per well in 96-well plates were treated for 2 hours or 24 hours with defined numbers of microcapsules or equivalent concentrations of SPIONs, diluted in DMEM supplemented with 5% FCS, 1% pen-strep and 1% L-glutamine, before ROS assays were carried out. Treatments were removed and cells washed once with warmed DMEM. 100 µL of dichloro-dihydro-fluorescein diacetate (DCFH-DA, Sigma-Aldrich), diluted in serum free DMEM to a concentration of 10 µM, was added to wells and incubated in the dark at 37 °C for 30 minutes. Plate fluorescence was read at excitation/emission 485 nm/535 nm using a Tecan GENios microplate reader (Tecan Group Ltd, Männedorf, Switzerland). For additional stimulation of ROS, 100 µL of 0.01% hydrogen peroxide, diluted in DMEM media, was added following removal of media. Cells were incubated at 37 °C for 1 hour before washing and detection of ROS with DCFH-DA.

Cell viability

In parallel with the ROS experiments, cells that were similarly treated with microcapsules or SPIONs were assessed for cell viability after 24 hours of treatment. In these experiments the CellTiter-Glo® (Promega Corp) assay was performed by addition of 100 μl of CellTiter-Glo assay reagent to each well. Plates were briefly shaken and then incubated for 20 minutes before the luminescent signal over 1 second was recorded using a plate luminometer.

Ferene-S iron assays

The Ferene-S assay was carried out using the method described by Hedayati et al. (2018).33 Iron standards between 0 and 100 µg were produced using the TraceCERT® Iron Standard for ICP. Defined numbers of microcapsules or SPIONs, suspended in 100 µL PBS, were dissolved by mixing with 100 µL concentrated nitric acid and incubation for 2 hours at 80 °C. The acid was neutralised by addition of 160 µL of 10 N sodium hydroxide. Ferene-S assay working solution was prepared (composition: 0.2 M L-ascorbic acid, 0.4 M acetate buffer, 0.1 M Ferene-S), and 900 µL of working solution was added to 100 µL of iron standard, microcapsule or nanoparticle sample and incubated at room temperature for 30 minutes.
Absorbance of 300 µL of each sample was read in triplicate in a 96-well plate at 595 nm using a MultiSkan FC (Thermo Fisher Scientific UK). Iron concentrations were determined using the standard curve, and the nanoparticle and microcapsule iron contents calculated.

Flow system investigations

As a model for therapeutic microcapsules immobilised by a magnet, a flow system was assembled using an Econo-Column® peristaltic pump (Bio-Rad Laboratories Ltd, Hertfordshire, UK). Initial collections of deionised H2O over a period of 1 minute at each flow speed were conducted to determine the flow rate in µl min−1. The terminal length of plastic tubing was applied across a 0.9 kg pull rectangular 10 × 3.5 × 2.25 mm thick N45 neodymium magnet with a maximum field strength of 182.5 mT at the poles. To test retention of magnetic microcapsules, 5 million microcapsules were applied to the flow system by reverse pumping at low speed until they reached the magnet. Flow was then reverted to the forward direction and the slowest pump speed selected. After 5 minutes at each flow speed, photographs were taken and the retention of the microcapsules assessed by densitometry analysis of the photographs using ImageJ (https://imagej.nih.gov/ij/index.html).

Measurement of dexamethasone release from magnetically retained microcapsules

10 million Dex-LbL or Dex-LbL-Mag microcapsules were applied to the flow system by reverse pumping at low speed until they reached the magnet. Flow was reverted to the forward direction at flow speed 5 (0.385 dyne cm−2). Flow through of deionised water containing released dexamethasone was collected every 20 minutes for 10 hours, then at 24, 30 and 48 hours. At termination of the experiment the magnet was removed from the tubing and the remaining deionised water in the system collected, with any remaining microcapsules. To assay dexamethasone, 293T.GRE.Luc+ cells were plated at a seeding density of 20 000 cells per well in a 96-well plate. Samples from the flow experiment were applied to cells in triplicate at a 1:5 dilution in DMEM supplemented with 5% FCS, 1% pen-strep and 1% L-glutamine. 24 hours post-treatment, cells were lysed with passive lysis buffer (50 μl). A luciferase assay was performed on lysates (10 μl) in white plastic 96-well plates, to which 50 μl of assay reagent was automatically added using an MLX Microtiter® Plate Luminometer (Dynex Technologies Inc., Chantilly, VA, USA), and light emission was measured for 10 seconds. Dexamethasone standard concentrations were applied to the cells to produce a standard curve.

Statistical methods

Statistical analysis of results was carried out using GraphPad Prism 7.04 (GraphPad Software, La Jolla, California, USA). For analysis of flow retention experiments, results were subjected to 2-way ANOVA with multiple comparisons and a post-hoc Bonferroni test. For cell retention experiments, results were subjected to 2-way ANOVA with multiple comparisons (simple effect within rows) and a post-hoc Tukey test. ROS assay data were subjected to one-way ANOVA with multiple comparisons and Fisher's LSD post-hoc test.

Results and discussion

LbL microcapsules have many features that make them suited to application in drug delivery.34,35 They can be assembled from FDA approved polymers under native conditions which are compatible with a range of bioactive molecules, from DNA and proteins through to small molecule drugs.
Beyond this, microcapsules can be loaded with particles that provide responsiveness to physical signals, such as gold nanoparticles for NIR laser heating36 and SPIONs for magnetic responsiveness. We are particularly interested in magnetic responsiveness as we have previously shown targeted delivery25 and controlled release26 through this attribute. In the present study, we have turned our attention to magnetic retention, which could be important when microcapsules are delivered to a disease site and the aim is to prevent their removal in order to promote local drug effects.

Microcapsule appearance and cell uptake

Empty non-magnetic (Empty-LbL) and magnetic microcapsules (Empty-LbL-Mag) were characterised using SEM, fluorescence and confocal microscopy. Microcapsules were measured as 2.57 ± 0.10 µm in diameter (Fig. 2A). The SPION layer was visible as a roughened surface in Empty-LbL-Mag. Both types of microcapsule were of similar size and resemble those reported in previous studies made by the LbL sacrificial calcium carbonate core method.25,37 Fluorescence microscopy confirmed a TRITC labelled PLA layer with a hollow microcapsule core (Fig. 2B), which enabled confocal microscopy visualisation of microcapsule interaction with HeLa cells. Time-course incubation of SPION containing microcapsules with HeLa cells demonstrated interaction after just 30 minutes, maintained through 2 hours of co-incubation (Fig. 2C). In addition, Z-stack confocal scanning indicated that after 1 hour of incubation, microcapsules were inside the cells. Many of the microcapsules were adjacent to the nucleus, which again is consistent with previous observations of standard microcapsules in other cells.38 Importantly, there are no obvious differences in the appearance of cells, microcapsule uptake rate or intracellular trafficking when the magnetic versions are delivered.

Controlling SPION content of microcapsules correlates with measurement of iron content

To date, studies utilising magnetic microcapsules have typically used an excess of SPIONs to produce one or more complete iron oxide shells.25,39,40 Here, to establish more stringent control over the iron content of the microcapsules, empty core magnetic microcapsules were made with varying percentage coverage of SPIONs in the fourth layer. Visual analysis of microcapsules assembled with 100%, 50% and 20% SPION suspensions clearly demonstrated that as the nanoparticle content was reduced, the roughened surface appearance of the microcapsules was reduced accordingly (Fig. 3A). Using the Ferene-S assay (standard curve, Fig. 3B), the iron content of microcapsules with 100% SPION coverage was measured to be 20.2 pg per microcapsule (Fig. 3C). Dilution of the SPION suspension 1:2 produced microcapsules with a lower iron content of 12.28 pg per microcapsule (Fig. 3C), about 60% of the iron content of the microcapsules prepared with the 100% suspension. A further dilution of the SPIONs to 1 in 5 produced microcapsules with an iron content of 4.11 pg per microcapsule, 20.3% of the iron content of the microcapsules prepared with the 100% suspension (Fig. 3C). The ability to alter SPION content could ensure the minimum iron content needed to achieve a desired functional effect, whilst the increased microcapsule biocompatibility could potentially permit delivery of higher microcapsule doses or facilitate repeat delivery. Our data would suggest that there is further scope to reduce the SPION content of microcapsules whilst retaining magnetic responsiveness.
What the precise lower limit of SPION content is will largely depend on the magnetic function that is required.

Biocompatibility of microcapsules containing SPIONs and their influence on ROS production

Iron is a catalyst of free radical production, including the toxic OH• from H2O2, which can cause cell death. In view of this, there are concerns that iron nanoparticles can lead to ROS production. We have previously observed that SPION containing microcapsules do not alter cell viability except when they are at high ratios to cells and after prolonged exposure.25 Here we saw that SPION containing microcapsules did not alter the viability of HeLa or 293T cells after 24 hours of exposure (Fig. 3D) and furthermore there was no induction of ROS, regardless of the iron content of microcapsules, compared to control untreated cells (Fig. 3E). Similarly, delivery of free magnetite nanoparticles, equivalent to the content in microcapsules, also had no effect on cell viability (Fig. 2D) or ROS production (Fig. 3E). Our observations are in agreement with other researchers. Könczöl et al. (2011)41 showed that A549 cells exposed to a similar concentration (10 µg ml−1) of slightly larger (20-60 nm) magnetite nanoparticles for 24 hours had ROS levels comparable to control cells, whilst Aranda et al. (2013)42 incubated smaller magnetite nanoparticles (8 nm) with primary rat hepatocytes and saw no increase in ROS production until cells were exposed to an Fe3O4 concentration of 50 µg ml−1 for 24 hours. Although our results clearly show that the levels of ROS produced by cells after delivery of microcapsules containing SPIONs are not elevated, it has been reported that similar citrate coated small magnetite nanoparticles can induce a transient increase in cellular stress, evident from an increase of malondialdehyde (an indicator of lipid peroxidation) immediately after delivery to a macrophage cell line.43 We examined ROS production 2 hours after delivery of microcapsules or nanoparticles to cells, but again there was no change in this parameter from untreated cells (Fig. S2†). Concentration of iron is clearly an important factor: the observations on cellular stress made by Stroh and colleagues used magnetite in excess of 300 µg ml−1, and the effects on ROS production seen by Aranda et al. followed use of 50 µg ml−1, whereas the concentration delivered in our microcapsule study equates to a maximum of 4 µg ml−1. Further work exploring the kinetics of cell utilisation of the iron delivered in microcapsules will be useful when repeat delivery is considered, in order to avoid accumulation of iron to levels that alter cell function and viability. Another possibility to be considered is that SPIONs will exacerbate hydroxyl radical (OH•) synthesis if they are delivered into an environment containing H2O2. If we consider the use of SPION containing microcapsules in the treatment of inflammatory conditions this is important, because H2O2 released from activated polymorphonuclear leucocytes can be present in the inflammatory milieu. ESI (Fig. S3A†) shows that hydrogen peroxide treatment (1 hour) of cells causes a dose dependent increase in ROS production. When cells were pre-treated (24 hours) with free SPIONs or SPION containing microcapsules before exposure to a suboptimal concentration of H2O2 (0.01%) for an hour, there was no further exacerbation of ROS production (Fig. S3B†). These observations are encouraging for the in vivo use of SPION containing microcapsules in inflammatory environments.
Retention of cells following delivery of SPION containing microcapsules

Magnetic microcapsules may be of use in maintaining encapsulated drugs in their desired area of action within the body. One issue to overcome is the possibility that phagocytic cells may engulf microcapsules and remove them from the site of action. To this end, cell retention studies were carried out to determine whether the iron content of the Empty-LbL-Mag microcapsules was sufficient to prevent cell movement in a magnetic field. In this assay a magnet was placed directly beneath the cells, generating the magnetic field (maximum strength of 200 mT in the centre) illustrated in Fig. S4.† Images were collected at set time points after the start of the experiment (Fig. 4A and B) and used to determine the movement of HeLa-EGFP cells into adjacent areas of interest by image analysis. HeLa-EGFP cells containing Empty-LbL-Mag microcapsules at a 1:1 ratio of microcapsules to cells moved significantly less than control cells at all time points beyond 96 hours when placed in a magnetic field. At 120 hours, cells preincubated with Empty-LbL-Mag microcapsules in a magnetic field (Fig. 4B and C: 1:1 mag + magnet) had filled 4.29% of the area of interest, demonstrating significantly less movement than control cells (22.12%, p = 0.0316), control cells in a magnetic field (control mag, 24.49%, p = 0.009), cells + Empty-LbL (1:1 non-mag, 22.63%, p = 0.0244) and cells + Empty-LbL microcapsules in a magnetic field (1:1 non-mag + magnet, 21.26%, p = 0.0477). At 144 hours, cells preincubated with Empty-LbL-Mag microcapsules and exposed to a magnetic field had filled 8.38% of the area of interest, significantly less than all other cell treatments (p = 0.035). This observation was maintained through to 168 hours, where cells with Empty-LbL-Mag microcapsules in a magnetic field had filled only 10.53% of the region of interest (p = 0.0023), significantly less than all other cell treatments (Fig. 4C). The observations in this experiment are important for several reasons. Firstly, they further support the idea that SPION containing microcapsules are inert in cells, because in the absence of a magnetic field they did not significantly alter cell migration in this assay, even at the longest timepoint. Secondly, when cells containing SPION containing microcapsules (Empty-LbL-Mag) were placed in a fixed magnetic field, the distance they migrated was dramatically inhibited. This inhibition was solely due to the interaction of the SPION containing microcapsules with the magnetic field, as movement of HeLa cells containing standard microcapsules (Empty-LbL) was not inhibited. These results are encouraging and suggest that it may be possible to retain microcapsules engulfed by cells at a delivered site in vivo if similar interaction with a magnetic field can be recapitulated.

(Fig. 4 caption: Movement of HeLa cells containing microcapsules (100% SPION shell) into the area of interest, under control conditions or when subjected to a magnetic field from a N42 neodymium magnet. Area of interest values are the mean of four separate areas of interest and error bars represent standard error of the mean. Cell movement into the areas of interest was measured at intervals up to 168 hours. Significant differences are indicated by *p < 0.05, **p < 0.01; #p < 0.05 relative to all other treatments at the same timepoint. Data are representative of two experimental repeats.)

Retention of microcapsules with different SPION contents in a flow system

Application of SPION containing Empty-LbL-Mag microcapsules to a flow system demonstrated that even microcapsules prepared with a 20% SPION suspension were retained at shear stresses between 0.046-0.751 dyne cm−2 (corresponding to flow speeds between 0.1-1.64 ml min−1 in a 1.5 mm diameter tube) (Fig. 5). Densitometry analysis of microcapsule pellets retained on a fixed magnet demonstrated that as the shear stress increased, there was no loss in retention of the 100% SPION shell microcapsules until the highest stress of 0.751 dyne cm−2 was reached (Fig. 5B), where a 10% reduction was observed. For the microcapsules prepared with 50% and 20% SPION suspensions, retention was reduced to 90% and 85% respectively by a shear stress of 0.385 dyne cm−2 and was further reduced to 80% retention for both shell contents at the highest shear stress (Fig. 5). These observations (shear rates of 0.046-0.751 dyne cm−2) show that SPION containing microcapsules could potentially be magnetically retained under the type of flow and shear stress conditions observed in lymphatic vessels, which have flow rates from 2 µl per hour44 to 30 µl min−1 45 and an average shear stress of 0.64 dyne cm−2,46 although flow rates increase during inflammation to reduce oedema and facilitate removal of cells.47 Based on our findings, retention may also be possible in small veins (diameters from 800 µm to 1.8 mm) where flow rates of 1.2-4.8 ml min−1 have been measured, corresponding to shear stresses of between 0.028-3.435 dyne cm−2.48
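The quoted shear stresses can be cross-checked from the stated flow rates if fully developed laminar (Poiseuille) flow in the 1.5 mm tube is assumed, via the wall shear stress τ = 4μQ/(πR³). The sketch below is our own; the viscosity value (about 0.9 mPa s, water near room temperature) is an assumption, and with it the computed values land close to the quoted 0.046 and 0.751 dyne cm−2 endpoints.

```python
# Cross-check of the quoted wall shear stresses from the stated flow rates,
# assuming fully developed laminar (Poiseuille) flow in the 1.5 mm tube:
# tau_w = 4*mu*Q / (pi*R^3). The viscosity (~0.9 mPa s, water near room
# temperature) is our assumption, not a value given in the text.
import math

def wall_shear_stress(q_ml_per_min, tube_diameter_mm, mu_pa_s=0.9e-3):
    """Return the wall shear stress in dyne/cm^2 (1 Pa = 10 dyne/cm^2)."""
    q = q_ml_per_min * 1e-6 / 60.0          # flow rate in m^3/s
    r = tube_diameter_mm * 1e-3 / 2.0       # tube radius in m
    tau_pa = 4.0 * mu_pa_s * q / (math.pi * r ** 3)
    return tau_pa * 10.0

for q in (0.1, 0.85, 1.64):                 # ml/min, spanning the range used
    print(f"{q:4.2f} ml/min -> {wall_shear_stress(q, 1.5):.3f} dyne/cm^2")
# ~0.045, 0.385 and 0.742 dyne/cm^2, close to the quoted 0.046-0.751 range
```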
Generation of steroid crystals coated with LbL assemblies

Multiple steroids are used in the treatment of inflammatory conditions; hence we used four steroids for formulation into solvent evaporation steroid crystals. Dexamethasone crystals had a homogenous appearance and were approximately 9.22 ± 0.68 µm in size, with a rounded, flattened cuboidal shape of depth 1.49 ± 0.11 µm (Fig. 6A). Prednisolone crystals had less homogeneity, with crystals mostly hexagonal in shape and with an average size of 4.32 ± 0.42 µm and depth of 0.82 ± 0.12 µm. Prednisolone acetate formulated as crystals gave a homogenous suspension of triangular shaped crystals, with an average size of 5.44 ± 0.13 µm and depth of 0.29 ± 0.04 µm. Methylprednisolone crystals were long and octagonal in shape with an average length of 8.89 ± 0.43 µm, width of 2.71 ± 0.16 µm and depth of 0.37 ± 0.03 µm (Fig. 6A). In comparison to crystalline steroid preparations currently used clinically to treat inflammation in arthritic joints, such as Depo-Medrol®, our crystals were more homogenous in size and shape, had a smoother surface and were of similar size.49 To confirm stable polymer layers, LbL encapsulated dexamethasone crystals were imaged using fluorescence microscopy and scanning electron microscopy, before and after dissolution with acetonitrile (ESI Fig. S1†). Prior to dissolution, the dexamethasone crystals and the fluorescent polyelectrolyte layer were both clearly visible (Fig. S1A-C†). After dissolution with acetonitrile, the dexamethasone crystals were no longer visible, but the fluorescent polyelectrolyte layer was still visible and intact, confirming that the polyelectrolyte layers formed a stable structure around the steroid crystal (Fig. S1D-F†).
SEM imaging was used to confirm these findings, also showing that following acetonitrile dissolution the polyelectrolyte shells remained visible and retained the crystal shape, whilst the solid crystal was no longer visible (Fig. S1G and H†). The LbL method used within this paper is well characterised and has been used many times before on both colloidal sacrificial cores and crystalline cores.24,25 When a layer of SPIONs was included in the coating of dexamethasone crystals (Dex-LbL-Mag), the product was a largely homogenous suspension of flattened cuboidal crystals of size 9.22 ± 0.68 µm, with slightly rounded square sides, and with a dexamethasone encapsulation efficiency calculated to be up to 99.99%. The iron content was measured to be approximately 5.03 pg per microcapsule, and the SPIONs were distributed largely on the narrow edge of the crystals, as shown (Fig. 6B). The reason for this distribution around the narrow edge of the crystals may be that this surface has an increased charge, due to its curvature and rough layer growth at the edge in comparison to the flat face of the crystals, so the SPIONs attach more readily to this surface. Alternatively, the SPIONs may attach to the planar crystal surfaces but many are mechanically sheared off when the planar surfaces slide across each other in the shaking stages of the encapsulation process.

Responsiveness of 293T.GRE.Luc+ cells

In order to monitor dexamethasone release from the fabricated microcapsules, the glucocorticoid responsive cell line 293T.GRE.Luc+ was used. Treatment of the 293T.GRE.Luc+ cell line with dexamethasone standard concentrations between 0.1 nM and 10 µM demonstrated that the cells were sensitive between 1 nM and 10 µM, as shown by a near-linear dose response (Fig. 7A). Treatment of 293T.GRE.Luc+ cells with a 1:1 ratio of un-encapsulated dexamethasone crystals, Dex-LbL or Dex-LbL-Mag demonstrated dexamethasone release from all structures, resulting in significant luciferase production compared to untreated cells after 24 hours. Luciferase production was increased 109.2-fold over control levels in cells treated with Dex crystals (7624.3 ± 440.8 vs. 69.8 ± 1.3, p < 0.0001). Dex-LbL microcapsules drove luciferase production 129.3-fold over control levels (9024.1 ± 759.9, p < 0.0001) and Dex-LbL-Mag microcapsules drove luciferase production 102.5-fold over control levels (7154.0 ± 693.4, p < 0.0001) (Fig. 7B). Biological activity was also shown for all the different steroid crystals by treatment of glucocorticoid responsive 293T cells (data not shown). Most studies that monitor steroid release from vehicles use HPLC or UV-Vis absorbance to quantitate steroid release. HPLC has a reported limit of quantification around 10 nM50,51 and UV/Vis has a lower detection limit of around 1 µM.52,53 The reporter cell line we generated, utilising an optimally designed promoter for low basal activity and robust activation,32 enabled us to accurately measure dexamethasone concentrations in the range 1 nM to 10 µM. In addition to improved sensitivity, the reporter cells have the advantage of confirming the biological activity of the drug. This type of transcriptionally responsive system is particularly useful in monitoring drug released from nano/micro fabricated vehicles and can be combined with imaging modalities to monitor positional effects of released drug.25 When we coated our dexamethasone crystals with polymer layers, their appearance was not altered and they released similar amounts of dexamethasone as uncoated crystals.
When dexamethasone has previously been coated in LbL layers, irregular micronized crystals of the drug have been used.54,55 Pargaonkar et al. (2005)54 sonicated crystals in the presence of poly(diallyldimethylammonium chloride) (PDDA) to form nanosized crystals (mean diameter 420 nm) which they coated in different polymer layers, whilst Stewart et al.55 generated LbL structures similar in size (7.40 µm) to the ones we constructed. In both studies rapid release (100% within 120 minutes) of dexamethasone from LbL structures was observed, but Pargaonkar et al. (2005)54 observed a slower rate of release with increasing numbers of microcapsule polymer layers. Prednisolone crystals have also been incorporated into LbL structures,56 and again slower release was observed when coated with more polymer layers.

Prolonged release of Dex from crystals magnetically retained in a flow system

In order to determine whether prolonged dexamethasone release in a flow system was possible, densitometry analysis was carried out on pellets of SPION containing Dex-LbL-Mag microcapsules applied to a fixed magnet in the flow system at increasing shear stress. Retention of Dex-LbL-Mag at the lowest stress of 0.046 dyne cm−2 was 100%, reducing to 95.36% at 0.385 dyne cm−2 and further to 90.57% at the highest stress of 0.751 dyne cm−2 (Fig. 8A). Long term retention studies carried out at the lowest shear stress of 0.046 dyne cm−2 over a 48 hour period demonstrated that dexamethasone was removed from the flow system within 4 hours of application when non-magnetic Dex-LbL microcapsules were applied to the fixed magnet (Fig. 8B). Application of SPION containing Dex-LbL-Mag microcapsules demonstrated a significantly longer release of dexamethasone, over a period of 48 hours. Quantification of dexamethasone in the flow through was significantly higher for Dex-LbL-Mag at 2 hours post application, with the luciferase response measuring 3962.7 ± 150.0 RLU compared to 7.79 ± 3.60 RLU for Dex-LbL (p < 0.001). The concentration of dexamethasone in the flow through remained significantly higher for Dex-LbL-Mag (3600.13 ± 399.60 RLU) compared to baseline values for Dex-LbL until 8 hours post application (p < 0.001); beyond this time point, samples for Dex-LbL were not collected. Between 8 hours and 48 hours the concentration of dexamethasone in the flow through from Dex-LbL-Mag microcapsules steadily declined, but even after 48 hours the dexamethasone level was above 5 nM. Furthermore, the remaining water collected from the flow system after 48 hours also contained measurable dexamethasone, so the Dex-LbL-Mag microcapsules were still not exhausted at the end of this experiment. Clearly these studies show that magnetically retained Dex-LbL-Mag microcapsules give prolonged release of dexamethasone under shear stress conditions, beyond 30 hours. Whilst this compares favourably to the rapid release seen from dexamethasone LbL structures in other reports,54,55 direct comparison is not possible due to differences in the protocols of the release experiments. Taken together, the data suggest that these dexamethasone microcapsules could potentially be magnetically retained at a delivered site, achieve prolonged drug action and resist removal by flow forces either in the lymphatics or small blood vessels.
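As an illustration of how such RLU readings translate into drug concentrations, the sketch below inverts a log-linear luciferase standard curve of the kind described for the 293T.GRE.Luc+ reporter line. The standard-curve RLU values here are hypothetical, not the authors' data; only the 1 nM to 10 µM working range is taken from the text.

```python
# Toy illustration of converting reporter luminescence to dexamethasone
# concentration via a log-linear standard curve, as done with the
# 293T.GRE.Luc+ line. The standard RLU values are hypothetical; only the
# 1 nM - 10 uM working range is taken from the text.
import numpy as np

std_conc_nM = np.array([1, 10, 100, 1000, 10000])     # standards
std_rlu     = np.array([150, 900, 3200, 6800, 9500])  # hypothetical readings

# Fit RLU as a linear function of log10(concentration)
slope, intercept = np.polyfit(np.log10(std_conc_nM), std_rlu, 1)

def rlu_to_nM(rlu):
    """Invert the fitted standard curve to estimate concentration (nM)."""
    return 10 ** ((rlu - intercept) / slope)

# e.g. an unknown flow-through sample reading ~3962.7 RLU
print(f"{rlu_to_nM(3962.7):.1f} nM")
```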
Conclusions Our experiments show that SPION-containing microcapsules do not cause cellular stress following delivery and that they can be magnetically retained at sites both under physiologically relevant shear stress conditions and following cell engulfment. A summary of the main findings of this study can be found in Table 2. If we can demonstrate the same characteristics in vivo, we should be able to retain vehicles at delivery sites in order to promote local drug effects. We show how this approach can be utilised with magnetically responsive dexamethasone crystals and anticipate that it could be explored with other drug cargoes. Conflicts of interest The authors have no competing interests.
Improving the prediction accuracy of river inflow using two data pre-processing techniques coupled with a data-driven model River inflow prediction plays an important role in water resources management and power-generating systems. However, the noise and multi-scale nature of river inflow data add an extra layer of complexity to building an accurate predictive model. To overcome this issue, we propose a hybrid model that couples Variational Mode Decomposition (VMD) with a singular spectrum analysis (SSA) denoising technique. First, SSA is applied to denoise the river inflow data. Second, VMD, a signal processing technique, is employed to decompose the denoised river inflow data into multiple intrinsic mode functions (IMFs), each with a relative frequency scale. Third, an Empirical Bayes Threshold (EBT) is applied to the non-linear IMF to smooth it. Fourth, prediction models for the denoised and decomposed IMFs are established with the Support Vector Machine (SVM). Finally, the ensemble prediction is formed by adding the predicted IMFs. The proposed model is demonstrated using daily river inflow data from four river stations of the Indus River Basin (IRB) system, which is the largest water system in Pakistan. To fully illustrate the superiority of our proposed approach, the SSA-VMD-EBT-SVM hybrid model was compared with SSA-VMD-SVM, VMD-SVM, the Empirical Mode Decomposition (EMD) based models EMD-SVM and SSA-EMD-SVM, and the Ensemble EMD (EEMD) based models EEMD-SVM and SSA-EEMD-SVM. We found that our proposed hybrid SSA-VMD-EBT-SVM model outperformed the others on the following performance measures: the Nash-Sutcliffe Efficiency (NSE), Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE). Therefore, the SSA-VMD-EBT-SVM model can be used for water resources management and power-generating systems using non-linear time series data. INTRODUCTION Reservoirs are recognized as one of the most powerful tools in integrated water resources management. They are considered the major solution to water-related problems such as urban and industrial water supply, hydro-power generation, irrigation, flood control and conservation of ecology (El-Shafie et al., 2008). However, reservoir operation is a challenging problem due to its complexity, as reservoirs should neither be too empty to operate nor too full to capture flood water (Amnatsan, Yoshikawa & Kanae, 2018). A reservoir's optimized operation depends on the accuracy of river inflow prediction, which is an essential element not only in reservoir operation but also for many hydrological management problems. Accurate prediction results in better decisions on, for example, flood and drought control, the supply of drinking water, water resources management and many optimal environmental operations (El-Shafie et al., 2008; Erdal & Karakurt, 2013; Zhou et al., 2018; Wang, Qiu & Li, 2018; Dehghani et al., 2019). Over the past decades, numerous methods have been developed for accurate river inflow prediction; relevant literature can be found in Kisi (2005), Easey, Prudhomme & Hannah (2006), Londhe & Charhate (2010), Adnan et al. (2017a) and Zaini et al. (2018). These models are broadly classified into three categories: physical-based models, data-driven models, and hybrid models (Chen et al., 2018). All these models have been widely used to predict river flow and in other hydrologic analyses (Erdal & Karakurt, 2013; Hao et al., 2017; Chen et al., 2018; Darwen, 2019; Wang, Qiu & Li, 2018).
Physical-based models extract the inherent behaviors of hydrological variables by conceptualizing their physical processes and characteristics. However, physical-based models require a large amount of data and detailed mathematical equations, which raises the issue of estimating a huge number of parameters at considerable computational cost (Chen et al., 2018). Moreover, due to the unavailability of long hydrological records, especially in developing countries, it is difficult to obtain these parameters, which limits the application of these models. In contrast to physical-based models, Data-Driven (DD) models are further classified into Traditional Statistical (TS) and Artificial Intelligence (AI) models, used to predict linear and non-linear data, respectively. TS models, also called the Box and Jenkins methodology (Box & Jenkins, 1970; Box & Pierce, 1970; Al-Masudi, 2013), include the Autoregressive (AR), Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA) models, which are widely applied for predicting river inflow data. Adnan et al. (2017b) used the ARIMA model to predict streamflow; they took monthly streamflow data and concluded that the application of ARIMA can be useful for generating precise predictions. However, the disadvantage of TS models is that the river inflow data must be linear, which limits the application of these models (Wang, Qiu & Li, 2018). To overcome these drawbacks, AI models have been introduced, which include the Artificial Neural Network (ANN), Multi-Layer Perceptron (MLP), Generalized Regression Neural Network (GRNN), Adaptive Neuro Fuzzy Inference System (ANFIS) (Salih et al., 2019), Multivariate Adaptive Regression (MAR), the M5 Model Tree (Yaseen, Kisi & Demir, 2016), Support Vector Machine (SVM), Extreme Learning Machine (ELM), fuzzy logic and the Radial Basis Neural Network (RBNN) (Othman & Naseri, 2011; Yang et al., 2017; Malik & Kumar, 2018; Mosavi, Ozturk & Chau, 2018; Kim et al., 2019). These AI techniques have been successfully applied in hydrology to accurately predict river inflow/outflow data (Othman & Naseri, 2011; Valipour, Banihabib & Behbahani, 2013; Shamim et al., 2016; Yang et al., 2017; Malik & Kumar, 2018; Mosavi, Ozturk & Chau, 2018). One study evaluated the potential of the ELM algorithm against other AI methods and suggested that the ELM model outperforms the other models in predicting monthly streamflow. Yaseen, Kisi & Demir (2016) investigated the usefulness of three types of regression models, i.e., least-squares SVM, MAR and the M5 model tree, to forecast monthly streamflow; their study indicated that the SVM model generally performs better than the other models. Among AI techniques, SVM, as the most widely used method, has been considered an effective tool for solving many non-linear mapping relationships to precisely predict river flow (Garsole & Rajurkar, 2015; Adnan et al., 2018; Bafitlhile & Li, 2019), water level (Behzad, Asghari & Coppola Jr, 2009) and many other non-linear problems (Wu & Lin, 2019). However, all these AI models need to be carefully optimized as hydrological time series data become more and more complex due to rapid climate and other changes. For that purpose, bio-inspired techniques, e.g., genetic algorithms, evolutionary programming and differential evolution, are combined with AI methods to optimize their parameters and enhance their precision (Zheng et al., 2013). However, such bio-inspired AI methods have drawbacks. First, they ignore the multi-scale nature of hydrological data.
Second, they do not account for noise, which is an inherent part of hydrological data. Developing a single model to predict river inflow data is a challenging task due to its non-stationary, multi-scale and noisy characteristics (Yang et al., 2016; Yaseen et al., 2017; Yu et al., 2017; Al-Sudani, Salih & Yaseen, 2019; Rezaie-Balf et al., 2019a; Rezaie-Balf et al., 2019b; Rezaie-Balf, Kisi & Chua, 2019). Therefore, using the raw river inflow data may not provide useful results, but applying data pre-processing methods may improve the performance of TS or AI techniques; such combinations are known as hybrid models (Okkan & Serbes, 2013; Chitsaz, Azarnivand & Araghinejad, 2016; Chen et al., 2018; Wu & Lin, 2019). In recent years, hybrid models based on data pre-processing techniques have received great attention and are commonly applied to non-linear, multi-scale and noisy time series data, such as in hydro-meteorology, climatology, finance and economics, as powerful alternatives to standalone physical-based or DD models (Chen et al., 2018; Zhang et al., 2018; Rezaie-Balf et al., 2019b; Nazir et al., 2019; Wu & Lin, 2019). Until now, various data pre-processing-based hybrid models have been developed to address the non-linearity issues present in river inflow series. Among them, the main data pre-processing algorithms—Fourier analysis, the Wavelet Transform (WT) (Daubechies, 1990), SSA (Golyandina, Nekrutkin & Zhigljavsky, 2001), EMD and EEMD—are combined with TS and AI methods to form hybrid models. All data pre-processing techniques can be used either to decompose non-linear and multi-scale data into the time-frequency domain or to denoise the time series data. Rezaie-Balf, Kisi & Chua (2019) employed the EEMD data pre-processing method to enhance the performance of MAR and the M5 Model Tree; they demonstrated that EEMD-MAR provides more robust results for predicting one-day-ahead river flow. Various studies show that the use of wavelet analysis (WA) has gained popularity in handling the multi-scale nature of complex hydrological data by combining it with neural networks and other DD methods. Mouatadid et al. (2019) explored the use of a WA-based Long Short-Term Memory network (WA-LSMN) for robust irrigation flow forecasting; their proposed methodology provided better results than the standalone LSMN model. Nazir et al. (2019) developed a WA-based hybrid model to predict the river inflow data of four stations and showed that their proposed model was better than simple ARIMA and ANN models. Later, an EBT approach was developed to enhance the precision of WA (Chipman, Kolaczyk & McCulloch, 1997; Johnstone & Silverman, 2005). In the EBT method, a mixture of priors is selected for the distribution of the multi-scale components derived from WA; the posterior median is calculated from the selected priors to estimate noise-free multi-scale components (To, Moore & Glaser, 2009). Moreover, the use of EMD (Huang et al., 1998) and EEMD (Wu & Huang, 2009) with DD models also became popular for studying non-stationary complex hydrological data (Rezaie-Balf, Kisi & Chua, 2019). However, all data pre-processing techniques have drawbacks in different respects when decomposing non-linear, multi-scale and noisy data: the most widely used WT depends heavily on the selection of the wavelet basis function (Wang, Qiu & Li, 2018), the application of EMD is limited by mode mixing and its sensitivity to noise (Nazir et al., 2019), and EEMD lacks a strict mathematical theory (Qian et al., 2019; Wu & Lin, 2019).
However, there is a need for new hybrid approaches with efficient decomposition methods that predict non-linear, highly irregular and noise-corrupted data with high precision. Several new data pre-processing approaches have been proposed, and VMD is commonly used because of its sound mathematical foundation and more precise separation of multi-scale components (Ali, Khan & Rehman, 2018; Wu & Lin, 2019; Lei, Su & Hu, 2019). VMD, as a data decomposition method, has been applied in the fields of signal processing and wind speed prediction (Liu, Mi & Li, 2018; Lei, Su & Hu, 2019). Rezaie-Balf et al. (2019a) proposed a new hybrid model comprising a Variational Mode Decomposition based ELM (VMD-ELM) to forecast short-term water demand; their pre-processing method, VMD, provided better results when compared with simple ANN and ELM models. Later, the performance of VMD was enhanced by coupling it with EEMD and the Random Forest Algorithm (EEMD-VMD-RFA) (Rezaie-Balf, Kisi & Chua, 2019). In this article, we aim to develop a novel hybrid model that employs a two-phase decomposition-based method to efficiently predict river inflow time series data. Our proposed method comprises SSA for denoising, VMD for data decomposition with an EBT threshold, and SVM as the prediction method. This work is one of the first attempts known to the authors to use the SSA method as the primary decomposition technique to enhance the prediction of daily river inflow records with VMD-EBT and SVM. PROPOSED METHODOLOGY In this article, a novel hybrid model, SSA-VMD-EBT-SVM, is proposed to improve the prediction accuracy of daily river inflow data. The schematic view of the proposed methodology is illustrated in Fig. 1. The proposed structure comprises denoising, decomposition-threshold, prediction and aggregation steps. In the denoising stage, SSA is used to denoise the river inflow data (Romero et al., 2015). In the decomposition-threshold stage, VMD is employed to decompose the denoised daily river inflow series into multiple IMFs (Rezaie-Balf et al., 2019a), and the highly irregular IMF is thresholded with EBT to remove its sparsity and irregularity (Nazir et al., 2019). In the prediction stage, SVM is applied to all IMFs to establish the prediction models, and all predicted IMFs are aggregated to obtain the final prediction (Yaseen, Kisi & Demir, 2016). The effectiveness of the proposed hybrid model is evaluated using daily river inflow data from four stations of the Indus River Basin (IRB) system, Pakistan (a detailed discussion is given in 'Case Study and Experimental Design').
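To make the four-stage structure concrete before the individual components are introduced, the data flow can be summarized in a minimal Python skeleton. The function names are hypothetical placeholders for the SSA, VMD, EBT and SVM stages described above; only the ordering of the stages and the final aggregation-by-summation are taken from the text.

```python
import numpy as np

def predict_inflow(series, denoise, decompose, threshold, fit_predict):
    """Four-stage SSA-VMD-EBT-SVM pipeline (data flow as described above).

    denoise, decompose, threshold and fit_predict are user-supplied
    callables standing in for SSA, VMD, EBT and the per-IMF SVM models.
    """
    clean = denoise(series)                      # stage 1: SSA denoising
    imfs = decompose(clean)                      # stage 2: VMD -> list of IMFs
    imfs[-1] = threshold(imfs[-1])               # stage 3: EBT on the noisiest IMF
    preds = [fit_predict(imf) for imf in imfs]   # stage 4: one SVM per IMF
    return np.sum(preds, axis=0)                 # aggregate the predicted IMFs
```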
A brief introduction to SSA, VMD, EBT and SVM is outlined as follows. SSA for denoising For time series analysis, the SSA method is known as a powerful non-parametric method (Golyandina, Nekrutkin & Zhigljavsky, 2001). SSA combines the principles of time series analysis, multivariate statistics, dynamical systems and signal processing (Suhartono et al., 2018). The reason for using SSA is that it is a model-free technique (Romero et al., 2015) that can be applied to any type of data without distributional assumptions. The main function of SSA is to decompose the time series data into trend, seasonal oscillations and aperiodic noise, and then reconstruct the series after removing the aperiodic noise (Traore et al., 2017). Unlike other methods of time series analysis, SSA makes no statistical assumptions about the noise while performing the analysis and investigating its properties (Traore et al., 2017). Principles of SSA method The principle of SSA lies in two stages, decomposition and reconstruction, briefly described as follows. Consider a time series y_1, y_2, ..., y_N of length N. SSA transfers the one-dimensional series into the multi-dimensional lagged vectors Y_1, Y_2, ..., Y_K, where Y_i = (y_i, y_{i+1}, ..., y_{i+L-1})^T and K = N − L + 1. These vectors are grouped into the trajectory matrix
X = [Y_1, Y_2, ..., Y_K],
called a Hankel matrix, whose elements along each anti-diagonal (i + j = const) are equal. The only parameter in this stage is the window length L, where 2 < L < N (Traore et al., 2017). SSA explores the empirical distribution of pairwise distances between the lagged vectors, and the optimality of the SSA method lies heavily in the selection of the window length L, as it determines the quality of the decomposition (Traore et al., 2017). To remove noise from the original time series, the eigenvalue decomposition of the trajectory matrix is computed,
X = E_1 + E_2 + ... + E_d,
where d is the number of non-zero eigenvalues, in decreasing order (λ_1 ≥ λ_2 ≥ ... ≥ λ_d ≥ 0), of the L × L matrix S = XX^T, and each elementary matrix is
E_i = √λ_i U_i V_i^T,  i = 1, 2, ..., d,
where the U_i are the eigenvectors of S and V_i = X^T U_i / √λ_i. The first few matrices E_i contribute much more to X than the last few, as the last matrices are likely to represent the noise in the time series (Traore et al., 2017). The next step is to partition the set of indices i = 1, 2, ..., d into m disjoint subsets l_1, l_2, ..., l_m (Romero et al., 2015). For one such partition I = {i_1, ..., i_p}, the corresponding matrix is defined as E_I = E_{i_1} + ... + E_{i_p}. Once these matrices have been calculated for all partitions, the original trajectory matrix is recovered as their sum; this step is simplified by approximating X with only the first r matrices, X ≈ E_1 + ... + E_r, which requires choosing the parameter r appropriately (Romero et al., 2015). An approximated time series is then recovered from these matrices by averaging along the anti-diagonals (diagonal averaging) (Romero et al., 2015; Traore et al., 2017).
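The embedding, SVD and diagonal-averaging steps above translate directly into a short NumPy routine. The sketch below is a minimal basic-SSA denoiser; the function name and the simple "keep the first r components" grouping are illustrative choices, not the paper's exact grouping procedure.

```python
import numpy as np

def ssa_denoise(y, L=90, r=30):
    """Basic SSA: embed, decompose via SVD, keep the leading r components,
    and reconstruct by averaging along the anti-diagonals."""
    y = np.asarray(y, dtype=float)
    N = y.size
    K = N - L + 1
    # trajectory (Hankel) matrix, one lagged vector per column
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = sum_i s_i U_i V_i^T
    # rank-r approximation keeps the signal and drops the noisy tail
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
    # diagonal averaging: average entries with constant i + j
    out = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        out[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return out / counts
```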
VMD as decomposition VMD is a non-recursive signal decomposition method introduced by Dragomiretskiy & Zosso (2014) (see also Rezaie-Balf et al., 2019a; Rezaie-Balf et al., 2019b). VMD adaptively decomposes a complicated, non-linear, non-stationary and multi-scale original signal f(t) into band-limited IMFs u_k, each with a specific bandwidth in the spectral domain. To obtain the bandwidth of each IMF, the following constrained variational optimization problem is solved (Dragomiretskiy & Zosso, 2014):
min_{{u_k},{ω_k}} { Σ_k || ∂_t [ (δ(t) + j/(πt)) * u_k(t) ] e^{−jω_k t} ||_2^2 }  subject to  Σ_k u_k = f,
where ω_k is the centre frequency of the kth IMF, δ(t) is the Dirac function, t is the time index and k = 1, ..., K indexes the modes. The convolution (δ(t) + j/(πt)) * u_k(t) is the Hilbert transform construction, which turns u_k into an analytic signal with a one-sided frequency spectrum, and multiplication by e^{−jω_k t} shifts the spectrum of each mode to baseband. The constrained problem is converted into an unconstrained one by introducing a quadratic penalty term α and a Lagrangian multiplier λ, which is easier to solve:
L({u_k},{ω_k},λ) = α Σ_k || ∂_t [ (δ(t) + j/(πt)) * u_k(t) ] e^{−jω_k t} ||_2^2 + || f(t) − Σ_k u_k(t) ||_2^2 + ⟨ λ(t), f(t) − Σ_k u_k(t) ⟩,   (5)
where α denotes the balancing parameter. Eq. (5) can be solved by the alternate direction method of multipliers (ADMM): updating u_k, ω_k and λ alternately realizes the analysis process of VMD, and the solutions can be calculated in the frequency domain as
û_k^{n+1}(ω) = ( f̂(ω) − Σ_{i≠k} û_i(ω) + λ̂(ω)/2 ) / ( 1 + 2α(ω − ω_k)^2 ),   (8)
ω_k^{n+1} = ∫_0^∞ ω |û_k^{n+1}(ω)|^2 dω / ∫_0^∞ |û_k^{n+1}(ω)|^2 dω,   (9)
λ̂^{n+1}(ω) = λ̂^n(ω) + τ ( f̂(ω) − Σ_k û_k^{n+1}(ω) ),
where f̂(ω), û_k^{n+1}(ω), û_k^n(ω) and λ̂(ω) are the Fourier transforms of f, u_k^{n+1}, u_k^n and λ, and n denotes the iteration number. The termination condition of VMD is
Σ_k || û_k^{n+1} − û_k^n ||_2^2 / || û_k^n ||_2^2 < ε,   (10)
where ε is the tolerance of the convergence criterion. The IMFs u_k are obtained from the decomposition process according to the following steps: 1. Set the iteration number n = 1 and initialize the parameters u_k^1, ω_k^1 and λ^1. 2. Calculate û_k^{n+1}(ω) and ω_k^{n+1} using Eqs. (8) and (9). 3. Update the Lagrangian multiplier using the dual-ascent step above. 4. If the convergence condition of Eq. (10) is met, stop the iteration; otherwise set n to n + 1 and return to step 2. Finally, the IMFs are obtained.
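For illustration, the update loop of Eqs. (8)-(10) can be sketched as follows. This is a didactic simplification: it works on the full two-sided spectrum rather than the one-sided analytic signal of the original algorithm, and it uses τ = 0 (no dual ascent), so the recovered modes are approximate. All names and default values are illustrative; for real use, a maintained implementation of Dragomiretskiy & Zosso's method is preferable.

```python
import numpy as np

def vmd_sketch(f, K=6, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500):
    """Didactic VMD loop: Wiener-filter mode update (Eq. 8), centre-frequency
    update (Eq. 9), optional dual ascent on lambda, and the Eq. (10) test."""
    f = np.asarray(f, dtype=float)
    N = f.size
    f_hat = np.fft.fftshift(np.fft.fft(f))
    freqs = np.arange(N) / N - 0.5              # centred frequency grid
    u_hat = np.zeros((K, N), dtype=complex)     # mode spectra
    omega = 0.25 * (np.arange(K) + 1) / K       # initial centre frequencies
    lam = np.zeros(N, dtype=complex)            # multiplier spectrum
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            u_hat[k] = (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            pos = freqs > 0                     # centre of gravity, positive freqs
            power = np.abs(u_hat[k, pos]) ** 2
            omega[k] = (freqs[pos] * power).sum() / (power.sum() + 1e-14)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))  # tau = 0 disables this step
        num = np.sum(np.abs(u_hat - u_prev) ** 2)
        den = np.sum(np.abs(u_prev) ** 2) + 1e-14
        if num / den < tol:                     # Eq. (10) stopping criterion
            break
    # back to the time domain (real part); modes approximate band-limited IMFs
    return np.real(np.fft.ifft(np.fft.ifftshift(u_hat, axes=-1), axis=-1))
```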
EBT as a threshold In the EBT method, the posterior distribution is derived with the help of a prior distribution to remove sparseness and noise from coefficients derived from wavelets (To, Moore & Glaser, 2009; Nazir et al., 2019). In this study, we use this wavelet-based denoising method to remove noise and sparseness from the VMD-based coefficients. The EBT method has a level-dependent thresholding approach, which treats each IMF according to its own distribution. EBT assumes a mixture of prior distributions for the kth IMF:
f_prior(θ) = (1 − π_k) δ_0(θ) + π_k γ(θ),   (12)
where π_k is the probability of non-zero coefficients of IMF_k, δ_0(θ) represents the Dirac delta function for the zero part of IMF_k, and γ(θ) is a density for the non-zero part of IMF_k. The prior distribution should be chosen from a family of distributions whose tails decay at polynomial rates; in this regard, the Laplace, exponential and quasi-Cauchy distributions have been employed for the non-zero coefficients of the IMFs, which are used to estimate the noise (Nazir et al., 2019). The probabilities and parameters of the mixture of prior distributions are estimated through maximum likelihood estimation; the reason for using a maximum likelihood approach is that it determines the unknown parameters in such a way as to appropriately describe the given data (Hossain, Kozubowski & Podgórski, 2018). After estimating the parameters, the posterior median
θ̂_i(IMF_k, π_k) = median(θ | IMF_k, π_k)   (13)
is calculated from the mixture of prior distributions and used as the EBT rule for μ given the data (Johnstone & Silverman, 2005). A simple hard rule is further applied to estimate the noise-free coefficients of the IMFs (Johnstone & Silverman, 2005; Nazir et al., 2019). Support vector machine (SVM) as a prediction method The SVM is a supervised machine learning method built on statistical learning principles for non-linear classification, function estimation and pattern recognition applications (Vapnik, 1998). With the introduction of a loss function, SVM can be used for time series forecasting as well (Yaseen, Kisi & Demir, 2016; Sanghani, Bhatt & Chauhan, 2018). The concept behind SVM regression is to map the complex non-linear data into a high-dimensional feature space through a non-linear mapping and then perform linear regression in that feature space. Let the training set consist of N sample points {(x_i, y_i)}_{i=1}^N, where x_i is the lagged input vector and y_i is the value of the time series to be estimated. The SVM regression function is then formulated as
f(x) = w^T φ(x) + b,
where φ(x) is a non-linear transfer function projecting the input data into the high-dimensional space, w is the weight vector and b is a bias. Estimating the sampled values within the allowed precision ε is treated as the problem of minimizing the norm of w, which can be summarized as the convex program
min_{w,b,ξ,ξ*} (1/2)||w||^2 + C Σ_{i=1}^N (ξ_i + ξ_i*)
subject to
y_i − w^T φ(x_i) − b ≤ ε + ξ_i,   w^T φ(x_i) + b − y_i ≤ ε + ξ_i*,   ξ_i, ξ_i* ≥ 0,
where C is the user-defined penalty coefficient, which represents the trade-off between the flatness of the weights and the objective function, and ξ and ξ* are the slack variables describing how much the data exceed the tolerance. Applying the Lagrangian function replaces the explicit weight vector and φ(x) with the regression function
f(x) = Σ_{i=1}^N (α_i − α_i*) K(x_i, x) + b,   (16)
where α_i and α_i* are the Lagrangian multipliers and K is called the kernel function. The kernels tested include the linear, polynomial, Gaussian and sigmoid kernels, defined respectively as
K(x_i, x_j) = x_i^T x_j,   K(x_i, x_j) = (γ x_i^T x_j + r)^d,   K(x_i, x_j) = exp(−γ ||x_i − x_j||^2),   K(x_i, x_j) = tanh(γ x_i^T x_j + r),
where γ is the structural parameter, d is the polynomial degree and r is an offset parameter. Different values of ε, γ and the penalty parameter C are used in this study. The dual quadratic program associated with Eq. (16) is
max_{α,α*} −(1/2) Σ_{i,j} (α_i − α_i*)(α_j − α_j*) K(x_i, x_j) − ε Σ_i (α_i + α_i*) + Σ_i y_i (α_i − α_i*)
with the constraints
Σ_i (α_i − α_i*) = 0,   0 ≤ α_i, α_i* ≤ C.
Evaluation assessment methods We assessed and compared the prediction performance of our proposed hybrid model SSA-VMD-EBT-SVM against the existing models (EMD-SVM, EEMD-SVM, VMD-SVM, SSA-EMD-SVM, SSA-EEMD-SVM, SSA-VMD-SVM) as benchmarks using the following measures: the Nash-Sutcliffe Efficiency (NSE), Mean Square Error (MSE), Root Mean Square Error (RMSE) (Ghorbani et al., 2018), Mean Absolute Error (MAE) (Yaseen et al., 2018) and Mean Absolute Percentage Error (MAPE), with the equations
NSE = 1 − Σ_t (y_ot − y_pt)^2 / Σ_t (y_ot − ȳ_ot)^2,   MSE = (1/n) Σ_t (y_ot − y_pt)^2,   RMSE = √MSE,   MAE = (1/n) Σ_t |y_ot − y_pt|,   MAPE = (100/n) Σ_t |(y_ot − y_pt)/y_ot|,
where y_ot is the observed value, ȳ_ot is the mean of the observed values and y_pt is the predicted value of the model. Moreover, a Taylor diagram is used to provide a visual comparison of the modelling results with the help of a polar plot. The Taylor diagram represents the normalized standard deviation between simulated and observed values, with a normalized origin and R^2 represented as directional angles (Darbandi & Pourhosseini, 2018). The interpretation of the Taylor diagram is that the observed point is shown on the graph, and the closer the simulated performance measures lie to the observed point, the better the model performance (Al-Sudani, Salih & Yaseen, 2019). Benchmark models for the evaluation of the proposed hybrid model The proposed hybrid model, SSA-VMD-EBT-SVM, is compared with six benchmark models, described as follows. a. Without denoising: these models comprise decomposition and prediction stages only, in which VMD and two other data decomposition methods, EMD and EEMD, are chosen to decompose the non-linear, non-stationary and multi-scale data into multiple IMFs with different time-frequency components. For prediction, the IMFs extracted through EMD, EEMD and VMD are predicted with the same prediction method, SVM, as used in our proposed hybrid model. The performance of the proposed SSA-VMD-EBT-SVM model is then compared with the existing benchmark models EMD-SVM (Yu et al., 2017), EEMD-SVM (Rezaie-Balf et al., 2019b) and VMD-SVM (Wu & Lin, 2019). b. With denoising: these models use denoising, decomposition and prediction stages to predict the river inflow data. For denoising, SSA is selected, with the same decomposition and prediction stages as described in (a); the performance of the proposed SSA-VMD-EBT-SVM model is then compared with the existing benchmark models SSA-EMD-SVM, SSA-EEMD-SVM and SSA-VMD-SVM.
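A per-IMF prediction step of this kind can be sketched with scikit-learn's SVR, an ε-insensitive SVM regressor supporting the RBF (Gaussian) kernel above, together with NSE/RMSE-style scores. The lag length and hyper-parameter values below are illustrative, not those tuned in the paper.

```python
import numpy as np
from sklearn.svm import SVR

def lagged(series, p=3):
    """Build (x_{t-p}..x_{t-1} -> x_t) training pairs from one IMF."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

def fit_predict_imf(imf, p=3, C=10.0, gamma=0.1, epsilon=0.01):
    X, y = lagged(np.asarray(imf, dtype=float), p)
    model = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=epsilon).fit(X, y)
    return model.predict(X)  # in-sample fit; out-of-sample uses held-back rows

def nse(obs, pred):
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))
```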
CASE STUDY AND EXPERIMENTAL DESIGN The largest water system in Pakistan, the IRB, is considered for the application of the proposed architecture, as the IRB is Pakistan's largest source of power generation, irrigation and indispensable water resources. Data from four of its major rivers are analyzed—the River Indus, the River Jhelum, the River Chenab and the River Kabul—which contribute significantly to the water system of the IRB. These tributaries were selected because they face frequent river flooding each year due to heavy monsoon rain and the melting of snow and glaciers; in Pakistan, glaciers cover a 13,680 km² area, an estimated 13% of the mountainous areas of the Upper Indus Basin (UIB), and meltwater from these areas adds a significant contribution of water to these rivers. Therefore, it is appropriate to use river data of the IRB as a representative case study for the evaluation of the proposed model. Data The daily river inflow dataset used in this study comprises records from 1st January to 31st March for the years 2015-2019. For the proposed application, the daily inflow of the Indus River at Tarbela is selected, together with its two principal left-bank tributaries and one right-bank tributary: the Jhelum River at Mangla, the Chenab River at Marala and the Kabul River at Nowshera, respectively. The daily inflow data are measured in 1,000 ft³/s and were acquired from the Pakistan Water and Power Development Authority (WAPDA). RESULTS Results of the proposed hybrid model SSA-VMD-EBT-SVM are presented in stages as follows. Denoise-stage results: first, the Augmented Dickey-Fuller (ADF) test (Said & Dickey, 1985) is applied to the river inflow data of all selected case studies to confirm non-stationarity. For all case studies, the ADF test showed that the river inflow data are non-stationary in nature, with the p-values listed in Table 1. The original non-stationary data are then processed with SSA to improve the quality of the river inflow data by reducing noise. In applying SSA, the window length L and the number of groups m must be determined. Here, different values of L are tested and the optimal value of 90 is selected, as it gives the lowest error between the actual and denoised series. The value of m is selected according to the eigenvalues of each river inflow series. The eigenvalues for the Indus and Jhelum river inflows are shown in Fig. 2; the first 30, 25, 30 and 20 components for the Indus, Jhelum, Chenab and Kabul river inflows, respectively, are clearly larger than the remaining components. The denoised river inflow data are reconstructed using the selected values of m. The processed inflow data for the Indus and Jhelum are shown in Fig. 3. The means and standard deviations of the original and denoised river inflow data are listed in Table 2, where it can be observed that the mean remains the same while the standard deviation of the denoised river inflow is reduced by the processing. Decomposition-stage results: after the original river inflow data are processed with SSA, the denoised data are decomposed by VMD into linear and non-linear time-scale oscillations called IMFs.
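The two denoise-stage diagnostics just described—the ADF test and the eigenvalue (scree) inspection used to choose m—can be reproduced in a few lines; adfuller is statsmodels' ADF implementation, and the defaults below (e.g. L = 90) simply echo the values quoted in the text.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_pvalue(series):
    """ADF test; a large p-value (> 0.05) indicates non-stationarity."""
    return adfuller(np.asarray(series, dtype=float), autolag="AIC")[1]

def ssa_scree(series, L=90):
    """Normalised eigenvalues of S = X X^T for the SSA trajectory matrix;
    the elbow of this spectrum suggests the number of groups m to keep."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([y[i:i + L] for i in range(y.size - L + 1)])
    lam = np.linalg.svd(X, compute_uv=False) ** 2
    return lam / lam.sum()
```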
The number of IMFs, K, must be selected in advance for any decomposition method. Here, K = 6 is selected, as the remaining IMFs tend to be similar when K > 6. All river inflow data are decomposed into six IMFs. The decomposition results for the Indus river inflow are shown in Fig. 4. For comparison, the same river inflow data are also decomposed with EMD and EEMD, as shown in Figs. 5 and 6, respectively, for the Indus river inflow. It can be seen from Figs. 4-6 that the IMFs extracted through VMD are smoother than those from the other decomposition methods, EMD and EEMD. However, due to the high oscillation of the sixth IMF extracted through VMD, EBT is applied to denoise this IMF. The EBT effectively separates the clean and noisy coefficients of the noise-dominant IMF through the mixture of prior distributions defined in Eq. (12) and preserves as much valid information as possible. First, a scale transformation is applied so that each IMF follows N(δ_i, 1). From the nature of the sixth IMF depicted in Fig. 4, the first two IMFs of Fig. 5 and the first three IMFs of Fig. 6, it is apparent that most of the coefficients in all the noisiest IMFs are zero and few are non-zero, of which the fewer are either very low or very high in magnitude. By inspecting both the zero and non-zero coefficients of the IMFs, a mixture of an atom of probability at zero and a continuous distribution for the non-zero coefficients is considered (Johnstone & Silverman, 2005). The Laplace distribution is chosen as the prior distribution for δ_i, over the exponential and Cauchy distributions. Finally, the valid information of the IMFs is preserved with the posterior median threshold estimator calculated through Eq. (13). Prediction-stage results: the results for the Indus, Jhelum and Chenab river inflow data are presented in Table 3, and the results for the Kabul river inflow data are presented in Table 4. Moreover, the Taylor diagram shown in Fig. 7 is used to illustrate the efficiency of the proposed model; the graph shows that the proposed model performs very well relative to the other existing models.
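The thresholding step applied to the noise-dominant IMF can be approximated in a few lines. The sketch below is a crude stand-in for the EBT rule: it estimates the noise scale robustly and applies the hard rule with a fixed "universal" threshold, whereas the full empirical-Bayes posterior-median estimator of Johnstone & Silverman (2005) derives the threshold from the fitted Laplace-mixture prior.

```python
import numpy as np

def hard_threshold_imf(imf):
    """Hard-rule thresholding of a noise-dominant IMF (EBT stand-in).

    sigma is a robust (MAD-based) noise scale; the universal threshold
    sigma * sqrt(2 log n) plays the role of the EBT posterior-median rule.
    """
    x = np.asarray(imf, dtype=float)
    sigma = np.median(np.abs(x - np.median(x))) / 0.6745  # robust noise scale
    thr = sigma * np.sqrt(2 * np.log(x.size))             # universal threshold
    out = x.copy()
    out[np.abs(out) < thr] = 0.0                          # hard rule
    return out
```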
Discussion In this article, we proposed a novel hybrid model to efficiently predict river inflow time series data. Our proposed method comprises SSA for denoising, VMD for data decomposition with an EBT threshold, and SVM as the prediction method. In order to understand the applicability of the proposed SSA-VMD-EBT-SVM model, two different model strategies were adopted (see Tables 3 and 4). First, the without-denoising strategy was implemented, in which the same decomposition method, VMD, and two different decomposition methods, EMD and EEMD, were used, with SVM, as adopted in the proposed methodology, used for prediction. From Tables 3 and 4, it can be seen that the overall performance of the without-denoising models is poor for all river inflow data, with lower NSE and higher MSE, RMSE, MAE and MAPE values. Specifically, EMD-SVM performed worst among all without-denoising models, owing to the fact that EMD suffers from the mode-mixing problem and fails to produce noiseless IMFs (Di, Yang & Wang, 2014). The proposed model performed well, with the highest NSE and the lowest MSE, RMSE, MAE and MAPE values compared to the without-denoising models. The second strategy used the concept of denoising and decomposition: here the same denoising method used in the proposed methodology was employed with different decomposition methods, i.e., SSA-EMD-SVM and SSA-EEMD-SVM, and with the same decomposition method but without thresholding, i.e., SSA-VMD-SVM. From Tables 3 and 4, it is observed that the proposed SSA-VMD-EBT-SVM model performed well for the Indus, Jhelum and Chenab, but for the Kabul river inflow the results of SSA-VMD-EBT-SVM and SSA-VMD-SVM are the same, as thresholding the IMF did not enhance the prediction performance of SVM. Moreover, SSA-EEMD-SVM also performs well among the existing with- and without-denoising methods for all case studies. Overall, the proposed SSA-VMD-EBT-SVM model showed a much better agreement between predicted and observed river inflow data, which demonstrates the suitability of SSA, VMD and EBT for pre-processing input/output data over the other decomposition methods, EMD and EEMD. Thus, it is concluded that an appropriate combination of denoising, decomposition and thresholding can effectively enhance performance on non-linear, non-stationary and multi-scale time series data. By applying the proposed simulation models in the IRB, it is expected that this will provide new tools for improving inflow prediction over what is possible with the current generation of statistical models, as well as help with other land and water management questions. It may also be helpful in setting policies regarding which methods should be chosen for denoising, decomposition and prediction, and in assessing the effects of climate warming. These modelling efforts are therefore significant both for the scientific issues involved and for the practical relevance of the results. CONCLUSION The reliable and accurate prediction of river inflow is essential in order to manage water resources. In this article, a hybrid prediction model, SSA-VMD-EBT-SVM, is proposed and applied to the prediction of daily river inflow data from four rivers of the IRB. The original river inflow data are denoised with SSA and decomposed into several linear and non-linear IMFs using VMD; EBT is then applied to the non-linear IMF to remove noise and sparsity. Finally, each IMF is predicted with SVM and the predicted IMF components are aggregated to give the final prediction. To compare performance, benchmark models with two other decomposition methods, EMD and EEMD, combined with and without SSA-based denoising, were selected. Five performance indicators—NSE, MSE, MAE, RMSE and MAPE—were employed to measure the prediction accuracy of the proposed SSA-VMD-EBT-SVM model and all the benchmark models. Based on the results, the proposed hybrid SSA-VMD-EBT-SVM model gave the most accurate results with the minimum errors; in other words, compared with the other models, the proposed hybrid model improves prediction accuracy and reduces errors. The results of this research will be beneficial not only for sustainable water resource management but also for other non-linear time series applications. ADDITIONAL INFORMATION AND DECLARATIONS Funding The Deanship of Scientific Research at King Saud University funded this work through research group no. RG-1439-015. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
BIM FROM LASER SCANS... NOT JUST FOR BUILDINGS: NURBS-BASED PARAMETRIC MODELING OF A MEDIEVAL BRIDGE Building Information Modelling is not limited to buildings. BIM technology includes civil infrastructures such as roads, dams, bridges, communications networks, water and wastewater networks and tunnels. This paper describes a novel methodology for the generation of a detailed BIM of a complex medieval bridge. The use of laser scans and images, coupled with the development of algorithms able to handle irregular shapes, allowed the creation of advanced parametric objects, which were assembled to obtain an accurate BIM. The lack of existing object libraries required the development of specific families for the different structural elements of the bridge. Finally, some applications aimed at assessing the stability and safety of the bridge are illustrated and discussed. The BIM of the bridge can incorporate this information towards a new "BIMonitoring" concept. INTRODUCTION Building Information Modelling is becoming a very popular technology for several infrastructures such as utility systems, roads and rails, bridges, dams, tunnels, communications networks, water and wastewater networks, etc. This fosters the use of advanced digital models (including semantics, object relationships and attributes) instead of traditional construction projects still based on 2D CAD drawings. However, BIM technology has limitations, especially for complex constructions with irregular constructive elements. This is a significant problem for both modern and existing constructions (Agapiou et al., 2015; Patraucean et al., 2015; Tang et al., 2010; Volk et al., 2014; Cuca et al., 2015). Figure 1. The BIM of the medieval bridge Azzone Visconte (Lecco, Italy) generated from laser scans and digital images coupled with new algorithms and procedures for parametric modelling. This allowed one to preserve the geometric complexity provided by point clouds, obtaining a detailed BIM with object relationships and attributes. This paper focuses on BIM for bridges (sometimes referred to as BrIM - Bridge Information Modelling), which is a novel approach able to manage the whole lifecycle of a bridge: fabrication, construction, operation, maintenance and inspection. The case study presented in this work is the medieval bridge Azzone Visconte (also known as the Old Bridge) over the Adda River, in Lecco (Italy). The bridge was built between 1336 and 1338 and today is the symbol of the city (Fig. 2). Several analyses were carried out to assess the stability and safety of the bridge, as well as the state of conservation of materials and structures.
A geometrical survey at the scale 1:50 was one of the required products of the project. The complex and irregular geometry of the bridge resulted in several limitations when the modelling tools in BIM software packages were used. It should be mentioned that this "geometric" problem is not limited to historic objects. It also includes new bridges, often characterized by variable curvature and cross sections. From this point of view, the lack of powerful BIM instruments able to reconstruct complex shapes is still a major drawback in BIM projects, and the generation of accurate historic BIM (Murphy et al., 2013) surveyed from laser scanning point clouds is a challenging task which requires the development of new libraries or the implementation of new algorithms (some examples are discussed in Fai et al., 2011; Baik et al., 2015; Oreni et al., 2014; Dore et al., 2015; Quattrini et al., 2015; Barazzetti et al., 2015c). Figure 2. The Old Bridge of Lecco (Italy) and some pictures of the data acquisition phase. In addition, the use of software for "pure" (direct) 3D modelling was not possible (e.g. Rhinoceros, AutoCAD, Maya, 3D Studio Max). Such software can provide 3D models without parametric modelling tools, object relationships and attributes (Eastman et al., 2008). BIM requires a database including semantics and object properties to create and manage meaningful information about the construction. BIM software packages (e.g. Revit, ArchiCAD, AECOsim Building Designer, Tekla, etc.) allow users to electronically collaborate at different levels with a consistent exchange of digital information. This paper describes a procedure for BIM generation able to take into consideration the geometric complexity captured by laser scanning point clouds. The implemented solution for as-built BIM generation (from laser point clouds and photogrammetry) is based on NURBS curves and surfaces (Piegl and Tiller, 1997) converted into BIM objects. The final BIM objects are then imported into the commercial package Autodesk Revit to ensure a consistent exchange of information among the different specialists involved in the project. SCAN ACQUISITION AND REGISTRATION The geometric survey of the bridge was carried out with laser scanning and photogrammetric techniques. Data were registered in a stable reference system given by a geodetic network (Fig. 3) measured with a Leica TS30. The network is made up of 6 stations and the measurement phase took one day. In all, 834 observations and 264 unknowns gave 570 degrees of freedom. Least-squares adjustment provided an average point precision of about ±1.5 mm. The complexity and the size of the bridge required 77 scans registered with the geodetic network. The instrument was a Faro Focus 3D and the final point cloud is made up of 2.5 billion points. The instrument was placed in different positions, including the road and the riverbanks. The survey of the vaults required the creation of a mobile metal structure that allowed one to capture the intrados (Fig. 4). The network provided a robust reference system to remove deformations during scan registration. Scans were registered with an average precision of ±3 mm by using chessboard targets measured with the total station and additional scan-to-scan correspondences (spherical targets). Figure 3. The geodetic network measured with a total station Leica TS30 (average point precision is ±1.5 mm).
The laser scanning survey was then integrated with more than 500 images captured from a boat. Photogrammetry was used to generate accurate orthophotos of the fronts (South and North), the columns and the vaulted surfaces (intrados). The use of total station data allowed one to obtain a common reference system for the different acquisition techniques. As the goal of the project is a BIM useful for assessing the stability of the bridge, the surveying phase cannot be limited to the reconstruction of the shape. The presented measurement techniques can reveal the external layer of construction elements, whereas a BIM is made up of objects with an internal structure. As the goal is the creation of an interoperable BIM for different specialists (engineers, architects, historians, archaeologists, restorers, etc.), the survey included a historical analysis, the identification of materials, technological aspects, stratigraphic analysis, and information from other inspections such as destructive inspections and IR thermography. GENERATION OF THE BIM The starting point for the generation of the BIM is the set of dense laser scanning point clouds, which reveal the geometric complexity of the bridge. Photogrammetry and laser scanning are useful technologies for irregular surfaces. On the other hand, BIM projects require an object-based representation made up of solid elements with relationships and attributes. The complexity of the bridge, with irregular shapes not available in existing object libraries, required the development of procedures for parametric modelling able to overcome the lack of commercial software able to preserve the level of detail encapsulated in laser scanning point clouds. For this reason, a procedure based on NURBS curves and NURBS surfaces (Piegl and Tiller, 1997) was used to create parametric BIM objects. Creation of 3D curves NURBS are mathematical functions used in CAD projects to model simple shapes and complex free-form objects. Although NURBS are very advanced mathematical objects, they can be used for manual modelling with an easy-to-understand geometric interpretation. Several commercial packages are based on NURBS (e.g. Ashlar-Vellum CAD, Blender, EvoluteTools PRO, Hexagon, Maya, MoI, Nurbana 3D, Rhinoceros, ...). However, the result achievable with this software is only a 3D model, which is not a BIM, for the lack of basic parametric modelling requirements. The proposed procedure for surface reconstruction uses a preliminary extraction of NURBS curves (in space) from the point cloud. These are not random curves in space: they follow the logic of construction of the bridge, which is therefore divided into its constructive elements. A point cloud provides the geometric information needed to estimate the parameters of NURBS curves. A subset of the original 3D points can be used as control points {P_i} for NURBS generation. Given a set of n + 1 control points P_0, ..., P_n, a NURBS curve of degree p is defined by
C(u) = Σ_{i=0}^{n} N_{i,p}(u) w_i P_i / Σ_{i=0}^{n} N_{i,p}(u) w_i,   (1)
where {w_i} are the weights and the N_{i,p}(u) are the pth-degree B-spline basis functions defined on the knot vector U, a non-decreasing sequence of real numbers whose elements are called knots. The ith B-spline basis function N_{i,p}(u) has the recursive Cox-de Boor form
N_{i,0}(u) = 1 if u_i ≤ u < u_{i+1}, and 0 otherwise;
N_{i,p}(u) = ((u − u_i)/(u_{i+p} − u_i)) N_{i,p−1}(u) + ((u_{i+p+1} − u)/(u_{i+p+1} − u_{i+1})) N_{i+1,p−1}(u).   (2)
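Eqs. (1) and (2) translate directly into code. The sketch below evaluates the Cox-de Boor recursion and a rational curve point; it is a plain-Python illustration of the mathematics, not the authors' implementation, and assumes u lies in the half-open valid domain of the knot vector.

```python
import numpy as np

def basis(i, p, u, U):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u), Eq. (2)."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = ((u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
            if U[i + p] != U[i] else 0.0)   # 0/0 convention -> 0
    right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * basis(i + 1, p - 1, u, U)
             if U[i + p + 1] != U[i + 1] else 0.0)
    return left + right

def nurbs_point(u, P, w, U, p):
    """Rational curve point C(u) of Eq. (1); P: (n+1, dim), w: (n+1,)."""
    N = np.array([basis(i, p, u, U) for i in range(len(w))])
    return (N * w) @ P / np.dot(N, w)
```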
Shown in Fig. 5 are some examples that illustrate the high degree of manipulation that NURBS curves allow: (top) a 3rd-degree NURBS curve defined by 7 control points P_0, ..., P_6 with knot vector k1 = {0,0,0,0,1/4,1/2,3/4,1,1,1,1}, and (bottom) the variation obtained with a simple modification of the knot vector to k2 = {0,0,0,0,1/4,1/4,1/4,1,1,1,1}. The control points (blue dots) are the same, and both knot vectors start (and end) with a knot of multiplicity p + 1. Curves with such knot vectors have the remarkable property of starting and ending at a control point: C(0) = P_0 and C(1) = P_6. The first knot vector is uniform because it starts with a full-multiplicity knot followed by equally spaced simple knots, and terminates with a full-multiplicity knot. Knots can also be non-uniform. For instance, the vector k2 is associated with a change in the basis functions that makes the new curve pass through an interior control point at u = 1/4. The modification of the knot vector has an influence on continuity (smoothness). In particular, duplicated knots in the middle make a NURBS curve less smooth; the extreme case is a full-multiplicity knot in the middle, which corresponds to a point where the curve develops a kink. The degree of the curve is also useful for defining its shape. Shown in Fig. 6 are NURBS curves with common control points and variable degree (p equal to 1, 2 and 3, respectively). As can be seen, the linear case (p = 1) represents the typical polyline (zero-order continuity), whereas a higher degree smooths the curve. In practical applications, NURBS of degree up to 5 are usually used. Since NURBS are rational functions, they can represent free-form entities but also exact conics such as ellipses, circles and hyperbolas. This confirms their flexibility to design a large variety of shapes, from straight lines and polylines to free-form curves with arbitrary shapes. Finally, the shape of the curve can be modified by moving its control points. This is the most common way to make local adjustments, especially in interactive modelling projects. Interactive modifications can be carried out by dragging the control points: efficient algorithms provide real-time estimation of the unknown parameters and visualization of the final result. Fig. 7. A NURBS curve can be locally modified by changing its control points; this does not affect the whole profile of the curve. As mentioned, one remarkable property is the opportunity to generate local variations without affecting the global shape of the curve. Shown in Fig. 7 are two curves (degree 3) with exactly the same (uniform) knot vector {0,0,0,0,1/6,1/3,1/2,2/3,5/6,1,1,1,1} and control points, except for the displacement of one point in the middle. The change in the curve's shape occurs only in a small area near that control point. The previous figures show that NURBS curves are very efficient tools for modelling simple and complex shapes, small and large objects with variable levels of detail, as well as regular and irregular parts. The manipulation of knots, control points, weights and degree allowed one to design the vast range of shapes of the bridge (Fig. 8) and refine the first "visual" result in real time thanks to fast and memory-efficient computational algorithms that preserve mathematical exactness.
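The clamped start/end property and the effect of the interior knot multiplicity in k2 can be checked numerically. With unit weights a NURBS curve reduces to a B-spline, so SciPy's BSpline suffices; the control-point coordinates below are made up purely for the demonstration.

```python
import numpy as np
from scipy.interpolate import BSpline

# Seven hypothetical 2-D control points, degree p = 3, unit weights
P = np.array([[0, 0], [1, 2], [2, 3], [3, 0], [4, -1], [5, 2], [6, 0]], dtype=float)
k1 = np.array([0, 0, 0, 0, 1/4, 1/2, 3/4, 1, 1, 1, 1])  # uniform, clamped
k2 = np.array([0, 0, 0, 0, 1/4, 1/4, 1/4, 1, 1, 1, 1])  # full interior multiplicity

for knots in (k1, k2):
    c = BSpline(knots, P, 3)
    print(c(0.0), c(1.0 - 1e-12))  # clamped: curve starts at P0 and ends at P6

# with k2, the interior knot of multiplicity p makes the curve interpolate
# a control point (here P3 = [3, 0]) at u = 1/4
print(BSpline(k2, P, 3)(0.25))
```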
Figure 8. NURBS curves used to drive the generation of BIM objects. Creation of BIM objects from 3D curves 3D curves can be included in surface modelling as additional constraints, so that the congruence of consecutive surfaces is guaranteed by a curve that becomes an edge. A NURBS surface of degree (p, q) in the directions (u, v) is defined by
S(u, v) = Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) w_{i,j} P_{i,j} / Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) w_{i,j},   (3)
where {w_{i,j}} are the weights, {P_{i,j}} are the control points, and the N_{i,p}(u) and N_{j,q}(v) are the B-spline basis functions defined on the knot vectors U and V. NURBS surfaces can be used to represent both complex and predefined mathematical surfaces, such as cylinders, spheres, paraboloids, and toroidal patches. In addition, they can represent ruled surfaces and surfaces defined by a generatrix (i.e. a qth-degree NURBS curve C(v) = Σ_{j=0}^{m} N_{j,q}(v) P_j on the knot vector V) revolved around an axis. Cylinders and cones are typical examples that can also be treated as surfaces of revolution, but the method becomes extremely useful in the case of historical objects, where the generatrix can be digitized from the point clouds. This makes NURBS useful for both simple and complex shapes. In the case of the bridge, the solution for surface generation was based on the use of multiple curves: NURBS surfaces were generated from one, two or a set of curves in space, which were used as geometric constraints for surface interpolation. Although NURBS surfaces can be fitted to an unorganized point cloud, the resulting representation of sharp features is usually very poor. For this reason, the use of a preliminary set of curves for the generation of the surface was a better solution. Indeed, the creation of a curve network that drives the surface is much easier than direct manipulation of the surface, especially in the case of discontinuity lines. After the extraction of the principal discontinuities, which provide the skeleton of a 3D shape, surfaces were generated by interpolating the curves through one or more surfaces. The curves should be interpolated as closely as possible, so that the distance between the curves, the clouds and the final surface is minimal (Hu et al., 2001; Brujic et al., 2002). The reconstruction of the bridge included a preliminary subdivision of the bridge into structural elements following the logic of construction (how the bridge is actually built: foundations, columns, arches, etc.; see Brumana et al., 2013). This process allows an accurate geometric representation of the external shape surveyed with laser scanning technology and photogrammetry. On the other hand, it does not by itself provide BIM objects, for the lack of parametric modelling tools, relationships between objects and attributes. In addition, point clouds reveal precious metric information about the external first layer of the different elements, i.e. the visible layer that can be surveyed with images and range data, whereas a BIM must incorporate additional information concerning volume (e.g. thickness), material properties, and the organization of structural elements. As NURBS are univocally described by a finite set of parameters (degree, a set of weighted control points, and a knot vector), the main idea for their BIM representation is a mathematical solution that (i) preserves the original parameters and (ii) adds new object-based information including geometric data and attributes. NURBS surfaces are therefore used to generate the external shape of BIM objects, which are then imported into the commercial software Autodesk Revit. Shown in Fig. 9 are some BIM objects used in the project (including foundations). Structural elements were classified following the predefined structure of the software database (category, family and type), obtaining an interoperable model for the different specialists involved in the project.
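For completeness, a point on the tensor-product surface of Eq. (3) can be evaluated in the same spirit, here using SciPy's BSpline.basis_element for the one-dimensional bases. The function name and array layout are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_surface_point(u, v, P, W, U, V, p, q):
    """S(u, v) of Eq. (3). P: (n+1, m+1, 3) control points, W: (n+1, m+1)
    weights, U/V knot vectors, (p, q) degrees."""
    n1, m1 = W.shape
    # one-dimensional basis values N_{i,p}(u) and N_{j,q}(v)
    Bu = np.array([BSpline.basis_element(U[i:i + p + 2], extrapolate=False)(u)
                   for i in range(n1)])
    Bv = np.array([BSpline.basis_element(V[j:j + q + 2], extrapolate=False)(v)
                   for j in range(m1)])
    Bu, Bv = np.nan_to_num(Bu), np.nan_to_num(Bv)  # zero outside each support
    denom = Bu @ W @ Bv                            # rational denominator
    numer = np.einsum("i,ijk,j->k", Bu, W[..., None] * P, Bv)
    return numer / denom
```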
BIMonitoring: A NOVEL MULTI-USER APPROACH FOR INTEGRATED PROJECTS The BIMonitoring approach proposed in this paper starts from a simple consideration: the availability of a BIM (in a multidisciplinary project in which different specialists are involved in the monitoring process, such as engineers, architects, conservators, historians, geologists, geophysicists, etc.) provides a common platform to (i) store results and reports and (ii) perform new analyses. In the case of the bridge, different numerical and graphical results were available thanks to the contributions of different specialists: displacements measured during the loading phase to assess the load capacity of the bridge (Fig. 10), GPR (Ground Penetrating Radar) data to inspect the internal structure of the bridge, mapping of cracks, stratigraphy, coring, and mechanical characterization of specimens. Figure 10. Measurement of vertical displacements by geometric levelling during the loading phase of the bridge. This kind of information can be directly correlated to 3D information and stored in different ways in the BIM, also including links to reports, images and videos. From this point of view, the use of a 3D model in a reference system allows the integration of information as in traditional GIS software used in cartography. The 3D model of the BIM is a graphical representation that allows users to easily interact with the database of the construction. Moreover, the BIM is not intended only as a database where information is stored. Additional analyses can be performed by using products directly generated from the BIM. An example is finite element (FEM) analysis to assess structural stability. Previous works (Barazzetti et al., 2015a; 2015b) demonstrated that this approach is feasible. In the case of the bridge, a numerical analysis will be carried out with a simplified geometric model generated from the original BIM. CONCLUSIONS This paper presented an approach for the BIM generation of a medieval bridge, demonstrating that the traditional BIM approach for buildings can be exploited also for complex infrastructures. The proposed approach tried to overcome some limitations of commercial BIM software packages. Indeed, there is a lack of commercial software able to provide accurate BIM reconstructions of complex geometries, including parametric modelling tools, relationships between objects, and attributes. The BIM reconstruction of the medieval bridge required the development of new algorithms and procedures. On the other hand, the final result is available in a commercial software package, so that a single platform becomes available for the different specialists involved in the project. Finally, a new approach, coined BIMonitoring, was developed. The BIM can become a shared platform for different categories of data in different formats, which can be efficiently integrated towards the sustainable conservation of the structure. Figure 4. Scan positions (top), a 3D view of the registered point clouds (bottom left), and the metal structure used to lower the laser scanner (bottom right). Figure 5. The variation of planar NURBS curves with equal control points (left) and different basis functions (right). Figure 9. A detail of the BIM in Revit and the opportunity to select objects obtaining specific information.
Searches for squarks and gluinos in events with missing transverse momentum In this contribution, the latest results from CMS and ATLAS on inclusive searches for squark and gluino production at the LHC are reviewed. A variety of complementary final-state signatures and methods are presented, using up to 20 fb−1 of data from the 8 TeV LHC run of 2012. Interpretations of the experimental results in SUSY models are covered, with a special emphasis on final states with jets, photons, and at most one lepton. Introduction The observation [1] of a new particle of mass ∼ 125 GeV, whose properties are consistent with those expected for the standard model Higgs boson, poses new challenges to modern particle physics. It is expected that the mass of the Higgs boson (not being protected by any fundamental symmetry) will receive large corrections from interactions with higher-scale particles. In order to constrain the mass of the Higgs boson to the observed value, it is necessary to postulate either an unrealistic fine-tuning of the parameters or a mechanism that can protect the Higgs mass from rising much above the scale of electroweak symmetry breaking. One of the most popular such mechanisms is that provided by supersymmetry, in which additional particles provide corrections that cancel almost exactly the contributions from the standard model particles. In particular, natural supersymmetry (see e.g. [2]) is a very appealing scenario; in this framework the masses of the supersymmetric partners of the top and bottom quarks (t̃, b̃) are expected not to exceed O(500 GeV), and the masses of the superpartners of gluons (gluinos, g̃) cannot be much heavier than 1 TeV. In this scenario, the ATLAS and CMS experiments, with the 2012 √s = 8 TeV dataset, are already in a position to obtain evidence of new physics phenomena or exclude a large fraction of natural or other SUSY models. Searches for squarks and gluinos in events with missing transverse energy The ATLAS and CMS detectors are described in detail elsewhere [3]. In all the analyses summarized here, R-parity conservation is assumed, so the lightest supersymmetric particle (LSP) is stable and traverses the detector undetected. The dominant signature for such events is thus a large amount of missing transverse energy (MET). Other variables extensively used in the analyses are the number of (b-)jets, the scalar sum of the transverse momenta of the jets (HT), and the sum of MET and HT (m_eff). All the analyses use part or all of the 2012 √s = 8 TeV dataset, whose integrated luminosity is about 20 fb−1. CMS: photons CMS uses 4.04 fb−1 to perform a search for events with one or two high-pT photons, hadronic jets and MET [4]. This search targets General Gauge Mediated (GGM) scenarios, in which the LSP is a very light gravitino (G̃) and the neutralino is the next-to-lightest SUSY particle. In the cases in which the neutralino (produced in the LHC environment by the decay of gluinos and squarks) is mostly bino-like, its dominant decay is χ̃⁰₁ → γG̃. The analysis proceeds in two streams: a single-photon and a di-photon search. In the first case the presence of a photon candidate with pT(γ) > 80 GeV, along with at least two hadronic jets with pT(jet) > 40 GeV (and HT > 450 GeV), is required. In the second, two photons with pT(γ1) > 40 and pT(γ2) > 25 GeV and at least one hadronic jet with pT(jet) > 30 GeV are selected.
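As a toy illustration, the single-photon selection just quoted can be encoded as a simple event filter. The helper below is hypothetical; in particular, whether HT is computed from the selected jets only is an assumption made here for the sketch.

```python
import numpy as np

def passes_single_photon(photon_pts, jet_pts, ht_cut=450.0):
    """Toy version of the CMS single-photon stream quoted above:
    >= 1 photon with pT > 80 GeV, >= 2 jets with pT > 40 GeV, HT > 450 GeV.
    Computing HT from the selected jets only is an assumption of this sketch."""
    jets = [pt for pt in jet_pts if pt > 40.0]
    ht = float(np.sum(jets))  # scalar sum of selected jet pT
    return any(pt > 80.0 for pt in photon_pts) and len(jets) >= 2 and ht > ht_cut

# example: one 95 GeV photon, jets of 120, 60 and 35 GeV -> HT = 180, fails HT cut
print(passes_single_photon([95.0], [120.0, 60.0, 35.0]))  # False
```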
Backgrounds arise from multijet (QCD) events, in which the MET originates from the mismeasurement of the p_T of one or more jets, and from electroweak events containing real MET, in which an electron has been misidentified as a photon. The first is estimated using data control samples with loosened photon-ID requirements; for the second, the γ mis-identification probability is measured using ee and eγ samples. Remaining small contributions are taken from the simulation.
No significant excess is observed over the expected standard model backgrounds. Limits are set in the context of the GGM model, for both bino-like and wino-like neutralinos (in the second case the dominant decay is χ̃⁰₁ → ZG̃), considering a neutralino of mass 375 GeV. The single- and di-photon analyses have similar sensitivity and can exclude, in the most favorable case, masses of both gluinos and squarks above 1 TeV. Fig. 1 shows the 95% CL exclusion plot for the di-photon analysis in the bino-like scenario.
ATLAS: jets + τ's
ATLAS uses their full 2012 dataset (corresponding to 20.7 fb⁻¹) to search for SUSY signatures with at least one τ lepton in the final state [5]. The analysis is optimized for models in which the stau (τ̃) is the next-to-lightest SUSY particle, decaying predominantly to a τ and a gravitino. The presence of at least two hadronic jets (with p_T(jet₁) > 130 and p_T(jet₂) > 30 GeV) and MET > 150 GeV is required. Different selections are then considered in order to maximize the sensitivity to different models. Only hadronic decays of τ candidates are considered, and either one candidate with medium τ-ID tightness and no additional τ candidates, or two loose τ's, are selected.
The main backgrounds arise from tt̄, W+jets and Z+jets events with either real or fake τ candidates. These are estimated from data control regions, and the predicted yield in the signal regions is obtained by means of scaling factors derived from the simulation.
The observed yields are in good agreement with the predicted background. Limits are set in the context of Gauge Mediated Supersymmetry Breaking (GMSB) and the Natural Gauge Mediated model (nGM). In both cases masses of gluinos below 1 TeV are excluded and, as can be seen from Fig. 2, the limits are practically independent of the mass of the τ̃.
ATLAS: same-sign leptons
The full ATLAS 8 TeV sample (20.7 fb⁻¹) is then used to search for the production of squarks and gluinos in final states with two same-sign leptons. The standard model predicts very low background yields for this kind of final state, mostly arising from tt̄V and VV events, where V is either a W or a Z boson; these contributions are taken from the Monte Carlo. Other background sources include charge mis-measurement of electrons radiating a photon, and leptons associated with b-jets that are not rejected by the isolation criteria. These contributions are estimated with data-driven methods.
Events with two same-sign leptons (e's or µ's) are selected, requiring the p_T of the lepton candidates to be above 20 GeV, and |η| < 2.47 (2.40) for e's (µ's). Events are then categorized based on the number of reconstructed (b-)jets. The minimum p_T for a generic hadronic jet is 40 GeV; this threshold is lowered to 20 GeV for b-jet candidates, in order to be sensitive to scenarios with soft b-jets in the final state.
No significant excess is observed above the standard model predictions, and exclusion limits are set for a variety of models.
Fig. 3 shows the regions (in the m(χ̃⁰₁) vs. m(q̃) plane) excluded at 95% CL for direct squark production, followed by decays through sleptons.
Figure 3. ATLAS limits on direct squark production (and subsequent decay of the squarks through sleptons) from the analysis with same-sign leptons in the final state.
CMS: α_T
The CMS Collaboration uses 11.7 fb⁻¹ of the √s = 8 TeV dataset to search for SUSY signatures using the variable α_T [7]. For di-jet events, α_T is defined as α_T = E_T^{j2}/M_T, where E_T^{j2} is the transverse energy of the less energetic jet and M_T is the transverse mass of the di-jet system. This definition is generalized to multijet events by combining the jets into two pseudo-jets; the chosen combination is the one that minimizes the difference in transverse energy between the two pseudo-jets. By construction, α_T is very robust against jet energy mismeasurements that can fake missing transverse energy. QCD multijet events (with no real MET) exhibit an α_T distribution that has a natural cut-off at α_T ≈ 0.5, whereas events with α_T > 0.55 are characterized by genuine sources of MET.
The analysis thus selects events with α_T > 0.55 and H_T > 275 GeV: QCD multijet events passing this cut are suppressed to a negligible level, and the dominant background consists of tt̄, single t, and W, Z + jets events. The sample is split into two jet-multiplicity bins, 2 ≤ n_jets ≤ 3 and n_jets ≥ 4, and these samples are further split into eight bins of H_T and five bins of b-jet multiplicity (n_b-jets = 0, 1, 2, 3, and ≥ 4).
The backgrounds are determined from several data control samples (binned in the same way as the main signal sample): the µ + jets sample is used to estimate the tt̄, single t, W + jets, and Z + jets contributions for n_b-jets ≥ 2, while the µµ + jets and γ + jets samples are used to determine the Z + jets background for n_b-jets < 2. The background contributions in the signal bins are calculated by multiplying the event yields of the corresponding data control samples by translation factors derived from the simulation. Several closure tests on the data are performed in order to check the consistency of the procedure and to assign an appropriate systematic uncertainty. No significant excess is observed in the data compared to the background predictions. Exclusion limits, in terms of the simplified model in which gluino pairs are produced and each gluino decays to a pair of light quarks plus the LSP (g̃ → qq̄χ̃⁰₁), are presented in Fig. 4.
Figure 4. Exclusion limits for the CMS α_T analysis in the simplified model in which gluino pairs are produced and each gluino decays as g̃ → qq̄χ̃⁰₁. The color scale indicates the observed 95% CL upper limit on the cross-section, and the lines indicate the observed (black) and expected (red) exclusion limits.
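The α_T construction above fits in a few lines of code. The following sketch assumes massless jets (so that E_T = p_T) and brute-forces the pseudo-jet partition, which is adequate at the low jet multiplicities involved; it is illustrative only, not the CMS implementation.

import math
from itertools import combinations

def alpha_t(jets):
    """jets: list of (pt, phi). Returns alpha_T for >= 2 jets."""
    total_et = sum(pt for pt, _ in jets)
    # Partition into two pseudo-jets minimizing the E_T difference
    best_delta = min(abs(2 * sum(jets[i][0] for i in group) - total_et)
                     for n in range(1, len(jets))
                     for group in combinations(range(len(jets)), n))
    # Transverse mass of the two-pseudo-jet system
    px = sum(pt * math.cos(phi) for pt, phi in jets)
    py = sum(pt * math.sin(phi) for pt, phi in jets)
    mt = math.sqrt(max(total_et**2 - px**2 - py**2, 0.0))
    # E_T of the softer pseudo-jet is (total - delta)/2
    return 0.5 * (total_et - best_delta) / mt

# Perfectly balanced back-to-back dijet: alpha_T -> 0.5, as stated above
print(round(alpha_t([(100.0, 0.0), (100.0, math.pi)]), 3))  # 0.5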
ATLAS: MET + jets
The full dataset (20.3 fb⁻¹) collected by the ATLAS experiment is used to search for new physics phenomena in final states with jets and missing transverse energy [8]. Events with at least two jets, with p_T(jet₁) > 130 GeV and p_T(jet₂) > 60 GeV, are selected and categorized into different signal regions based on the presence of additional jets with p_T(jet_i) > 60 GeV. The key selection variable is the effective mass m_eff, for which two definitions are used: an exclusive one, in which only the jets considered in the particular signal bin are summed with the MET, and a more inclusive one, m_eff(incl.) = MET + Σ p_T(jet), taking into account all the jets in the event with p_T > 40 GeV. Selection requirements are imposed using both variables, and cuts are applied on the difference between the azimuthal angle of the MET and that of the jets in the event, in order to suppress the QCD background. Backgrounds arise from W, Z + jets events, tt̄, and QCD multijets. As in other analyses, the contributions of these backgrounds in the signal regions are determined by multiplying the yields in data control samples by appropriate translation factors. The samples used in the analysis are multijet samples with inverted cuts on Δφ(jet, MET) and on MET/m_eff (QCD enriched), γ + jets (to determine the Z + jets contribution), and a µ + jets sample, either b-vetoed (enriched in W + jets) or b-tagged (enriched in tt̄). The translation factors for the QCD background are determined using a data-driven technique, applying a resolution function to well-measured multijet events. For the other background categories, the translation factors are taken from the simulation. No significant excess is seen compared to the background predictions, so limits are set for a variety of models. Fig. 5 shows the exclusion limits for squark–gluino production, where q̃ → qχ̃⁰₁ and g̃ → qq̄χ̃⁰₁, with m(q̃) = 0.96 m(g̃), while Fig. 6 considers g̃g̃ production, with g̃ → tt̄χ̃⁰₁.
Figure 5. Exclusion limits set by the ATLAS MET + jets analysis on squark–gluino production. The decay chains considered are q̃ → qχ̃⁰₁ and g̃ → qq̄χ̃⁰₁, with the mass of the q̃ set to 96% of the mass of the g̃.
ATLAS: multijets
Another ATLAS analysis, based on 20.3 fb⁻¹, looks for even higher jet multiplicities to search for SUSY signatures [9]. The analysis proceeds along two main streams. In the multijet + flavor stream, events with 8, 9, or ≥ 10 jets with p_T > 50 GeV, or 7 or ≥ 8 jets with p_T > 80 GeV, are selected into different signal regions. These samples are further split (except the ≥ 10 jets bin) depending on their b-jet content (0, 1, or ≥ 2 b-jets).
In the multijet + M_J^Σ stream, events with at least 8 jets with p_T > 50 GeV and |η| < 8 are selected. These jets are then used to feed the anti-k_t clustering algorithm, setting the value of the radius parameter R to 1.0 (the typical value for ATLAS analyses is 0.4). The selection variable is defined as the sum M_J^Σ = Σ_j m_j^{R=1.0}, where the sum runs over the R = 1.0 jets with p_T^{R=1.0} > 100 GeV and |η^{R=1.0}| < 1.5. A cut at 340 or 420 GeV on M_J^Σ is applied, depending on the signal region considered.
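For concreteness, the two jet-based mass variables used above reduce to simple sums. A sketch with invented inputs (not ATLAS code):

def m_eff_incl(met, jets_pt, pt_min=40.0):
    """Inclusive effective mass: MET plus scalar sum of jet pT above pt_min."""
    return met + sum(pt for pt in jets_pt if pt > pt_min)

def m_j_sigma(large_r_jets):
    """Sum of masses of R = 1.0 reclustered jets with pT > 100 GeV, |eta| < 1.5.
    large_r_jets: list of (mass, pt, eta) tuples."""
    return sum(m for m, pt, eta in large_r_jets
               if pt > 100.0 and abs(eta) < 1.5)

print(m_eff_incl(250.0, [130.0, 80.0, 35.0]))              # 460.0 (35 GeV jet dropped)
print(m_j_sigma([(90.0, 150.0, 0.4), (60.0, 90.0, 0.2)]))  # 90.0 (soft jet dropped)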
The main selection variable in the analysis is the ratio MET/√H_T (which is required to be > 4 GeV^{1/2} for the signal regions). The estimate of the QCD multijet background relies on the observation that the shape of the MET/√H_T distribution does not depend on the jet multiplicity of the event (there is some dependence, though, on the number of b-jets, so different b-jet multiplicities are considered separately). The QCD background is estimated from data control samples with lower jet multiplicity, extrapolating the number of expected events entering the signal region from the number of events with low MET/√H_T. A µ + jets control sample is used to determine the tt̄ and W + jets backgrounds, using translation factors taken from the Monte Carlo. The Z → νν̄ component is predicted from the simulation, after this has been validated using a Z → µ⁺µ⁻ data control sample.
No significant excess is seen over the background predictions in any of the signal regions considered, so limits on different SUSY models are set. Fig. 7 shows the limits set by this analysis on the mSUGRA framework, with tan β = 30: gluino masses around 1 TeV can be excluded for m₀ as high as 6 TeV. Fig. 8 shows the exclusions for this analysis in a simplified model where gluino pairs are produced, and each gluino decays to qq̄Wχ̃⁰₁ via an intermediate chargino.
Figure 8. Exclusion limits of the ATLAS multijet analysis on gluino-pair production, with g̃ → qq̄χ̃±₁ and χ̃±₁ → W±χ̃⁰₁. The limits are shown in the plane x vs. m(g̃), where x is the ratio of the chargino–neutralino mass splitting to the gluino–neutralino mass splitting. The mass of the neutralino is set to 60 GeV.
CMS: MET + b-jets
CMS uses its full dataset (19.4 fb⁻¹) to search for SUSY signatures in fully hadronic events with large MET and at least three jets, at least one of which is tagged as a b-jet [10]. In order to be sensitive also to compressed-spectrum scenarios, a relatively loose selection on MET and H_T is performed: events with MET > 125 GeV and H_T > 400 GeV are selected and subdivided into four bins of MET, four bins of H_T and three bins of b-jet multiplicity (1, 2, or ≥ 3 b-tagged jets in the event).
The analysis proceeds through a three-dimensional maximum likelihood fit, the three dimensions being MET, H_T, and the b-jet multiplicity. The dominant background arises from tt̄, single t, and W + jets events, followed by Z → νν̄ and QCD multijet events. These backgrounds are determined in a fully data-driven way, using a single-lepton (e's and µ's) control sample for the tt̄, single t, and W + jets backgrounds, Z → e⁺e⁻ and Z → µ⁺µ⁻ control samples (with loosened b-tagging requirements) to estimate the Z → νν̄ component, and a sample enriched in QCD multijet events, obtained by inverting the cut on the minimum Δφ between the MET and the three leading jets in the event. No assumption is made on the MET–H_T shape of the backgrounds; rather, it is taken from the data control samples. The potential contamination of signal events (as predicted by the different SUSY models under investigation) in the control samples is taken into account in the fit.
No excess is observed compared to the background predictions. Limits are set in the context of gluino-pair production, with each gluino decaying to bb̄χ̃⁰₁ (see Fig. 9) or tt̄χ̃⁰₁ (Fig. 10). The limits on the gluino mass, for a relatively light LSP, exceed 1 TeV in both cases.
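Most of the background estimates quoted above share the same transfer-factor arithmetic: a data yield in a control region (CR) is scaled into the signal region (SR) by a simulation-derived ratio. A minimal sketch of that arithmetic, with invented yields:

def sr_prediction(n_data_cr, n_mc_sr, n_mc_cr):
    """N_SR(pred) = N_CR(data) * [N_SR(MC) / N_CR(MC)]."""
    return n_data_cr * n_mc_sr / n_mc_cr

# e.g. 480 mu+jets data events in the CR, MC predicts SR/CR = 12.5/500
print(sr_prediction(480, 12.5, 500.0))  # 12.0 expected background events in the SR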
Conclusions
The ATLAS and CMS experiments are performing a vast experimental campaign to detect signatures of physics beyond the standard model. The 2012 √s = 8 TeV dataset delivered by the LHC has already been extensively analyzed, and more refined analyses are in the works. So far no statistically significant signal has been detected, and the limits on the masses of squarks and gluinos exceed the 1 TeV level in many of the scenarios investigated.
Additional interpretation plots for the analyses presented in this contribution are available from the public results pages of the ATLAS [11] and CMS [12] Collaborations.
The author would like to thank the SUSY conveners of the ATLAS (Monica D'Onofrio and Andreas Hoecker) and CMS (Eva Halkiadakis and Frank Wuerthwein) experiments for the kind support given while preparing this talk.
Figure 1. Exclusion plot for the CMS di-photon analysis in the m(g̃) vs. m(q̃) plane. The mass of the neutralino is assumed to be 375 GeV.
Figure 2. ATLAS exclusion limits (at 95% CL) on the mass of the gluino as a function of the mass of the stau, in the context of the nGM model.
Figure 6. Exclusion limits set by the ATLAS MET + jets analysis on gluino–gluino production, with each gluino decaying to tt̄χ̃⁰₁.
Figure 7. Exclusion limits of the ATLAS multijet analysis on the mSUGRA framework, with tan β = 30.
Figure 9. Exclusion limits for the CMS MET + b-jets analysis on the simplified model in which gluino pairs are produced and each gluino decays to bb̄χ̃⁰₁. The color scale indicates the observed 95% CL upper limit on the cross-section, and the lines indicate the observed (black) and expected (red) exclusion limits.
Figure 10. Same as Figure 9, but with each gluino decaying to tt̄χ̃⁰₁.
Supersymmetric black holes and attractors in gauged supergravity with hypermultiplets
We consider four-dimensional $N=2$ supergravity coupled to vector- and hypermultiplets, where abelian isometries of the quaternionic K\"ahler hypermultiplet scalar manifold are gauged. Using the recipe given by Meessen and Ort\'{\i}n in arXiv:1204.0493, we analytically construct a supersymmetric black hole solution for the case of just one vector multiplet with prepotential ${\cal F}=-i\chi^0\chi^1$, and the universal hypermultiplet. This solution has a running dilaton, and it interpolates between $\text{AdS}_2\times\text{H}^2$ at the horizon and a hyperscaling-violating type geometry at infinity, conformal to $\text{AdS}_2\times\text{H}^2$. It carries two magnetic charges that are completely fixed in terms of the parameters that appear in the Killing vector used for the gauging. In the second part of the paper, we extend the work of Bellucci et al. on black hole attractors in gauged supergravity to the case where also hypermultiplets are present. The attractors are shown to be governed by an effective potential $V_{\text{eff}}$, which is extremized on the horizon by all the scalar fields of the theory. Moreover, the entropy is given by the critical value of $V_{\text{eff}}$. In the limit of vanishing scalar potential, $V_{\text{eff}}$ reduces (up to a prefactor) to the usual black hole potential.
Introduction
Black holes in gauged supergravity theories provide an important testing ground for fundamental questions of gravity, both at the classical and quantum level. Among these are, for instance, the problems of black hole microstates, the final state of black hole evolution, and uniqueness or no-hair theorems, to mention only a few. In gauged supergravity, the solutions typically have AdS asymptotics, and one can then try to study these issues guided by the AdS/CFT correspondence. On the other hand, black hole solutions to these theories are also relevant for a number of recent developments in high-energy and especially condensed matter physics, since they provide the dual description of certain condensed matter systems at finite temperature, cf. [1] for a review. In particular, models that contain Einstein gravity coupled to U(1) gauge fields and neutral scalars have been instrumental in studying transitions from Fermi-liquid to non-Fermi-liquid behaviour, cf. [2,3] and references therein. In AdS/condensed matter applications one is often interested in including a charged scalar operator in the dynamics, e.g. in the holographic modeling of strongly coupled superconductors [4]. This is dual to a charged scalar field in the bulk, which typically appears in supergravity coupled to gauged hypermultiplets. It would thus be desirable to have analytical black hole solutions to such theories. In the first part of the present paper we make a first step in this direction. Solving the corresponding second-order equations of motion is generically quite involved, such that one is forced to resort to numerical techniques. For this reason we shall look here for BPS black holes, which satisfy first-order equations, and make essential use of the results of [5], where all supersymmetric backgrounds of N = 2, d = 4 gauged supergravity coupled to both vector- and hypermultiplets were classified. This provides a systematic method to obtain BPS solutions, without the necessity of guessing suitable ansätze.
Let us mention here that black holes in four-dimensional gauged supergravity with hypermultiplets were also obtained numerically in [6]. Solutions that have ghost modes (i.e., with at least one negative eigenvalue of the special Kähler metric) were constructed in [7]. In five dimensions, a singular solution of supergravity with gauging of the axionic shift symmetry of the universal hypermultiplet was derived in [8]. Finally, ref. [9] analyzed the near-horizon geometries of static BPS black holes in four-dimensional N = 2 supergravity with gauging of abelian isometries of the hypermultiplet scalar manifold, while the authors of [10] found nonrelativistic (Lifshitz and Schrödinger) solutions in the same theory for the canonical example of a single vector multiplet and a single hypermultiplet.
Another point of interest addressed in this paper is the attractor mechanism [12][13][14][15][16], which has been the subject of extensive research in the asymptotically flat case, but for which not much has been done for black holes with more general asymptotics. First steps towards a systematic analysis of the attractor flow in gauged supergravity were made in [17,18] for the non-BPS and in [19][20][21][22] for the BPS case. Some interesting results have been found, for instance the appearance of flat directions in the effective black hole potential for BPS flows [20], a property that does not occur in ungauged N = 2, d = 4 supergravity [16], at least as long as the metric of the scalar manifold is strictly positive definite. In the second part of our paper we extend the work of [18] to include gauged hypermultiplets as well. We shall construct an effective potential V_eff that depends on both the usual black hole potential and the potential for the scalar fields. V_eff governs the attractors, in the sense that it is extremized on the horizon by all the scalar fields of the theory, and the entropy is given by the critical value of V_eff. As in [18], our analysis does not make use of supersymmetry, so our results are valid for any static extremal black hole in four-dimensional N = 2 matter-coupled supergravity with gauging of abelian isometries of the hypermultiplet scalar manifold.
The remainder of this paper is organized as follows: in the next section, we briefly review N = 2, d = 4 gauged supergravity coupled to vector- and hypermultiplets. Section 3 summarizes the general recipe to construct supersymmetric solutions provided in [5]. In section 4, a simple model is considered that has just one vector multiplet with special Kähler prepotential F = −iχ⁰χ¹, and the universal hypermultiplet. In this setting, the equations of [5] are then solved and a genuine BPS black hole with running dilaton and two magnetic charges is constructed. Section 5 contains an extension of the results of [18] on black hole attractors in gauged supergravity to the case that includes hypermultiplets as well. Section 6 contains our conclusions and some final remarks.
Matter-coupled N = 2, d = 4 gauged supergravity
The gravity multiplet of N = 2, d = 4 supergravity can be coupled to a number n_V of vector multiplets and to n_H hypermultiplets. The bosonic sector then includes the vierbein e^a_µ, n̄ ≡ n_V + 1 vector fields A^Λ_µ with Λ = 0, …, n_V (the graviphoton plus n_V others from the vector multiplets), n_V complex scalar fields Z^i, i = 1, …, n_V, and 4n_H real hyperscalars q^u, u = 1, …, 4n_H. The complex scalars Z^i of the vector multiplets parametrize an n_V-dimensional special Kähler manifold, i.e.
a Kähler–Hodge manifold, with Kähler metric G_{ij̄}(Z, Z̄), which is the base of a symplectic bundle with the covariantly holomorphic sections V = e^{K/2}(χ^Λ, F_Λ)^T, where K is the Kähler potential. Alternatively, one can introduce the explicitly holomorphic sections Ω ≡ e^{−K/2}V of a different symplectic bundle. In appropriate symplectic frames it is possible to choose a homogeneous function of second degree F(χ), called the prepotential, such that F_Λ = ∂_Λ F. In terms of the sections Ω, the constraint (2.2) becomes e^{−K} = i(χ̄^Λ F_Λ − χ^Λ F̄_Λ). (2.4) The couplings of the vector fields to the scalars are determined by the n̄ × n̄ period matrix N; if the theory is defined in a frame in which a prepotential exists, N can be obtained from eq. (2.6).
The 4n_H real hyperscalars q^u parametrize a quaternionic Kähler manifold with metric H_uv(q). A quaternionic Kähler manifold is a 4n-dimensional Riemannian manifold admitting a locally defined triplet K^x, x = 1, 2, 3, of almost complex structures satisfying the quaternion relation K^x K^y = −δ^{xy}𝟙 + ε^{xyz}K^z, and whose Levi-Civita connection preserves the K^x up to a rotation. An important property is that the SU(2) curvature is proportional to the complex structures.
We will only consider gaugings of abelian symmetries of the action. Under the action of abelian symmetries, the complex scalars Z^i transform trivially, so that we will effectively be gauging abelian isometries of the quaternionic Kähler metric H_uv. These are generated by commuting Killing vectors k_Λ^u(q), [k_Λ, k_Σ] = 0, and the requirement that the quaternionic Kähler structure is preserved implies the existence of a triplet of Killing prepotentials, or moment maps, P^x_Λ for each Killing vector, satisfying the defining condition (2.10). The bosonic action (2.11) contains the scalar potential (2.12), and the covariant derivatives acting on the hyperscalars are D_µq^u = ∂_µq^u + gA^Λ_µ k_Λ^u.
Supersymmetric solutions
All the timelike supersymmetric solutions of N = 2 gauged supergravity in four dimensions were characterized by Meessen and Ortín in [5]. Here we summarize their results, restricted to the case of abelian gaugings. The expressions and equations that follow are given in terms of bilinears constructed out of the Killing spinors, and of the real symplectic sections of Kähler weight zero, R ≡ Re(V/X) and I ≡ Im(V/X) (3.2), where X is a complex function built from the spinor bilinears. The metric and vector fields take the forms (3.3) and (3.4), where the 3-dimensional metric h_mn must admit a dreibein V^x satisfying the structure equation (3.5). |X|² can be determined from R and I, and the spatial 1-form ω satisfies eq. (3.8).
The complex scalars Z^i, the sections R and I, the 1-form ω, the function X and the hyperscalars q^u are all time-independent. The complex scalars depend on the sections R and I in a way fixed by the chosen parametrization of the special Kähler manifold; a common simple choice is that of special coordinates, Z^i = χ^i/χ⁰. The effective 3-dimensional gauge connection Ã^Λ must satisfy eq. (3.10), from which the integrability condition (3.12) follows; a similar condition, (3.13), holds for the I_Λ's. Finally, the hyperscalars must satisfy equation (3.15).
For a given special geometric model the sections R can always, at least in principle, be determined in terms of the sections I by solving the so-called stabilization equations. This means that to obtain a supersymmetric solution one needs to solve the above equations for I^Λ, I_Λ, ω, V^x and q^u.
A black hole solution
We now turn to the task of obtaining an explicit solution with non-trivial hyperscalars. To do so, we consider a simple theory with just one vector multiplet and one hypermultiplet, n_V = n_H = 1. More specifically, let the hypermultiplet be the universal hypermultiplet [24].
The scalar fields in this multiplet, denoted (φ, a, ξ⁰, ξ₀), parametrize the quaternionic space SU(2,1)/U(2), with metric (4.1), where ⟨ξ|dξ⟩ ≡ ξ⁰dξ₀ − ξ₀dξ⁰, and the corresponding SU(2) connection has components (4.2). As for the vector multiplet, we choose the special geometric model specified by the prepotential F = −iχ⁰χ¹, with the parametrization χ⁰ = 1, χ¹ = Z. Then it is easy to obtain from (2.4) the Kähler potential K = −log[4 Re(Z)] and the Kähler metric G_{ZZ̄} = (Z + Z̄)⁻², while the period matrix N_{ΛΣ}, giving the scalar–vector couplings, is calculated from eq. (2.6). Using the definition (3.2), the dependence of the R section on the I section for this special geometric model is readily obtained, so that the complex scalar is given by (4.7), and
1/(2|X|²) = ⟨R|I⟩ = 2(I⁰I¹ + I₀I₁). (4.8)
Since the theory includes two vector fields, we can choose to gauge up to two isometries of the metric H_uv. We choose to gauge the commuting isometries generated by the Killing vectors (4.9), where the k_Λ and c are constants. This means that we are gauging the R group of translations along a with the combination A^Λk_Λ, and the U(1) group of rotations in the ξ⁰–ξ₀ plane with the field A⁰. (4.9) is a subcase of the Killing vector considered in [6], and corresponds to a particular choice of the parameters in eqs. (3.8) and (3.9) of [6]. The triholomorphic moment maps associated with the Killing vectors (4.9) can be obtained from (2.10) and are given in (4.11). With these choices, the scalar potential (2.12) takes the form (4.12).
For simplicity we will look for solutions with R⁰ = R¹ = I₀ = I₁ = 0, which implies from (4.7) that the scalar Z is real, and from (3.4) that the gauge fields are in a purely magnetic configuration. From eq. (3.8) it follows that ω is a closed 1-form, which can be reabsorbed by a redefinition of the coordinate t, leading to static solutions. This choice also implies that eq. (3.13) is trivially satisfied. We will also take the hyperscalar a to be constant and ξ⁰ = ξ₀ = 0. Note that the scalar potential (4.12) then has a critical point at Z = −k₀/k₁ and e^{2φ} = −c/k₀, with V_crit = 3k₁g²c²/(8k₀). Since the absence of ghost modes requires Z > 0, one needs k₀/k₁ < 0 (and of course c/k₀ < 0) to have a critical point of the potential.
With the choice ξ⁰ = ξ₀ = 0, the moment maps (4.11) simplify to (4.13). Eq. (3.5) then implies dV³ = 0, hence there exists locally a function r (that we will use as a coordinate) such that
V³ = dr. (4.14)
We impose radial symmetry on the solution by requiring the scalar fields Z, φ and the sections I^Λ to depend only on r. The φ, ξ⁰ and ξ₀ components of equation (3.15) then reduce to the constraint (4.15), while the a component becomes (4.16), where the prime stands for a derivative with respect to r. If we now introduce the remaining coordinates ϑ and ϕ, where at this stage f is an arbitrary function of ϑ, the remaining components of eq. (3.5) are satisfied provided that the conditions (4.18) and (4.19) are met. From (4.19) and the constraint (4.15) we also obtain the effective connections Ã^Λ. Finally, (3.10) leads to the two equations (4.21), while (3.12) is automatically satisfied, since we obtained F̃^Λ as the exterior derivative of the effective connection Ã^Λ. Equation (4.16) allows us to use the chain rule to trade the coordinate r for φ in (4.21), which after summing over Λ becomes (4.22). If we impose the condition (4.23), this equation is solved by (4.24), where α is an integration constant.
Substituting these expressions back into (4.21) for Λ = 0 or Λ = 1, we obtain an expression (4.25) for the function W(r). The expression (4.25) is also a solution of equation (4.18), which is non-trivial, proving the constraint (4.23) to be consistent with all the equations. From (4.25) we also conclude that f″(ϑ)/f(ϑ) must be a positive constant; therefore f(ϑ) takes the general form
f(ϑ) = γ sinh(δϑ + ρ), (4.26)
where γ, δ and ρ are constants. We can now go back to the coordinate r by solving equation (4.16) for the dependence of φ on r, eq. (4.27), where β is yet another integration constant. Note that all the integration constants can be reabsorbed by a coordinate change, which allows us to write the complete solution in the form (4.29)–(4.31).
We start the analysis of the solution by noting that it has no free parameters, since all the constants appearing in (4.29)–(4.31) are completely determined by the choice of gauging. Observe also that in order to maintain the correct signature and to have Z > 0, which is required for a real Kähler potential, we have to impose k₁c > 0. The metric (4.29) is singular at r = 0 and, if k₀c < 0, also at r = −k₀/c. The singularity at r = r_S ≡ 0 is a true curvature singularity, while the one at r = r_H ≡ −k₀/c is not, and corresponds instead to a Killing horizon, which always covers the curvature singularity.
With the metric written in the form (4.29), it is immediate to see that in the asymptotic limit r → +∞ it reduces to (4.32), which is manifestly conformally equivalent to AdS₂ × H². Note that (4.32) is very similar to hyperscaling-violating geometries, which in d dimensions take the form (4.33). Here, z is the dynamical critical exponent and θ the so-called hyperscaling violation exponent. Under the scaling r → r/λ, x_i → λx_i, t → λ^z t, (4.33) is not invariant, but transforms covariantly, ds → λ^{θ/(d−2)} ds. Geometries of the form (4.33) have been instrumental in recent applications of AdS/CFT to condensed matter physics, cf. e.g. [25]. (4.32) actually exhibits a scaling behaviour similar to that of (4.33). To see this, introduce new coordinates x, y on H² according to (4.34), which casts (4.32) into the form (4.35). Under this scaling, (4.35) transforms as ds → ds/λ.
In the near-horizon limit r → r_H, after the coordinate change t → t/4, the metric takes the form (4.37), which is AdS₂ × H², while the scalar fields take the values (4.38). The magnetic charges are given by (4.39), yielding the magnetic charge densities (4.40). The Bekenstein–Hawking entropy density can then be written as in (4.41).
Attractor mechanism
In [18] the authors presented a generalization of the well-known black hole attractor mechanism [12][13][14][15][16] to extremal static black holes in N = 2, d = 4 gauged supergravity coupled to abelian vector multiplets. In this section we closely follow their argument, generalizing it by taking into account the presence of gauged hypermultiplets. As in [18], we make no assumption on the form of the scalar potential, on the vector kinetic matrix N, or on the scalar manifolds, so that our results are valid not only for N = 2 supergravity, but for any theory described by an action of the form (2.11).
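Eq. (4.33) itself is not reproduced above. For reference, one standard form of a d-dimensional hyperscaling-violating metric that reproduces the quoted covariance ds → λ^{θ/(d−2)} ds (up to an inversion of the radial coordinate, which accounts for the r → r/λ convention used in the text) is the following sketch:

% One common convention; here the scaling is r -> \lambda r,
% x^i -> \lambda x^i, t -> \lambda^z t, under which ds -> \lambda^{\theta/(d-2)} ds.
ds_d^2 \;=\; r^{\frac{2\theta}{d-2}}
       \left( -\frac{dt^2}{r^{2z}} \;+\; \frac{dr^2 + dx_i\,dx^i}{r^2} \right)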
The equations of motion obtained from the variation of (2.11) are eqs. (5.1)–(5.3), where the dual field strengths are given by the definition (5.6) and the second covariant derivatives on the scalars act in the standard way. The metric for the most general static extremal black hole background with flat, spherical or hyperbolic horizon can be written in the form (5.8). We require that all the fields be invariant under the symmetries of the metric, namely the time-translation isometry generated by ∂_t and the spatial isometries generated by the Killing vectors of the horizon geometry. The scalar fields can then depend only on the radial coordinate r, and the requirement of invariance of the field strength 2-forms F^Λ leads to the form (5.10), where p^Λ(r) is a generic function of r. The Bianchi identities imply that the p^Λ must be constant. With field strengths of this form, it is always possible to choose a gauge in which the gauge potential 1-forms can be written as in (5.11). The r-component of the Maxwell equations (5.2) then reduces to a constraint, while the ϑ-component is automatically satisfied, and the ϕ-component gives eq. (5.13) for every value of Λ. Finally, if we define a function e_Λ(r) such that F_{Λϑϕ} = 4πe_Λ(r)f_κ(ϑ), the t-component of the Maxwell equations fixes the radial dependence of e_Λ. The quantities p^Λ and e_Λ(r) are the magnetic and electric charge densities inside the 2-surfaces S_r of constant r and t.
The non-vanishing components of T_µν are given in terms of Ṽ_BH, the so-called black hole potential, which however, unlike the usual definition, has an explicit dependence on r through the varying electric charges e_Λ. It is also straightforward, using the expressions (5.13), (5.19) and the definition (5.6), to verify equations (5.35)–(5.36), with Z^i_H ≡ lim_{r→0} Z^i and q^u_H ≡ lim_{r→0} q^u. Because of equations (5.35)–(5.36), V_eff is extremized on the horizon by all the scalar fields of the theory. The values Z^i_H, q^u_H of the scalars on the horizon are then determined by the extremization conditions (5.42), and the Bekenstein–Hawking entropy density is given by the critical value of V_eff, eq. (5.43). For a given theory this critical value, and thus also the entropy, depends only on the charges (on the horizon) p^Λ and e_Λ(0), so that the attractor mechanism still works. On the other hand, Z^i_H and q^u_H may not be uniquely determined, since in general V_eff may have flat directions. The limit V → 0 of V_eff exists only for κ = 1, in which case V_eff → (8π)²V_BH and one recovers the attractor mechanism of ungauged supergravity. The fact that this limit does not exist for κ = 0, −1 is not surprising, since flat or hyperbolic horizon geometries are incompatible with vanishing cosmological constant. For the black hole we presented in section 4, the fact that the entropy depends only on the charges is not really surprising, since the solution has no free parameters at all. It is however straightforward to verify that the near-horizon geometry does indeed extremize the effective potential V_eff; in particular, one has on the horizon the relation (5.44).
Final remarks
In this paper, we considered N = 2 supergravity in four dimensions, coupled to vector- and hypermultiplets, where abelian isometries of the quaternionic Kähler manifold are gauged. In the first part, we analytically constructed a magnetically charged supersymmetric black hole solution of this theory for the case of just one vector multiplet with prepotential F = −iχ⁰χ¹, and the universal hypermultiplet.
This black hole has a running dilaton, and interpolates between AdS₂ × H² at the horizon and a hyperscaling-violating type geometry at infinity, conformal to AdS₂ × H². To the best of our knowledge, this represents the first example of an analytic genuine BPS black hole in gauged supergravity with nontrivial hyperscalars; previously known solutions of this type were only constructed numerically [6]. Diverging scalar fields of the form (4.31) are common in two and three dimensions, but are sometimes regarded as a sign of pathology in four or higher dimensions. However, similarly to the linear dilaton black holes of [26], our solutions have finite entropy, magnetic charges and curvature at large r, in spite of the diverging scalars, and should thus be regarded as physically meaningful. In any case, it may be interesting to consider more general models and gaugings, and to look for asymptotically AdS black holes with running hyperscalars, which might be more relevant for gauge/gravity duality applications. Unfortunately, the equations of [5] become quite involved as soon as the complexity of the model increases, but perhaps our solution may serve as a starting point for solving the equations of [5] analytically in a more complicated setting. We hope to come back to this point in a future publication.
In the second part of the paper, we extended the work of [18] on black hole attractors in gauged supergravity to the case where hypermultiplets are also present. The attractors were shown to be governed by an effective potential V_eff, which is extremized on the horizon by all the scalar fields of the theory. Moreover, the entropy is given by the critical value of V_eff, and in the limit of vanishing scalar potential, V_eff reduces (up to a prefactor) to the usual black hole potential. The resulting attractor equations (5.42) do not make use of supersymmetry; they are valid for any static extremal black hole. It would be interesting to analyze them for some specific models, for instance the ones worked out in [27] and considered also in [6], which arise from M-theory compactifications.
Variations of thermophysical properties and heat transfer performance of nanoparticle-enhanced ionic liquids
The ionic liquid (IL) 1-ethyl-3-methylimidazolium acetate ([EMIm]Ac) was investigated as a promising absorbent for absorption refrigeration. To improve the thermal conductivity of pure [EMIm]Ac, IL-based nanofluids (ionanofluids, INFs) were prepared by adding graphene nanoplatelets (GNPs). The thermal stability of the IL and INFs was analysed. The variations of the thermal conductivity, viscosity and specific heat capacity resulting from the addition of the GNPs were then measured over a wide range of temperatures and mass fractions. The measured data were fitted with appropriate equations and compared with the corresponding classical models. The results revealed that the IL and INFs were thermally stable over the measurement range. The thermal conductivity greatly increased with increasing mass fraction, while it changed only slightly with increasing temperature. A maximum enhancement in thermal conductivity of 43.2% was observed at a temperature of 373.15 K for the INF with a mass fraction of 5%. The numerical results revealed that the dispersion of the GNPs in the pure IL effectively improved the local heat transfer coefficient by up to 28.6%.
Introduction
Environmental concerns and the global energy crisis have made absorption refrigeration, which has the advantages of reduced energy consumption, environmental friendliness and high efficiency, a focus of international research [1,2]. In an absorption refrigerator, the absorber takes in the refrigerant from the evaporator and thereafter releases it to the condenser in the desorber, accompanied by exothermic and endothermic effects. The conventional working pairs are water/lithium bromide (H₂O/LiBr) [3] and ammonia/water (NH₃/H₂O) [4]. Unfortunately, their broader industrial application has been hindered by the inherent defects of crystallization, corrosion, high working pressure and toxicity [5]. Therefore, the discovery and development of new working pairs is crucial.
Over recent years, ionic liquids (ILs) have been widely studied as new environmentally friendly solvents for various applications [6][7][8][9], owing to their negligible volatility, high gas solubility and good thermal stability [10]. ILs have attracted remarkable attention in the field of absorption refrigeration since Shiflett & Yokozeki [10] first proposed the use of ILs as absorbents for the refrigerant. To date, research into ILs as absorbents for absorption refrigeration has mainly focused on the study of their physico-chemical properties and, thus, the selection of potential working pairs for industrial applications. Cao & Mu [11] reported that the cation dependence of the water absorption ability of ILs can typically be ranked as imidazolium > pyridinium > phosphonium; the water sorption capacity, rate and difficulty to reach equilibrium at 23°C and 52% relative humidity were likewise ranked for the investigated ILs [6]. Su et al. [12] studied the absorption refrigeration cycle using a new working pair consisting of an IL and water. The results indicated that, compared with the typical working pair of H₂O/LiBr, the single-stage absorption cycle using aqueous 1-ethyl-3-methylimidazolium acetate ([EMIm]Ac) exhibited almost the same coefficient of performance at a generation temperature of 100°C and a slightly higher performance at higher temperatures. These results demonstrated the feasibility of applying the working pair of [EMIm]Ac/H₂O to absorption refrigeration.
Current research on [EMIm]Ac is confined to theoretical refrigeration performance analysis based on the enthalpy–humidity diagram [12]. However, during a practical absorption refrigeration cycle, the processes of absorption by and desorption from the absorbent are often performed under cooling and heating, respectively. The cooling and heating efficiency directly affects the absorption and desorption efficiency. In particular, good heat transfer performance is required of [EMIm]Ac on account of the performance features reported in [12]. He et al. [13], however, measured the thermal conductivities of a series of ILs, including [P14,6,6,6][DecO], in the temperature range of 283–373 K; the thermal conductivities of this series of ILs were found to lie within the range of 0.147–0.162 W m⁻¹ K⁻¹. It is apparent from these reports that the thermal conductivities of ILs are generally low. Therefore, it would be of great value to enhance the thermal conductivity of ILs such as [EMIm]Ac to allow their use as absorbents.
In recent years, researchers have been able to increase the thermal conductivity of ILs by adding nanophases to form IL-based nanofluids (ionanofluids, INFs) [15,16]. Commonly used nanophases have included silica, nanosized carbons, metals, metal oxides, nitrides, carbides and graphene [17,18]. Among these, graphene is a novel carbon-based nanomaterial with excellent thermal, electronic and mechanical properties. The thermal conductivity of graphene is as high as approximately 5000 W m⁻¹ K⁻¹, which makes it a very promising nanoadditive for nanofluids [19] (table 1).
In the present study, the use of the promising absorbent [EMIm]Ac in an absorption refrigeration cycle was evaluated. The thermal stability, viscosity, thermal conductivity and specific heat capacity were measured. On account of the low thermal conductivity of [EMIm]Ac, graphene nanoplatelets (GNPs) were dispersed in the IL to obtain INFs. The variation of the thermal conductivity, viscosity and specific heat capacity of the INFs was analysed. The measured data were fitted with equations and compared with the corresponding classical models. Finally, considering that the most frequently used flow mode in the absorber and desorber units is falling film flow [20,21], the variation of the falling film heat transfer coefficient of the nanoparticle-enhanced IL in a horizontal tube was numerically evaluated.
Graphene nanoplatelets
The GNPs were purchased from Chengdu Organic Chemicals Co. Ltd, Chinese Academy of Sciences. According to the manufacturer's data, the GNPs exhibit a thermal conductivity of 3000 W m⁻¹ K⁻¹, a diameter of 5–10 µm, a thickness of 4–20 nm and a density of 0.6 g cm⁻³, and consist of fewer than 20 layers. A thermal field-emission scanning electron microscopy (JSM-7001F, JEOL, Japan) image of the GNPs is presented in figure 2. It can be seen that the GNPs possess the expected sheet structure.
Measurement methods
Thermal conductivities were evaluated using a laser thermal conductivity meter (LFA 467, NETZSCH, Germany) by the flash method over the temperature range of 293.15–373.15 K. Viscosities were measured under steady-state shear at a shear rate of 500 s⁻¹, selected to avoid the Taylor-vortex regime. Specific heat capacities were determined using a differential scanning calorimeter (DSC 214 Polyma, NETZSCH, Germany) based on the sapphire method over the temperature range of 303.15–383.15 K; this was calibrated using sapphire provided by the supplier, with a relative uncertainty of 0.5%.
Thermal stabilities were analysed using the same differential scanning calorimeter as above; the samples were heated from −50 to 350°C at a rate of 10°C min⁻¹ under a nitrogen atmosphere. Densities were measured using a densimeter (DMA 5000M-Lovis 200M, Anton Paar Co., Austria), calibrated using air and ultrapure water provided by Anton Paar GmbH and checked against the values reported in the densimeter instruction manual. The uncertainty found was less than ±1 × 10⁻⁵ g cm⁻³, and the accuracy is 5 × 10⁻⁶ g cm⁻³.
Numerical approach and reliability validation
Considering that the most frequently used flow mode in the absorber and desorber units is falling film flow on a horizontal tube, and exploiting the symmetry of the configuration, the physical model of falling film flow on half of the horizontal tube was built in Gambit, as depicted in figure 3, with the boundary conditions labelled. A slot with a width of 3.0 mm was used as the liquid distributor; this was set at the left-most part of the solution domain, and the distributor inlet was set as a velocity inlet. The liquid and gas phases were water and air, respectively. The simulated water entered from the distributor hole and then flowed around the smooth tube in the air atmosphere at a temperature of 298 K and a pressure of 101.325 kPa. The solution domain was discretized by quadrilateral elements, with refinement of the areas near the tube and the liquid inlet. The volume of fluid (VOF) model was selected for the simulations, which were performed using the Fluent software (v. 6.3.26). The governing equations (continuity, momentum and energy) are as given in [25], where u is the velocity vector, ρg is the gravitational force, F is the external body force due to surface tension and T is the temperature.
Figure 4 shows a comparison of the local heat transfer coefficient of water between the present results and the reference results [25]. The tube diameter D and the distribution height were 19.05 mm and 6.3 mm, respectively. The heat flux density q was 47.3 kW m⁻², and the liquid film flow rate on one side of the tube per unit length Γ was 0.168 kg m⁻¹ s⁻¹. From the curves, it can be seen that the obtained numerical data are in good agreement with the reference data.
Results and discussion
Thermal stability of ionanofluids
Viscosity of ionanofluids
The shear stress is plotted in figure 6a as a function of shear rate for the samples within the shear-rate range of 1–500 s⁻¹ at 293.15 K, from which it can be found that the behaviour of the INFs was essentially Newtonian when the mass fraction of GNPs was lower than 0.5%, whereas deviations from Newtonian behaviour appeared at higher loadings. The viscosities as a function of temperature are shown in figure 6b: in the studied temperature range, the viscosities of the INFs sharply decreased with increasing temperature. Figure 6c shows the viscosity as a function of the mass fraction. It can be seen that the viscosities of the INFs were lower than that of the base solution when the mass fraction was less than 0.5%. This behaviour mainly originated from the dominant self-lubrication effect of the GNPs at lower temperatures and mass fractions [26]. Owing to the higher viscosity of the IL, the slight change in viscosity was not obvious in the curves at mass fractions exceeding 1%. In addition, the effect of temperature on viscosity became weaker with increasing temperature.
A maximum increase in viscosity of approximately 27.7% relative to the base liquid (BL) was observed at a temperature of 373.15 K and a mass fraction of 5%, within the scope of this experiment. The relationship between the natural logarithm of the viscosity and the temperature was fitted using the Vogel–Fulcher–Tammann (VFT) equation (equation (3.1)). Table 3 summarizes the values of the fitting parameters A₀, A₁ and A₂ for the various mass fractions. In addition, the deviation of the viscosities of the INFs from that of the BL, (η_INF − η_BL)/η_BL, was analysed, as shown in figure 7. The deviation was found to fluctuate within approximately 20%, depending on the temperature and mass fraction; it can be concluded that both the mass fraction and the temperature had little effect on the viscosity deviation. Furthermore, the measured values of viscosity were compared with the classical models of Einstein, Brinkman and Batchelor [27], and a correlation was suggested by modifying the factor in the original Einstein model for spheres from a value of 2.5 to a fitted value of 1.1, as shown in figure 8; this correlation represents the GNPs in our study well, except at the lowest mass fractions (with a maximum deviation of 9.5%). Here η_INF is the viscosity of the INF, η_BL is the viscosity of the BL, φ is the particle volume fraction calculated using equation (3.6), ω is the mass fraction of the nanoplatelets, and ρ_NP and ρ_INF are the densities of the nanoparticles and the INF, respectively. Overall, the viscosity of the IL itself is high; the addition of a small amount of GNPs did not increase the viscosity and, on the contrary, slightly decreased the viscosity of the INFs. It is also worth noting that heating dramatically reduced the viscosity of the INFs.
Thermal conductivity of ionanofluids
The thermal conductivities of the BL and of the INFs with mass fractions of 0.05, 0.1, 0.3, 0.5, 1, 2, 3, 4 and 5%, as a function of temperature, are shown in figure 9a. In the studied temperature range, the temperature exerted little influence on the thermal conductivity of the INFs, whereas the thermal conductivity increased significantly with increasing mass fraction; this can also be observed in figure 9b. These results demonstrate that the addition of the GNPs significantly increased the thermal conductivity of the IL. However, it is worth noting that the addition of excess GNPs was not conducive to increasing the thermal conductivity, as the dispersion of the GNPs in the IL gradually deteriorated with increasing mass fraction. In addition, a linear equation (equation (3.7)) was used to fit the experimentally measured thermal conductivity data; table 4 summarizes the values of the fitting parameters B₀ and B₁ for the various mass fractions. The deviation of the thermal conductivities of the INFs from that of the BL, (K_INF − K_BL)/K_BL, was also analysed, as shown in figure 10. It is apparent that the temperature had little effect on the thermal conductivity; the maximum increase observed in the thermal conductivity of the INFs was 43.2%, at 373.15 K for the mass fraction of 5%. The measured values were also compared with the classical Maxwell model (equation (3.8)), as shown in figure 11, where K_INF is the thermal conductivity of the INF, K_BL is the thermal conductivity of the BL, K_NP is the thermal conductivity of the nanoplatelets and φ is the particle volume fraction calculated according to equation (3.6). The maximum error between the measured values and the data predicted by the Maxwell model was 15.7%.
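Equations (3.1) and (3.8) are not written out above. Standard forms consistent with the fit parameters named in the text are the VFT law, ln η = A₀ + A₁/(T − A₂), and the Maxwell model for a dilute suspension, K_INF/K_BL = (K_NP + 2K_BL + 2φ(K_NP − K_BL)) / (K_NP + 2K_BL − φ(K_NP − K_BL)). A sketch under those assumptions follows; the parameter values are invented for illustration only.

import math

def vft_viscosity(T, A0, A1, A2):
    """Viscosity from the Vogel-Fulcher-Tammann equation (standard form assumed)."""
    return math.exp(A0 + A1 / (T - A2))

def maxwell_k(k_bl, k_np, phi):
    """Maxwell-model thermal conductivity of the ionanofluid."""
    num = k_np + 2 * k_bl + 2 * phi * (k_np - k_bl)
    den = k_np + 2 * k_bl - phi * (k_np - k_bl)
    return k_bl * num / den

print(vft_viscosity(353.15, -3.0, 1200.0, 180.0))  # illustrative value in mPa s
print(maxwell_k(0.19, 3000.0, 0.01))  # ~3% enhancement at phi = 1% (K_NP >> K_BL)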
Specific heat capacity of ionanofluids
The specific heat capacities of the BL and of the INFs with mass fractions of 0.05, 0.1, 0.3, 0.5, 1, 2, 3, 4 and 5%, as a function of temperature, are shown in figure 12a. The specific heat capacity of the INFs increased linearly with increasing temperature; therefore, a linear equation (equation (3.9)) was used to fit the experimentally measured specific heat capacity data. Table 5 summarizes the values of the fitting parameters C₀ and C₁ for the various mass fractions. The influence of the mass fraction on the specific heat capacity is shown in figure 12b: the specific heat capacity decreased with increasing mass fraction, owing to the low specific heat capacity of the GNPs. A maximum reduction of approximately 3.62% was observed at a temperature of 363.15 K and a mass fraction of 5%, within the scope of the experiment. In an absorption refrigeration cycle, a lower specific heat capacity is beneficial for the temperature variation of the absorbent in both the absorber and desorber units. The deviation of the specific heat capacities of the INFs from that of the BL, (C_P,INF − C_P,BL)/C_P,BL, was also analysed, as shown in figure 13. In general, this deviation increased with increasing mass fraction, but no obvious trend was evident as the temperature was varied. Furthermore, the relative specific heat capacity C_P,INF/C_P,BL at 303.15 K was compared with an existing theoretical model [29], as shown in figure 14, where C_P,INF is the specific heat capacity of the INF, C_P,NP that of the nanoplatelets, C_P,BL that of the BL, and ω is the mass fraction of the nanoplatelets. As can be seen from figure 14, the reliability of the model was very high: the maximum error between the measured values and the data predicted by the model was only 1.3%.
Figure 13. Deviation of the specific heat capacities of the INFs from that of the BL.
The results demonstrated that the addition of GNPs clearly increased the thermal conductivity of the INFs, while decreasing the specific heat capacity and, at lower mass fractions, the viscosity. The viscosity and specific heat capacity sharply decreased and increased, respectively, with increasing temperature, while the thermal conductivity changed only slightly. Within the scope of the experiment, the maximum increase in viscosity of approximately 27.7% for the INFs compared with the BL was achieved at a temperature of 373.15 K and a mass fraction of 5%; a correlation was suggested by modifying the factor in the original Einstein model for spheres from a value of 2.5 to a fitted value of 1.1, with a maximum deviation of 9.5%. The maximum increase in thermal conductivity, of approximately 43.3%, occurred at 373.15 K for the INF with a mass fraction of 5%; the error between the thermal conductivity measurements and the predictions of the Maxwell model was within 15.7%. The maximum reduction in specific heat capacity, of approximately 3.62%, was observed at a temperature of 363.15 K and a mass fraction of 5%; the error between the specific heat capacity measurements and the predictions of the existing model was within 1.3%. Finally, it was found that the local heat transfer coefficient increased by 28.6% compared with the pure IL when the INF with a mass fraction of 5% was used as the absorbent.
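The equation of the model of [29] is likewise not reproduced above. A simple mass-weighted mixing rule, consistent with the symbol definitions in the text (ω being the mass fraction of the nanoplatelets), is assumed in the sketch below; the property values are rough, illustrative figures, not measured data.

def cp_mixing(cp_bl, cp_np, omega):
    """Specific heat of the INF as a mass-weighted average of BL and GNPs."""
    return omega * cp_np + (1.0 - omega) * cp_bl

# Graphite-like heat capacity (~0.7 J g^-1 K^-1) well below an assumed IL
# value (~2.0): a 5% loading lowers C_P by ~3%, the order of the reported 3.62%.
print(cp_mixing(2.0, 0.7, 0.05))  # 1.935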
Genetic Determinants for Bacterial Osteomyelitis: A Focused Systematic Review of Published Literature
Background: Osteomyelitis is an inflammatory process characterized by progressive bone destruction. Chronic bacterial osteomyelitis, moreover, is regarded as a difficult-to-treat clinical entity due to its long-standing course and frequent infection recurrence. However, the role of genetic factors in the occurrence and development of bacterial osteomyelitis is poorly understood. Methods: We performed a systematic review to assess the frequencies of individual alleles and genotypes of single-nucleotide polymorphisms (SNPs) among patients with bacterial osteomyelitis and healthy people, to identify whether the SNPs are associated with the risk of developing bacterial osteomyelitis. Then, Gene Ontology and Kyoto Encyclopedia of Genes and Genomes analyses were performed to identify the potential biological effects of these genes on the pathogenesis of bacterial osteomyelitis. Results: Fourteen eligible studies containing 25 genes were analyzed. In this review, we found that SNPs in IL1B, IL6, IL4, IL10, IL12B, IL1A, IFNG, TNF, PTGS2, CTSG, vitamin D receptor (VDR), MMP1, PLAT, and BAX increased the risk of bacterial osteomyelitis, whereas those in IL1RN and TLR2 could protect against osteomyelitis. The bioinformatic analysis indicated that these osteomyelitis-related genes were mainly enriched in inflammatory reaction pathways, suggesting that inflammation plays a vital role in the development of bacterial osteomyelitis. Furthermore, functional annotation for 25 SNPs in 17 significant genes was performed using the RegulomeDB and NCBI databases. Four SNPs (rs1143627, rs16944, rs2430561, and rs2070874) had lower RegulomeDB scores, implying significant biological function. Conclusion: We systematically summarized several SNPs linked to bacterial osteomyelitis and found that these gene polymorphisms could constitute a genetic factor for bacterial osteomyelitis. Further large-scale cohort studies are needed to enhance our understanding of the development of osteomyelitis, so as to provide earlier, individualized prevention and intervention for patients with osteomyelitis in clinical practice.
INTRODUCTION
Osteomyelitis is an inflammatory process characterized by progressive bone destruction. It is a bone infection mainly caused by microorganism invasion, and Staphylococcus aureus is the bacterial pathogen most frequently isolated from patients with posttraumatic and hematogenous osteomyelitis (Lew and Waldvogel, 2004; Olson and Horswill, 2013). According to etiology, osteomyelitis can commonly be divided into three types: posttraumatic osteomyelitis, hematogenous osteomyelitis, and osteomyelitis caused by vascular insufficiency (Lew and Waldvogel, 2004). Posttraumatic osteomyelitis predominantly occurs following open traumatic fracture, skeletal surgery, or prosthetic joint replacement. Meanwhile, hematogenous osteomyelitis typically occurs in children and is characterized by the spread of bacteria from a lesion to the bone through the bloodstream. Osteomyelitis secondary to vascular insufficiency particularly occurs in patients with diabetes or diabetic foot infection (Lew and Waldvogel, 2004). Chronic osteomyelitis is regarded as a difficult-to-treat clinical entity due to its long-standing course and frequent infection recurrence, with a high risk of morbidity and mortality (Valour et al., 2014).
Patients with chronic osteomyelitis have a higher incidence of psychosocial impairment (Tseng et al., 2014) and bear a substantial healthcare and economic burden (Kapadia et al., 2016). The pathogenesis of osteomyelitis is linked to both environmental and genetic factors. Several lines of evidence suggest that genetic predisposition plays an essential role in the pathogenesis of osteomyelitis (Chen et al., 2017; Paludo et al., 2017). With the rapid development and application of sequencing and genetic association analysis for complicated diseases, genetic variants that potentially contribute to the occurrence of osteomyelitis have been widely investigated. Single-nucleotide polymorphisms (SNPs) of DNA sequences are common in the population, and many SNPs in genes related to the occurrence of osteomyelitis have been reported. For example, TaqI (rs731236) and FokI (rs2228570) of the vitamin D receptor (VDR) gene may contribute to susceptibility to chronic osteomyelitis (Jiang et al., 2016). This review was conducted to examine the individual allele frequencies or genotypes of gene variants among patients with osteomyelitis, to identify whether gene polymorphisms are associated with the probability of developing osteomyelitis.

MATERIALS AND METHODS

This systematic review was performed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2009).

Literature Search Strategy

A systematic literature search was conducted using the PubMed, EMBASE, and Web of Science databases. The following terms, "genetic polymorphism," "genetic variants," "DNA polymorphism," "single-nucleotide polymorphism," "SNP," "osteomyelitis," and "bone infection," were used to search all eligible studies on the relationship between SNPs and the risk of osteomyelitis published until the end of December 2020. Additional studies were identified by screening the reference lists of the included studies. Only English-language articles were included in this review. Detailed search strategies are provided in Supplementary Table 1.

Inclusion and Exclusion Criteria

The selected studies fulfilled the following inclusion criteria:
• Bacterial osteomyelitis was definitively diagnosed according to standard criteria.
• The study reported the association between SNPs and susceptibility to osteomyelitis.
• Sufficient data could be extracted from the study.
• The study was a case-control or cohort study in humans.

The exclusion criteria were as follows:
• The study duplicated or overlapped with another included study.
• Case reports, letters, meta-analyses, reviews, and studies on animals were excluded.
• The study reported the correlation between other types of genetic polymorphisms and osteomyelitis.
• The study reported the association of gene polymorphisms with non-bacterial osteomyelitis.

We screened the literature by title and abstract. For the abstracts that we could not fully assess, we obtained the full text to complete the assessment before deciding to include or exclude them. Two reviewers independently screened the articles and discussed uncertain publications to resolve disagreements.

Quality Assessment

Two reviewers (XP.X. and JB.L.) independently conducted a methodological quality assessment of the studies included in the review based on the Newcastle-Ottawa Scale (NOS; Stang, 2010). The NOS scores ranged from zero to nine stars.
Studies with NOS scores below six stars were excluded from the review, and those with a score of at least six stars were considered to be of good quality. We resolved disagreements by discussion or, if necessary, consultation with the third reviewer (F.G.).

Biological Function Annotation of Targeted Genes and SNPs

The protein-protein interaction network of the genes included in the review was constructed using the Search Tool for the Retrieval of Interacting Genes (STRING, version 11.0) database (https://string-db.org/). Hub genes were identified using Cytoscape (version 3.4.0). Gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed to analyze the cell components, biological processes, and pathway enrichment of these osteomyelitis-related genes based on the DAVID database (version 6.8, https://david.ncifcrf.gov/tools.jsp/). P < 0.05 denoted statistical significance, and a term was additionally required to contain ≥3 enriched genes to be considered significant.

Data Extraction and Statistical Analysis

The data extracted from each included study were as follows: first author, year of publication, nation, study design, number of cases and controls, name of the gene SNP, distribution of genotype and allele frequencies in cases and controls, genotyping method, and Hardy-Weinberg equilibrium (HWE) in controls. Two reviewers independently extracted these data from each included study. The statistics taken from each study were as follows: odds ratios (ORs), 95% confidence intervals (CIs), p-values, and genetic models. P < 0.05 denoted statistical significance.

Characteristics of the Included Studies

Based on our literature search strategies, 3,905 publications were obtained from the three databases (Supplementary Table 1), of which 891 were removed as duplicates. Then, 2,983 of the remaining publications were excluded by browsing their titles and abstracts. The full texts of 31 publications were obtained, of which 17 were excluded for the following reasons: nine studies focused on chronic non-bacterial osteomyelitis; two studies did not involve SNPs; two studies were not case-control studies; in three studies, the controls were not healthy individuals; and one study had inconsistent data. Eventually, 14 studies fulfilled our inclusion criteria (including 1,248 cases and 1,712 controls). The selection process is detailed in the PRISMA flowchart (Moher et al., 2009) shown in Figure 1. Among the 14 included studies, two reported associations between SNPs and posttraumatic bacterial osteomyelitis (Wang et al., 2017; Jiang et al., 2020), and two reported the relationship between SNPs and hematogenous bacterial osteomyelitis (Osman et al., 2015, 2016). Seven studies reported associations between SNPs and all three types of osteomyelitis (posttraumatic, hematogenous, and vascular insufficiency-related bacterial osteomyelitis; Asensi et al., 2003; Montes et al., 2006; Valle-Garay et al., 2013; Jiang et al., 2016; Hou et al., 2018; Perez-Is et al., 2019; Zhao et al., 2020). The other three studies reported the relationship of SNPs with unspecified types of bacterial osteomyelitis (Ocana et al., 2007; Tsezou et al., 2008; Kong et al., 2017).
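For readers reproducing the extracted statistics, the sketch below shows one standard way to obtain an OR with a Wald 95% CI from allele counts. This is a generic illustration with invented counts, not a recomputation of any included study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a/b: risk and reference allele counts in cases;
    # c/d: the same counts in controls.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(120, 80, 90, 110))      # hypothetical allele counts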
This systematic review involved 1,248 patients with bacterial osteomyelitis, including 719 patients with posttraumatic bacterial osteomyelitis, 190 patients with hematogenous bacterial osteomyelitis, 98 patients with vascular insufficiency-related bacterial osteomyelitis, and 241 patients with unspecified types of bacterial osteomyelitis. The characteristics of the selected studies are summarized in Table 1. The quality of the included studies was evaluated using the NOS, and the scores are also presented in Table 1. It should be noted that the five articles from the same authors shared cases and controls, as did the two articles from the team of Asensi et al. (Asensi et al., 2003; Perez-Is et al., 2019). All genotyped gene SNPs in 11 studies complied with the Hardy-Weinberg equilibrium (HWE) for healthy controls (p > 0.05), and three studies did not report HWE results.

Summary of the Outcomes

From the included studies, we collected 25 SNPs in 17 significant genes (Table 2), and we classified the protein products encoded by these 17 genes into the following two categories.

Cytokine-Related Proteins

In seven case-control studies, 14 SNPs in nine genes encoding cytokine-related proteins (IL-1α, IL-1β, IL-1RN, IL-4, IL-6, IL-10, IL-12β, IFN-γ, and TNF-α) were investigated. Recently, Jiang et al. (2020) reported a significant difference in the genotype distributions of the IL1B SNPs rs16944 and rs1143627 between patients with osteomyelitis and controls and revealed that the GG and AG genotypes of rs16944 and the TT and CT genotypes of rs1143627 were linked to the risk of posttraumatic osteomyelitis. Additionally, the CC and CG genotypes of rs1800796, located in the IL6 gene, were associated with an increased risk of posttraumatic osteomyelitis. In contrast, the mutant C allele and CT genotype of rs4251961 within IL1RN were considered protective factors against posttraumatic osteomyelitis. Additionally, Asensi et al. (2003) revealed that the TT genotypes within IL1A rs1800587 and IL1B rs1143634 might be associated with the occurrence of osteomyelitis. Among patients with osteomyelitis, the TT genotype of rs1800587 was significantly associated with a younger age at which osteomyelitis was diagnosed. Tsezou et al. (2008) reported that dominant genetic models for rs1800587, rs2243248, rs2243250, and rs1800795 demonstrated that IL1A, IL4, and IL6 SNPs could contribute to the genetic pathology of osteomyelitis. The finding that IL1A rs1800587 is a risk factor for osteomyelitis was consistent with the outcome reported by Asensi et al. (2003). However, no significant difference was observed between patients with osteomyelitis and healthy controls regarding the individual alleles and genotypes of IL1A rs1800587 in the studies of Jiang et al. (2020) and Osman et al. (2016). Osman et al. (2015) reported that individuals carrying the C allele or CC genotype might have increased susceptibility to hematogenous osteomyelitis, whereas the T allele or CT genotype could act as a protective factor. A heterozygous genetic model of rs2243248 demonstrated that the mutant G allele contributed to hematogenous osteomyelitis, although the difference in the distribution of the G and T allele frequencies between cases and controls was not statistically significant (p > 0.05).
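The HWE compliance reported above can be checked with a simple chi-square goodness-of-fit test on the control genotype counts, as sketched below (hypothetical counts; p > 0.05 is read as compliance, as in the included studies).

from scipy.stats import chi2

def hwe_pvalue(n_aa, n_ab, n_bb):
    # Chi-square test of Hardy-Weinberg proportions for a biallelic SNP.
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1.0 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2.sf(stat, df=1)               # 1 degree of freedom

print(hwe_pvalue(180, 95, 15))               # invented control genotype counts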
In addition, the A allele of rs1800871 within the IL10 gene and the GG genotype of the IL12B SNP rs3212227 were identified as contributing to hematogenous osteomyelitis. Osman et al. (2016) also revealed that the AA genotype of rs16944 in IL1B could be a risk factor, whereas the G allele and GG genotype of this SNP were considered protective against hematogenous osteomyelitis among Saudis. However, this result was contrary to the conclusion of the study by Jiang et al. (2020). Zhao et al. (2020) recently reported that the mutant A allele of rs2430561 might be a risk factor for posttraumatic osteomyelitis, and individuals with the AT genotype might have a higher risk of developing posttraumatic osteomyelitis. Hou et al. (2018) were the first to report that individuals with the TT genotype of the rs1799964 SNP in TNF might have a higher risk of developing chronic extremity osteomyelitis in China. However, the results of another study (Asensi et al., 2003) indicated that rs1799964 in TNF is not associated with susceptibility to chronic bacterial osteomyelitis.

Protein, Receptor, and Enzyme

In another seven case-control studies, 11 SNPs in eight genes encoding a protein (BAX), receptors (VDR, TLR2, and TLR4), and enzymes (CTSG, COX-2, MMP-1, and t-PA) were investigated. Ocana et al. (2007) found that the frequency of the mutant A allele at position 248 within the BAX gene was higher in patients with osteomyelitis than in healthy controls, which was linked to lower expression of BAX and prolonged survival of peripheral blood neutrophils. Jiang et al. (2016) identified that the frequencies of the mutant C alleles of rs731236 (TaqI) and rs2228570 (FokI) were higher in patients with chronic osteomyelitis than in healthy individuals. The results of the dominant genetic model of rs731236 and the dominant and homozygous genetic models of rs2228570 suggested that VDR SNPs are significantly linked to susceptibility to chronic osteomyelitis. Different genotypes of the rs731236 (TaqI) and rs2228570 (FokI) polymorphisms were also significantly associated with serum TNF-α levels in patients with osteomyelitis. Osman et al. (2016) found that the mutant T allele and TT genotype of rs3804099 in the TLR2 gene might protect against hematogenous osteomyelitis in the Saudi population. Montes et al. (2006) demonstrated that although the frequencies of the A allele of rs498670 and the T allele of rs498671 in TLR4 did not differ between patients and controls, the recessive genetic models of rs498670 (GG genotype) and rs498671 (TT genotype) revealed that individuals with the GG or TT genotypes would have increased susceptibility to osteomyelitis. Perez-Is et al. (2019) reported that the G allele of rs45567233, situated in CTSG, was more frequent in patients with osteomyelitis than in controls. This result suggested that the G allele could be a risk factor and that individuals with the AG genotype would have an elevated risk of osteomyelitis in a Spanish population. The association of the rs45567233 polymorphism with susceptibility to osteomyelitis might be mediated by elevated serum CTSG activity and lactoferrin levels. Wang et al. (2017) concluded that the G allele and GG genotype of rs689466, located in the PTGS2 gene (COX-2), could be considered risk factors contributing to the onset of posttraumatic osteomyelitis.
Serum C-reactive protein (p = 0.017) and IL-6 (p = 0.006) levels were significantly higher in patients with posttraumatic osteomyelitis carrying the GG genotype, but not the CG genotype. Kong et al. (2017) concluded that the G allele of rs1144393 in the MMP1 gene is a genetic risk factor and that carriers of the GG genotype have an increased risk of osteomyelitis.

Bioinformatics Analysis

Twenty-five osteomyelitis-related genes were presented in the publications included in this review. The proteins encoded by these osteomyelitis-related genes exhibited significant correlations (Figure 2). Among these genes, the node degree of TNF was the highest (Table 3). Additionally, the results of the GO and KEGG analyses of these osteomyelitis-related genes are shown in Figure 3. For biological processes, the genes were significantly enriched in the following terms: "immune response," "positive regulation of NF-kappa B import into nucleus," "positive regulation of nitric oxide biosynthetic process," "positive regulation of interleukin-6 production," and "negative regulation of growth of symbiont in host." For cell components, the genes were enriched in the following terms: "extracellular space," "extracellular region," "external side of plasma membrane," "cell surface," and "cytoplasm." For molecular functions, the genes were mainly enriched in the following terms: "cytokine activity," "interleukin-1 receptor binding," "growth factor activity," "protein binding," and "serine-type endopeptidase activity." According to the KEGG analysis, these genes mainly participated in pathways including inflammatory bowel disease (IBD), leishmaniasis, tuberculosis, amoebiasis, and rheumatoid arthritis. These results indicate that inflammation is involved in the pathogenesis of osteomyelitis. In addition, 25 SNPs in 17 genes were reported to be significantly associated with the risk of osteomyelitis (Table 2). Most of these SNPs were located in intron or promoter regions. According to the regulome analysis, IL1B rs1143627, IL1B rs16944, IFNG rs2430561, and IL4 rs2070874 had scores of 1b, 1f, 2b, and 2b, respectively (Table 4). Smaller scores imply that SNPs have greater biological functional significance.

DISCUSSION

Bacterial osteomyelitis involves a complex inflammatory reaction caused by invading microorganisms. S. aureus is the bacterial pathogen most frequently associated with posttraumatic and hematogenous osteomyelitis (Lew and Waldvogel, 2004). Despite appropriate treatment with medications and surgery, up to 30% of osteomyelitis cases become chronic, resulting in serious disability and economic burden (Lew and Waldvogel, 2004). To address the problem of intractable chronic osteomyelitis, exploring the etiology and pathology of osteomyelitis is necessary. The occurrence of bacterial osteomyelitis is a complicated process driven by both genetic and environmental factors. Much attention has been paid to exploring the association between host factors, in terms of gene polymorphisms, and the risk of osteomyelitis. This review was conducted to summarize the SNPs of 25 genes linked to susceptibility to osteomyelitis based on the published literature.
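As an illustration of the enrichment criterion used in the bioinformatics analysis above (P < 0.05 with ≥3 enriched genes), the sketch below computes a one-sided hypergeometric p-value for a term. The numbers are invented, and this is not DAVID's exact statistic (DAVID reports a modified Fisher exact test, the EASE score).

from scipy.stats import hypergeom

def enrichment_p(k, study_size, term_size, background_size):
    # P(X >= k) when drawing study_size genes from a background of
    # background_size genes, term_size of which carry the annotation.
    return hypergeom.sf(k - 1, background_size, term_size, study_size)

k = 5  # osteomyelitis-related genes annotated to the term (assumed)
p = enrichment_p(k, study_size=25, term_size=120, background_size=20000)
print(p, p < 0.05 and k >= 3)                # the review's significance criteria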
According to our review, rs1143627 and rs1143634 in IL1B, rs1800796 and rs1800795 in IL6, rs2243248 and rs2243250 in IL4, rs1800871 in IL10, rs3212227 in IL12B, rs1800587 in IL1A, rs2430561 in IFNG, rs1799964 in TNF, rs689466 in PTGS2, rs45567233 in CTSG, rs731236 and rs2228570 in VDR, rs1144393 and rs1799750 in MMP1, rs4646972 in PLAT (t-PA), and BAX-248G/A increased the risk of osteomyelitis, whereas rs4251961 in IL1RN, rs3804099 in TLR2, and the CT genotype of rs2070874 in IL4 could protect against osteomyelitis. However, the results of Jiang et al. (2020) suggested that the GG or GA genotype of IL1B rs16944 is a risk factor for osteomyelitis, which is inconsistent with the results of the study by Osman et al. (2016). The inconsistency could be attributed to insufficient sample sizes and to populations from different countries. Therefore, further research should be conducted to clarify the link between IL1B gene polymorphisms and osteomyelitis, and thereby the etiology and pathogenesis of the disease. Many genes collected from the publications included in this review regulate the immune system and therefore contribute to host defense against pathogenic microorganisms in the bodily tissues and blood (Hill, 2012). Pathogen molecular patterns binding to pattern recognition receptors, such as Toll-like receptors (i.e., TLR2 and TLR4), can initiate inflammatory reactions of innate immune cells and induce the expression of pro-inflammatory cytokines (such as IL-1β and TNF-α). The activation of TLRs expressed in bone cells can influence osteoclast differentiation and activity in a complicated manner. TLRs expressed in early osteoclast precursors inhibit the differentiation of these cells, whereas the activation of TLRs expressed in osteoblasts triggers the secretion of osteoclastogenic cytokines, including RANKL and TNF-α, which contribute to osteoclast differentiation and activation (Bar-Shavit, 2008). Yoshii et al. (2002) reported that pro-inflammatory IL-1β, IL-6, IL-4, and TNF-α levels in locally infected bone increased during the infection period in a murine model of osteomyelitis, and that IL-1β and IL-6 might contribute to bone damage during the earlier period of infection. IL-1R signaling contributes to bone destruction during osteomyelitis, but it also plays an important role in repressing local bacterial replication during bone infection (Putnam et al., 2019). IL-1β-activated osteoclasts exhibit strong resorbing ability and high H+ release (Shiratori et al., 2018). IL-10 is regarded as an immunomodulatory cytokine that mitigates damage by decreasing the expression of inflammatory cytokines. IL-10 promoter polymorphisms seem to be associated with the pathogenesis of chronic non-bacterial osteomyelitis (an auto-inflammatory disorder) through IL-10 dysfunction (Hofmann et al., 2011). Neutrophils are the first-line innate immune defense against many microbial infections. Meanwhile, the elimination of neutrophils through apoptosis or engulfment by macrophages can alleviate the destructive nature of inflammation and promote its resolution (Savill et al., 2002). BAX-α is a pro-apoptotic protein, and bcl-2 is an anti-apoptotic protein (Oltvai et al., 1993). A high BAX-α/bcl-2 ratio leads to apoptosis of leukemic cells (Pepper et al., 1998). BAX gene mutations can influence protein expression and biological function (Addeo et al., 2007).
IFN-γ, secreted by immunocytes in response to bacterial invasion, strengthens antigen presentation and the phagocytic ability of macrophages (Gomez et al., 2015). Meta-analyses of the association between IFN-γ +874T/A and susceptibility to leukemia (Wu et al., 2016), hepatocellular carcinoma (Zhou et al., 2015), and asthma (Nie et al., 2014) have been conducted. Matrix metalloproteinases (MMPs), a family of enzymes, play an important role in the degradation and remodeling of the extracellular matrix under both normal physiological and pathological conditions. MMPs are involved in matrix degradation and joint destruction in arthritic diseases (Pap et al., 2000; Tetlow et al., 2001). The expression of inducible MMPs is increased by stimulation with inflammatory mediators such as TNF-α and IL-1 (Nagase and Woessner, 1999). A population-based study discovered that the MMP1-1607(1G/2G) polymorphism might be associated with reduced bone mineral density at the distal radius in postmenopausal women (Yamada et al., 2002). Cathepsin G (CTSG) is a serine protease stored in the neutrophil azurophilic granules and has antimicrobial properties (Miyasaki et al., 1995). In addition, CTSG can activate extracellular MMPs at sites of inflammation, causing the degradation of extracellular matrix components (Baggiolini et al., 1978; Korkmaz et al., 2008). CTSG also activates osteoclast precursors by stimulating the expression of RANKL, enhancing mammary tumor-induced osteolysis (Beaujouin and Liaudet-Coopman, 2008). Vitamin D is essential in calcium homeostasis and bone metabolism. In addition, it participates in the regulation of inflammatory reactions (Yin and Agrawal, 2014). VDR is encoded by the VDR gene located on chromosome 12, and VDR gene polymorphisms affect the biological function of VDR. The TaqI, BsmI, FokI, and ApaI polymorphisms are the most frequently investigated in skeletal diseases. A case-control study reported that the VDR FokI polymorphism was linked to the risk of osteoporosis in postmenopausal women (Wu et al., 2019). Recently, the results of a meta-analysis suggested that the VDR BsmI and TaqI polymorphisms were associated with susceptibility to osteoarthritis in the spine (Cezar-Dos-Santos et al., 2020). Simultaneously, our bioinformatics analysis discovered that these 25 genes were associated with immune responses (biological process), the extracellular space (cellular component), and cytokine activity (molecular function). Interestingly, these osteomyelitis-related genes were enriched in disease pathways including IBD, indicating from the bioinformatics perspective that inflammation participates in the pathogenesis of bacterial osteomyelitis. Four SNPs (IL1B rs1143627, IL1B rs16944, IFNG rs2430561, and IL4 rs2070874) had smaller scores in the regulome analysis, implying that these SNPs have significant biological functions. rs1143627 and rs2070874 are located in the 5′-UTR regions of the IL1B and IL4 genes, respectively; rs16944 is located in an intron of the IL1B gene, and rs2430561 is located in the IFNG promoter region. Thus, we can hypothesize that these SNPs contribute to the risk of osteomyelitis, possibly by affecting the binding of transcription factors or other molecules to these motifs. This review summarized a series of genes and SNPs associated with osteomyelitis. In addition, the emergence of genome-wide association studies (GWAS) provides many opportunities to identify alleles associated with complex diseases (Altshuler et al., 2008).
A GWAS from the UK Biobank (http://geneatlas.roslin.ed.ac.uk/; Bycroft et al., 2018) involved 452,264 individuals, including 698 patients with osteomyelitis. The subjects were from the UK and were between 40 and 69 years old. SNP genotyping was performed using the UK Biobank AXIOM array. Unfortunately, however, we have not found any published GWAS data on the relationship between specific SNPs and the risk of osteomyelitis. Genetic variants exist among individuals, but the influence of these genetic polymorphisms on clinical significance or phenotypic diversity remains unknown. Lappalainen et al. (2013) demonstrated that genetic variation can affect the occurrence and development of diseases by regulating gene expression. Thus, elucidating how genotype varieties clinically affect phenotypes in complicated diseases remains challenging. Exploring the role of genetics in phenotypes and diseases, and its potential interactions with other factors, is of great significance for understanding the pathogenesis of diseases; it is hoped that this will provide more opportunities for drug development and personalized treatments. Notably, human genetics is a valuable tool for generating therapeutic hypotheses in drug development. Plenge et al. (2013) provided empirical examples of drug-gene pairs and objective criteria to highlight the role of genetic findings in future drug discovery. For example, anakinra (an interleukin-1 receptor antagonist) has shown preliminary efficacy in patients with adult-onset Still's disease (Lequerre et al., 2008). Similarly, according to our summary of genetic SNPs associated with bacterial osteomyelitis, some inhibitors of inflammatory cytokines could potentially be used to test therapeutic hypotheses for the development of drugs and treatments for bacterial osteomyelitis, such as ustekinumab (an anti-IL12 monoclonal antibody), tocilizumab (an anti-IL6R monoclonal antibody), and anakinra (an interleukin-1 receptor antagonist). In this review, we summarized the association of gene polymorphisms with susceptibility to osteomyelitis. However, this review has several limitations, chiefly the following: (1) the limited number of available studies and individuals selected for this review; (2) the low statistical power caused by the limited sample size; (3) the limited ethnic diversity; (4) the fact that the three different types of osteomyelitis were not analyzed separately; and (5) the limited amount of data extracted from the few included studies, which prevented us from conducting a more quantitative analysis (meta-analysis).

CONCLUSION

This review summarized the association between gene polymorphisms and an increased risk of osteomyelitis. According to this review, rs1143627 and rs1143634 in IL1B, rs1800796 and rs1800795 in IL6, rs2243248 and rs2243250 in IL4, rs1800871 in IL10, rs3212227 in IL12B, rs1800587 in IL1A, rs2430561 in IFNG, rs1799964 in TNF, rs689466 in PTGS2, rs45567233 in CTSG, rs731236 and rs2228570 in VDR, rs1144393 and rs1799750 in MMP1, rs4646972 in PLAT (t-PA), and BAX-248G/A increased the risk of osteomyelitis, whereas rs4251961 in IL1RN, rs3804099 in TLR2, and the CT genotype, but not the CC genotype, of rs2070874 in IL4 could protect against osteomyelitis. However, because of the small sample sizes of the studies included in this review, we cannot draw a definitive conclusion on the correlation between genetic polymorphisms and susceptibility to osteomyelitis.
Therefore, large-scale prospective studies should be conducted to further clarify the relationship between SNPs and the risk of developing osteomyelitis. Meanwhile, investigating how genetic diversity influences clinical phenotypes is also necessary to better understand the role of genetic factors in the pathogenesis of osteomyelitis and to provide more personalized preventions and interventions for osteomyelitis in clinical practice.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
6,304.2
2021-06-17T00:00:00.000
[ "Medicine", "Biology" ]
Performance and operation of the calorimetric trigger processor of the NA62 experiment at CERN SPS

The NA62 experiment at the CERN SPS aims at measuring the branching ratio of the very rare kaon decay K+ → π+ ν ν̄ (expected to be about 10^-10) with a 10% background. Since a high-intensity kaon beam is required to collect enough statistics, the Level-0 trigger plays a fundamental role in both background rejection and particle identification. The calorimetric trigger collects data from various calorimeters and is able to identify clusters of energy deposit and determine their position, fine time and energy. This paper describes the trigger system setup during the 2016 physics data taking. A newly implemented cluster counting algorithm is also presented.

The NA62 experiment [1] is a fixed-target experiment located in the CERN North Area. The 400 GeV/c high-intensity SPS proton beam impinges on a beryllium target, producing a 750 MHz secondary hadron beam of which 6% are kaons. They are selected with a momentum of 75 GeV/c and they decay in flight along a 65 m fiducial decay region (figure 1). To achieve the desired signal-to-background ratio of about 10 in the K+ → π+ νν̄ measurement, the experiment has to identify and veto kaon decays, such as K+ → π+ π0 and K+ → µ+ ν, that have branching ratios up to 10^10 times larger than the expected signal [1]. The Level-0 calorimetric trigger has the role of vetoing photons and selecting a π+ in the final state. Its capabilities were extended during the 2016 physics run by the implementation of a cluster counting algorithm, in addition to the total-energy criteria.

The Calorimeters

A hermetic photon veto for the experiment is provided by various detectors, each covering a different angular region. From the inner to the outer region: the forward Small Angle Calorimeter (SAC), the Intermediate Ring Calorimeter (IRC) up to 1 mrad, the Liquid Krypton Calorimeter (LKr) up to 8.5 mrad and the Large Angle Photon Veto (LAV) up to 50 mrad (see figure 1). Both IRC and SAC are made of alternating layers of lead and scintillators (Shashlik). Downstream of the LKr calorimeter there are two hadronic calorimeters, called Muon-Veto 1 and 2 (MUV1 and MUV2), built as iron-scintillator sandwiches. They are all read out via PMTs, with a total of 176 channels for MUV1, 88 for MUV2, and 4 for IRC and SAC. The LKr is a high-performance electromagnetic calorimeter, about 27 radiation lengths deep, with 13248 channels consisting of 2 × 2 cm^2 cells formed by thin copper-beryllium ribbons, kept at high voltage and immersed in a 10 m^3 liquid krypton bath at 120 K acting as the active medium. For photons of more than 10 GeV energy, a detection inefficiency of 10^-5, a time resolution of 350 ps and an energy resolution better than 1% allow its use as an efficient veto and for particle identification. The back-end electronics is provided by 432 Calorimeter REadout Modules (CREAMs) for the LKr and a further 10 CREAMs for MUV1, MUV2, IRC and SAC [2]. They are VME modules installed in 29 crates (28 for the LKr alone). Each module digitizes, after proper shaping, up to 32 calorimeter channels with 40 MS/s FADCs with 14-bit dynamic range. It then buffers up to 8 GB of data (on a DDR3 SODIMM module) during the SPS spill and provides 2 lower-granularity Trigger Sum Links (TSL) of 16 (4 × 4) calorimeter cells to the calorimetric Level-0 trigger. The data, optionally zero-suppressed, are read out when there is a Level-1 trigger.
A scheme of the calorimeter trigger and readout system for the LKr is shown in figure 2a.

The NA62 Trigger System

The calorimeter trigger is part of the larger experimental trigger. At full beam intensity, an average 10 MHz decay rate hits the downstream detectors. In order to extract the few interesting decays from such an intense flux, a complex three-level trigger and data acquisition system was designed [3]. The Level-0 (L0) trigger algorithm is based on different sub-detectors (in addition to the calorimetric trigger: the charged hodoscope, the muon detector, the large-angle vetoes and the RICH detector) and is performed by dedicated custom hardware modules, with a maximum output rate of 1 MHz and a maximum latency of 1 ms. The data from each sub-detector, except the LKr calorimeter's, are sent to a farm of PCs where the Level-1 (L1) and Level-2 (L2) software triggers are performed. L1 algorithms run on the data of individual detectors. A positive L1 decision triggers the readout of the calorimeter data (which is kept in memory up to then) and, subsequently, L2 algorithms are executed on the complete event. The L1 trigger has a maximum output rate of 100 kHz with a non-fixed total latency of about 1 s, while the L2 trigger has an output rate of the order of 15 kHz with a maximum total latency equal to the basic data-taking time unit, the period of the SPS beam-delivery cycle of about 15 s.

The Level-0 Calorimetric Trigger

The trigger recognizes electromagnetic/hadronic clusters in the calorimeters along with their position, fine time and energy [4,5]. A schematic view is provided in figure 2b. The inputs are the Trigger Sum Links (TSL), sums of ADC values sampled at 40 MHz that are continuously sent by the CREAM modules. There are 864 TSLs for the LKr, 1 for the IRC, 1 for the SAC, 12 for MUV1 and 6 for MUV2. The system is composed of 37 TEL62 boards [6,7]. These are 9U general-purpose data acquisition boards, based on the LHCb TELL1 [8], common to many sub-detectors of the experiment, and they are equipped with custom dedicated I/O mezzanines (see figure 3a). Each board mounts five Altera Stratix III FPGAs (EP2SL200 [9]): four, the so-called Pre-Processing (PP) FPGAs, receive and process data from the input mezzanines, and one, the so-called Sync-Link (SL) FPGA, collects and processes data from the PPs and sends them to the output mezzanine. The calorimetric trigger is structured as a 3-layer system in which each layer has a different number of TEL62s with different I/O mezzanines and plays a different role in the cluster search, which is performed through a 1D (vertical) + 1D (horizontal) algorithm. The trigger of the largest and most complex calorimeter, the LKr, is structured as follows:

• In a first front-end layer, composed of 28 boards (one board is shown in figure 3a, the crates in figure 4a), peaks are independently identified in 28 vertical slices of the calorimeter. Each slice is segmented vertically into 32 super-cells (4 × 4 calorimeter cells), where each super-cell corresponds to an input TSL.
• In a second layer, composed of 7 merger boards (mezzanines shown in figures 3b and 3c, the crate in figure 4b), different peaks are horizontally merged when they are close in time and space, so that each cluster can be fully reconstructed.
• A concentrator board collects all the information and transmits, through the Gbit Ethernet mezzanine, a trigger primitive to the central L0TP for the trigger decision.
In the 2016 physics run the trigger decision was based on the total energy deposit in each calorimeter and on the number of clusters. The role of the merger boards was limited to the collection of data from the front-ends and to its delivery to the final concentrator board, where the trigger logic is implemented. The trigger for MUV1, MUV2, SAC and IRC is realized with one front-end board directly connected to the concentrator board.

Main Firmware Features

The entire system firmware has been designed from scratch, and static timing analysis has been performed according to hardware specifications on all I/O paths. In this section the main features common to all system firmwares are described. Each layer additionally implements part of the trigger algorithm, as described in section 3. The latency from the TSL input to the generation of the trigger primitive is about 50 µs. (The total latency of the NA62 L0 trigger is fixed to 100 µs, with a delay added by the Level-0 Trigger Processor.)

Figure 3a. One of the 29 TEL62 boards of the front-end layer. It mounts two TELDES input mezzanines (on the left), each with 16 DS92LV16 deserializers for 16 input channels, and one TX board (on the right) that serializes two output channels.

Clock distribution

The experiment distributes the 40.08 MHz experiment clock via the Timing, Trigger and Control (TTC) system [10]. Each TEL62 board receives one optical fiber with the TTCrx timing receiver ASIC [11] (see figure 3a), which also synchronously distributes triggers (both the physics triggers and special ones like the SPS Start of Burst (SOB) and End of Burst (EOB)). The received clock is jitter-cleaned by a QPLL chip [12] (see figure 3a) and then distributed on the board to the SL and to the four PP FPGAs, which use it as the input clock of the main PLL. Data are sent from the PP FPGAs to the SL with a derived 160 MHz compensated clock (12 Gbps bandwidth per PP). The various I/O mezzanines use different technologies. The DS92LV16 deserializers on board the TELDES [13] boards receive a 16-bit serialized input from the CREAMs and recover the 40 MHz clock embedded in the data stream. This data stream is interfaced on the PP FPGAs with dual-clock FIFOs. The output mezzanines, the TX board or the Gbit board, receive source-synchronous data and clock at 120 MHz from the SL FPGA. The TX board serializes 48-bit data over 8 LVDS links with a 70 MHz clock, provided by an on-board oscillator, that is also transmitted and used to latch data on the receiver side. A Stratix II FPGA on the TX board buffers the 120 MHz data stream received from the SL and provides a 70 MHz Double Data Rate input to two DS90CR485 serializers.

The ECS local bus

Each TEL62 has a local 32-bit bus, called the Experiment Control System (ECS) bus, that connects the five FPGAs, the output mezzanine (16 LSB of the bus) and the on-board Credit Card PC (CCPC, see figure 3a). The CCPC is an i486 disk-less PC with an Ethernet interface and 64 MB SRAM that runs Linux. A dedicated glue-card (PLX 9030) interfaces the PCI memory space of the CCPC to the local bus. This allows software to address and read/write registers on the FPGAs. The ECS is clocked with a 20 MHz clock derived from the QPLL 40 MHz output clock. The CCPC behaves as the master on the bus while, on the firmware side, each FPGA has a bus bridge that selects the addressed register, FIFO or memory cell, which acts as a slave. This system allows on-line control and monitoring of the trigger.
Because of the complexity and the large number of boards, an object-oriented Python software infrastructure has been developed to abstract operations at the highest possible level. A VHDL memory map of the address space is used for firmware writing, and it is also parsed by the software, simplifying software development. The glue-card also allows JTAG access for reprogramming the board.

Data transmission

The whole system uses common generic logic for sending and receiving data between different FPGAs or between different boards. The data can optionally be sent together with a Hamming code that is checked by the receiver to verify data integrity. The code has one extra parity bit to allow single-error correction and double-error detection. The system has not shown transmission errors during operation unless a hardware problem was present: a faulty mezzanine, cable, connector or simply a cable not correctly plugged in. This logic can also be configured to send pseudo-random binary sequences on the whole bus, which are checked by the receiver. This has two purposes: 1) performing Bit Error Rate Tests (BERT) over an extended time period; 2) at power-up of the deserializers on the RX boards, allowing and then checking data-clock deskew. Extended BERTs have been performed for days without detecting errors.

Single Event Upset detection

Because of the high-radiation environment, the error-detection CRC feature of the Altera Stratix III FPGAs has been enabled. This is able to detect single or double bit flips in any of the configuration CRAM bits in Stratix III devices due to a soft error. During the 2016 run it was used as a monitoring feature that allows operators to intervene in case of error by reloading the configuration and reinitializing the system (with a frequency of about one intervention per week at high beam intensities). In the future an automatic reloading and reconfiguration procedure may be foreseen.

The Trigger Algorithm

In the 2016 physics run, the trigger decision was based on the total energy deposit in each calorimeter and on the number of clusters (limited to "one cluster only" or "more than one cluster" conditions). More than one trigger condition can be implemented, and its result (true or false) is encoded in a specified bit of the primitive id word sent to the central trigger processors of the experiment. The algorithm is performed in different steps on different boards and firmwares. The front-end boards look, on each TSL input channel (ADC sums of 4 × 4 calorimeter cells), for relevant physics signals. This is done by requiring a peak in time above a configurable threshold: if d[0..3] are four consecutive input samples of one channel (40 MHz sampling frequency), a representative set of requests is

d[1] > d[0], d[1] ≥ d[2], d[1] > threshold. (3.1)

Different criteria and thresholds are selectable on-line during trigger operations and have been optimized to maximize the peak recognition of pions and photons while being blind to muons at low energy. A parabolic fit is performed on the three samples around the maximum sample (see figure 5). The maximum of the fit is used as the energy estimate: this recovers any time-walk effect due to the phase between the physics signal and the 40 MHz sampling clock. Recursive bisections between the samples are then performed for the fine-time estimation of the peak onset. The peak time is therefore determined as the 32-bit experimental clock counter (25 ns period) of the lowest sample of the first bisection plus a fraction of that period expressed as an 8-bit fine-time number (LSB = 25 ns/256 ≈ 98 ps). Data for each peak (energy estimate, timestamp, fine time) are then transmitted from the 29 front-end boards up to the SL FPGA of the last concentrator board, where the trigger decision is taken.
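To make the peak-energy estimation concrete, the following Python sketch implements the standard three-point parabolic interpolation around the maximum sample. The sample values and the mapping of the vertex offset to an 8-bit fine time are purely illustrative, and the firmware's fixed-point implementation (including the recursive bisections for the peak onset) is not reproduced here.

def parabolic_peak(d0, d1, d2):
    # d1 is the maximum 25 ns sample; d0 and d2 are its neighbours.
    denom = d0 - 2 * d1 + d2
    if denom == 0:
        return 0.0, float(d1)                 # flat top: no interpolation
    dt = 0.5 * (d0 - d2) / denom              # vertex offset in sample units
    amp = d1 - 0.25 * (d0 - d2) * dt          # interpolated peak amplitude
    return dt, amp

dt, energy = parabolic_peak(410, 980, 770)    # invented ADC sums
finetime = int(round((dt + 0.5) * 256)) & 0xFF  # toy 8-bit fine time (LSB ~ 98 ps)
print(dt, energy, finetime)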
Data for each peak (energy estimation, timestamp, finetime) is then transmitted from the 29 frontend boards up to the SL FPGA of the last concentrator board where the trigger decision is -7 - taken. This logic is clocked at 160 MHz and it is sketched in figure 6. There are five identical logic blocks, one per each data source: MUV1, LKr, SAC, IRC, MUV2. Incoming data have a source identifier in the data packet (they are tagged while traveling in the trigger system) and can be routed to the corresponding logic block. The core is a dual port RAM used as a circular buffer: it represents an energy histogram binned in time: each memory cell corresponds to a time interval and the stored value is total energy in that time interval. The bin size can be tuned on-line with a lower limit of 6.25 ns due to the 160 MHz clock rate. Each memory has depth 4096 and can then store a minimum of 6.25 · 4096 = 25.6 µs of data. The timestamp and the finetime of the data are used to address the RAM and the addressed memory cell is first read and then written summing up the energy of incoming data to the previously stored energy value. In addition, in each memory cell a bit (hereafter ncls) indicates the number of clusters as one cluster only (ncls = 1) or more than one cluster (ncls = 0). The energy and the 98 ps resolution 8-bit finetime of the peak with maximum energy is also stored; this is used to provide a fine granularity timing for the output trigger information. These memory buffers, one for each detector, are read simultaneously with a rate corresponding to real-time (that is at 160 MHz if bin width size = 6.25 ns, at 80 MHz if bin width size = 12.5 ns etc.). The width of the memory, corresponding to a 25 µs time window in the worst case, allows to absorb any time skew between different incoming data. The values read from each RAM are the total energy the number of clusters in the detector for that time bin. For calorimeters used as a veto, the veto window is enlarged summing up the energy over three time bins centered around the time of the positive trigger; this is done in order to avoid to underestimate the total energy because of time binning. Boolean conditions with cuts on energy and number of clusters for each detectors are applied resulting in the trigger decision. The time of the trigger corresponds to the finetime of the more energetic peak in the time bin. In figure 7 there is a flowchart showing how the clustering algorithm is implemented. While the energy in the bin is the total sum of the energy of the incoming peaks, the position always refers to the peak with the maximum energy release. Such position is considered as a candidate seed position for a cluster: if all other incoming clusters fall within a configurable distance (8 cm is the -8 -value used in the 2016 run) then there is one cluster only (and the total energy is the cluster energy). As a cross check, a comparison between the single cluster position computed by the trigger and the position computed by the offline analysis software is shown in 8, on a set of events selected with the offline software as having only one cluster in the LKr. The zero-centered box shape shows that the trigger algorithm correctly identifies the cluster position. An estimation of the trigger veto efficiency on π + π0 is in figure 9, showing about 98% efficiency.2 The trigger was required to veto above 20 GeV of total energy in the LKr calorimeter. 
The efficiency sample is obtained without any request on the LKr, and the trigger response is checked in a ±20 ns time window around a control trigger.

Conclusion

The Level-0 calorimetric trigger of NA62 has been fully commissioned and operated for the first time during the first physics run, from May to November 2016. It has been tested up to the nominal beam intensity of 30 · 10^11 protons per SPS spill. After the initial commissioning phase, the system has proven to be stable and no hardware faults have been detected. A significant amount of data has been acquired with various trigger conditions that show clear suppression of the main background contributions. While for the 2015 run the trigger conditions were based on the total energy deposit in each of the calorimeters (MUV1, LKr, IRC, SAC, MUV2), in the 2016 run the energy clusters were reconstructed on the basis of spatial and time information. For the 2017 run a data readout at Level-0 is also foreseen, through additional mezzanines with Gbit Ethernet links that will be plugged onto the TX board.
4,509.4
2017-04-21T00:00:00.000
[ "Physics" ]
AirID, a novel proximity biotinylation enzyme, for analysis of protein–protein interactions

Proximity biotinylation based on Escherichia coli BirA enzymes such as BioID (BirA*) and TurboID is a key technology for identifying proteins that interact with a target protein in a cell or organism. However, there is still room for improvement in the enzymes used for this purpose. Here, we demonstrate a novel BirA enzyme, AirID (ancestral BirA for proximity-dependent biotin identification), which was designed de novo using an ancestral enzyme reconstruction algorithm and metagenome data. AirID-fusion proteins such as AirID-p53 and AirID-IκBα showed biotinylation of MDM2 and RelA, respectively, both in vitro and in cells. AirID-CRBN showed pomalidomide-dependent biotinylation of IKZF1 and SALL4 in vitro. AirID-CRBN biotinylated the endogenous CUL4 and RBX1 in the CRL4(CRBN) complex based on a streptavidin pull-down assay. LC-MS/MS analysis of cells stably expressing AirID-IκBα showed top-level biotinylation of RelA proteins. These results indicate that AirID is a novel enzyme for analyzing protein–protein interactions.

Introduction

Many cellular proteins function under the control of biological regulatory systems. Protein-protein interactions (PPIs) comprise part of the biological regulation system for proteins. Beyond PPIs themselves, biological protein function is post-translationally promoted by multiple modifications such as complex formation, phosphorylation, and ubiquitination. Therefore, it is very important to understand how proteins interact with target proteins. The identification of partner proteins has been carried out using several technologies, such as the yeast two-hybrid system (Zhao et al., 2017; Li et al., 2016), mass spectrometry analysis after immunoprecipitation (Ohshiro et al., 2010; Han et al., 2015), and cell-free-based protein arrays that we have previously described (Nemoto et al., 2017; Takahashi et al., 2016). These methods have provided many critical findings. As intracellular proteins are regulated by quite complicated systems, such as signaling transduction cascades, the use of multiple technologies can strongly promote our understanding of cellular protein regulation. At present, proximity biotinylation is based on the Escherichia coli enzyme BirA. BioID (proximity-dependent biotin identification) was first reported in 2004, and its main improvement was the single BirA mutation R118G (BirA*) (Choi-Rhee et al., 2004). BioID has promiscuous activity and releases highly reactive and short-lived biotinoyl-5′-AMP. The released biotinoyl-5′-AMP modifies proximal proteins (within a distance of 10 nm) (Kim et al., 2014). BioID can be used by expressing the BioID-fusion protein and adding biotin. In cells expressing a BioID-fusion bait protein, proteins with which the bait protein interacts are biotinylated and can be comprehensively analyzed using precipitation with streptavidin followed by mass spectrometry (Roux et al., 2012). BioID can thus easily analyze the protein interactome under mild conditions. However, BioID takes a long time (>16 hr) and requires a high biotin concentration to biotinylate interacting proteins. Therefore, it cannot easily detect short-term interactions and is difficult to use in vivo. Subsequently, BioID was improved using R118S and 13 additional mutations via yeast-surface display, yielding TurboID (Branon et al., 2018). TurboID has extremely high activity and can biotinylate proteins in only ten minutes.
However, TurboID caused non-specific biotinylation and cell toxicity when labeling times were increased and biotin concentrations were high (Branon et al., 2018). In addition, a small BirA enzyme from Aquifex aeolicus was reported as BioID2 (Kim et al., 2016). BioID, TurboID, and BioID2 are excellent enzymes, and they offer some improvements for the proximity biotinylation of cellular target proteins. Further improvement of BirA enzymes is an important goal that would enhance the convenience of proximity biotinylation in cells. Evolutionary protein engineering using metagenome data has recently been used to improve enzymes (Nakano and Asano, 2015; Nakano et al., 2018; Nakano et al., 2019). Here, we newly designed five ancestral BirA enzymes using an ancestral enzyme reconstruction algorithm and a large genome dataset. The combination of ancestral reconstruction and site-directed mutagenesis has provided a new and useful BirA enzyme, AirID (ancestral BirA for proximity-dependent biotin identification), which functions in proximity biotinylation in vitro and in cells. Although the sequence similarity between BioID and AirID is 82%, AirID showed high biotinylation activity against interacting proteins. Our results indicate that AirID is a useful enzyme for analyzing protein-protein interactions in vitro and in cells.

eLife digest

Proteins in a cell need to interact with each other to perform the many tasks required for organisms to thrive. A technique called proximity biotinylation helps scientists to pinpoint the identity of the proteins that partner together. It relies on attaching an enzyme (either BioID or TurboID) to a protein of interest; when a partner protein comes in close contact with this construct, the enzyme can attach a chemical tag called biotin to it. The tagged proteins can then be identified, revealing which molecules interact with the protein of interest. Although BioID and TurboID are useful tools, they have some limitations. Experiments using BioID take more than 16 hours to complete and require high levels of biotin to be added to the cells. TurboID is more active than BioID and is able to label proteins within ten minutes. However, under certain conditions, it is also more likely to be toxic to the cell, or to make mistakes and tag proteins that do not interact with the protein of interest. To address these issues, Kido et al. developed AirID, a new enzyme for proximity biotinylation. Experiments were then conducted to test how well AirID would perform, using proteins of interest whose partners were already known. These confirmed that AirID was able to label partner proteins in human cells; compared with TurboID, it was also less likely to mistakenly tag non-partners or to kill the cells, even over long periods. The results by Kido et al. demonstrate that AirID is suitable for proximity biotinylation experiments in cells. Unlike BioID and TurboID, the enzyme may also have the potential to be used for long-lasting experiments in living organisms, since it is less toxic to cells over time.

Results

Reconstruction of an ancestral BirA enzyme using metagenome data

BioID and TurboID were designed on the basis of the biotin ligase BirA from E. coli. Using a different approach, we attempted to reconstruct the ancestral BirA sequence. Five ancestral sequences were obtained using the following process.
A comprehensive and curated sequence library, in which entries exhibited more than 30% sequence identity with E. coli BirA (EU08004.1), was prepared by querying the Blastp web server and using a custom Python script (Source code 1). Next, further curation approaches were applied to the library, as in previous studies (Nakano et al., 2018; Nakano et al., 2019). The procedure consists of the following steps: 1) preparation of sequence pairs consisting of one of the submitted BirA sequences and one sequence (1275 genes in total) in the library, 2) sequence alignment of all pairs, and 3) selection of sequences bearing 'key residues' (Figure 1A). In detail, we prepared the following four combinations of the key 26th, 124th, 171st, and 297th residues to classify the library: Ala, Val, Val, Ala (pattern 1, AVVA); Ala, Phe, Val, Ala (pattern 2, AFVA); Ala, His, Leu, Ala (pattern 3, AHLA); and Gly, Phe, Val, Ala (pattern 4, GFVA) (Figure 1A). After this selection, we classified the library as follows: it could be divided into 17, 9, 9, or 66 genes depending on whether the key residues matched pattern 1, 2, 3, or 4, respectively (Figure 1). Using each of the classified gene sets, we designed four artificial sequences with the ancestral sequence reconstruction (ASR) method (Supplementary file 1). The designed sequences were named after the patterns; the sequences classified using patterns 1 to 4 are referred to as AVVA, AFVA, AHLA, and GFVA, respectively (Figure 1A). Furthermore, we added an 'all' BirA enzyme from the common ancestor of AVVA, AFVA, and GFVA. The BirA enzymes of AVVA, AFVA, AHLA, and GFVA shared similarity with those in the Shewanella genus, the Frischella and Gilliamella genera, the Thiobacillus and Betaproteobacteria genera, and multiple genera, respectively. When the AVVA, AFVA, AHLA, GFVA, and 'all' amino-acid sequences were compared to the E. coli BirA sequence, they showed 45%, 58%, 42%, 82%, and 73% similarity, respectively, and the region including the active site (amino acids 107-134) was identical throughout.

Figure 1. Characterization of novel BirA enzymes designed using metagenome data. (A) A homolog library of BirA from E. coli (EcBirA) was generated using blastp and curated using an original Python script. The curated library was multiply aligned using INTMSAlign and the sequences were classified into four groups. Each group was phylogenetically analyzed, and ancestral sequences were designed. (B) AGIA-tagged AncBirAs were synthesized using the wheat cell-free system. Their expression was confirmed by anti-AGIA antibody immunoblotting. (C) EcBirA or each AncBirA was added to the reaction mixture when His-bls-FLAG-GST was synthesized. Biotinylation of bls by each BirA was examined using anti-biotin antibody immunoblotting. As a control, the expression of each BirA was detected using an anti-His antibody. The band intensity of biotinylated His-bls-FLAG-GST was quantified with ImageJ software, with the index intensity (value 1.0) shown in red characters. (D) The WT or RG mutant of each BirA was fused to p53 (BirA-p53), and biotinylation activity was analyzed with or without FLAG-GST-MDM2. As a control, the expression of each BirA-p53 and MDM2 was detected using an anti-AGIA antibody and an anti-FLAG antibody, respectively. The band intensities of biotinylated p53 and MDM2 were quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters.
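The classification step can be pictured with the short sketch below, which scans aligned homolog sequences for the four key residues and groups them by pattern. The column positions follow the text, but the alignment handling and the sequences are invented; the authors' actual script is provided as Source code 1.

KEY_COLUMNS = (26, 124, 171, 297)             # 1-based key-residue positions
PATTERNS = {"AVVA", "AFVA", "AHLA", "GFVA"}

def key_residues(aligned_seq):
    # Concatenate the residues found at the four key columns.
    return "".join(aligned_seq[i - 1] for i in KEY_COLUMNS)

def classify_library(library):
    # library: dict mapping sequence name -> aligned sequence string.
    groups = {p: [] for p in PATTERNS}
    for name, seq in library.items():
        key = key_residues(seq)
        if key in PATTERNS:                   # keep sequences bearing key residues
            groups[key].append(name)
    return groups

toy = {"homolog_1": "X" * 25 + "A" + "X" * 97 + "V" + "X" * 46 + "V" + "X" * 125 + "A" + "XX"}
print(classify_library(toy))                  # homolog_1 falls into the AVVA group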
Enzymatic characterization of the newly designed ancestral BirA enzymes
On the basis of the amino-acid sequences discussed above, five DNA templates for AVVA, AFVA, AHLA, GFVA, and 'all' were prepared using artificial DNA synthesis. To convert the designed proteins into DNA sequences, we used the codon usage profile of the plant Arabidopsis, the average genic AT content of which is nearly 50% (Arabidopsis Genome Initiative, 2000). All ancestral BirA genes were fused to an N-terminal AGIA tag, because this is a highly sensitive tag based on a rabbit monoclonal antibody (Yano et al., 2016). We synthesized these ancestral BirA proteins (AncBirAs) using a wheat cell-free protein production system to investigate their enzymatic potential (Sawasaki et al., 2002). All ancestral BirA proteins were obtained in soluble form (Figure 1B), and their biotin ligase activity was subsequently checked. A His-bls-FLAG-GST protein carrying the N-terminal biotinylation site (bls) GLNDIFEAQKIEWHE for E. coli BirA (EcBirA) was used as a substrate. Three ancestral BirA proteins, AFVA, GFVA, and 'all', showed activity against the bls sequence (Figure 1C). GFVA had the greatest activity, similar to that of EcBirA, whereas AVVA and AHLA showed no activity. In all of the designed ancestral BirA proteins, an arginine residue corresponding to R118 of EcBirA was conserved in the active site for biotinylation (Supplementary file 1). Because EcBirA gained proximity biotinylation activity as a result of the R118G mutation, known as BioID (Choi-Rhee et al., 2004), the corresponding arginine residue in each of the five genes was substituted with glycine (RG mutants). To compare the proximity biotinylation activity among these genotypes, each wild-type or RG-mutant BirA gene was N-terminally fused to the p53 gene. The resulting BirA-p53 proteins were synthesized using the cell-free system before mixing with FLAG-GST(FG)-MDM2, because the interaction between p53 and MDM2 has been widely observed (Momand et al., 1992; Michael and Oren, 2003). Immunoblotting revealed BirA-p53 biotinylation for EcBirA-RG (BioID), AVVA-WT, AVVA-RG, AFVA-RG, and GFVA-RG (Figure 1D). FG-MDM2 proximity biotinylation was detected under these conditions for three ancestral BirA-RG mutants, AVVA-RG, AFVA-RG, and GFVA-RG, indicating that they are candidate enzymes for proximity biotinylation. Among the three ancestral BirA-RG mutants, AVVA-RG showed both the highest proximity biotinylation activity and extra biotinylation in the lower size region ('non-specific biotinylation' in Figure 1D). GFVA-RG showed the highest biotinylation activity for the specific peptide (Figure 1C) and the lowest extra proximity biotinylation. Based on these results, we focused on two enzymes, AVVA and GFVA, for further analysis.

Proximity biotinylation ability of the ancestral BirA-RS mutants under different conditions
TurboID was recently reported as an improved BioID enzyme (Branon et al., 2018). As TurboID has an R118S mutation (RS mutant) that increases proximity biotinylation activity, we made RS mutants of the two ancestral BirA enzymes and compared their proximity biotinylation activities in vitro and in cells. The interaction between N-terminally AGIA-tagged BirA-fused p53 (AGIA-BirA-p53) and FG-MDM2 was used to validate proximity biotinylation ability in vitro. Incubation time, biotinylation temperature, and biotin concentration were investigated as conditions for proximity biotinylation.
Consequently, TurboID, AVVA-RG, AVVA-RS, and GFVA-RS showed higher proximity biotinylation activity after a 3-hr incubation than did BioID with a 16-hr incubation (Figure 2A). The RS mutation dramatically increased the proximity biotinylation activity of the GFVA enzyme (GFVA-RS in Figure 2A and Figure 2-figure supplement 1), whereas proximity biotinylation was almost the same for AVVA-RG and AVVA-RS. AVVA-RG, AVVA-RS, and GFVA-RS showed high proximity biotinylation activity at temperatures above 16˚C (Figure 2-figure supplement 1A) and at biotin concentrations greater than 0.5 μM (Figure 2-figure supplement 1B). On the basis of these results, three BirA enzymes, AVVA-RG, AVVA-RS, and GFVA-RS, were used for further analysis. We used IκBα and RelA to validate the proximity biotinylation ability of these three enzymes on another protein-protein interaction, because the IκBα-RelA interaction has been widely observed (Beg et al., 1992; Baeuerle and Baltimore, 1988). As in the analysis of the p53-MDM2 interaction, N-terminally AGIA-tagged BirA-fused IκBα (AGIA-BirA-IκBα) and FLAG-GST-RelA (FG-RelA) constructs were prepared. FG-MDM2 was used as a negative control. To compare the abilities of the different enzymes directly, the reactions of all enzymes were carried out under the same conditions. After co-incubation of AGIA-BirA-IκBα with FG-RelA, high RelA biotinylation was observed for AVVA-RS and GFVA-RS (Figure 2B). FG-MDM2 biotinylation by AGIA-BirA-IκBα was not observed.

Proximity biotinylation of the ancestral BirA-RS mutants in cells
Next, the proximity biotinylation ability of these three enzymes was validated in cells. MDM2 dramatically degrades the p53 protein in cells (Michael and Oren, 2003), so a CS mutant (MDM2(CS)) lacking E3 ubiquitin ligase activity was used for this assay. In addition, GFP (green fluorescent protein) was fused to MDM2 (GFP-MDM2(CS)) because BirA-fused p53 and MDM2 have very similar mobilities on SDS-PAGE. AGIA-BirA-p53 fusions were transiently expressed in HEK293T cells with or without GFP-MDM2(CS) and compared with or without biotin supplementation. GFVA-RS showed higher biotinylation of MDM2 than did the other enzymes under conditions without biotin supplementation (left panel in Figure 2-figure supplement 2). Furthermore, GFVA-RS also showed biotinylation of MDM2 under biotin supplementation conditions (right panel). Taken together, these results indicated that the ancestral BirA GFVA-RS is a good enzyme for analyzing protein-protein interactions both in vitro and in cells. Thus, we selected GFVA-RS and named it AirID (ancestral BirA for proximity-dependent biotin identification), in homage to BioID and TurboID.

Biochemical characterization of the AirID (GFVA-RS) enzyme
Before utilizing AirID for various applications, we assessed two activities that underlie proximity biotinylation: self-biotinylation and biotinoyl-5′-AMP production. It is known that the p53 protein forms homo-multimers (Friedman et al., 1993; Delphin et al., 1994). Each enzyme, alone or as the p53-fusion form, was used to investigate the two activities of BioID, TurboID, and AirID. BioID and TurboID showed self-biotinylation, with TurboID having the highest activity (Figure 2-figure supplement 3). AirID did not show this activity, indicating that AirID does not self-biotinylate.
As TurboID was selected by yeast-surface display screening for the highest self-biotinylation activity (Branon et al., 2018), its high self-biotinylation activity is reasonable. The lack of self-biotinylation activity in AirID may be caused by an intrinsic property of the enzyme or by a lack of accessible lysine residues on AirID. We next investigated the ability of the AirID enzyme to produce biotinoyl-5′-AMP. His-tagged TurboID, GFVA-RG, and AirID proteins were produced in an E. coli system and purified using nickel Sepharose beads. Highly purified enzymes were obtained (Figure 2-figure supplement 4A), and biotinylation of TurboID was found, indicating that TurboID biotinylated itself in E. coli cells. As shown in Figure 2-figure supplement 3, self-biotinylation of AirID was not observed. Furthermore, to investigate the biotinylation ability of AirID at the biotin ligation site (bls) of E. coli BirA, purified His-tag and bls fusion FLAG-GST protein (His-bls-FLAG-GST) was used as a substrate. AirID and GFVA-RG biotinylated His-bls-FLAG-GST (Figure 2-figure supplement 4B), but TurboID did not. Radio-isotope-labelled ATP ([α-32P]ATP) was used according to a previous report (Henke and Cronan, 2014) to detect biotinoyl-5′-AMP production by the enzymes.

Figure 2. Validation of PPI dependency of the newly designed BirA enzymes. (A) RS mutants of AVVA and GFVA were cloned, and biotinylation of FLAG-GST-MDM2 (FG-MDM2) by BirA-p53, including the RS mutants, was analyzed. The reaction was performed at 500 nM biotin at 26˚C for the described time. As a control, the expression levels of both BirA-p53 and MDM2 were detected using anti-AGIA and anti-FLAG antibodies, respectively. The band intensity of biotinylated MDM2 was quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters. (B) FG-RelA biotinylation by BirA-IκBα was examined. FG-MDM2 was used as the negative control. Biotinylation was performed at 500 nM biotin at 26˚C for 1 hr (TurboID), 3 hr (AVVA-RG, AVVA-RS, and GFVA-RS), or 16 hr (BioID). As a control, the expression levels of the BirA-fusion and substrate proteins were detected using anti-AGIA, anti-FLAG, and anti-GST antibodies. The band intensity of biotinylated RelA was quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters. (C) GFVA-RG and GFVA-RS expressed in E. coli were purified using Ni beads and mixed with His-bls-FLAG-GST, which was synthesized using a wheat cell-free system and purified using glutathione beads. The mixtures were incubated in a solution including [α-32P]ATP and biotin for 30 min at 37˚C. The resultant biotinoyl-5′-AMP, AMP, or unreacted ATP was separated using cellulose thin-layer chromatography. (D) GFP and either AirID-IκBα or TurboID-IκBα were transfected into HEK293T cells, and biotin was added to 5 μM for the described time period. After transfecting for 24 hr, cells were lysed in RIPA buffer including protease inhibitors, and biotinylated proteins were pulled down with streptavidin beads. As a control, the expression levels of the enzyme-fused protein and target proteins were detected using protein-specific antibodies (left panel). The band intensity of pulled-down GFP and tubulin was quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters. The online version of this article includes the following figure supplement(s) for figure 2.
The ATP concentration in this assay was very low (final 1 μM) because labelled ATP was used. AirID and GFVA-RG produced biotinoyl-5′-AMP, and this activity was decreased by supplementing with His-bls-FLAG-GST (Figure 2C). AMP increased at the same time, indicating that biotinoyl-5′-AMP is produced by these enzymes and that AMP is released after biotinylation of His-bls-FLAG-GST. Under these conditions, biotinoyl-5′-AMP formation was ranked AirID > GFVA-RG > TurboID (Figure 2-figure supplement 4C), suggesting that AirID forms biotinoyl-5′-AMP more efficiently than TurboID under low-ATP conditions.

Figure 3. Biochemical applications of AirID-dependent biotinylation on PPI. (A) Biotinylation of FG-MDM2 by AirID-p53 was carried out with or without nutlin-3, which inhibits the interaction between p53 and MDM2, at 500 nM biotin at 26˚C for 3 hr. Biotinylated MDM2 was detected using immunoblotting. As a control, expression levels of BirA-p53 and MDM2 were detected using anti-AGIA and anti-FLAG antibodies, respectively. (B) MDM2 biotinylation was detected using AlphaScreen with the reaction mixtures described for panel (A). Biotinylated FG-MDM2 interacts with both streptavidin donor beads and protein A acceptor beads to which the anti-FLAG antibody binds. The AlphaScreen results are shown in panel (C). (D) Pomalidomide-dependent biotinylation of FG-IKZF1 and FG-SALL4 by AirID-CRBN was analyzed. FG-IKZF1 or FG-SALL4 was biotinylated with or without pomalidomide at 500 nM biotin at 26˚C for 3 hr. As the negative control, the YW/AA mutant of AirID-CRBN, which does not bind pomalidomide, was used. As a control, expression of AirID-CRBN and IKZF1 or SALL4 was detected using anti-AGIA and anti-FLAG antibodies, respectively. The band intensity of biotinylated IKZF1 or SALL4 was quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters. (E) CRL4-CRBN complex proteins were biotinylated using AirID or AirID-CRBN. Biotinylated proteins were pulled down with streptavidin beads. As a control, the expression levels of AirID-CRBN and the complex component proteins were detected using anti-AGIA and anti-FLAG antibodies, respectively (right panel). The band intensity of biotinylated DDB1 was quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters. The online version of this article includes the following source data and figure supplement(s) for figure 3: Source data 1. AlphaScreen data used to generate Figure 3C.

Proximity biotinylation conditions of AirID in cells
We compared the optimal conditions for proximity biotinylation in cells between BioID, TurboID, and AirID. As a model of proximity biotinylation, AGIA-BirA-fused p53 and FG-MDM2(CS) were co-expressed in cells. Biotin concentration and biotinylation time were investigated as the variable conditions. Consequently, AirID and TurboID biotinylated MDM2 in cells at biotin concentrations higher than 0.5 μM within 3 hr and 1 hr, respectively (Figure 2-figure supplement 5A and B). Although TurboID-p53 dramatically increased biotinylation of high-molecular-weight products with long incubations of >6 hr and >5 μM biotin, AirID-p53 showed similar results from 3 to 24 hr and with 0.5-50 μM biotin supplementation in the culture medium. This indicates that AirID-fusion proteins can function under a wide variety of conditions.
Furthermore, PPI dependency was compared between AirID-IκBα and TurboID-IκBα. GFP and either AGIA-AirID-IκBα or AGIA-TurboID-IκBα were co-expressed in HEK293T cells. Next, biotinylation after 1, 3, 6, and 24 hr of incubation with 5 μM biotin supplementation was analyzed using a streptavidin pull-down assay. Each protein was detected using its specific antibody. As shown in the input samples (left panel in Figure 2D), both fusion enzymes were expressed at nearly the same level (IB: AGIA). TurboID showed much higher biotinylation in whole lysates than did AirID. In the pull-down assay, both enzymes biotinylated endogenous RelA at all time points. After 1 hr of incubation, biotinylation of co-expressed GFP was found for TurboID-IκBα, and continuous tubulin biotinylation was also observed after 3 hr (right panel in Figure 2D). No such biotinylation was found for AirID-IκBα, even though AirID was incubated with biotin for 24 hr. These results indicated that AirID has high PPI dependency.

Biochemical applications of AirID-dependent biotinylation in protein-protein interaction
We used AirID for various in vitro applications. It is widely known that the p53-MDM2 interaction is inhibited by nutlin-3 (Vassilev et al., 2004). Nutlin-3 was therefore used to investigate whether AirID can be used to validate an inhibitor of a PPI. Immunoblotting revealed that nutlin-3 inhibited FG-MDM2 biotinylation by AGIA-AirID-p53 (Figure 3A). As we had used AlphaScreen technology for drug screening of PPIs in previous reports (Uematsu et al., 2018; Nemoto et al., 2018; Nomura et al., 2019; Yamanaka et al., 2020), we used it here to detect drug-dependent PPI inhibition via biotinylation (Figure 3B). FG-MDM2 biotinylation by AGIA-AirID-p53 was also detected using AlphaScreen technology (0 μM in Figure 3C), and the signal was decreased by supplementing with nutlin-3 (>10 μM). These results indicate that AirID can detect the inhibition of a PPI by a drug. CRBN is involved in the Cullin-4 complex consisting of DDB1, RBX1, and CUL4 (Fischer et al., 2014). To investigate whether AirID detects proteins in a multi-protein complex, AirID-CRBN was mixed with the complex members. Biotinylation of DDB1, CUL4, and RBX1 by AGIA-AirID-CRBN was observed (Figure 3E), but biotinylation by AGIA-AirID alone was not, indicating that AirID can detect PPIs within a multi-protein complex. The Flowering locus T (FT) protein, known as the flowering hormone florigen in plants, induces the differentiation of flowering together with the Flowering locus D (FD) protein, which has a bZip DNA-binding domain (Abe et al., 2005). The FT-FD interaction in the floral meristem has been thought to be an important event for flowering development (Jaeger et al., 2006). To investigate whether AirID biotinylation can detect the FT-FD interaction, the Arabidopsis FT and FD genes were selected, and FT-AirID and AGIA-FD proteins were co-synthesized using the wheat cell-free system with 500 nM biotin. As a negative control, E. coli dihydrofolate reductase (DHFR) was synthesized with FT-AirID. Under these conditions, FT-AirID biotinylated AGIA-FD, but AGIA-DHFR biotinylation was not observed (Figure 3-figure supplement 2). This co-translational reaction was incubated for 16 hr at 16˚C with biotin, indicating that AirID-dependent biotinylation functions under co-translational conditions in the cell-free system. Taken together, these results indicate that the AirID enzyme is useful for the biochemical analysis of PPIs.
Cellular localization of AirID and AirID-p53
We next analyzed the cellular localization of AirID and cellular biotinylation by AirID. The p53 protein is known to localize mainly to the nucleus (Shaulsky et al., 1990; Rotter et al., 1983). AGIA-AirID alone or AGIA-AirID-p53 was transiently expressed in HEK293T cells. Fluorescent streptavidin staining was found throughout the cell when AirID-expressing cells were supplemented with biotin (50 μM in Figure 4A). In AirID-p53-expressing cells, the fluorescence was mainly observed in the nucleus, and it was at the same level for cells exposed to either 5 μM or 50 μM biotin (Figure 4A). Cytosolic and nuclear fractions were isolated to confirm the cellular localization. These fractionations indicated that AirID and AirID-p53 were mainly found in the cytosol and nucleus, respectively (Figure 4B). This suggests that the localization of an AirID-fusion protein depends on the features of the fused protein.

Functions of proteins biotinylated by AirID in cells
We investigated whether proteins biotinylated by AirID retain their native functions. Because MDM2 is known to induce p53 degradation via the ubiquitin-proteasome system in cells (Michael and Oren, 2003) and AirID-p53 showed both self- and MDM2 biotinylation (Figure 2A), AGIA-AirID-p53 and GFP-MDM2 were transiently co-expressed in cells. Treatment with the proteasome inhibitor MG132 blocked AirID-p53 degradation, whereas AirID-p53 was strongly degraded without the treatment (Figure 4C). The inactive MDM2 form, FG-MDM2(CS), did not promote AirID-p53 degradation (Figure 2A), indicating that AirID-p53 degradation is carried out by GFP-MDM2. In addition, AirID-p53 was biotinylated under MG132 treatment and biotin supplementation conditions. These results indicated that biotinylated MDM2 works as an E3 ligase for biotinylated AirID-p53. RelA was selected to investigate the transactivation activity of a biotinylated transcription factor because RelA has transactivation activity on the NF-κB promoter (Ganchi et al., 1992). Two expression plasmids, AGIA-AirID-RelA and AGIA-RelA, were constructed. Each plasmid was transiently transfected into HEK293 cells together with an NF-κB promoter-luciferase plasmid. Biotin supplementation induced biotinylation of AGIA-AirID-RelA, but AGIA-RelA biotinylation was not found (Figure 4D). The luciferase assay revealed that the transactivation activity was nearly the same for AGIA-RelA, AGIA-AirID-RelA, and biotinylated AGIA-AirID-RelA, indicating that biotinylated RelA functions as a normal transcription factor.

AirID effects on cell viability
TurboID almost completely inhibits HEK293T cell growth under 50 μM biotin supplementation conditions (Branon et al., 2018). HEK293T cells stably expressing AirID or AirID-IκBα were constructed using a lentivirus system to investigate whether AirID affects HEK293T cell viability. In the stable cells expressing AirID-IκBα, RelA biotinylation was clearly found upon supplementation with 50 μM biotin for 6 hr; it was not found in cells stably expressing AirID alone (Figure 4-figure supplement 1). The growth of both cell types was not inhibited by 50 μM biotin supplementation (Figure 4E), indicating that AirID does not affect cell viability under these conditions. It has been demonstrated that TurboID shows cytotoxicity within 48 hr with 50 μM biotin (Branon et al., 2018).
Therefore, AirID- or TurboID-expressing cells were cultured with or without biotin for 48 hr, and their viability was then analyzed. Compared with the control (mock), the viability of TurboID-expressing cells was significantly decreased with 50 μM biotin, whereas the viability of AirID-expressing cells was not significantly affected (Figure 4-figure supplement 2).

Figure 4 legend (continued). For immunostaining, HEK293 cells overexpressing AGIA-AirID-p53 were supplemented with the described biotin concentration for 3 hr. The cells were fixed, probed using an anti-AGIA antibody, and visualized using anti-rabbit IgG antibody-Alexa Fluor 555 and streptavidin-Alexa Fluor 488. AGIA-tagged AirID or AirID-p53 was transfected into HEK293T cells for the fractionation assay. The next day, biotin was added to 5 μM or 50 μM for the described time. Cytoplasmic and nuclear proteins were fractionated using a ProteoExtract Subcellular Proteome Extraction kit (Merck). As a control, expression of AirID or AirID-p53 was detected using an anti-AGIA antibody. (C) AGIA-tagged AirID-p53 was co-transfected with or without GFP-MDM2 into HEK293T cells. Biotin was added to a concentration of 50 μM at the same time. After 6 hr, DMSO or MG132 was added to a concentration of 10 μM. As a control, expression of MDM2 was detected using an anti-GFP antibody. The band intensity of AirID-p53 was quantified with ImageJ software. The index intensity (value 1.0) is shown in red characters. (D) qRT-PCR using AirID-IκBα. AGIA-tagged AirID or AirID-IκBα was stably expressed using lentivirus in HEK293T cells. Cells were seeded in a 96-well plate, and biotin was added at the same time. The next day, cells were stimulated using TNFα (20 ng/mL) for 0, 0.5, or 1 hr, and the mRNA level of TNFα was analyzed by qRT-PCR. Mean ± S.D. (n = 3). *, p<0.05. (E) Viability of AirID-expressing cells. AGIA-tagged AirID or AirID-IκBα was stably expressed using lentivirus in HEK293T cells. Cells were seeded in 96-well plates, and biotin was added the next day. The MTS assay was performed 0, 1, 2, or 3 days after adding biotin to measure cell viability. The online version of this article includes the following source data and figure supplement(s) for figure 4: Source data 1. Cell growth analysis data relating to Figure 4E. Source data 2. qRT-PCR data related to Figure 4D.

Biotinylation of endogenous proteins by AirID in cells
We investigated whether AirID could biotinylate endogenous proteins in cells in an interaction-dependent manner. Streptavidin-conjugated beads were used as in a previous report to recover biotinylated endogenous proteins from cell lysates (Van Itallie et al., 2013). AGIA-AirID-IκBα was transiently expressed in HEK293T cells under different biotin concentration conditions. A streptavidin pull-down assay of the cell lysates was carried out, and the biotinylated endogenous proteins were detected using immunoblotting with each specific antibody. Endogenous RelA protein was biotinylated even without supplemental biotin when AGIA-AirID-IκBα was transiently expressed in cells (IB: RelA, 0 μM biotin in Figure 5A). Biotin supplementation enhanced biotinylation of p50 and p105, which are known IκBα interactors (IB: p50/p105 and p50, 5 μM or 10 μM biotin). These biotinylations were not found for AGIA-AirID alone. As the IκBα protein interacts with RelA, this result suggested that biotinylated AGIA-AirID-IκBα might also recover endogenous RelA that is itself not biotinylated.
To confirm this, immunoprecipitation using a specific antibody recognizing endogenous RelA was performed under stringent conditions, after the proteins had been denatured with 1% SDS. Biotinylation of the endogenous RelA recovered by immunoprecipitation was indeed observed (Figure 5B). Using the same lysates, the streptavidin pull-down assay recovered RelA protein, indicating that RelA biotinylation depends on AGIA-AirID-IκBα. These results demonstrated IκBα-interaction-dependent biotinylation by AirID. As in vitro biotinylation of the CRL4-CRBN complex by AirID-CRBN had been observed (Figure 3E), we investigated whether proteins in the CRL4 complex are biotinylated by AirID-fused CRBN in cells. AGIA-AirID-CRBN was transiently expressed in HEK293T cells, and cell lysates were pulled down using streptavidin beads. Biotinylation of CUL4 and RBX1 was found after supplementing with 5 μM biotin (Figure 5C), but DDB1 biotinylation was not. Taken together, these results indicate that the biotinylation assay using an AirID-fusion target is a useful tool for analyzing PPIs in the cell.

Mass spectrometry analysis of AirID-IκBα-dependent biotinylated proteins in cells
Since BioID has been widely used with mass spectrometry (MS) analysis to identify PPIs in cells (Ikeda and Freeman, 2019), we also analyzed biotinylated proteins using LC-MS/MS in the cells stably expressing AirID alone or AirID-IκBα that were used in Figure 4E. The flowchart for the analysis of biotinylated peptides is shown in Figure 6A. The cells were treated with 50 μM biotin for 6 hr. After cell lysis, proteins were digested using trypsin. Biotinylated peptides were captured using Tamavidin2-Rev, which binds biotin-labelled substances and releases them under high concentrations of free biotin (Takakura et al., 2013). The biotinylated peptides were eluted using 2 mM biotin, and the eluted peptides were analyzed using LC-MS/MS. Biotinylation by free biotinoyl-5′-AMP occurs on lysine (Lys) residues of proteins (Choi-Rhee et al., 2004). Trypsin cleaves after Lys or arginine (Arg), but it cannot cleave a modified Lys, such as a biotinylated Lys (Bheda et al., 2012). These features mean that an eluted biotinylated peptide carries a single biotin, so direct determination of a biotinylated peptide pinpoints the biotinylation site on that peptide. Using this method, we found 12 biotinylated peptides that were present in AirID-IκBα-expressing cells at levels more than five times higher than those in cells expressing only AirID (Figure 6B and C, Figure 6-source data 1). Among the top five peptides, three biotinylated peptides were derived from the RelA protein (Figure 6C), indicating that AirID-IκBα could accurately biotinylate its major partner, RelA, in the cells. Furthermore, we investigated whether AirID-dependent biotinylation occurs in a specific sequence context. Comparison of the amino-acid sequences of the top 20 biotinylated peptides showed no similarity except for a single Lys residue (Figure 6-figure supplement 1), suggesting that proximate biotinylation by AirID happens on a Lys residue but has no preferred sequence. In Figure 5A, the streptavidin pull-down clearly showed biotinylation of the endogenous RelA protein in cells transiently expressing AirID-IκBα.
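The site-level readout described above rests on one rule: trypsin cleaves after Lys or Arg but not at a biotinylated Lys. A minimal sketch of that rule is shown below; the input format (explicit 0-based positions of modified Lys) is an assumption for illustration, and real digestion details such as proline suppression and missed cleavages are ignored.

```python
# Minimal sketch of the in silico digestion rule described above:
# trypsin cleaves after Lys (K) or Arg (R), but not after a biotinylated Lys,
# so each biotinylated peptide retains exactly one modified residue.
# Simplification: proline suppression and missed cleavages are ignored.

def tryptic_peptides(seq: str, biotinylated: set[int]) -> list[str]:
    """Digest seq after K/R, skipping cleavage at biotinylated Lys (0-based)."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa == "R" or (aa == "K" and i not in biotinylated):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

# The peptide around the modified K (index 2) spans what would otherwise be
# two separate tryptic fragments ("MAK" and "SVK"):
print(tryptic_peptides("MAKSVKDLR", biotinylated={2}))  # ['MAKSVK', 'DLR']
```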
We next assessed whether AirID-IκBα-dependent biotinylation of RelA could also be detected using LC-MS/MS in transiently expressing cells. A flowchart for the analysis of biotinylated peptides using transiently expressing cells is shown in Figure 6D. As expected, the top biotinylated peptide was derived from RelA (Figure 6E), as in the stably expressing cells. Taken together, these results suggest that detection of AirID-dependent biotinylation using LC-MS/MS is useful for PPI analysis in cells.

Figure 5. Streptavidin pull-down assay for endogenous interactions using AirID-dependent biotinylation in cells. (A) Biotin was added to HEK293T cells expressing AirID or AirID-IκBα to concentrations of 0, 5, or 50 μM before incubating for 3 hr. Cells were lysed before pulling down with streptavidin beads. Pulled-down proteins were detected using immunoblotting with the described antibodies. (B) AirID- or AirID-IκBα-expressing HEK293T cells were supplemented with 5 μM biotin and incubated for 3 hr. Cells were lysed and pulled down with streptavidin beads or immunoprecipitated with anti-RelA. Normal rabbit IgG was also used as a negative immunoprecipitation control. Pulled-down or immunoprecipitated proteins were detected using immunoblotting with the described antibodies. (C) Biotinylation of the CRL4-CRBN complex was performed using AirID-CRBN. AirID- or AirID-CRBN-expressing HEK293T cells were incubated with or without 5 μM biotin for 3 hr. Cells were lysed and pulled down with streptavidin. CUL4, DDB1, and RBX1 were detected using immunoblotting with each antibody.

Figure 6. Mass spectrometry analysis of biotinylated proteins in AirID-IκBα-expressing cells. (A) Schematic for detecting biotinylated proteins using cells stably expressing AirID. HEK293T cells stably expressing AGIA-AirID or AGIA-AirID-IκBα were cultured in DMEM containing 50 μM biotin for 6 hr before collection (n = 3). Collected cells were lysed, and proteins were digested in solution using trypsin. Biotinylated peptides were captured from the digested peptides using Tamavidin2-Rev beads (Wako), from which biotinylated samples can be eluted using 2 mM biotin. Eluted peptides were detected using LC-MS/MS. (B) A volcano plot of AirID-IκBα versus AirID against the p-value of triplicate experiments. (C) A list of peptides increased by more than 5-fold. (D) Schematic for detecting biotinylated proteins using cells transiently expressing AirID. HEK293T cells transiently expressing AGIA-AirID or AGIA-AirID-IκBα were cultured in DMEM containing 5 μM biotin for 3 hr before collection (n = 1). Biotinylated proteins were detected using a similar method. (E) A list of the top ten peptides increased by AirID-IκBα. The online version of this article includes the following source data and figure supplement(s) for figure 6: Source data 1. Mass spectrometry data related to Figure 6B.

Discussion
Here, we used an ancestral enzyme reconstruction algorithm with a large genome dataset and investigated five ancestral BirA enzymes. Finally, we combined biochemical experiments and the RS mutation to create AirID, which shows high PPI-dependent proximity biotinylation. Classical evolutionary protein engineering used random mutations to improve activity (Branon et al., 2018); the resulting sequence similarity is extremely high because random mutations cannot produce large sequence changes. By contrast, the sequence similarity between E. coli BirA and the ancestral BirAs ranged from 40% to 80%, indicating that a computational approach using large genome datasets can redesign enzyme sequences far more extensively.
As another aspect of this approach, the BirA active region (115-GRGRRG-121) (Kwon and Beckett, 2000) was conserved, and the RG and RS mutations were introduced into the ancestral BirA enzymes (Figures 1 and 2). In the present state of computational protein evolution, dynamic changes to the backbone regions of an enzyme with a conserved active pocket appear to be acceptable; further accumulation of knowledge about enzyme function would be required before the active region itself could be changed dynamically. When we compared BioID (BirA*), TurboID, and AirID, the proximity biotinylation activity of BioID (BirA*) was considerably lower than that of TurboID and AirID (Figure 2A and Figure 2-figure supplement 1). By contrast, TurboID showed the highest proximity biotinylation activity in vitro and in cells (Figure 2A and Figure 2-figure supplement 1), and this enzyme could be used for biotinylation within one hour (Branon et al., 2018). However, the high activity of TurboID produced extra biotinylation of unexpected proteins, such as GFP or tubulin, in cells treated with long incubations of more than six hours and higher biotin concentrations (such as 50 μM biotin) (Figure 2D). In the first report describing TurboID, it was used as a biotin-labelling enzyme rather than as a proximity biotinylation enzyme for PPI analysis (Branon et al., 2018). If used to analyze PPIs, TurboID would show the best performance under limiting conditions, such as a short treatment (1 hr) in cells. In the case of AirID, GFP and tubulin biotinylation was not observed under the same conditions as those used for TurboID (Figure 2D). Streptavidin pull-down assays and LC-MS/MS analysis also indicated that AirID-fusion proteins accurately biotinylated well-known interactors in both transiently and stably expressing cells (Figures 5 and 6). The formation of biotinoyl-5′-AMP was greater for AirID than for TurboID at a low ATP concentration (1 μM) (Figure 2-figure supplement 4C), and AirID prefers lower concentrations of biotin (5 μM biotin or no biotin supplementation) (Figure 2-figure supplement 2). In addition, analysis of biotinylation sites from LC-MS/MS showed that AirID biotinylation occurs on a proximate Lys residue with no special sequence preference (Figure 6-figure supplement 1). Taken together, AirID is expected to enhance the accuracy of PPI-dependent biotinylation, suggesting that AirID is suitable for PPI analysis in cells. Inhibition of the MDM2-p53 interaction by nutlin-3 was detected using AirID biotinylation (Figure 3A and C), and several pomalidomide-dependent interactions between CRBN and neosubstrates were also detected by AirID biotinylation (Figure 3D). These results indicate that AirID-dependent biotinylation would be useful for PPI analysis involving chemical compounds. Furthermore, in vivo proximity biotinylation using BioID has been performed in many studies, because the identification of in vivo partner proteins of a target protein is key to understanding biological functions (Odeh et al., 2018; Motani and Kosako, 2018), and it has uncovered new PPIs. Stable expression of AirID-IκBα did not induce cell-growth inhibition even under biotin-supplementation conditions (Figure 4E), suggesting that AirID-fusion protein expression has very low toxicity. Therefore, AirID could also be used for in vivo screening for protein interactors of a target protein. In conclusion, AirID is a novel enzyme providing proximity biotinylation for PPI analysis.
Reconstruction of the five ancestral BirAs
The BirA homologous sequences, classified into four groups using the key residues, were aligned using MAFFT software (Katoh et al., 2002). Each group of aligned sequences was analyzed using MEGA6 software, and the phylogenetic tree was generated using the maximum likelihood method (Tamura et al., 2013). The aligned sequences and phylogenetic tree data were submitted to FastML (Ashkenazy et al., 2012), and the JTT empirical model was adopted for the analysis. Finally, we obtained four ancestral BirA forms named AVVA, AFVA, AHLA, and GFVA. Furthermore, we applied an identical procedure to the three designed sequences (AVVA, AFVA, and GFVA) to design another ancestral BirA called 'all'. All five designed sequences are shown in Supplementary file 1.
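For orientation, a minimal sketch of the alignment step is shown below. It assumes the mafft binary is installed and on the PATH and uses illustrative file names; the downstream tree building (MEGA6) and ancestral inference (FastML) were performed in their own tools and are not reproduced here.

```python
# Minimal sketch: align each key-residue group with MAFFT before tree building.
# Assumes the `mafft` binary is installed and on PATH; file names are
# illustrative. Downstream steps (MEGA6 tree, FastML ancestral inference)
# were performed in their respective tools.
import subprocess

def align_group(in_fasta: str, out_fasta: str) -> None:
    """Run MAFFT with automatic strategy selection; MAFFT writes to stdout."""
    with open(out_fasta, "w") as out:
        subprocess.run(["mafft", "--auto", in_fasta], stdout=out, check=True)

for group in ("AVVA", "AFVA", "AHLA", "GFVA"):
    align_group(f"{group}_homologs.fasta", f"{group}_aligned.fasta")
```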
Cell lines
HEK293T cells (purchased from RIKEN BRC, Tsukuba, Japan; catalog number RCB2202) were incubated at 37˚C and 5% CO2 in Dulbecco's Modified Eagle Medium (DMEM) (Wako) supplemented with 10% fetal bovine serum (Biosera) and antibiotics (100 units/mL penicillin and 100 μg/mL streptomycin) (Thermo). We confirmed that the cell line was free of mycoplasma contamination. Lentiviruses expressing AGIA-AirID and AGIA-AirID-IκBα were generated by transfection using PEI MAX - Transfection Grade Linear Polyethylenimine Hydrochloride (Polyscience). After transduction of the transgene, a pool of HEK293T cells resistant to Blasticidin S (10 μg/mL) (Invitrogen) was generated and used in subsequent experiments.

Cell-free protein synthesis and GST-tag purification
In vitro transcription and wheat cell-free protein synthesis were performed using the WEPRO1240 expression kit (Cell-Free Sciences). A transcript was made from each of the DNA templates mentioned above using SP6 RNA polymerase, and the translation reaction was performed using the WEPRO1240 expression kit (Cell-Free Sciences). For biotin labelling, 1 μL of BirA or of the ancestral BirAs produced by the wheat cell-free expression system was added to the bottom layer, and 500 nM (final concentration) D-biotin (Nacalai Tesque) was added to both the upper and bottom layers, as described previously (Sawasaki et al., 2008). Aliquots were used for expression analysis and functional characterization. One milliliter of synthesized His-bls-FLAG-GST was mixed with Glutathione Sepharose 4B (GE Healthcare) and rotated for 3 hr at 4˚C. The mixture was washed with PBS, and proteins were eluted in 100 μL fractions with elution buffer (50 mM Tris-HCl [pH 8.0], 10 mM reduced glutathione). Protein was subjected to SDS-PAGE and CBB staining to determine purity.

BirA enzyme preparation from E. coli
To purify the TurboID, GFVA-R118G, and AirID proteins, the genes encoding them were inserted into pET30a and transformed into E. coli strain BL21. The E. coli cells were grown at 37˚C in LB medium to an OD600 of 0.6 and induced by adding IPTG to 1 mM for an additional 6 hr at 37˚C. Cells were centrifuged and resuspended in lysis buffer (20 mM sodium phosphate, 300 mM NaCl, 10 mM imidazole). The cells were lysed by sonication, and the lysates were centrifuged. The supernatants were added to Ni Sepharose High Performance (GE Healthcare) and incubated for 3 hr at 4˚C. The mixture was washed with three column volumes of wash buffer (20 mM sodium phosphate, 300 mM NaCl, 50 mM imidazole). Proteins were eluted in 500 μL fractions with elution buffer (20 mM sodium phosphate, 300 mM NaCl, 500 mM imidazole). Fractions were dialyzed against PBS, and the proteins were subjected to SDS-PAGE and CBB staining to determine purity.

Cell transfection and immunoblotting
HEK293T cells were transfected with various plasmids using PEI MAX (Polyscience). Immunoblotting was performed according to standard protocols. Briefly, proteins in whole-cell lysates were separated using SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a PVDF membrane using semi-dry blotting. After blocking with 5% milk/TBST or Blocking One (Nacalai Tesque), the membrane was incubated with the appropriate primary antibodies followed by a horseradish peroxidase (HRP)-conjugated secondary antibody.

Biotinylation assays
In vitro biotinylation assays were performed as follows. Briefly, 5 μL of each synthesized protein was mixed and incubated at 26˚C for 1 hr. Biotin was added, and the biotinylation reaction was performed in a total volume of 15 μL. After the reaction, biotinylated proteins were analyzed using SDS-PAGE and immunoblotting. In-cell biotinylation assays were also performed. Briefly, each BirA or BirA-fused gene and a substrate gene were transfected into HEK293T cells. Biotin was added at the same time or at the indicated times, and cells were lysed using SDS sample buffer (125 mM Tris-HCl [pH 6.8], 4% SDS, 20% glycerol, 0.01% BPB, 10% 2-mercaptoethanol) 24 hr after transfection. Whole-cell lysates were analyzed using SDS-PAGE and immunoblotting.

In vitro inhibition assays using AlphaScreen technology
Synthesized FG-MDM2 and nutlin-3 were mixed and incubated for 30 min at 26˚C. AGIA-AirID-p53 was added to the mixture and incubated for 1 hr at 26˚C. Biotin was then added to the reaction mixture to 500 nM, followed by incubation for 3 hr at 26˚C. Inhibition was examined using the AlphaScreen IgG (Protein A) detection kit (PerkinElmer) and immunoblotting. Briefly, for AlphaScreen, 10 μL of detection mixture containing 100 mM Tris-HCl (pH 8.0), 0.1% Tween 20, 100 mM NaCl, 10 ng anti-FLAG antibody (Sigma), 1 mg/mL BSA, 0.1 μL streptavidin-coated donor beads, and 0.1 μL protein A-conjugated acceptor beads was added to each well of a 384-well OptiPlate before incubation at 26˚C for 1 hr. Luminescence was detected using the AlphaScreen detection program with an EnVision device (PerkinElmer). For immunoblotting, solutions were boiled in SDS sample buffer, and the boiled solutions were analyzed using SDS-PAGE and immunoblotting.

Immunoprecipitation
After biotinylation, cells were lysed with RIPA buffer and sonication for immunoprecipitation. Lysates were centrifuged, and SDS was added to the supernatants to denature the proteins; the solutions were then diluted 10-fold. After 2 μg of the indicated antibodies was bound to either protein A or protein G Dynabeads (Thermo Fisher Scientific) for 30 min at room temperature, the beads were incubated with the diluted cell lysates overnight at 4˚C. The immunocomplexes were washed three times with PBS and boiled in SDS sample buffer. The boiled solutions were analyzed using SDS-PAGE and immunoblotting.

Cell viability assays
Cells were seeded into 96-well plates at a density of 0.25 × 10^4 cells/well and treated with 50 μM biotin after 24 hr. Cell viability was determined using the MTS assay with a CellTiter 96 AQueous One Solution Cell Proliferation Assay kit (Promega). In brief, 20 μL of the MTS reagent was added into each well, and the cells were incubated at 37˚C for 1 hr. The absorbance was detected at 490 nm (reference: 650 nm) with a microplate reader (SpectraMax M3 Multi-Mode Microplate Reader; Molecular Devices).
Cells were seeded into 96-well plates at a density of 0.25 × 10^4 cells/well and transfected after 24 hr. After 2 days, cell viability was determined using the CellTiter-Glo Luminescent Cell Viability Assay system (Promega). In brief, the CellTiter-Glo reagent was added into each well, and the cells were incubated at room temperature for 10 min. The luminescence was detected with a microplate reader (GloMax Discover Microplate Reader).

Fractionation assay
HEK293T cells were seeded onto a 24-well plate. The next day, cells were transfected, and biotin was added at the same time or at the indicated times. Subcellular fractionation was performed 24 hr after transfection using a ProteoExtract Subcellular Proteome Extraction kit (Merck) according to the protocol.

Immunofluorescent staining
Cells were fixed with 4% paraformaldehyde in phosphate-buffered saline (PBS) for 15 min at room temperature before permeabilizing with 0.5% Triton X-100 in PBS for 15 min. After blocking with 0.5% CS in TBST for 1 hr, cells were incubated with a primary antibody overnight at 4˚C. After washing with TBST, cells were incubated with the appropriate Alexa Fluor 488- and/or 555-conjugated secondary antibody and streptavidin for 1 hr at room temperature. Nuclei were counterstained with 4′,6-diamidino-2-phenylindole. After washing with TBST, coverslips were mounted with anti-fade reagent.

Mass spectrometry analysis of biotinylated peptides
The proximity-dependent biotin identification method using AirID was performed according to a previous report (Kim et al., 2016). Briefly, confluent HEK293T cells stably expressing AirID or AirID-IκBα fused at the N-terminus with an AGIA tag in a 6 cm dish were incubated with 50 μM biotin for 6 hr before harvesting in ice-cold PBS. Cell pellets were lysed and digested with trypsin. The digested peptides were incubated with Tamavidin2-Rev magnetic beads (FUJIFILM) before eluting with 2 mM biotin. Detailed procedures will be described elsewhere (Motani K and Kosako H, in preparation). LC-MS/MS analysis of the resulting peptides was performed on an EASY-nLC 1200 UHPLC connected to a Q Exactive Plus mass spectrometer through a nanoelectrospray ion source (Thermo Fisher Scientific). The peptides were separated on a 75-μm inner diameter × 150 mm C18 reverse-phase column (Nikkyo Technos) with a linear gradient from 4-28% acetonitrile over 0-40 min, followed by an increase to 80% acetonitrile during 40-50 min. The mass spectrometer was operated in data-dependent acquisition mode with a top-10 MS/MS method. MS1 spectra were measured with a resolution of 70,000, an AGC target of 1 × 10^6, and a mass range from 350 to 1500 m/z. HCD MS/MS spectra were acquired at a resolution of 17,500, an AGC target of 5 × 10^4, an isolation window of 2.0 m/z, a maximum injection time of 60 ms, and a normalized collision energy of 27. Dynamic exclusion was set to 10 s. Raw data were directly analyzed against the SwissProt database restricted to Homo sapiens using Proteome Discoverer version 2.3 (Thermo Fisher Scientific) for identification and label-free precursor ion quantification. The search parameters were as follows: (a) trypsin as an enzyme with up to two missed cleavages; (b) precursor mass tolerance of 10 ppm; (c) fragment mass tolerance of 0.02 Da; (d) carbamidomethylation of cysteine as a fixed modification; and (e) protein N-terminal acetylation, methionine oxidation, and lysine biotinylation as variable modifications. Peptides were filtered at a false-discovery rate of 1% using the Percolator node. Normalization was performed such that the total sum of abundance values for each sample over all peptides was the same.
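A minimal sketch of this total-sum normalization, assuming a peptide-by-sample abundance table in pandas; column and row names are illustrative, not taken from the actual dataset.

```python
# Minimal sketch of total-sum normalization as described in the Methods:
# scale each sample so the summed peptide abundances are equal across samples.
import pandas as pd

# rows = peptides, columns = samples (illustrative values and labels)
abundance = pd.DataFrame(
    {"AirID_rep1": [100.0, 50.0, 25.0], "AirID_IkBa_rep1": [300.0, 30.0, 20.0]},
    index=["pep1", "pep2", "pep3"],
)

totals = abundance.sum(axis=0)              # per-sample total abundance
target = totals.mean()                      # common total to scale to
normalized = abundance * (target / totals)  # broadcast per column

# Every sample now sums to the same total.
assert (normalized.sum(axis=0).round(6) == round(target, 6)).all()
print(normalized)
```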
Statistical analysis
Significant changes were analyzed using one-way or two-way ANOVA followed by Tukey's post-hoc test in GraphPad Prism 8 software (GraphPad, Inc.). For all tests, a p-value of less than 0.05 was considered statistically significant.

Supplementary file 1. Amino acid and nucleic acid sequences of ancestral BirAs. Amino acid and nucleic acid sequences for the ancestral BirAs designed in this report (AVVA, AFVA, AHLA, GFVA, and 'all').

Data availability
All data generated or analysed during this study are included in the manuscript and supporting files. Source data files have been provided for Figures 3, 4, and 6.
The Role of Blockchain in Medical Data Sharing

As medical technology advances, there is an increasing need for healthcare providers all over the world to securely share a growing volume of data. Blockchain is a powerful technology that allows multiple parties to securely access and share data. Given the enormous challenge that healthcare systems face in digitizing and sharing health records, it is not unexpected that many are attempting to improve healthcare processes by utilizing blockchain technology. By systematically examining articles published from 2017 to 2022, this review addresses the existing gap by methodically discussing the state, research trends, and challenges of blockchain in medical data exchange. The number of articles on this issue has increased, reflecting the growing importance of and interest in blockchain research for medical data exchange. Recent blockchain-based advances in medical data sharing include safe healthcare management systems, health data architectures, smart contract frameworks, and encryption approaches. The review examines medical data encryption, blockchain networks, and how the Internet of Things (IoT) improves hospital workflows. The findings show that blockchain can improve patient care and healthcare services by enabling secure data sharing.

Introduction
Everyone may benefit from healthcare data. It maintains a record of our bodily characteristics and is essential for the treatment and diagnosis of disorders [1]. With the fast growth of artificial intelligence (AI), health records have become a tremendous asset: they may assist in the development of AI diagnostic models and aid in the diagnostic process. Even though the recording of medical data has moved from paper records to electronic medical records (EMRs), which are more convenient for data storage and access, more attention needs to be paid to data privacy [2]. Several institutions and hospitals have curtailed data transmission and exchange to prevent data privacy breaches, which has resulted in the establishment of data silos, as medical information is dispersed among numerous healthcare institutions [3].
The sharing of medical data provides numerous benefits to diverse stakeholders. Sharing data among clinical organizations, hospitals, and healthcare providers, for example, improves patient care coordination by providing comprehensive medical histories, allowing for more informed decisions and preventing unnecessary testing [4]. In emergencies, data sharing between hospitals and emergency medical services can expedite care, as immediate access to vital patient information enables responders to administer appropriate treatment, thereby reducing delays and enhancing patient outcomes [5]. The integration of medical data sharing with smart homes and Internet of Things (IoT) devices enables remote patient monitoring, which is advantageous for patients with chronic conditions and enables proactive healthcare delivery [6]. In addition, collaborative data sharing between institutions promotes medical research and scientific discoveries, facilitating the identification of patterns and risk factors and the development of new treatments. Bitcoin was the first to use blockchain technology; however, introducing a coin is not required to utilize blockchain and develop decentralized apps [26]. This section explains the principles of blockchain technology to facilitate comprehension of the remainder of this article. To aid the reader in understanding the blockchain idea, its fundamental properties and building blocks, and their importance in healthcare, will be detailed in the following sections.

Blockchain
A blockchain may be described as a sequence of time-stamped and cryptographically connected blocks. These blocks are permanently and securely sealed [27]. Each new block added to the end of the chain contains a reference to the content of the preceding block [28]. The shareholders, known as the blockchain's nodes, are arranged in a peer-to-peer (P2P) network. Each node in the network has two keys [29]: a private key used for decrypting messages, allowing the node to read them, and a public key used for encrypting messages transmitted to the node. Hence, public-key encryption is employed to ensure the non-repudiation, irreversibility, and consistency of a blockchain [25]. Messages encrypted with a public key can only be decrypted with the matching private key; this concept is termed asymmetric cryptography. While a comprehensive explanation is beyond the scope of this study, more information may be found in [29]. The so-called hash, created using a cryptographic one-way hash function, is used to connect every block on the blockchain; it also ensures the block's compactness, anonymity, and immutability [30].
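To make the hash-linking concrete, the following is a minimal, self-contained sketch of a chain of blocks in which each block stores the hash of its predecessor; the block layout is illustrative and does not correspond to any real blockchain protocol.

```python
# Minimal sketch of the hash-linking described above: each block stores the
# hash of the previous block, so altering any block breaks every later link.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev: dict | None, data: str) -> dict:
    return {
        "index": 0 if prev is None else prev["index"] + 1,
        "timestamp": time.time(),
        "data": data,
        "prev_hash": "0" * 64 if prev is None else block_hash(prev),
    }

genesis = new_block(None, "genesis")
b1 = new_block(genesis, "record A shared with hospital X")
b2 = new_block(b1, "record B shared with lab Y")

# Verify the chain: recomputing each parent hash must match the stored link.
assert b1["prev_hash"] == block_hash(genesis)
assert b2["prev_hash"] == block_hash(b1)
genesis["data"] = "tampered"  # any edit invalidates the chain
assert b1["prev_hash"] != block_hash(genesis)
```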
This leads us to the significance of network nodes. Because the blockchain system is a P2P network, a node may be considered a peer when it begins to connect and interact with other nodes in the network; hence, peer node is the correct term. A full node is, in layman's terms, any computer that has the main blockchain client installed and runs a complete copy of the whole blockchain ledger [25]. A user who wishes to interact with the blockchain connects to the network through a node [31]. Miners are a subset of nodes, since each miner also needs to run a fully functional node; each miner is thus a node, but not every node is a miner. This arrangement is characteristic of public blockchains using the proof-of-work (PoW) consensus algorithm. Some forms of blockchain networks using different distributed consensus mechanisms, such as proof-of-stake (PoS), do not need mining [32].

Depending on the level of involvement [33], blockchains may be classified into consortium, private, and public chains. As its name suggests, a public chain is entirely public and open to anybody. Due to the immutability of the data on the chain, public chains are regarded as entirely decentralized. Participation in a consortium chain is restricted to authorized members, and the write/read rights and participation accounting permissions on the blockchain are established by the alliance's norms. A private chain is exclusive to a private organization, and the write and read rights on the blockchain, as well as the permissions to participate in accounting, are established by the norms of that organization; the participating nodes are restricted [34].

Smart Contracts
Computer protocols known as "smart contracts" allow for the informational distribution, validation, and enforcement of contracts [35]. Smart contracts do not need the verification of a third party, and successful transactions are irrevocable and traceable. Computer software is used to create a legally binding contract that can be executed automatically. A smart contract is a program placed on the blockchain that guarantees the safety and security of transactions in the absence of third-party monitoring [36]. The process of smart contracts is shown in Figure 1. In the smart contract code, predefined response rules and trigger situations are encoded, triggering specific actions automatically when predetermined conditions are met. This eliminates the need for intermediaries and improves the transparency, security, and efficiency of the contract execution process. When trigger situations, such as specific dates, events, or conditions, occur, the smart contract implements predefined actions, such as transferring ownership, releasing funds, and updating records. By incorporating blockchain technology into smart contracts, participants gain increased trust, lower costs, and reduced fraud risks. Combining blockchain technology and smart contracts streamlines processes, optimizes contract administration, and provides a secure and transparent solution for a variety of industries.
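As a toy illustration of the trigger-condition/action pattern described above, the sketch below models a contract as a list of (trigger, action) pairs evaluated against incoming events. It is a conceptual Python sketch only, not code for a real smart-contract platform such as Ethereum/Solidity, and all event fields are invented for illustration.

```python
# Conceptual sketch of the trigger/action pattern described above.
# NOT code for a real smart-contract platform; it only models
# "when condition X holds, execute action Y" deterministically.
from dataclasses import dataclass, field
from typing import Callable

Event = dict  # e.g., {"type": "access_request", "consent": True}

@dataclass
class SmartContract:
    rules: list[tuple[Callable[[Event], bool], Callable[[Event], None]]]
    log: list[str] = field(default_factory=list)

    def on_event(self, event: Event) -> None:
        """Evaluate every rule; fire the action when its trigger matches."""
        for trigger, action in self.rules:
            if trigger(event):
                action(event)

def release_record(event: Event) -> None:
    print(f"releasing medical record to {event['requester']}")

contract = SmartContract(rules=[
    # trigger: an access request carrying valid patient consent
    (lambda e: e.get("type") == "access_request" and e.get("consent") is True,
     release_record),
])

contract.on_event({"type": "access_request", "consent": True,
                   "requester": "hospital_X"})
```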
Table 1 displays the highlights of blockchain-enabled smart contracts.

Table 1. Features of blockchain-enabled smart contracts.
- Untamperable: Smart contracts cannot be changed after deployment; like a contract, they cannot be changed once signed.
- Low cost: Smart contracts do not need a third party to enforce the code after a violation; thus, they are cheaper than regular contracts.
- Open and transparent: A smart contract will execute according to the designed code and be transparent once deployed.
- Decentralized: Computers supervise and arbitrate smart contracts without third-party involvement.
Importance of Blockchain in Healthcare

Blockchain may provide an effective, efficient, safe, and transparent method of data and information communication for all stakeholders involved in the healthcare business [37]. With tokenization and smart contracts, it is possible to decrease or eliminate the pre-authorization procedure in the healthcare industry [38]. While connecting with multiple parties, blockchain-based systems for health documentation protect the security of an individual's data via the use of secure encryption methods [39]. Using the encryption methods, smart contracts, and tokenization employed in blockchain network transactions, the pre-authorization process can be drastically streamlined, allowing patients to obtain essential and informed treatment more quickly. This is a consequence of the healthcare provider's ability to immediately obtain pertinent information, whereas previously they had to rely on the patient or on files physically delivered or emailed from many sources, such as local doctors and laboratories. Not only may tokenization promote more efficient contact and communication between insurance companies and healthcare practitioners, but it can also support and enhance patient-provider dialogue.

The expansion of the worldwide healthcare business may be aided by blockchain technology, which can also save money and stimulate additional investment in vital resources. With so much at risk, it is inconceivable that the current inefficient, excessively bureaucratic, and failing healthcare business can continue [40]. It is time for executives, practitioners, and patients to embrace the available technological and system-based innovations.

The misuse of available information prevents healthcare organizations from providing appropriate patient care and remarkably improved services. Even though these organizations are economically competent, they are unable to meet the needs of patients. A few facts from the Supporting Materials illustrate this reality. Healthcare data breaches currently cost organizations an estimated USD 380 per compromised record, and this amount is anticipated to increase with time. Several healthcare offices still use antiquated frameworks for maintaining patient records. These frameworks are convenient for keeping patient information close at hand, but they can make analysis difficult, which is tiresome for both the specialist and the patients. As a result, the cost of maintaining a patient-centered business increases substantially [41,42]. The majority of the present healthcare data infrastructure relies on reputable third parties; in numerous instances, however, they cannot be relied upon. A potential answer to this issue is the blockchain, which depends on consensus and does not need a central authority.

Methodology

This study continues by defining the approach used. The systematic study is confined to the subject of medical data sharing.

Research Questions

The purpose of the study was to address the following research questions (RQs):

RQ1: How established is blockchain in medical data sharing, and how has this evolved?
RQ2: What are the latest developments in blockchain-based medical data-sharing research?
RQ3: What are the issues of using blockchain to share medical data?
Databases

Included in the systematic review were the following databases:

Using the query string(s) listed below, a search for related articles was conducted. The search strings were developed based on the study domain and the established RQs. These keywords were used in the search:

The online digital library search was performed on 14 March 2023. The search query was purposely designed to be as comprehensive as feasible, to evaluate as many results as possible that were relevant to the research topics given in this systematic review. By searching in the titles of the articles, 284 total items were discovered via the main search. Figure 2 shows a summary of the search and selection technique used to choose the articles.

Selection of Studies

Depending on the criteria, articles were either included in or excluded from the systematic review (Table 2). Fifty papers were ultimately included in the systematic review. To ensure that only high-quality and relevant research was examined, the approach was rigorous.

Limitations

One issue concerns the scope of attention, as systematic reviews have a confined focus. Other restrictions concern the study selection, information loss on critical outcomes, incorrect subgroup analysis, and inconsistency with unique experimental results [43]. The limited set of databases and the title-only search query are further limitations of this review. The decision to use only article titles as the search query was motivated by the need to conduct a preliminary investigation of the topic within the limitations of time and resources. This method has several drawbacks, including the possibility of omitting relevant studies, reduced precision in study selection, and the risk of bias.
Discussion

RQ1: How established is blockchain in medical data sharing, and how has this evolved?

This systematic review looked for articles published between 2017 and 2022 on the use of blockchain technology in the exchange of medical data. Figure 3 provides a bibliometric summary of the selected articles. Only two articles were published over the years 2017 and 2018. In 2019, four papers were published. The years 2020 and 2021 each contribute nine items. With 24 papers published in 2022, the growth rate has risen sharply; that year accounts for 48 percent of all papers in this review. This demonstrates that blockchain research in medical data sharing is very important and expanding, and shows no indication of slowing down. Blockchain enables enterprises to offer proper patient care and provide access to high-quality healthcare services. With this technology, health information exchange, a substantial strain owing to its repetitive and time-consuming nature, is swiftly alleviated. Adoption and implementation of blockchain in healthcare systems and organizations for the sharing of medical data should be thoroughly evaluated. Various factors, including regulatory frameworks, technological challenges, interoperability, data protection, and stakeholder acceptance, need to be considered. Several research and development initiatives are investigating the potential of blockchain for medical data sharing, as its popularity and importance have risen dramatically. Initially, research concentrated on the theoretical foundations and viability of blockchain in this domain, resulting in the development of frameworks, methods, and protocols tailored to address the unique challenges of medical data sharing. As the technology advanced, efforts were made to improve the effectiveness and functionality of blockchain in the sharing of medical data, including refining data storage and retrieval, optimizing consensus processes, and exploring integration with other cutting-edge technologies such as edge computing and the IoT.

Despite growing interest and research in this area, blockchain technology for the sharing of medical data is still in its infancy. Numerous proposed solutions are still in the experimental or proof-of-concept stages and require additional validation, standardization, and integration with the current healthcare infrastructure. To accomplish widespread blockchain adoption in the healthcare industry, adoption, regulation, interoperability, and scalability issues need to be resolved. These developments are required to ensure the successful deployment of blockchain technology for the secure and efficient exchange of medical data, which will benefit healthcare stakeholders and facilitate the improvement of healthcare outcomes.

RQ2: What are the latest developments in blockchain-based medical data-sharing research?

There are several instances in which blockchain is utilized to share medical data. Rajput et al. [44] propose a system for healthcare management that leverages blockchain technology to create a tamper-protection application by taking into account secure rules. In an emergency case, these regulations specify extendable access control, auditing, and tamper resistance. Hu et al.
[45] propose CrowdMed-II, a blockchain-based health data management architecture that could solve the aforementioned issues with health data. In their framework, they investigate the architecture of significant smart contracts and suggest two smart contract structures. In addition, a unique search contract for locating patients is introduced. They assess its efficacy based on Ethereum's execution costs. The paper by Hashim et al. [46] presents a transaction-based smart-contract-triggering method for inter-blockchain communication, allowing the exchange of electronic health records (EHRs) across separate blockchains. They employ local and global smart contracts that are executed whenever a blockchain-based transaction is generated. Local smart contracts are used to share EHRs inside a blockchain, while global smart contracts are used to share EHRs across blockchains that are independent of one another. The experimental setup is executed using the Hyperledger Fabric blockchain technology. In a health federation arrangement, inter-blockchain communication between two separate Fabric networks is handled by a global smart contract using Hyperledger Cactus for EHR exchange.

Wu et al. [47] build a blockchain-enabled framework for dynamic access control coupled with local differential privacy (LDP) techniques to offer attribute-based privacy protection in transaction processing. In the framework, they develop four kinds of smart contracts to suit the needs of anonymous transactions, dynamic access control, advantageous matching decisions, and the assessment of disclosed data. To provide granular privacy protection, they categorize critical EMR parameters into distinct tiers and randomize them before data publication using differential privacy budgets (a minimal sketch of this randomization mechanism appears at the end of this subsection). In addition, they create a data quality function that illustrates the disruption caused by LDP-based privacy preferences from the requester's perspective, and they propose suitable many-to-many matching selections among participants for advantageous transactions. In the study by Kumar et al. [48], deep learning and a permissioned blockchain are combined: a blockchain system registers, verifies (using zero-knowledge proofs), and validates communicating entities by utilizing a consensus process based on smart contracts. The authenticated data are then utilized in a novel DL strategy that combines a stacked sparse variational autoencoder (SSVAE) with self-attention-based bidirectional long short-term memory (SA-BiLSTM). In this system, SSVAE encodes or converts healthcare data to a new format, while SA-BiLSTM detects and enhances the attack detection process.

Blockchain implementations, encryption strategies, and IoT-based systems are the three main categories of current system platforms for exchanging medical data. Table 3 outlines research proposing blockchain solutions for exchanging medical data.
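The attribute randomization used in [47] follows the classic randomized-response mechanism of local differential privacy. The sketch below is a minimal illustration of that mechanism, not code from the cited work; the attribute names and per-tier budget values are invented for the example.

```python
import math
import random

# Randomized response, the basic local-differential-privacy mechanism:
# each binary attribute is randomized client-side before publication.

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1)."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

# More sensitive tiers get smaller budgets, i.e. stronger randomization.
tier_budgets = {"sensitive_flag": 0.5, "smoker": 2.0}
record = {"sensitive_flag": 1, "smoker": 0}
published = {k: randomized_response(v, tier_budgets[k]) for k, v in record.items()}
print(published)
```

A smaller epsilon pushes the published bit closer to a coin flip, which is why tighter budgets belong on the more sensitive tiers.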
Types of Blockchain

There are four primary forms of blockchain networks: public blockchains, private blockchains, consortium blockchains, and hybrid blockchains. Each of these systems has advantages, disadvantages, and optimal applications. The paper by Zhang and Lin [49] offers a blockchain-based secure and privacy-preserving protected health information (PHI) sharing (BSPP) protocol for enhancing e-health system diagnostics. Initially, the data architecture and consensus procedures for the two types of blockchains, private and consortium, need to be designed. The private blockchain is responsible for storing PHI, while the consortium blockchain maintains the PHI's secure indexes. To ensure privacy preservation, access control, data security, and secure search, all data, including PHI, keywords, and patient identities, are encrypted with a public key supporting keyword search. Two kinds of blockchains, private and consortium, are likewise developed in a study by Shamshad et al. [50] by defining their consensus, data formats, and procedures. A private blockchain is responsible for the maintenance of EHRs, while a consortium blockchain keeps the secure indexes of the EHRs. To achieve data security, secure search, access management, and the protection of patient privacy, all EHRs are public-key encrypted with an appropriate search phrase.

Encryption

Encryption of medical data facilitates electronic data transmission and the sharing of clinical patient data and documentation. Regardless of location, patient medical data may be exchanged within a health system or transmitted to permitted health systems. In the study by Yang et al. [51], the encrypted medical data are first saved in the cloud, and then the storage address and medical-related information are entered into the blockchain, thus ensuring efficient storage and eliminating the risk of irreversible data change. The proposed approach combines attribute-based encryption (ABE) and attribute-based signature (ABS) to enable the exchange of medical data in many-to-many interactions. The ABE provides data privacy and fine-grained access control, while the ABS confirms the source of medical data while safeguarding the signer's identity. In addition, the majority of medical data ciphertext decryption activities are outsourced by the data user to the cloud service provider (CSP), which may significantly reduce the computing burden. In a separate investigation, Sun et al. [52] conduct a hash computation on the EMR and record the resulting value on the blockchain to assure the data's integrity and validity. They encrypt the EMR and put it in the distributed storage protocol InterPlanetary File System. The encrypted keyword index information of the EMRs is saved on Ethereum, and instead of relying on a centralized third party, a smart contract implemented on Ethereum is utilized to perform keyword searches. In addition, they use the ABE method to guarantee that only access-policy-compliant attributes may decode the encrypted EMR.
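The hash-on-chain integrity pattern just described for [52] is simple to sketch. The snippet below is a minimal, hypothetical illustration of the idea (store the EMR off-chain, keep only its digest on-chain, recompute to verify); the record fields and the `chain` list are placeholders, not the cited system's actual data model.

```python
import hashlib
import json

# Minimal sketch of the integrity pattern: off-chain record, on-chain digest.

def emr_digest(record: dict) -> str:
    # Canonical JSON so the same record always hashes to the same value.
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

emr = {"patient": "anon-001", "diagnosis": "...", "date": "2023-03-14"}
chain = [emr_digest(emr)]          # only the digest goes on the blockchain

# Verification: any change to the off-chain record breaks the match.
assert emr_digest(emr) == chain[0]
emr["diagnosis"] = "altered"
assert emr_digest(emr) != chain[0]
print("integrity check works")
```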
Zhang et al. [53] address these issues by providing a distributed PHR-sharing mechanism based on blockchain and ciphertext-policy ABE (CP-ABE), which enables efficient encryption and decryption. In addition to maintaining the data's integrity and tracking its source, the blockchain records all activities on the data as transactions. In addition, the nodes of the blockchain serve as attribute authorities for the CP-ABE cryptosystem. Tracing cryptographic techniques enables the identification of rogue blockchain nodes. Furthermore, the recovery of ciphertext is made fair via the use of smart contracts. To circumvent the restricted storage capacity of blockchain, their solution employs both on-chain and off-chain storage options.

In the study by Zhang et al. [54], the deniably authenticated searchable encryption scheme (DASES) utilizes blockchain to assure the integrity, immutability, and traceability of image data, while circumventing the blockchain's storage and processing limitations. Not only can the DASES survive an inside keyword guessing attack (IKGA), but it can also offer good privacy protection and validate the validity of medical picture data. They further demonstrate that the DASES meets the ciphertext and trapdoor indistinguishability conditions. The DASES is less efficient than other comparable systems in the literature, but its largest asset is its capacity to provide improved identity privacy protection and enhanced security. The application created by Cheng et al. [55] enables the physician to access the patient's personal historical EMRs with the patient's permission, to comprehend the patient's sickness history and build a new medical record for the patient. The server calculates the ciphertext and adds it to the patient's medical record to complete the case update. By hierarchically storing patient information, medical staff information, and medical records, Yuan et al. [56] devise a three-chain paradigm. The combination of InterPlanetary File System (IPFS) technology and an encryption algorithm guarantees the security and efficiency of off-chain data storage. Attributes are used to classify users, and ABE technology is used for the secondary encryption of the key and ciphertext channel. Hierarchical encryption drastically reduces the chance of a system assault. Zhang et al. [57] provide a unique blockchain-based data sharing system (BDSS) with fine-grained access control and permission revocation for the medical context. In this concept, they divide the EMR into public and private sections. Next, they employ symmetric searchable encryption (SSE) technology to encrypt these two pieces independently, and ABE technology to encrypt the symmetric keys used by the SSE technology. Based on CP-ABE, Tan et al. [58] present a blockchain-enabled security and privacy protection system for COVID-19 medical records with traceable and direct revocation. In this system, all public keys, revocation lists, etc., are maintained on a blockchain, and the blockchain is used for consistent identity authentication. The system management server is responsible for producing the system settings and publishing the COVID-19 medical practitioners' and users' private keys. Using policy matching, the cloud service provider (CSP) maintains the CEMRs and creates the intermediate decryption parameters. If the user has the private keys and intermediate decryption parameters, he or she may compute the decryption key.
Chen et al. [59] offer BFHS, a blockchain-based method for the safe, granular exchange of EHRs. In BFHS, they encrypt EHRs using ciphertext-policy ABE and upload them to the IPFS for storage, while the matching index is encrypted with proxy re-encryption and stored on a medical consortium blockchain. In addition, a credit evaluation system was developed and included in the smart contract. The combination of smart contracts, proxy re-encryption, a credit assessment system, and IPFS provides patients with a secure EHR-sharing environment and a dynamic access control interface.

Ciphertext

Ciphertext is the result of an encryption algorithm transforming plaintext into encrypted text. Ciphertext cannot be read until it has been decrypted (converted to plaintext) using a key. A decryption cipher is a method that converts ciphertext to plaintext. Yang et al. [60] propose a novel blockchain-based keyword search protocol with dual authorization for the exchange of EHRs. The certificateless cryptosystem eliminates key escrow and certificate administration. The authorization matrix enables the dual authorization of user identities and searchable departments; moreover, the matrix may manage user access privileges. The ciphertext index signal value enables authoritative control over the ciphertext index. The ciphertext MAC verification code kept on the blockchain can check the legality of the ciphertext, and smart contracts are utilized to guarantee fair transactions. Yang et al. [61] encrypt keywords using the certificateless cryptosystem, which eliminates the certificate administration and key escrow issues. The suggested approach also enables multi-user searches, and the user authorization table may be utilized to adjust medical data users' access rights. In addition, the root values of the Merkle trees are recorded in the blockchain to assure the search results' immutability, integrity, and traceability (a minimal sketch of this idea follows at the end of this subsection). A smart contract facilitates a fair transaction between a cloud service provider and customers of medical data without the need for trusted third parties. They demonstrate that the suggested technique is safe against the keyword-guessing attack in the random oracle model. Lai et al. [62] propose a secure medical data-sharing system based on a traceable ring signature and blockchain as a solution to the challenges medical institutions face in exchanging medical data. First, a certificateless traceable ring signature mechanism based on distributed key generation is suggested to preserve data integrity and privacy. The combination of a smart contract with access control and a self-controlling object (SCO) enables the outsourcing of decryption and the sharing of data. In addition, the suggested approach leverages the IPFS to store the seas of medical privacy data and encrypts the hash index for storage, which increases data-sharing efficiency. Using the consensus process, they may choose the proxy node and upload the SCO package to the blockchain node for data exchange after the blockchain has been incorporated.
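The Merkle-root idea used in [61] can be illustrated in a few lines. The sketch below is a minimal, hypothetical example of recording one root on-chain and re-deriving it to detect tampering; it is not the cited scheme's actual construction, and the function names are invented.

```python
import hashlib

# Record the Merkle root of a set of record digests on-chain so any later
# tampering with a returned record set is detectable.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash the leaf digests up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

records = [b"ehr-ciphertext-1", b"ehr-ciphertext-2", b"ehr-ciphertext-3"]
root_on_chain = merkle_root(records)       # stored immutably on the ledger

# A verifier later recomputes the root from the records a server returns:
assert merkle_root(records) == root_on_chain
records[1] = b"tampered"
assert merkle_root(records) != root_on_chain
print("tampering detected")
```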
IoT-Based Systems

With healthcare mobility solutions, the IoT can automate the workflow of patient care. Data transfer, machine-to-machine connectivity, and interoperability have increased the productivity of healthcare sectors. Healthcare professionals and patients may save time with IoT integration. Chen et al. [63] presented a health-IoT-based blockchain data-sharing system that protects privacy. To allow patients to construct granular privacy protection, they devised a privacy-preserving mechanism based on the content extraction signature system. They created a Byzantine fault-tolerant leader election method that improves the Raft algorithm's security and data-sharing efficiency. In addition, they built a summary contract to facilitate the retrieval of data. Pang et al. [64] propose a patient-controlled EHR-sharing system based on blockchain technology and cloud computing. To prevent tampering, the medical abstract and access strategy are kept on the blockchain. To accomplish fine-grained access control, they suggest encrypting EHRs using ABE and multi-keyword encryption. In addition, they propose a node-state-checkable practical Byzantine fault tolerance consensus method to prevent Byzantine nodes from gaining access to the consortium blockchain. Nie et al. [65] present a new blockchain-based safe-sharing system with searchable encryption and a concealed data structure for IoT devices. Data owners' EHR ciphertexts are kept in the InterPlanetary File System (IPFS). A user with the appropriate access rights may search for the needed data using the data owner's time-limited authorization and validate the search result's legitimacy. With a symmetric key, the data user may then obtain the appropriate EHR ciphertext from the IPFS. In IoT applications, the technique combines searchable encryption and smart contracts to provide safe search, time management, verified keyword search, quick search, and forward privacy. Wang et al. [66] present a consortium-blockchain-based PHR management and sharing system that is both security-conscious and privacy-preserving. The PHR ciphertext of the Internet of medical things (IoMT) is stored using the IPFS. Hence, zero-knowledge proofs may be used to validate keyword index authentication on the blockchain. In addition, the system combines modified attribute-based cryptographic primitives with custom-tailored smart contracts to offer safe search, privacy preservation, and individualized access control in IoMT situations. Wu et al. [67] present a triple-subject purpose-based access control (TS-PBAC) model that is compatible with a blockchain-enabled reliable transaction network, and they design an individual-centric security and privacy-preserving mechanism for access control with varying purposes and roles in IoMT scenarios. In particular, they develop a hierarchical purpose tree (HPT) and associated regulations to ensure the legality of an external user who has several purposes. They create an LDP-based policy and role-based access control mechanism in an edge computing paradigm to award fine-grained permissions to authorized users, increasing the privacy of sensitive characteristics against an internal attacker.

Table 3. Summary of studies that propose blockchain systems in medical data sharing (blockchain role, year, and capability of the smart contract).
Trust-less medical data sharing (2017): access control mechanism [68]
Blockchain-based data sharing for electronic medical records (2017): receive data from the shared pool once identities and cryptographic keys have been validated [69]
Efficient and secure medical data sharing (2018): the enhanced consensus technique delivers EMR consensus without significant network congestion or energy consumption [70]
Secure and privacy-preserving data sharing (2019): session-based flexible healthcare data sharing [71]
Blockchain-based searchable encryption (2019): complete control over data access [72]
Efficient healthcare data sharing (2019): mutual authentication and the generation of a session key [73]
Privacy-preserving data sharing (2019): fine-grained access control, keyword search, and privacy protection [74]
Secure and privacy-preserving data sharing (2020): using bilinear mapping and intractable problems, the authentication process's security danger may be neutralized [75]
Efficient and secure data sharing (2020): verification by zero-knowledge proof, decryption using proxy re-encryption technology, and PBFT-based distributed consensus [76]
Privacy-preserving data sharing (2020): the data usage ontology and the automatable discovery and access matrix comprise the dynamic consent model [77]
Fine-grained access control and privacy protection (2020): in the random oracle paradigm, keyword indistinguishability against adaptively selected keyword attacks [78]
Protected data sharing (2020): privacy-sensitive information is stored on the consortium blockchain, while non-sensitive data are shared on the public blockchain [79]
Privacy-preserving medical data sharing (2021): scheme for anonymously transmitting medical data based on a proxy re-encryption algorithm and cloud servers [80]
Secure data sharing (2021): proxy re-encryption protocols [81]
Protected data sharing (2021): searchable encryption and K-anonymity [82]
Consortium-based data sharing (2021): allowing data requesters to comply with data access requirements and to build their standing within a consortium [83]
Secure and privacy-preserving data sharing (2021): the outsourced business has no access to the server or its data [84]
Secure and distributed data sharing (2021): data ownership, data traceability, data consistency, privacy protection, data security, and distributed storage [85]
Secure data storage and sharing (2021): certificateless public key cryptography and elliptic curve cryptography (ECC) [86]
Hierarchical data sharing with access control (2022): fine-grained access control; efficient retrieval across encrypted PHRs with low-cost hierarchical key distribution and key leakage resistance; efficient aggregative authentication [87]
Searchable encryption with access control (2022): algorithm for key-policy ABE [88]
Privacy-preserving data sharing (2022): the condition is concealed inside the re-encryption key so that the proxy cannot discover it [89]
Protected and integrated data sharing (2022): storing encrypted medical data in dispersed storage mode and integrating patient data across offline institutions and platforms [90]
Privacy-enhanced data storage and exchange (2022): patients' personal information is held in off-chain storage (IPFS), while other information is saved on the blockchain ledger, which is available to all participants [91]
Hybrid storage with access control (2022): feasibility of recovery of the encryption keys [92]
Secure data sharing with access control (2022): immutability, fine-grained access control, and traceability [93]

In the sphere of blockchain-based medical data sharing, there are common concepts and tendencies. These papers highlight the use of blockchain technology to facilitate the secure and trustless sharing of medical data, addressing issues of trust, security, and privacy. By leveraging the distributed and decentralized nature of blockchain in conjunction with cryptographic techniques, these solutions seek to provide a tamper-resistant and transparent infrastructure for storing and sharing sensitive medical information.

In these blockchain-based systems, privacy and data security are top priorities. Several methods, including encryption techniques, access control mechanisms, and privacy-preserving algorithms, are used to protect the privacy of patient information while allowing authorized parties to access relevant data. In addition, attribute-based access control and fine-grained access control mechanisms are frequently employed, enabling data owners to define access policies based on particular attributes or duties. Consortium or permissioned blockchains are frequently utilized, allowing multiple trusted parties to collaborate and administer the shared data. Moreover, interoperability, consent management, and compliance with regulations such as the General Data Protection Regulation (GDPR) are also essential considerations for these solutions. These trends highlight the growing interest in utilizing blockchain technology to establish secure, privacy-preserving, and interoperable medical data-sharing systems.

Various research studies are garnering interest in the application of blockchain technology to the exchange of medical data. Several approaches and frameworks have been proposed by researchers to resolve the challenges associated with health data exchange. These methods make use of blockchain characteristics such as tamper resistance, secure rules, extendable access control, auditing, and counterfeit protection. They investigate the architecture of smart contracts, devise methods for inter-blockchain communication, and assess the efficacy of these systems by calculating execution costs. Integrating encryption methods such as attribute-based encryption (ABE) and ciphertext-policy ABE (CP-ABE) with blockchain ensures privacy protection, granular access control, and secure search. Public, private, consortium, and hybrid blockchains are all considered for the secure storage and management of medical data. Moreover, techniques such as hash computation, deniably authenticated searchable encryption schemes (DASES), and smart contracts are utilized to guarantee data integrity, validity, and traceability. By integrating blockchain technology with encryption techniques, researchers hope to develop dependable systems that improve data security, privacy, and the exchange of medical information.
RQ3: What are the issues of using blockchain to share medical data?

Using blockchain technology for sharing medical data presents several issues that need to be addressed. Firstly, scalability is a major concern. Blockchain networks may struggle to manage the large volumes of data involved in sharing EMRs among multiple stakeholders. Scaling the blockchain to accommodate these demands is essential for efficient data sharing [69]. Ensuring the privacy and confidentiality of sensitive medical data is paramount in healthcare systems. While blockchain offers immutability and transparency, it presents challenges in protecting patient privacy and maintaining data confidentiality. Innovative solutions must be developed to address these concerns and provide robust privacy measures in blockchain-based medical data-sharing systems [70].

Another significant issue is the performance and efficiency of blockchain networks. Public blockchains, in particular, can experience slow transaction-processing speeds and high energy consumption. These limitations hinder the real-time access and responsiveness required for sharing medical data effectively. Optimizing blockchain performance and energy efficiency is crucial to ensure seamless data sharing [74]. Additionally, the interoperability of blockchain with the existing healthcare infrastructure is a challenge. Integrating blockchain into diverse systems and ensuring compatibility with legacy systems is complex. Achieving seamless interoperability among different healthcare providers and systems is crucial for effective medical data sharing, and addressing this issue requires careful planning and implementation strategies [75].

Regulatory and legal considerations play a significant role in blockchain-based medical data sharing. Compliance with data protection laws, such as the GDPR, is necessary. However, the decentralized nature and immutability of blockchain can make it difficult to meet certain regulatory obligations, such as data deletion and consent management. Developing frameworks that align with regulatory requirements is vital to ensure compliance while leveraging the benefits of blockchain technology [91]. Lastly, establishing a governance framework and building trust among participating entities are critical aspects of blockchain-based medical data sharing. The distributed nature of blockchain requires robust consensus mechanisms and trust models to guarantee data integrity and reliability. Creating effective governance structures that address the needs and concerns of all stakeholders is essential for successful implementation [81].
The use of blockchain for sharing medical data thus poses several challenges. Scalability, privacy and confidentiality, performance and efficiency, interoperability, regulatory compliance, and governance and trust are among the key issues that need to be addressed. Overcoming these challenges is crucial for the successful implementation of blockchain-based solutions in healthcare, enabling the secure, efficient, and privacy-preserving sharing of medical data. Table 4 provides an overview of the various research papers pertaining to blockchain-based medical data-sharing schemes. Using blockchain technology for sharing medical data presents several technical challenges that need to be addressed for a successful implementation in healthcare systems. Scalability is a major concern due to the large volumes of data involved, requiring blockchain networks to be scaled appropriately. The privacy and confidentiality of sensitive medical data need to be ensured, necessitating the development of innovative solutions such as attribute-based encryption and robust access control policies. Performance and efficiency issues, including slow transaction-processing speeds and high energy consumption, need to be optimized for real-time data access. Interoperability with the existing healthcare infrastructure requires careful integration and compatibility planning. Regulatory compliance, particularly with data protection laws like the GDPR, is crucial, and frameworks aligning with these requirements need to be developed. Establishing a governance framework and building trust among participants is essential, necessitating robust consensus mechanisms and effective governance structures. Overcoming these challenges will enable the secure, efficient, and privacy-preserving sharing of medical data, leading to improved healthcare outcomes.

Conclusions

As medical technology progresses, there is a rising demand for healthcare professionals throughout the globe to communicate an expanding volume of data safely. Blockchain is commonly used in the healthcare industry to provide comprehensive knowledge of patient information and to monitor data-sharing permissions. It is a robust technology that enables numerous parties to view and exchange data securely. Considering the huge difficulty that healthcare organizations confront in digitizing and exchanging health information, it is not surprising that many are striving to enhance healthcare operations via the use of blockchain. Using publications published between 2017 and 2022, this review examines the present status, research trends, and problems of blockchain in medical data exchange to address the existing gap.
To attain this purpose, RQs were formulated and a predetermined technique was used to reduce the number of articles reviewed to 50. These were then studied further. Our results show that blockchain technology development and its use in the exchange of medical data are growing; hence, most of blockchain's potential remains untapped. Most of the studies propose a unique framework, architecture, or methodology for medical data exchange utilizing blockchain technology. As there are a multitude of benefits in exchanging patient data in a safe, decentralized manner, it is difficult to comprehend why the industry has not settled on this concept earlier. Nevertheless, as with many factors in the commercial sector, there are real reasons why it is difficult to exchange healthcare data. It seems that countless challenges must be addressed before blockchain can become the dominant industrial technology.

Figure 1. Blockchain in a smart contract process.
Figure 2. Process of study selection.
Figure 3. The trend of publishing articles between 2017 and 2022.
Table 2. Standards of inclusion and exclusion.
Table 4. Challenges and possible solutions of blockchain-based medical data-sharing schemes.
9,734.2
2023-07-12T00:00:00.000
[ "Medicine", "Computer Science" ]
Strong decays of the newly observed $P_{cs}(4459)$ as a strange hidden-charm $\Xi_c\bar{D}^*$ molecule In our former work [arXiv:2011.07214], the $P_{cs}(4459)$ observed by the LHCb Collaboration can be explained as a coupled strange hidden-charm $\Xi_c\bar{D}^*/\Xi_c^*\bar{D}/\Xi_c'\bar{D}^*/\Xi_c^*\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$. Here, we further discuss the two-body strong decay behaviors of the $P_{cs}(4459)$ in the meson-baryon molecular scenario by inputting the previously obtained bound-state solutions. Our results support the $P_{cs}(4459)$ as a strange hidden-charm $\Xi_c\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$. The relative decay ratio between $\Lambda_cD_s^*$ and $J/\psi\Lambda$ is around 10, where the partial decay width for the $\Lambda_cD_s^*$ channel is around 0.6 to 2.0 MeV.

I. INTRODUCTION

In 2019, the LHCb Collaboration discovered three narrow hidden-charm pentaquarks, namely $P_c(4312)$, $P_c(4440)$, and $P_c(4457)$, using the combined data set collected in Run 1 plus Run 2 [1]. These three $P_c$ states lie just below the $\Sigma_c\bar{D}^{(*)}$ continuum thresholds, so they are very likely to be $\Sigma_c\bar{D}^{(*)}$ hidden-charm molecular pentaquarks. Several phenomenological models have been adopted to calculate the mass spectrum of the meson-baryon hidden-charm molecules, such as QCD sum rules, the meson-exchange model, the quark delocalization model, and so on (see the review papers [2][3][4][5][6][7] for more details). In particular, by adopting the one-boson-exchange (OBE) model and considering the coupled-channel effect, we have demonstrated that the $P_c(4312)$, $P_c(4440)$, and $P_c(4457)$ correspond to the loosely bound $\Sigma_c\bar{D}$ state with $I(J^P)=1/2(1/2^-)$, the $\Sigma_c\bar{D}^*$ state with $I(J^P)=1/2(1/2^-)$, and the $\Sigma_c\bar{D}^*$ state with $I(J^P)=1/2(3/2^-)$, respectively [8]. The coupled-channel effect plays an important role in generating hidden-charm molecular pentaquarks.

The hadronic molecule is an important component of exotic states. Experimental and theoretical studies of hadronic molecules can deepen our understanding of the nonperturbative behavior of quantum chromodynamics (QCD). In particular, not only the study of the mass spectra but also predictions of the decay behaviors of the $P_c$ states can help us test the binding mechanism of pentaquark states. So far, many groups have discussed the strong decay behaviors of the $P_c$ states in the meson-baryon hadronic molecular picture. For example, the decay branching fractions of the $P_c\to\eta_c p$ and $P_c\to J/\psi p$ processes were predicted using heavy quark symmetry [9][10][11][12][13][14][15]. The effective Lagrangian method was adopted to study the partial widths of all the allowed decay channels for these $P_c$ states at the hadronic level [16]. As can be seen, all these results are model dependent, and the coupled-channel effect is not well taken into consideration in the strong decays of the $P_c$ states. In this work, we study the two-body strong decay properties of the $P_{cs}(4459)$ as a strange hidden-charm molecule. In our calculation, we consider the coupled-channel effect and input the bound-state wave functions obtained in our former work [18]. In fact, Zou et al. have already predicted the two-body strong decay behaviors of the possible $\Lambda_{c\bar c}$ states in the single hadronic molecule picture [33]. The obtained total widths and decay patterns can be valuable in testing the molecular assumptions and spin parities of the strange hidden-charm molecular pentaquarks. This paper is organized as follows.
After the introduction, we present the two-body strong decay amplitudes for the $P_{cs}(4459)$ as a strange hidden-charm $\Xi_c\bar{D}^*$ molecule in Sec. II. The corresponding numerical results for the decay widths are given in Sec. III. The paper ends with a summary.

II. TWO-BODY STRONG DECAY

For the decay process $P_{cs}\to f_1+f_2$, its decay width reads

$\Gamma = \frac{1}{2J+1}\sum_{\text{spins}}\int \frac{|\mathcal{M}(P_{cs}\to f_1 f_2)|^2}{32\pi^2 m_{P_{cs}}^2}\, p\, \mathrm{d}\Omega,$

which is expressed in the rest frame of the $P_{cs}$ state. Here $m_{P_{cs}}$, $J$, and $p$ stand for the mass and spin of the initial $P_{cs}$ state and the momentum of the final state $(f_1, f_2)$, respectively. As we discussed in Ref. [18], the $P_{cs}(4459)$ can be explained as the $\Xi_c\bar{D}^*$ molecular state with $I(J^P)=0(3/2^-)$. When the binding energy is taken as $-19.28$ MeV, the probabilities for the $\Xi_c\bar{D}^*$, $\Xi_c^*\bar{D}$, $\Xi_c'\bar{D}^*$, and $\Xi_c^*\bar{D}^*$ channels are 38.95%, 34.58%, 6.61%, and 18.86%, respectively. After introducing the coupled-channel effect, the interaction for the $P_{cs}(4459)\to f_1+f_2$ process can be expressed in terms of the radial wave functions of the $\Xi_c\bar{D}^*$, $\Xi_c^*\bar{D}$, $\Xi_c'\bar{D}^*$, and $\Xi_c^*\bar{D}^*$ channels in $r$-coordinate space.

There are three kinds of two-body strong decay processes: the hidden-charm modes, the open-charm modes, and the $c\bar c$-annihilation modes. In Table I, we collect the possible two-body strong decay channels.

Table I: Two-body strong decay final states for the $P_{cs}(4459)$ as a $\Xi_c\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$. Here, the masses of the final states are in units of MeV. The $S$ and $D$ stand for the $S$-wave and $D$-wave decay modes, respectively.

Because the $D$-wave interactions are strongly suppressed in comparison with the $S$-wave interactions, in the following we only focus on the $J/\psi\Lambda$, $\Lambda_cD_s^*$, $\phi\Lambda$, $\omega\Lambda$, $\rho\Sigma$, $\bar{K}^*N$, and $K^*\Xi$ decay channels. Figure 1 shows the corresponding decay processes. For the isoscalar $P_{cs}$ state as a coupled $\Xi_c^{(\prime,*)}\bar{D}^{(*)}$ molecule, we need to mention that the sum of the decay amplitudes for $\Xi_c^{(\prime,*)}\bar{D}^{(*)}\to\bar{K}^*N$ by exchanging the $\Lambda_c/\Sigma_c$ is zero when isospin is conserved. The interaction Lagrangians related to the discussed decay processes are given in Eq. (2.11), following Refs. [34][35][36][37]. Here, $P$, $V$, $B$, and $D$ stand for the pseudoscalar and vector mesons and the octet and decuplet baryons. For example, in the SU(4) quark model, the pseudoscalar and vector mesons are expressed in matrix form, with $\omega_8=\omega\cos\theta+\phi\sin\theta$ and $\sin\theta=-0.761$ [38].

FIG. 1: Two-body strong decay diagrams for the $P_{cs}(4459)$ as a coupled $\Xi_c^{(\prime,*)}\bar{D}^{(*)}$ molecule.

Coupling constants adopted in the following calculations are estimated from the $\rho\pi\pi$, $NN\pi$, $NN\rho(\omega)$, $N\Delta\pi$, and $N\Delta\rho$ interactions. For example, explicitly expanding the SU(4)-invariant interaction Lagrangians between baryons and pseudoscalar mesons, one obtains

$\mathcal{L}_{BBP} = \frac{5b-4a}{4\sqrt{2}}\, G_p\, (\bar n i\gamma_5\pi^0 n - \bar p i\gamma_5\pi^0 p + \bar p i\gamma_5\pi^+ n + \bar n i\gamma_5\pi^- p),$

with $f_\pi = 0.132$ GeV. All the coupling constants are determined by comparing the corresponding coefficients in Eq. (2.12). The scattering amplitudes for all the discussed decay processes can then be assembled, where $\mathcal{M}^{E}_{i_1 i_2\to f_1 f_2}$ corresponds to the scattering amplitude for the $i_1+i_2\to f_1+f_2$ process by exchanging the hadron $E$. For the heavy, loosely bound state, higher-order terms like $c_i(k^4, p^4, \ldots)$ contribute very little, and we neglect these interactions in our calculations. According to the relation in Eq. (2.2), the convergence of the amplitude $\mathcal{M}(P_{cs}\to f_1+f_2)$ depends only on the wave functions of the $P_{cs}$ state, as shown in Figure 2.
For simplicity, we set an upper integration limit $k_{\rm Max}$ on the amplitude $\mathcal{M}(P_{cs}\to f_1+f_2)$ according to the wave function normalization.

III. NUMERICAL RESULTS

Before calculating the decay widths, let us briefly introduce the bound-state properties of the $P_{cs}(4459)$ as a strange hidden-charm meson-baryon molecular pentaquark. In Figure 3, we present the probabilities for the different channels of the $P_{cs}(4459)$ as a coupled $\Xi_c\bar{D}^*/\Xi_c^*\bar{D}/\Xi_c'\bar{D}^*/\Xi_c^*\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$. Here, the coupled-channel effect plays an essential role in forming this bound state, and the $\Xi_c\bar{D}^*$ and $\Xi_c^*\bar{D}$ channels are the most important, followed by the $\Xi_c^*\bar{D}^*$ and $\Xi_c'\bar{D}^*$ channels.

With the above preparations, we can further produce the two-body strong decay widths for the coupled $\Xi_c\bar{D}^*/\Xi_c^*\bar{D}/\Xi_c'\bar{D}^*/\Xi_c^*\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$. In Figure 4, we present the corresponding decay widths for the $P_{cs}(4459)$. Here, we take the binding energy from $-0.75$ MeV to $-30$ MeV. We see that

• The total two-body strong decay width $\Gamma_{\rm tot}$ ranges from 10 MeV to 25 MeV in the mass range of the $P_{cs}(4459)$. This is consistent with the experimental value $\Gamma = 17.3\pm 6.5^{+8.0}_{-5.7}$ MeV.

• In general, as the hadronic molecular state binds more deeply, the overlap of the wave functions of the components becomes larger and larger, and quark exchange within the hadronic molecular state becomes easier and easier. As shown in Figure 4(a), the total decay width grows as the mass of the $P_{cs}(4459)$ decreases.

• Among the $c\bar c$-annihilation decay modes, the $K^*\Xi$ and $\omega\Lambda$ channels are the most important of all the discussed decay channels, as shown in Figure 4(b). The partial widths for these two final states are around several to more than ten MeV, and the corresponding branching fraction $(\Gamma_{K^*\Xi}+\Gamma_{\omega\Lambda})/\Gamma_{\rm tot}$ is around 80%. For the remaining $\phi\Lambda$ and $\rho\Sigma$ channels, the partial decay widths are around a few tenths and a few hundredths of an MeV, respectively.

• Compared to the light-hadron final states, the hidden-charm decay widths are much smaller owing to their narrow phase space. Here, the partial decay width for the $P_{cs}(4459)\to J/\psi\Lambda$ process is only several hundredths of an MeV.

• For the open-charm decay modes, the partial decay width for the $\Lambda_cD_s^*$ channel is in the range of 0.6 MeV to 2.0 MeV. The relative ratio $R = \Gamma_{\Lambda_cD_s^*}/\Gamma_{J/\psi\Lambda}$ is around ten. Thus, the open-charm decay should be an essential mode in which to search for the $P_{cs}$ state as a strange hidden-charm molecular pentaquark in our model.

To summarize, our results for the two-body strong decay widths support the $P_{cs}(4459)$ as a coupled $\Xi_c\bar{D}^*/\Xi_c^*\bar{D}/\Xi_c'\bar{D}^*/\Xi_c^*\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$.

IV. SUMMARY

In 2019, the LHCb Collaboration reported three narrow hidden-charm pentaquarks ($P_c(4312)$, $P_c(4440)$, and $P_c(4457)$) in the $\Lambda_b\to J/\psi pK$ process [1]. They are likely to be charmed-baryon and anti-charmed-meson molecular states, and the coupled-channel effect plays a very important role both in forming a bound state and in the strong decays [8,9]. Very recently, the LHCb Collaboration reported evidence for a hidden-charm pentaquark with strangeness $|S|=1$. After adopting the OBE model and considering the coupled-channel effect, we find that the newly reported $P_{cs}(4459)$ can be regarded as a coupled $\Xi_c\bar{D}^*/\Xi_c^*\bar{D}/\Xi_c'\bar{D}^*/\Xi_c^*\bar{D}^*$ molecule with $I(J^P)=0(3/2^-)$. The dominant channels are the $S$-wave $\Xi_c\bar{D}^*$ and $\Xi_c^*\bar{D}$ channels.
Using the obtained bound-state wave functions, we study the two-body strong decay behaviors of the $P_{cs}(4459)$ in the molecular picture. Our results show that the total decay width is well around the experimental value reported by the LHCb Collaboration [17]. The $c\bar c$-annihilation decay modes are very important. In particular, the partial decay widths for $P_{cs}(4459)\to K^*\Xi(\omega\Lambda)$ are over several MeV, and their combined branching fraction is nearly 80%. The partial decay width for the $\Lambda_cD_s^*$ mode is around 1 MeV. The relative ratio $R = \Gamma_{\Lambda_cD_s^*}/\Gamma_{J/\psi\Lambda}$ is around 10. Until now, the inner structure and the spin-parity of the $P_{cs}(4459)$ have remained a mystery, and more theoretical and experimental studies are needed. Although our phenomenological study is still model dependent, the strong decay information provided here can be a crucial test of the hadronic molecular assignment of the $P_{cs}$ state. Experimental searches for the possible hidden-charm molecular pentaquark will be helpful to check and develop the adopted phenomenological models.
2,944.2
2021-01-26T00:00:00.000
[ "Physics" ]
Investigation of Listed Companies Credit Risk Assessment Based on Different Learning Schemes of BP Neural Network Credit risk analysis of enterprises is an important topic in the financial field; this paper employs a BP neural network to solve this problem. An index system for companies and three different BP neural networks have been built. The neural networks are trained using financial data from different industries. We use a Matlab program and the neural networks to obtain results for seven learning schemes with different training-to-validation data ratios. The experimental results suggest which neural network model, and which learning scheme, delivers optimum performance.

Introduction

In recent years, with the trend of economic globalization and the volatility of financial markets, credit risk management has become a focus in finance. Credit risk is one of the main risks of commercial banks and affects a bank's ability to operate sustainably. Credit risk is the risk that an obligation will not be repaid on time and in full as expected or contracted, resulting in a financial loss.

Chun feng Wang, Hai hui Wan and Wei Zhang (1999) applied neural networks to credit risk assessment for the first time in China. Their results demonstrate that the effectiveness and robustness of neural networks are better than those of discriminant analysis (Wang, Chun feng, 1999). Zhong zhi Zhang, Lin Fu and Huan wen Tang (2003) researched neural networks and proved that they assess credit risk with high precision (Zhang, Zhongzhi, 2003). Subsequent studies focused on improving the BP neural network with genetic algorithms and on combining it with statistical methods. Most papers aim at improving the neural network model; this paper, however, investigates how different learning schemes influence the efficiency of the neural network. From the angle of the input data, we study how to assess the credit risk of listed companies.

Sample selection

This phase is a data preparation phase for neural network training and classification. Considering the accessibility of financial data, we choose listed companies as the research object. In this paper, we choose 140 listed companies as samples, including 60 ST companies and 80 non-ST companies. An ST (special treatment) company is one that has reported losses for two consecutive years. Generally speaking, an ST company carries high credit risk because of its poor financial situation. On this basis, we approximately define ST companies as high-risk companies and non-ST companies as low-risk companies. Their output values are 0 and 1, respectively.

Evaluating index system selection

On the basis of previous papers, we choose 10 financial indexes as the input of the neural network. These indexes reflect four different abilities. Debt-paying ability: (1) asset-liability ratio = debt/assets; (2) current ratio = current assets/current liabilities; (3) quick ratio = quick assets/current liabilities. Operating capacity: (4) receivables turnover = sales revenue/average receivables; (5) current assets turnover = sales revenue/average current assets; (6) total asset turnover = sales revenue/average total assets. Profitability: (7) sales gross margin = gross sales/total sales; (8) rate of return = net profit/average value of assets. Development capacity: (9) growth rate of profit = increment of profit in this year/profit of last year; (10) growth rate of assets = increment of assets in this year/assets of last year.
Through the foregoing analysis, we found that the 10 quantitative indicators alone are not comprehensive. Therefore, we add a qualitative index, the size of assets, to the index system, evaluating the size of the company as one of the input data for the neural network.

In order to satisfy the input requirement of the neural network, we need to normalize the sample data before they are used to train the network. There are many methods of normalization; we use the common maximum-minimum method, which maps each index $x$ to

$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$

where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of that index over the samples. After processing the data in this way, the original meaning of the data still remains, and the value domain of all indexes lies in (0, 1).

Building the model of neural network

In this phase we use the back-propagation (BP) neural network due to its implementation simplicity and the availability of a sufficient dataset.

Setting the parameters of the network

Much practice has proved that one hidden layer is enough for any complex function. We therefore use a three-layer BP neural network to study the assessment of credit risk. Obviously, the input layer has 11 neurons, corresponding to the index system in 2.2. The output layer has 1 neuron. The difficult task is to determine the number of neurons in the hidden layer, and we need to practice and adjust many times to determine this number.

After experimenting with Matlab 7.0, we choose three suitable numbers: when the number of neurons is 6, the network performance is 0.00999892, as seen in Figure 1; when the number of neurons is 8, the network performance is 0.00994681, as seen in Figure 2; when the number of neurons is 11, the network performance is 0.00993796, as seen in Figure 3.

Setting the parameters of training

Error precision is the minimum error that meets the requirement of the training, i.e., the discrepancy between the output and target data, bounded to a certain degree. The learning coefficient is the adjustment applied to the weights after every training step; the bigger the learning coefficient, the bigger the adjustment. Momentum adjusts the training step, error precision, and learning coefficient. The training step is the number of training iterations. Only when all the parameters are set reasonably can we assess credit risk accurately. Through many adjustments, we establish three different BP neural networks. The parameters can be seen in Table 1.

We can train the samples after setting up all the parameters. The training stops once the value meets the target; if it does not, repeated training is needed. The satisfactory model is saved in a database. The output layer has one single neuron, which uses a binary output representation. A simple thresholding scheme is then sufficient for the neural network's single output neuron to divide the companies into two categories. A threshold value of 0.5 is used to distinguish between good credit and bad credit: if the output of the BP neural network is greater than or equal to 0.5, the company is good; otherwise, it is assigned to the bad (high credit risk) category.

Empirical analysis

BP neural networks have been established in 3.3. The point of the empirical analysis is to determine under which learning-to-validation ratio the network performs most efficiently in assessing the credit risk of listed companies. According to the three different BP neural networks, we design seven different learning schemes.
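To make the setup concrete, the following is a minimal NumPy sketch of the described pipeline: min-max normalization, an 11-8-1 sigmoid network trained by back-propagation, a 0.5 decision threshold, and a 40%:60% training-to-validation split. The authors used the Matlab 7.0 neural network toolbox; this Python re-creation and its random placeholder data are purely illustrative.

```python
import numpy as np

# Illustrative sketch of the paper's setup: 11 normalized financial indexes in,
# 8 hidden sigmoid neurons, 1 sigmoid output, 0.5 decision threshold,
# 40%:60% training-to-validation split. Data below are random placeholders.

rng = np.random.default_rng(0)

def minmax_normalize(X):
    """Column-wise (x - min) / (max - min), mapping each index into (0, 1)."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 140 companies, 11 indexes; label 1 = non-ST (low risk), 0 = ST (high risk).
X = minmax_normalize(rng.normal(size=(140, 11)))
y = (rng.random(140) > 0.43).astype(float)        # roughly 60 ST / 80 non-ST

split = int(0.40 * len(X))                        # learning scheme 40%:60%
X_tr, y_tr, X_va, y_va = X[:split], y[:split], X[split:], y[split:]

W1 = rng.normal(scale=0.5, size=(11, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1));  b2 = np.zeros(1)
lr = 0.1                                          # learning coefficient

for epoch in range(2000):                         # training steps
    h = sigmoid(X_tr @ W1 + b1)                   # hidden layer
    out = sigmoid(h @ W2 + b2).ravel()            # output layer
    err = out - y_tr                              # back-propagate the error
    d2 = (err * out * (1 - out))[:, None]
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2 / len(X_tr); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X_tr.T @ d1 / len(X_tr); b1 -= lr * d1.mean(axis=0)

pred = sigmoid(sigmoid(X_va @ W1 + b1) @ W2 + b2).ravel() >= 0.5
print("validation accuracy:", (pred == y_va.astype(bool)).mean())
```

Varying the hidden-layer width (6, 8, or 11) and the split fraction reproduces the network variants and learning schemes compared in the next section.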
We write the M-file with the neural network toolbox in Matlab 7.0, and the corresponding sample data are imported into the program as input. There are 140 samples; for example, in LS1, 28 samples form the training dataset and the other 112 samples form the validation dataset. Running the program in the M-file yields the accuracy; all the running results can be seen in Table 2. From Table 2 we can conclude that when the training-to-validation ratio is 40%:60% and the hidden layer has 8 neurons, the accuracy rate is best. Conclusion This paper researches how different learning schemes influence the classification capacity of the BP neural network (a minimal sketch of the scheme-by-scheme evaluation is given below, after the Table 1 caption). The empirical study shows that when the learning ratio is 40%:60%, the BP neural network performs most efficiently. Therefore, when we use a BP neural network to assess credit risk, the learning ratio should be considered: too many or too few training samples will harm the classification capacity. A satisfactory ratio exists once the size of the validation set is decided, and we can use this satisfactory ratio to obtain better assessment results. There are also some shortcomings in this paper: the number of samples is not large enough, and the neural network itself is limited in explanatory power. How to make neural networks perform more efficiently in the economic area is a difficult question that needs joint research in economics and artificial intelligence (Wang Li, 2005). The combination of quantitative and qualitative analysis is the trend of future research. Table 1. The parameters of the BP neural networks
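A minimal sketch of the scheme-by-scheme evaluation referenced in the conclusion, assuming scikit-learn's MLPClassifier as a stand-in for the Matlab network. Only the 28/112 split of LS1 (20%:80%) and the 40%:60% optimum are stated explicitly above, so the full list of seven ratios here is a hypothetical reconstruction:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Seven hypothetical training:validation splits (LS1..LS7); only LS1's
# 28/112 split and the 40%:60% optimum are given in the text.
RATIOS = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]

def evaluate_schemes(X, y, n_hidden=8):
    """Train one network per learning scheme and report validation accuracy,
    mirroring the Table 2 comparison."""
    results = {}
    for r in RATIOS:
        X_tr, X_va, y_tr, y_va = train_test_split(
            X, y, train_size=r, stratify=y, random_state=0)
        net = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                            activation="logistic", max_iter=5000,
                            random_state=0).fit(X_tr, y_tr)
        results[f"{int(r * 100)}:{int(100 - r * 100)}"] = net.score(X_va, y_va)
    return results
```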
1,793.4
2011-01-20T00:00:00.000
[ "Computer Science", "Business" ]
Mathematical Modelling and Computer Simulation Introduction Mathematical modeling and computer simulation are nowadays widely used tools to predict the behavior of biological research problems. To illustrate the idea, we consider a mathematical biology model with nonlocal effects and long-range diffusion [1]. The classical approach to diffusion takes the form ∂u/∂t = ∇·(α₁(x)∇u) + f(u, x, t), (1) where u(x, t) is the concentration of the species and α₁ is the diffusion coefficient, or diffusivity, of u(x, t). Situations where α₁ is space-dependent arise in more and more modeling situations of biomedical importance, from the diffusion of genetically engineered organisms in heterogeneous environments to the effect of white and grey matter in the growth and spread of brain tumors. The source or forcing term f, in an ecological context for example, could represent a birth-death process. However, equation (1) is strictly applicable only to dilute systems, that is, when diffusion is a local, short-range effect. In many biological areas, such as embryological development, the densities of the cells involved are not small, and a local, short-range diffusive flux proportional to the gradient is not sufficiently accurate. When we discuss the mechanical theory of biological pattern formation, in certain circumstances it is intuitively reasonable, perhaps necessary, to include long-range effects. In 1969, Othmer derived the following generalization of (1): ∂u/∂t = α₁∇²u − α₂∇⁴u + f(u, x, t), (2) where α₁ > 0 and α₂ are constants. α₂ is a measure of the long-range effects and in general is smaller in magnitude than α₁. The biharmonic term is stabilizing if α₂ > 0, and destabilizing if α₂ < 0. In this form, the first term represents an average of nearest neighbors and the second, biharmonic, term is a contribution from the average of nearest averages. We then consider the stationary Dirichlet boundary value problem for equation (2), α₂Δ²u − ∇·(α₁(x)∇u) = f(x) in D, u = ∂u/∂n = 0 on ∂D, (3) where D represents the area in which the species lives, which can be considered bounded, and the Dirichlet boundary condition can be interpreted as the number of the species being zero on the boundary of the domain D (a minimal numerical sketch of problem (3) is given below). Yet many biological applications are affected by a relatively large amount of uncertainty in the input data, such as model coefficients, the source/forcing term, boundary conditions, and geometry.
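To make the stationary problem (3) concrete, here is a minimal one-dimensional finite-difference sketch in Python. It assumes constant coefficients and the clamped Dirichlet conditions u = u′ = 0; both the boundary-condition choice and all names are our assumptions for illustration:

```python
import numpy as np

def solve_long_range_steady(alpha1, alpha2, f, L=1.0, n=200):
    """Sketch: finite-difference solution of the stationary long-range model
    alpha2*u'''' - alpha1*u'' = f on (0, L), with the clamped Dirichlet
    conditions u = u' = 0 assumed at both endpoints."""
    h = L / n
    x = np.linspace(0.0, L, n + 1)
    m = n - 1                              # interior unknowns u_1..u_{n-1}
    # standard 3-point Laplacian and 5-point biharmonic stencils
    D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) / h**2
    D4 = (6.0 * np.eye(m) - 4.0 * np.eye(m, k=1) - 4.0 * np.eye(m, k=-1)
          + np.eye(m, k=2) + np.eye(m, k=-2)) / h**4
    # ghost-point elimination for u'(0) = u'(L) = 0 (reflected neighbors)
    D4[0, 0] += 1.0 / h**4
    D4[-1, -1] += 1.0 / h**4
    A = alpha2 * D4 - alpha1 * D2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

# e.g. x, u = solve_long_range_steady(1.0, 0.01, lambda x: np.ones_like(x))
```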
In this case, to obtain a reliable numerical prediction, one has to include uncertainty quantification due to the uncertainty in the input data. In this paper we focus on problem (3) with a probabilistic description of the uncertainty in the input data. Let D be a convex bounded polygonal domain in ℝ^d (d = 1, 2, 3), and let (Ω, F, P) be a complete probability space, where Ω is the set of outcomes, F ⊂ 2^Ω is the σ-algebra of events, and P: F → [0, 1] is a probability measure; D̄ denotes the closure of D. The space H^k(D) is endowed with the inner product (u, v)_{H^k(D)} = Σ_{|s|≤k} ∫_D D^s u D^s v dx and the corresponding norm ‖u‖_{H^k(D)} = (u, u)_{H^k(D)}^{1/2}. Finally, we discuss the well-posedness of the stochastic variational formulation (5) of problem (3). To this end, we quote the following Poincaré inequality: for v ∈ H₀^k(D), ‖v‖_{L²(D)} ≤ C_P |v|_{H^k(D)}, with C_P = C_P(D, n, k) > 0 [2]. One can then verify that the bilinear form a(u, v) is continuous and coercive, which establishes well-posedness. Finite-Dimensional Noise Assumption In many problems the source of randomness can be approximated using just a small number of uncorrelated, sometimes independent, random variables, as in the case of a truncated Karhunen-Loève expansion [3]. This motivates us to make the following assumption. Assumption: The coefficients and forcing terms used in the computations have the forms α₁(ω, x) = α₁(Y₁(ω), …, Y_N(ω), x), α₂(ω, x) = α₂(Y₁(ω), …, Y_N(ω), x), and f(ω, x) = f(Y₁(ω), …, Y_N(ω), x), where N ∈ ℕ₊ and the real-valued random variables {Yₙ}ₙ₌₁^N have a joint probability density function ρ: Γ^N → ℝ₊, with Γ^N = Γ₁ × … × Γ_N and Γₙ = Yₙ(Ω). Under this assumption, the solution u of the stochastic fourth-order elliptic boundary value problem (5) depends on the same random variables, and the goal is to approximate u(y, x), where y ∈ Γ^N and x ∈ D. Observe that the stochastic variational formulation (5) has a deterministic equivalent, namely the parametric variational problem (6). Since the solution of (6) is unique and is also a solution of (5), the solution has the form u = u(Y₁, …, Y_N, x). The stochastic boundary value problem (4) thus becomes a deterministic boundary value problem (6) for a fourth-order elliptic PDE with an N-dimensional parameter. For convenience, we consider the solution u as a function u: Γ^N → H₀²(D) and use the notation u(y) whenever we want to highlight the dependence on the parameter y; we use similar notation for the coefficients α₁, α₂ and the forcing term f. Thus, we turn the original stochastic fourth-order elliptic equation into a deterministic parametric fourth-order elliptic equation, and we will adopt a finite element technique to approximate the solution of the resulting deterministic problem. Regularity Assumption The convergence properties of the collocation techniques developed in the next section depend on the regularity of the solution u with respect to y. Denote Γₙ* = Π_{j=1, j≠n}^N Γⱼ, and let yₙ* be an arbitrary element of Γₙ*. Here we require the solution of problem (4) to satisfy the following assumption, stated in the functional space of functions v that are continuous in y. Assumption: For each y ∈ Γ, the k-th derivative of u with respect to yₙ satisfies a bound of the form ‖∂_{yₙ}^k u(y)‖ ≤ C λ^k k!, with λ a constant independent of n. The following lemma verifies that this assumption is sound, with a constant C₀ depending on a_min, a_max, and Poincaré's constant C_P. Proof: For simplicity, we first study an auxiliary problem (7). For every point y ∈ Γ^N, the k-th derivative of u with respect to yₙ satisfies the variational identity obtained by differentiating the parametric equation. Taking φ = ∂_{yₙ}^k u as a test function and applying the Poincaré inequality, the combination of (8) and (9), together with the bounds on the derivatives of α and f, yields a recursive inequality for the derivatives of u. Next, we prove estimate (11); by Lemma 5.1, (11) follows from (12) and (13).
Hence (11) and (14) imply, for every yₙ ∈ Γₙ, the final estimates on the growth of the derivatives of u. We then prove the uniform convergence of the power series (16) in the corresponding norm, which yields formula [4]; similarly, for the solution u(yₙ, yₙ*, x) of problem (7), the conclusion drawn above holds. This finishes the proof. Example 1: Consider the case where the coefficient α(ω, x) is expanded in a linear truncated Karhunen-Loève expansion, provided that such an expansion guarantees α(ω) ≥ a_min for almost every ω ∈ Ω and x ∈ D [5]. Collocation techniques We seek a numerical approximation to the solution of (7). In order to prove error estimates for the stochastic partial differential equation, we need estimates for the deterministic fourth-order elliptic problem. For the stationary deterministic problem we make the following assumption: (AA1) there exist a_min, a_max > 0 such that a_min ≤ α(x) ≤ a_max almost everywhere in D. The variational form of the problem is: find u ∈ V such that a(u, v) = ⟨F, v⟩ for all v ∈ V, where ⟨·,·⟩ represents the duality pairing. We then estimate the error between u and u_h; to obtain the estimate, we need the following two lemmas. Lemma 6.1: Suppose that (1) (H, (·,·)) is a Hilbert space and V is a (closed) subspace of H, (2) a(·,·) is a bilinear form on V that is continuous and coercive on V, and that u solves a(u, v) = ⟨F, v⟩ for all v ∈ V, given F ∈ V′ [6]. For the finite element variational problem (given a finite-dimensional subspace V_h ⊂ V, find u_h ∈ V_h with a(u_h, v) = ⟨F, v⟩ for all v ∈ V_h), the following inequality holds: ‖u − u_h‖_V ≤ (C/a) inf_{v ∈ V_h} ‖u − v‖_V, where C and a are the continuity constant and the coercivity constant of a(·,·) on V, respectively. Smolyak approximation Here we follow closely the work [7] and describe the isotropic Smolyak approximation A(w, N). The Smolyak formulas are just linear combinations of product formulas (23), with the following key property: only products with a relatively small number of points are used. Given an integer w ∈ ℕ₊, hereafter called the level, write |i| = i₁ + … + i_N for multi-indices i ∈ ℕ₊^N, and for i ∈ ℕ₊ set Δ^i = U^i − U^(i−1), with U^0 = 0, where U^i denotes the one-dimensional interpolation formula at level i. We then define A(w, N) = Σ_{|i| ≤ w} (Δ^{i₁} ⊗ … ⊗ Δ^{i_N}). (28) Equivalently, formula (28) can be written as [8] A(w, N) = Σ_{w−N+1 ≤ |i| ≤ w} (−1)^{w−|i|} C(N−1, w−|i|) (U^{i₁} ⊗ … ⊗ U^{i_N}). Obviously, A(w, 1) = U^w. To compute A(w, N)(u), one only needs to know function values on the "sparse grid" H(w, N) = ∪_{w−N+1 ≤ |i| ≤ w} (θ^{i₁} × … × θ^{i_N}), where θ^i denotes the set of abscissas used by U^i. Choice of collocation nodes In this section, we determine how to select the collocation nodes. To this end, we quote the classical conclusion that, for a function admitting an analytic extension, the best approximation by polynomials of degree at most p decays exponentially, i.e., a bound of the form inf_{v ∈ P_p} ‖u − v‖ ≤ C₁ e^{−rp}, where the constant C₁ is independent of p. Error Analysis In this section we show error estimates that help explain why the sparse grid stochastic collocation method is efficient in this situation. Collocation methods approximate the solution using function values at the sparse-grid nodes; thus, we concern ourselves only with the convergence results when implementing the Smolyak approximation formula, namely, our primary concern is to analyze the Smolyak approximation error. The analysis uses the region Σ(Γ₁; τ) = {z ∈ ℂ : dist(z, Γ₁) ≤ τ} of the complex plane, for some τ > 0; in this case, τ is smaller than the distance between Γ₁ ⊂ ℝ and the nearest singularity of u(z) in the complex plane. For the case N = 1, we quote the corresponding one-dimensional results. In what follows we will use shorthand notations. Obviously, when d = 1, we have
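Two pieces of the construction above lend themselves to short sketches. First, the truncated Karhunen-Loève expansion of Example 1 can be sampled directly; the eigenvalue decay and sine eigenfunctions below are hypothetical choices, made only so that the positivity constraint α ≥ a_min > 0 is visibly enforced:

```python
import numpy as np

def kl_coefficient(x, y, mean=1.0, decay=2.0):
    """Sketch of a linear truncated Karhunen-Loeve expansion
       alpha(y, x) = mean + sum_n sqrt(lambda_n) * b_n(x) * y_n,
    with assumed eigenvalues lambda_n = exp(-decay*n) and sine
    eigenfunctions; the y_n are i.i.d. uniform on [-1, 1]."""
    y = np.asarray(y, dtype=float)
    n = np.arange(1, len(y) + 1)
    lam = np.exp(-decay * n)                      # assumed spectrum
    bas = np.sin(np.outer(np.pi * n, x))          # b_n(x), shape (N, len(x))
    alpha = mean + (np.sqrt(lam)[:, None] * bas * y[:, None]).sum(axis=0)
    assert alpha.min() > 0, "truncation must keep alpha >= a_min > 0"
    return alpha

# One random realization on a grid:
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
alpha = kl_coefficient(x, rng.uniform(-1, 1, size=5))
```

Second, the Smolyak combination coefficients and the associated tensor grids can be enumerated; the indexing convention matches the one used above (A(w, 1) = U^w), and the Clenshaw-Curtis doubling rule m(1) = 1, m(i) = 2^(i−1) + 1 is one standard choice of nested abscissas:

```python
from itertools import product
from math import comb

def smolyak_terms(w, N):
    """Enumerate the tensor-product terms of the isotropic Smolyak formula
    A(w, N) = sum over w-N+1 <= |i| <= w of
              (-1)^(w-|i|) * C(N-1, w-|i|) * (U^i1 x ... x U^iN),
    returning (coefficient, multi-index) pairs; requires w >= N."""
    terms = []
    for i in product(range(1, w + 1), repeat=N):
        s = sum(i)
        if w - N + 1 <= s <= w:
            terms.append(((-1) ** (w - s) * comb(N - 1, w - s), i))
    return terms

def cc_points(level):
    """Clenshaw-Curtis doubling rule for the number of abscissas."""
    return 1 if level == 1 else 2 ** (level - 1) + 1

if __name__ == "__main__":
    for c, i in smolyak_terms(w=4, N=2):
        print(f"coef {c:+d}  index {i}  grid {tuple(cc_points(k) for k in i)}")
```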
2,538.2
2016-03-01T00:00:00.000
[ "Mathematics" ]
A Case of Abdominal Sarcoidosis in a Patient with Acute Myeloid Leukemia Allogeneic bone marrow transplantation, usually preceded by induction chemotherapy, represents the gold standard for fit patients with acute myeloid leukaemia. In recent years, many trials have been set up with a view to improving the number of remissions during induction by adding new drugs, and several early and late side effects have been described in the literature. We herein present a patient with acute myeloid leukaemia who, after chemotherapy, developed ascites that turned out to be abdominal sarcoidosis. Case Report A 68-year-old Caucasian woman presented to the Haematology Department with pancytopenia and palpitations in March 2009. She had a past medical history of uterine fibroids necessitating hysterectomy with adnexectomy, diverticulitis, oesophageal reflux, and hypercholesterolaemia, as well as a family history of familial polyposis coli, for which she was under surveillance. She was taking ranitidine and simvastatin regularly. Bone marrow biopsy revealed a hypercellular bone marrow with architectural distortion of haemopoiesis, a reduction in mature granulocytes, and infiltration by myeloblasts (which made up 30-40% of all cells seen within the biopsy), consistent with acute myeloid leukaemia (AML NOS, M1 category; cytogenetics 46,XX). She was enrolled in the AML16 trial and treated with Daunorubicin, Clofarabine, Gemtuzumab, and nine months of maintenance Azacytidine. Allogeneic bone marrow transplant was considered but was not pursued at the patient's request. On completion of five months of treatment (October 2010), the patient was asymptomatic and a repeat bone marrow biopsy demonstrated full remission. Three months later (January 2011), the patient was admitted with abdominal discomfort and distension. On examination there was evidence of ascites, with normal renal and liver function, haemoglobin 12.6, white cell count 3.9, platelet count 316, and an elevated CA125 at 320. Echocardiogram demonstrated good biventricular function. Imaging of the abdomen and pelvis confirmed gross ascites with no focal liver lesion, but there was evidence of fat stranding in the omentum with a small omental deposit on contrast-enhanced computed tomography of the abdomen and pelvis (Figure 1). Subsequent MRI showed omental cake and mesenteric deposits, raising the possibility of peritoneal neoplasia or mesothelioma. Abdominal paracentesis was performed and 4 litres of straw-coloured fluid were drained. The serum-ascites albumin gradient (SAAG) was 1, with LDH 95 and a white cell count of 923 (90% lymphocytes); no malignant cells were seen. Flow cytometry of the ascitic fluid confirmed no evidence of AML. A total-body CT scan was performed to rule out a primary solid tumour, and none was found. The patient continued to require frequent ascitic drainage every 3-4 weeks, with approximately 4 litres of fluid drained on each occasion. A laparoscopic peritoneal biopsy was arranged; macroscopically there was evidence of miliary deposits on the dome of the bladder and along the peritoneal surface (Figure 2). This was followed by a cystoscopy, which did not demonstrate any abnormalities. Histopathology of the biopsies taken from the peritoneal nodules reported the presence of chronic inflammation and noncaseating granulomata (Figure 3).
There were no atypical lymphoid cells or immature granulocytes visualised; there was no evidence of malignancy or vasculitis, and no fungi or acid-fast bacilli were identified on extended culture or with Grocott and Ziehl-Neelsen staining. The patient was admitted six times, at intervals of 7-10 days, for abdominal paracentesis, with 4-5 litres aspirated each time. A QuantiFERON test was performed to rule out abdominal tuberculosis and turned out to be negative. Based on these results, the patient was commenced on an empirical dose of 25 mg prednisolone, and subsequently her symptoms fully resolved: residual omental thickening was no longer present on the repeat MRI scan, and she no longer required any further paracentesis. Having excluded the more common causes of granulomata, including tuberculosis, fungal infection, and autoimmune disease, a diagnosis of abdominal sarcoidosis was made. Discussion Sarcoidosis typically presents with lung involvement or the classic tetrad of fever, bilateral hilar lymphadenopathy, erythema nodosum, and arthralgia; however, 10% of patients present with extrathoracic involvement, most often hepatic [1]. Ascites can occur and is most often transudative, secondary to pulmonary hypertension or to portal hypertension due to granulomatous obstruction. However, there is a small number of case reports (a search of the Medline database returned a total of 28) describing transudative or exudative ascites associated with peritoneal or serosal sarcoid studding [2]. This is associated with a significant elevation in serum CA125, as seen in our patient. Isolated ascites in sarcoidosis is usually benign and highly responsive to steroids, a feature again seen in our patient. ACE is produced by sarcoid granulomas, and serum ACE levels are used as a correlate of disease load [2]. Elevated serum ACE occurs in approximately 60% of patients with sarcoidosis; however, a normal ACE level, as in our patient, does not exclude the diagnosis. There have been several previous case reports of sarcoidosis associated with a diagnosis of AML. Of these, only three reported AML preceding the diagnosis of sarcoidosis, with intervals ranging from 11 months to 17 years [3][4][5]. The case presented here adds to this list, with an interval of 8 months between the diagnosis of AML and that of sarcoidosis. To our knowledge, this is the only case report of peritoneal sarcoidosis following a diagnosis of AML; extensive literature searches revealed no previous cases. The exact relationship between AML and sarcoidosis is unclear. It has been hypothesised that the granulomatous inflammation observed in sarcoidosis may occur in reaction to tumour-associated antigens that are widespread in AML. In addition, the transmission of sarcoidosis or sarcoidosis-inducing pathogens, for example via bone marrow transplantation, has been considered [6]. Our patient did not receive a bone marrow transplant, but she was treated with several chemotherapeutic agents as part of the clinical trial. Granulomatous disease has been associated with exposure to chemotherapy agents, for example capecitabine, interferon beta, and particularly heavy metals or methotrexate [7,8]. It is not clear whether this was a factor in the case presented here. There have been no similar cases in the AML16 study cohort to our knowledge, but this has been raised as a potential adverse event with the trial centre. The relationship between AML and sarcoidosis is an area that requires further study.
Conclusion The distinctiveness of this case lies in the uncommon presentation of sarcoidosis in association with a preexisting diagnosis of AML. As mentioned in the discussion, our patient was a participant in a clinical trial, and a possible conjecture is that one of the chemotherapeutic drugs used could have influenced the development of granulomas in our patient. However, no similar findings have been reported in other trial patients to our knowledge. We have highlighted this case as a possible late adverse event to the trial centre and to the competent authority.
1,559.6
2013-02-28T00:00:00.000
[ "Medicine", "Biology" ]
Metaphysic Dimension in Labuhan Ceremony of Yogyakarta Palace. The symbolic behavior of Javanese culture is reflected in the labuhan ceremony, which is performed at certain moments by the Yogyakarta Palace. This tradition is based on values and has a specific purpose, and it has strong cultural roots associated with metaphysical meanings. The material object of this study is the labuhan ceremony of the Yogyakarta Palace, while the formal object is metaphysics. This is qualitative philosophical research; the method used is analysis-synthesis with methodical interpretation. The existence of the labuhan ceremony is regarded as a metaphysical type of communication between this world and the supernatural realm. The two have a reciprocal relationship, so the labuhan ceremony is a form of communication to maintain harmony. The labuhan ceremony has a metaphysical meaning: it is a symbol of a harmonious relationship among human beings, the supernatural realm, and God. Introduction Translating life philosophically can be done on the basis of cultural patterns. One such pattern is mythic culture, although no human life can be separated from the other patterns, the ontological and the functional. Peursen stated that human life at every level, both primitive and modern, cannot be detached from these three patterns, and that what matters most is the human being as a cultural subject. Peursen mapped three cultural charts that always surround every culture, namely the mythic stage, the ontological stage, and the functional stage [1]. The mythic attitude is still felt in modern culture. These stages are not to be viewed purely historically, as if one simply appeared after another; rather, they are stages contained in every culture. The question that then arises is how humans position themselves within the development of culture. An example of symbolic behavior in Javanese culture is the labuhan ceremony, which is carried out at certain times by the Yogyakarta Palace to this day. This tradition has a purpose, a value, a manner, and is performed in particular places. These patterns of symbolic action tend to carry a meaning beyond the merely visible act. Considered in relation to the history of the establishment of the Yogyakarta Palace, the labuhan ceremony has strong cultural roots associated with metaphysical meanings, and since life within Javanese culture is always based on metaphysical values, the ceremony has its own purpose. The fundamental issue in this research is the metaphysical basis of the labuhan ceremony of the Yogyakarta Palace. The Yogyakarta Palace was chosen because it has a specific value background. According to Mulder, research on Java cannot be separated from the city of Yogyakarta; much, he argues, can be learned from it, for Yogyakarta is in every sense more truly Javanese than anywhere else. Yogyakarta City has a very long history. It is one of the heirs of the seventeenth-century Mataram Kingdom and is a Sultanate (Vorstenland) that still exists today. The oldest monuments of civilization, Hindu and Buddhist, are scattered in and around Yogyakarta, and the city is a point of acculturation of various cultural elements. As a Sultanate area, Yogyakarta was never colonized, because Nagari Ngayogyakarta was already established before the Unitary State of the Republic of Indonesia.
During the independence movement of the Republic of Indonesia, Yogyakarta was the place where several national social organizations that inspired youth movements were founded, such as Budi Utomo, Muhammadiyah, and Perguruan Taman Siswa. During Indonesia's struggle for independence, Yogyakarta became the temporary capital in order to save the Republic of Indonesia. Because of these various roles, Yogyakarta has gained the status of a special area, known as the Special Region of Yogyakarta (Daerah Istimewa Yogyakarta) [2]. Currently Yogyakarta, which has a population of around 3.5 million, hosts hundreds of universities with hundreds of thousands of students from various parts of Indonesia; one of the oldest tertiary institutions is Gadjah Mada University, with more than 60,000 students. Yogyakarta is known as a city of culture and a city of students, and it is also the second tourist destination after Bali. Many national figures, regional officials, and great artists were born in or emerged from this city. Research Stages • Exploration of literary sources. At this stage the researcher determines the locations of the data sources, namely libraries, study centers, or research centers. • Collection of library data, in the form of books and other literature related to the object of research, both the formal object and the material object. • Data processing, carried out by inventorying, systematizing, and classifying the data relating to the labuhan ceremony and to metaphysics. • Data analysis. The data relating to the labuhan ceremony and metaphysics that have been inventoried, systematized, and classified are then analyzed using the relevant method and methodical elements. Data Analysis This is qualitative philosophical research. The method used is analysis-synthesis with methodical interpretation. Metaphysics Meaning Philosophy, as a critical study of everything in the universe, places metaphysics in a crucial position of study; even René Descartes, the central figure of modern Western philosophy, stated that metaphysics forms the roots of the tree of science, the trunk is physics, and the branches are the other sciences [3]. Metaphysics is the branch of philosophy that deals with questions of existence; the term means that which lies behind physical objects [4]. Aristotle mentions several terms equivalent in meaning to metaphysics, such as first philosophy, knowledge of causes, and theology [5]. Metaphysics Issues Until now there has been no agreement on what the actual issues of metaphysics are, because each philosopher departs from a different point of view. Nevertheless, between Frederick Sontag and Anton Bakker, for example, there is a point of convergence: the two figures agree that metaphysics examines 'being'. This study specifically uses Bakker's theory. Bakker explains that ontology, or general metaphysics, is the branch of philosophy that addresses this issue and offers an overview of the existing structure, or absolute reality, underlying all types of reality [6]. In his book Ontology or General Metaphysics (1992), Bakker sets out six fundamental problems of ontological study. First, is reality many or one? Second, does reality have a transcendent character? Third, does reality exhibit permanence or novelty? Fourth, is reality physical or spiritual in dimension? Fifth, does the presence of reality have value or not?
Sixth, is a transcendental ontological norm that applies to all found in reality? The researchers used five of these issues to analyse the labuhan ceremony of the Yogyakarta Palace [6]. Labuhan Ceremony The labuhan ceremony is one of the traditional ceremonies held regularly from the reign of Sri Sultan Hamengku Buwono I until now, and it still affects the life of the people of Yogyakarta. The community believes that by convening the labuhan ceremony, the safety, tranquility, and welfare of the community and the country can be maintained. Humans are cultural creatures, and human culture is closely tied to symbols, as is Javanese culture; the symbol is bound up with tradition and the mystical [7]. For the Javanese, the symbol has a metaphysical meaning, and the labuhan ceremony is accordingly concerned with symbols. The labuhan ceremony, in essence, involves casting offerings into the water (the sea, for example) or at certain places, according to the customs of the palace. The general public, especially Yogyakarta residents, still believe in the mythic influence of the king, the palace, and the palace heirlooms. Therefore, every time the ceremony is held, citizens willingly jostle to acquire objects connected with the ceremony in order to seek blessings. The king is thought to possess supernatural charisma and strength; parts of the king's body, such as hair and nails, are considered to hold strong power [8]. In principle, the purpose of the labuhan ceremony is the personal safety of Sri Sultan, the palace, and the people. In a broader sense, the ceremony seeks the safety and harmony of the universe, both nature in the objective, worldly sense and nature in the metaphysical sense, which can only be experienced and felt inwardly. As explained by Sumarsih, the labuhan ceremony is accompanied by equipment (uba rampe) carrying certain meanings manifested in the form of symbols, including: • Apem. The apem is accompanied by sticky rice and kolak; for the Javanese, these three foods symbolize an apology to God. • Tumpeng adhem-adheman. This tumpeng dish is intended to keep the atmosphere of the palace and its surroundings always calm, or adhem. • Tukon pasar. Tukon pasar consists of a variety of fruits and snacks. This dish is intended so that the people (kawula) who live from trade can achieve success. • Tumpeng Woran. This dish consists of rice and several kinds of side dishes, symbolizing the hope that each individual can interact with other individuals in relationships of fraternity. • Dhahar Kebuli. This food was the favorite of Sri Sultan Hamengku Buwono I. • Dhahar Punar. This dish is yellow rice (nasi kuning), for those who have gold deposits. • Tumpeng urubing damar. This tumpeng has a wick inserted into its top, the end fitted with cotton so that it takes the form of a lamp. It is a symbol of the hope that a king may give bright rays to his people. • Tumpeng Ropoh. The tumpeng and side dishes are put into a takir (a bowl-like container formed from banana leaves). This tumpeng, too, is a symbol of the hope that every individual can interact with other individuals in brotherly relationships. • Tumpeng Yuswa. This consists of a large tumpeng surrounded by small tumpeng whose number corresponds to the age of Sri Sultan, calculated according to the Javanese calendar. It symbolizes the hope that Sri Sultan may enjoy long life. • The golden yellow umbrella, symbolizing the position of a king. • Other equipment, namely dhahar rasulan, which includes a chicken ingkung; the chickens prepared as ingkung are those with black feathers.
The color black is considered to signify the sincerity of a king [9]. Based on the above explanation, it can be concluded that the labuhan ceremony held annually by the Yogyakarta Palace is a symbol of the harmonious relationship among human beings, the supernatural realm, and God. Quantity-Reality The first fundamental problem in ontology is the question of whether reality is many or one. Reality, according to the Javanese as read by Bakker, consists in unity with the Absolute. This unity is still temporary in the world (miyos), but permanent in the afterlife, namely Pamoring Kawula lan Gusti [6]. Javanese people recognize that humans are creations of God and are part of nature. In this regard, Mulder stated that humans and nature, in the view of Javanese philosophy, have a very close relationship, because cosmologically life in the world is part of a unity. In that unity, all phenomena have a place and stand in complementary, coordinated relationships with one another; the unity of existence has a culminating point at the center, which encompasses everything in the Most One [2]. On this understanding, human beings have an obligation to align themselves with nature, both the sensory and the supernatural. The ceremony is a form of human expression of this harmonising with nature. The labuhan ceremony is a form of recognition both of the reality that can be sensed and of beings that cannot be sensed; it testifies that the tangible and the supernatural are realities that are mutually inseparable. When there is a disaster, for example, the solution is metaphysical. Therefore, labuhan is held each year as a form of this reciprocal relationship. For Javanese people, relations with nature should be kept in harmony: nature and humans are one ordered entity. On this basis it can be concluded that the labuhan ceremony, as a symbol of Javanese philosophy, is monistic in tendency: all comes from God, the creator of nature, and finally returns to God. Dynamics of Reality The fundamental question here is whether reality exhibits permanence or novelty. Concerning this principle, we must look at real life in both its tangible and its supernatural forms. In the labuhan ceremony we can distinguish two different worlds, yet the two are clearly intertwined: outward welfare is influenced by the spirit. Here it is evident that the relationship is permanent, while its form is always dynamic; there is an alignment relationship in which each side fills the other. The relationship itself comes down to the king. The king is seen as a manifestation of God in the world, or of God made manifest in his power. God is the most perfect and eternally immortal, unlike human beings, who constantly change, like wheels that spin and evolve yet do not lose their identity. Physical and Spiritual The fundamental question here is whether reality is physical or spiritual in dimension. Applied to the labuhan ceremony, matter and spirit are two things that cannot be separated, though there is a tendency for the transcendence of the spirit to stand above the material. The reality of labuhan is a metaphysical event symbolized through its equipment: the physical world draws itself into the spiritual world. The uba rampe in the ceremony serve as means of dialogue at the level of the spirit, and the mantras and prayers spoken by the leaders of the ceremony serve as intermediaries between the two worlds, so that the physical and the spiritual world stand in causal and metaphysical relationship.
Meaning and value of reality The fundamental issue here is whether the presence of reality has value or not. According to Javanese philosophy, life is experienced in taste (rasa): reality has meaning and value [6]. Reality itself can only be grasped at the level of taste, for reason is considered able to capture only the reality that can be seen with the senses. The way to attain the awareness of taste is described in Dewaruci, the teaching journey of a disciple who wants to uncover the secret of 'Being': Bimo, through certain stages, finally arrives at the deepest understanding of taste. It is in fact a personal practice; taste means feeling in all dimensions [10]. Similarly, the labuhan ceremony has a metaphysical meaning that can only be understood through taste, a sense of union with the Divine. Taste is interpreted as eling, remembering the origin of humanity. Reality objectives The fundamental question here is whether a transcendental ontological norm that applies to all is found in reality. Harmony, conformity, and balance are the signs of goodness and righteousness. Nature is directed toward harmony and seeks to unite with Sang Hyang Widdhi; forms of conflict (such as sin and disaster) must be cleansed in order to achieve harmony, at an increasingly complete and unified level [6]. Applied to the ceremony, it can be concluded that the ceremony aims to align its participants with the supernatural realm and so to avoid disaster. Labuhan is thus a human effort toward the cosmos that culminates in the Almighty: the achievement of the perfection of life, so that in the end people can unite with God (Pamoring Kawula lan Gusti). The soul of sejatining manungsa implies a vertical relationship, namely the relationship between humans and the Creator. This shows that, in the Javanese view, human beings have deficiencies; the labuhan ceremony is therefore an effort to cleanse oneself by casting something into the water (the sea) or at certain places such as mountains, so that harmony occurs. Conclusion The existence of labuhan is seen as communication between this world and the supernatural world; the labuhan ceremony is a symbol of a relationship of harmony. In the conjunction of these two worlds, the position of the king is seen as the most central, because the king, with his power, is able to restore balance should disharmony arise in the relationship between the two, a disharmony that can sometimes bring disaster. The practice of ngalab berkah in labuhan, in which the public seeks to acquire the objects that are cast away, is an attempt to partake of the power of the king so that it may bring prosperity. The dynamic relationship can always be maintained if labuhan is carried out continuously. Labuhan, as a symbol of Javanese philosophy, is monistic in tendency: all comes from God, the creator of nature, and finally returns to God. The labuhan ceremony is an effort to align oneself with nature, as evidence that humans and nature form a whole, the unity of Microcosm (jagad cilik) and Macrocosm (jagad gedhe). Labuhan is a form of recognition that there is a supernatural realm in addition to this physical one. Outward welfare is influenced by the spiritual. In reality, the relationship is permanent, but its form is always dynamic; there is an alignment relationship in which each side fills the other. The relationship comes down to the king, because the king is a manifestation of God, reflected especially in the king's dominion. God is most perfect and eternally immortal, different from human beings.
Reality is physical and spiritual; the two are inseparable and influence each other, though there is a tendency for the transcendence of the spirit to stand above the material. Reality is tan kena kinaya apa, tan iki iku (it cannot be likened to anything, neither this nor that), yet life is experienced in taste. The purpose of reality in the labuhan ceremony, finally, is universal harmony: it carries the meaning of the human being with the cosmos, culminating in the Almighty, namely the achievement of the perfection of life, Pamoring Kawula lan Gusti.
4,040.4
2020-01-01T00:00:00.000
[ "Philosophy" ]
SVR and ARIMA models as machine learning solutions for solving the latency problem in real-time clock corrections Real-time precise point positioning (PPP) has become a prevalent technique in global navigation satellite systems (GNSS). However, GNSS real-time users must receive state space representation (SSR) products to correct for satellite clock, orbit, and phase biases. The International GNSS Service (IGS) provides GNSS users with real-time services (RTSs) through different real-time SSR correction products. These products arrive at the GNSS users with some latency, which affects the quality of real-time PPP positioning. The autoregressive integrated moving average (ARIMA) and support vector regression (SVR) models are used in this research to predict those corrections and thus eliminate the latency effect. To quantify that effect, the research simulates it in a so-called forced-latency solution. Compared with this solution, which includes the latency effect, the ARIMA model reduces the standard deviation of the clock corrections by 28% and 13% for the GPS and GLONASS constellations, respectively, and the SVR model by 28% and 23%. The results for the permanent GNSS stations used in this study across the years 2013, 2014, 2015, 2019, and 2021 show a mean reduction in the 3D positioning standard deviation, compared with the forced-latency solution, of 13% for the ARIMA solution and 9% for the SVR solution. The potential of both models to overcome the latency effect is apparent from these findings. Introduction GNSS is widely used to determine position through various methods, such as single point positioning, relative positioning, and PPP; the main principles of these methods can be found in Zumberge et al. 1997; Hofmann-Wellenhof et al. 2008; Enge and Misra 2011; Sanz et al. 2013; or Leick et al. 2015. PPP allows GNSS users to locate themselves globally, since it does not rely on a local network of GNSS receivers or a base station, and it is also free from the local effects resulting from the movement of reference stations. Nevertheless, users must precisely measure pseudoranges and carrier phases from GNSS satellites to perform post-processing and real-time PPP, and GNSS biases such as multipath, ionospheric delay, tropospheric delays, earth tides, relativistic effects, and antenna variation must be accurately estimated. Detailed studies of these effects can be found in Zumberge et al. (1997), Ge et al. (2008), Cai and Gao (2013), Li et al. (2015), and Teunissen and Khodabandeh (2015). Finally, code and phase biases and satellite orbital and clock errors can be mitigated by using adequate bias, orbital, and clock correction products (Teunissen and Khodabandeh 2015; Henkel et al. 2018; Ye et al. 2018). The IGS and other Analysis Centers (ACs) provide the respective precise products (Dow et al. 2009). PPP coordinate accuracy can reach the centimeter to sub-decimeter level, depending on whether the operational mode is post-processing or real-time (RT-PPP). The IGS initiated a pilot project for real-time activities at the beginning of 2001 to improve RT-PPP accuracy (Dow et al. 2009). The goal was to provide the RTS for GNSS users with accurate real-time products, mainly orbital and clock corrections, to reduce the satellite ephemeris and onboard satellite clock errors.
Consequently, some ACs using the GNSS permanent reference network are involved in the real-time tracking, computation, and broadcasting of real-time products (Grinter and Roberts 2013). Various ACs produce and disseminate their own real-time products, and IGS combines them to produce official real-time products such as IGS01, IGS02, and IGS03: IGS01 is a single-epoch combination solution, IGS02 is a Kalman filter combination solution, and IGS03 is an experimental Kalman filter combination for GPS and GLONASS solutions. The orbit and clock products reach the user with different latencies, i.e., the sum of the time required to generate the products at the ACs, to combine these products for the IGS RTS service (Johnston et al. 2017), to broadcast them over the Internet using Networked Transport of RTCM via Internet Protocol (NTRIP) as RTCM state space representation (SSR) correction streams, and the time required by the local computer where the RT-PPP solution is implemented. As the latency value increases, the corrections become outdated, leading to older corrections being applied to real-time observations. Martín et al. (2015b) found a 3D error of approximately 0.15 m and 0.30 m when latencies of 30 and 40 s are employed, respectively. The latency problem is also addressed by Hadas and Bosy (2014); this study showed a negative relationship between latency values and correction accuracy. The latency values for the individual products produced by the ACs reach 10-12 s; however, this value increases to 30-40 s for the combined IGS products, since the IGS needs extra time to receive all AC solutions and combine them. Different prediction models are discussed in the literature. Examples are quadratic polynomial or linear models with sinusoid terms, found in Huang et al. (2014), El-Mowafy et al. (2017), El-Mowafy (2019a, b), and Yang et al. (2019), to predict orbital and clock corrections during data communication failures or discontinuity periods, and genetic algorithms with autoregressive moving average models to predict 15 min of corrections during data loss of the IGS02 stream in Kim and Kim (2017). A limited number of studies have investigated prediction models for navigation satellite systems other than GPS. In Qafisheh et al. (2020), the GLONASS satellite system is included in the prediction; still, both Qafisheh et al. (2020) and Kim and Kim (2017) leave out the latency effect on coordinate accuracy. Clock correction values are the most challenging to model compared to orbital corrections, because they are highly correlated with the behavior of the onboard GNSS clock oscillators, which suffer from frequency instability, jumps, offsets, outliers, and frequency drift. Additional research on the characteristics and behavior of the clocks can be found in Daly (1990), Senior et al. (2008), Hauschild et al. (2013), and Maciuk (2019). Different real-time schemes are utilized to adapt clock offsets, jumps, noise, and rate using the Kalman filter (Huang and Zhang 2012). This research aims to overcome the latency in the IGS03 real-time clock product using the ARIMA and SVR models over a short period. The prediction models and the methodology used in this research can be extended to cover other combined real-time products made available by the IGS or other ACs. Various open-source software packages can handle SSR correction streams, such as RTKLIB (Takasu 2009, http://www.rtklib.com/rtklib_tutorial.htm), GNSSSurfer (SAPOS®-Berlin 2020, http://217.9.43.196/Download/),
or BNC from the Federal Agency for Cartography and Geodesy (Bundesamt für Kartographie und Geodäsie, BKG) (Weber and Mervart 2007). In this research, BNC software is employed. It can be used to accomplish different GNSS tasks, including satellite coordinate comparison; broadcasting, combining, and uploading corrections; PPP post-processing; RT-PPP; and decoding the SSR messages to obtain the correction values required to eliminate orbit and clock errors (Kouba and Héroux 2001; Weber and Mervart 2007). The PPP technique uses ionospheric-free combinations to mitigate ionospheric effects by combining pseudorange code and carrier-phase observations on different frequencies; the tropospheric error, antenna phase center, and other biases are well modeled in the BNC software. Experimental data First, the RT-PPP measurements from December 13-16, 2019 were stored, using the IGS03 correction product, a 5-s sampling interval, and the observations of the Brest (France) permanent station. Before the experiment, the BNC software stored observation, navigation, and correction files on a local computer synchronized to Internet time; latency values were also stored. The mean latency was 31.68 s, although during the implementation it varied between 31.34 and 32.21 s. The stored correction file contains data for 52 observed satellites, with approximately 26 thousand correction values; several satellites are affected by data unavailability during some periods. To evaluate the prediction models properly, the research was extended to cover more stations over several years: one day of RT-PPP data was used for each of nine globally distributed stations, for the years 2013, 2014, 2015, 2019, and 2021. Table 1 shows the percentages of null values for the various satellite blocks in the different years, and Fig. 1 shows the distribution of the selected IGS stations. Methodology One of the most important aspects to take into account in any machine learning or data mining project is the proper choice of the method or mathematical model. Here the task is to make future predictions of the studied signal itself, so the choice of method depends on the characteristics of the signal; the results of the signal analysis therefore provide the foundation for choosing the prediction method. Signal analysis A statistical analysis of the stationarity of the signal is needed to select the proper machine learning algorithms. Our goal is to predict future values of a temporal series (time series forecasting). If the signal is stationary, that is, has no trends, seasonality, or cyclic patterns, most machine learning methods, including random forests, neural networks, and XGB, fail to fit the data; in those cases, the predictions take the same values as the last observation (Brownlee 2017). The Augmented Dickey-Fuller, Phillips-Perron, and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) statistical tests can examine the stationarity of a time series (Kwiatkowski et al. 1992), and the stationarity decision can be improved by combining different statistical tests (Schlitzer 1995). However, only the KPSS test is applied to the stored GNSS clock correction data, to simplify the code and reduce processing time. Stationarity was tested for various time windows of clock data split from the IGS03 clock correction files obtained between December 13 and 16, 2019 (a minimal sketch of this windowed test is given below).
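A minimal sketch of the windowed KPSS screening, using statsmodels; the window length is a parameter (e.g., 96 samples for an 8-min window at the 5-s rate), and the 5% significance level is our assumption:

```python
from statsmodels.tsa.stattools import kpss

def window_stationarity(clock_corr, window_len=96, alpha=0.05):
    """Split the clock-correction series into fixed-length windows and
    count how many are stationary. KPSS's null hypothesis is stationarity,
    so a p-value above alpha means the window is treated as stationary."""
    n_stat = n_tot = 0
    for k in range(0, len(clock_corr) - window_len + 1, window_len):
        _, p, _, _ = kpss(clock_corr[k:k + window_len],
                          regression="c", nlags="auto")
        n_tot += 1
        n_stat += p > alpha
    return n_stat / n_tot   # fraction of stationary windows, cf. Table 2
```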
Table 2 presents the stationarity results for the 8-min data windows, including the combined results for the different satellite blocks and the percentages of stationary and non-stationary windows. In this research, we conclude that the clock corrections are mostly stationary signals, although these results could vary with the clock oscillator or the signal length. This considerably restricts the choice of machine learning models for forecasting; the SVR and ARIMA models are therefore the best candidates (Hyndman and Athanasopoulos 2018; Brownlee 2017). Furthermore, they can also be used to predict non-stationary signals. Prediction models Once the prediction models have been chosen on the basis of the signal analysis, we describe them briefly, limiting the explanation to what is necessary to understand the implementation and results properly. Support vector regression model The support vector machine (SVM) and the derived SVR methods have been widely used in machine learning due to their simplicity (Clarkson et al. 2012). SVM development allows the traditional support vector classifier to be applied in a higher-dimensional space (Drucker et al. 1997); this adaptation is made by harnessing a mapping function (kernel), and radial basis function, polynomial, linear, and other kernels can transform the data into a higher-dimensional space. Given a data set {(x_i, y_i), i = 1, 2, …, m}, where the features x_i and the labels y_i ∈ ℝ are normalized to (+1, −1), the classifier categorizes the data according to the label. For data categorization, a hyperplane must be established to separate the data. Due to the nature of the data, many hyperplanes can classify the same dataset; the chosen hyperplane must maximize the margin, which is defined by the points nearest the margin, called support vectors. The margin can be hard or soft; the so-called soft SVM is generally used because it allows isolated values and misclassified samples, making it more realistic. Following Smola and Schölkopf (2004), the soft-margin SVM problem can be written as: minimize (1/2)‖w‖² + C Σ_{i=1}^{m} ξ_i, subject to y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i and ξ_i ≥ 0 for i = 1, …, m, where w determines the margin width, b is the bias, ξ_i denotes the slack variable associated with the soft SVM, allowing some values to fall within the margin, and C is the trade-off parameter for the margin width. ARIMA model The ARIMA model has been extensively used to forecast time series (Sneeuw et al. 2012; Ye et al. 2012; Moreira et al. 2013; Xin et al. 2018; Van Le and Nishio 2019). ARIMA is a combination of autoregressive (AR) and moving average (MA) terms with a differencing term (I), and it rests on the well-known Box-Jenkins methodology (Box et al. 2011; Hyndman and Athanasopoulos 2018). The I term differences the time series to ensure stationarity, and the AR term indicates a regression of the variable against itself. For example, an autoregressive model of order p can be defined as y_t = a + b_1 y_(t−1) + b_2 y_(t−2) + … + b_p y_(t−p) + ε_t, where a is a constant, ε_t is the noise, and b_1, …, b_p are the parameters to be identified. This expression is similar to a multiple regression, but uses the lagged values of y_t as predictors. The MA term denotes a moving average model that uses past forecast errors in a regression-like model. For example, a moving average model of order q can be expressed as y_t = c + ε_t + r_1 ε_(t−1) + … + r_q ε_(t−q), where c is a constant, ε_t is the noise, and r_1, …, r_q are the parameters to be determined. In this case, the prediction y_t is computed from a weighted moving average of the past q forecast errors.
Finally, the integration of the AR, I, and MA terms generates the ARIMA model, described as y′_t = m + b_1 y′_(t−1) + … + b_p y′_(t−p) + r_1 ε_(t−1) + … + r_q ε_(t−q) + ε_t, where m is a constant, ε_t is the noise, and y′_t signifies that the series has been differenced, possibly more than once. The ARIMA model is usually denoted ARIMA(p, d, q), where p, d, and q represent the order of the AR model, the degree of differencing, and the order of the MA model, respectively (Piccolo 1990; Box et al. 2011; Hyndman and Athanasopoulos 2018). Implementation The stored RT-PPP correction data contain orbital and clock corrections. The prediction is centered on the clock corrections, since orbital corrections can be well projected from the radial, along-track, and cross-track components and their velocities broadcast in the navigation messages (Hadas and Bosy 2015). Clock corrections also need to be estimated at a high rate (Ge et al. 2012), owing to the frequency instability of onboard GNSS clocks, mainly caused by temperature and gravitational variations. If real-time predictions are required, a balance between the length of the training data and the computational time is needed to control both prediction accuracy and processing time. Rolling sliding windows (RSW), expanding windows, and fixed splitting are alternative testing and training methods for accomplishing this. The clock corrections experience periods of high oscillation, with periodic jumps due to the clocks' frequency drift, instability, and changes in gravitational forces; consequently, forward validation with the RSW technique is the more suitable testing and training method. It guarantees a reduction in computational time and mitigates the effect of outdated data, which emerges clearly when an expanding window is used. Using a rolling window instead of an expanding window allows GNSS users to keep the computational load at a minimum and improves the prediction accuracy. This research aims to predict clock corrections approximately 30 s ahead to overcome the latency effect, so three essential aspects must be addressed (a minimal sketch of the two predictors is given below). First, we must establish the impact of RSW length on the prediction accuracy of both the SVR and the ARIMA model. Second, the model parameterization must be kept up to date by searching for and adjusting the optimal hyperparameters of the model on the basis of the stored data. The question here is to find the best rate for updating the hyperparameters and then the best values for them. This led to evaluating the prediction accuracy for different combinations of the C and gamma hyperparameters in the SVR model and of (p, d, q) in the ARIMA model, considering a large hyperparameter search space in both models to ensure more stable predictions. Regarding SVR, a large C value means that the model does not allow errors to violate the margin, resulting in margin shrinking; the smaller C is, the more isolated values are allowed in the margin area (soft SVM). The gamma parameter establishes the variance of the Gaussian function used for class separation; it must be tuned to control interpolation, extrapolation, and nonlinearly separable classes (Guyon et al. 1993). For the ARIMA model, p controls the number of lags in the linear regression, d controls the degree of differencing needed to ensure signal stationarity, and q controls the propagation of the signal errors.
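The following is a minimal sketch of the two predictors on a single rolling window, using scikit-learn and statsmodels. The lagged-regression embedding for SVR, the lag count, the candidate C/gamma grid, and the AIC criterion for picking (p, d, q) are illustrative assumptions; the text above fixes only the window lengths (1 min for SVR, 8 min for ARIMA), the hourly C/gamma refresh, and the search ranges p, d in 0..3 and q in 0..1:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from statsmodels.tsa.arima.model import ARIMA

def svr_predict(window, horizon=3, lags=3, grid=None):
    """Fit an RBF-kernel SVR on a lag-embedded rolling window and iterate
    one-step predictions out to the ~30-s latency horizon, feeding each
    prediction back in as the newest lag."""
    if grid is None:   # hypothetical search grid, refreshed hourly in practice
        grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
    X = np.array([window[i:i + lags] for i in range(len(window) - lags)])
    y = np.array(window[lags:])
    model = GridSearchCV(SVR(kernel="rbf"), grid, cv=3).fit(X, y)
    hist, preds = list(window), []
    for _ in range(horizon):
        nxt = float(model.predict(np.array(hist[-lags:])[None, :])[0])
        preds.append(nxt)
        hist.append(nxt)          # feed the prediction back in
    return preds

def arima_predict(window, horizon=3):
    """Fit ARIMA(p, d, q) on an 8-min window, scanning the small grid
    p, d in 0..3 and q in 0..1; selection by AIC is our assumption."""
    best = None
    for p in range(4):
        for d in range(4):
            for q in range(2):
                try:
                    fit = ARIMA(window, order=(p, d, q)).fit()
                except Exception:         # some orders fail to converge
                    continue
                if best is None or fit.aic < best.aic:
                    best = fit
    return best.forecast(steps=horizon)
```

With 10-s sampling, a horizon of 3 steps covers the roughly 30-s latency observed in the stored data.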
Finally, the third essential aspect is to compare, in a post-processing experiment, the clock values stored in real time with the SVR and ARIMA predictions, computing the standard deviation and range of the residuals and the effect on the final PPP coordinates. This experiment was performed by reprocessing the stored RT-PPP files with the BNC software in static mode; since the production and transmission of the clock and orbital corrections no longer introduce delays in post-processing, the latency effect is eliminated, yielding the so-called latency-free solution. The stored clock correction file was then modified to hold the new clock correction values from the SVR and ARIMA predictions, and BNC was rerun with the files containing the predictions, in order to compare the coordinates of the latency-free solution with those of the SVR and ARIMA solutions, in which the latency is eliminated by prediction. To complete the comparisons, a simulation of the latency effect was computed by shifting the stored correction file ahead by 30 s, which approximately represents the stored mean latency; this yields the so-called forced-latency solution, reproducing the normal RT-PPP situation. Figures 2 and 3 give an overview of the two central parts of the methodology applied in this research. Figure 2 covers the real-time implementation: the BNC software used the broadcast corrections and navigation information, together with the station observation stream, to produce the real-time coordinate solution; the clock correction predictions were obtained from both models according to the rolling sliding window values; and the clock correction residuals of the different solutions were calculated with respect to the real-time corrections. The post-processing is described in Fig. 3: the stored navigation and station observation information, together with the stored clock correction predictions, the clock corrections stored in real time, and the simulated clock corrections, were used to produce the station coordinates for the different solutions. The sampling interval for the real-time and forced-latency clock correction files was 10 s, and the SVR and ARIMA models predict the clock corrections at the same sampling interval. Results and discussion The initial portion of the results, following the implementation section, concerns the effect of RSW length. Sliding windows of 1, 2, 4, 8, and 16 min were probed for fitting the SVR and ARIMA models, using one hour of clock correction data from December 13, 2019. The standard deviation of the clock correction residuals, obtained by subtracting the SVR and ARIMA predictions from the latency-free solution, was compared. Based on the results, an 8-min RSW for the ARIMA model and a 1-min RSW for the SVR model can be selected as a good compromise between prediction accuracy and processing time. Tables 3 and 4 report the standard deviation comparison for both models; one satellite represents each satellite block, and the processing time is also included. For comparison purposes, those tables also contain the standard deviation of the residuals between the forced-latency solution, as a reproduction of the real-time process, and the latency-free solution. It should be mentioned that the results are the same if different one-hour intervals of the stored data are chosen. It can be seen from the tables that there is a negative relation between the RSW length and the error values obtained by the ARIMA model.
This shows that the ARIMA model depends strongly on the length of previous observations to adjust the hyperparameters properly and construct the prediction model. STDP in Tables 3 and 4 denotes the standard deviation of the predictions with respect to the latency-free solution, and STDL denotes the standard deviation of the forced-latency solution with respect to the latency-free solution. In both tables the standard deviation unit is meters, and the processing time corresponds to one hour of predictions. The SVR model can be implemented with different kernels, such as linear, polynomial, and radial basis functions. The radial basis function was picked as the kernel because it is more suitable for short-term predictions such as the latency horizon considered here. With this kernel, as the RSW length increased, the influence of outdated observations became more notable. It is worth mentioning that the different clock types used in the different satellite blocks lead to negative and positive trends that do not hold for all RSW lengths. According to the implementation section, the second vital aspect to consider is the selection of the best updating rate for the hyperparameters. Updating rates of 0.25, 0.50, 1, 2, 3, 4, 5, and 6 h were examined from December 13 to December 16, 2019. Based on the experiments, a one-hour updating rate was selected for SVR. The hyperparameter search space for the SVR method must be quite extensive to ensure that correct values are chosen; consequently, a one-hour rate is a good balance between computation time and accuracy. The ARIMA hyperparameter space, by contrast, is fairly small: 0 to 3 for the p and d parameters and 0 to 1 for the q parameter. Thus, the hyperparameter search is a very fast operation, and it can be executed for every 8-min RSW without a noticeable loss of computational time. We also found that the RSW length has a greater impact on the prediction accuracy than the parameter updating rate. The same data and products used for the ARIMA model examination were employed to analyze the SVR model. The third and final aspect concerns the comparison of the prediction models and the forced-latency solution against the latency-free clock correction values. Figures 4 and 5 show the range of the clock correction residuals, calculated by subtracting the latency-free clock file from the forced-latency, ARIMA-prediction, and SVR-prediction clock files; Figures 6 and 7 show the standard deviation of the same residuals. It is apparent from those figures that the range reduction is negligible because of the clock correction jumps; however, a remarkable reduction in standard deviation, especially for GPS satellites, is observed. For Figs. 4, 5, 6, and 7, the differences are computed with respect to the latency-free clock correction values. From these experiments it can be concluded that the ARIMA model yields lower standard deviations than the SVR model, except for the GLONASS M satellite block, where SVR is superior. However, the SVR model executes faster than the ARIMA model; Tables 3 and 4 report the processing time needed for one hour of predictions.
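A hedged sketch of the two update policies follows: an hourly grid search over (C, gamma) for SVR and a per-window search over the small (p, d, q) grid for ARIMA. The (p, d) range 0-3 and q range 0-1 mirror the text; the SVR grid values and the validation split are our own illustrative choices.

```python
# Hedged sketch of the hyperparameter-update policies described above.
import itertools
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

def best_svr_params(X, y, Cs=(1, 10, 100, 1000), gammas=(0.001, 0.01, 0.1, 1)):
    """Hourly search: X is a 2D regressor array; score each (C, gamma) pair
    on the last quarter of the hour's data."""
    split = int(0.75 * len(y))
    best, best_err = None, np.inf
    for C, g in itertools.product(Cs, gammas):
        model = SVR(kernel="rbf", C=C, gamma=g).fit(X[:split], y[:split])
        err = np.mean((model.predict(X[split:]) - y[split:]) ** 2)
        if err < best_err:
            best, best_err = (C, g), err
    return best

def best_arima_order(window):
    """Per-window search: the grid is small (p, d in 0..3, q in 0..1),
    so it can be rerun for every 8-min RSW."""
    best, best_aic = None, np.inf
    for p, d, q in itertools.product(range(4), range(4), range(2)):
        try:
            fit = ARIMA(window, order=(p, d, q)).fit()
            if fit.aic < best_aic:
                best, best_aic = (p, d, q), fit.aic
        except Exception:
            continue  # some orders fail to converge on short windows
    return best
```

The asymmetry matches the finding above: the expensive SVR search is amortized over an hour, while the cheap ARIMA search can run for every window.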
Coordinates free from latency (latency-free solution), simulated real-time coordinates (forced-latency solution), and SVR- and ARIMA-predicted coordinates were computed with the post-processing BNC software module for the permanent stations of Fig. 1, using one day of real-time data for each of the years 2013, 2014, 2015, 2019, and 2021. Table 5 summarizes the average 3D residual statistics, in terms of mean, standard deviation, and range, of the forced-latency, SVR, and ARIMA solutions compared with the latency-free solution. Figure 8 is a scatter plot of the X and Y planar coordinates of all four solutions, together with the fixed coordinates of IGS station BREST, during a one-day period. Both prediction models show a remarkable improvement in coordinate accuracy compared with the forced-latency solution, which simulates the real-time situation. Figure 8 confirms that the SVR and, especially, the ARIMA coordinate solutions are more precise and much denser around the true coordinates than the forced-latency solution. For visualization purposes, the X and Y coordinates were shifted by 423,100 m and 332,700 m, respectively. Tables 6 and 7 summarize, over the study days of the years under consideration, the standard deviation and range of the residuals obtained by subtracting the latency-free clock correction values from the different solutions (forced-latency, ARIMA, and SVR predictions). Tables 6 and 7 do not include a mean-value analysis, as the mean residuals are near zero for all solutions. Finally, the data with the highest clock correction jumps were examined. Each clock correction value was subtracted from the previous one to determine the maximum clock correction jumps; the results were then grouped by satellite block by averaging the maximum values. Figure 9 shows the clock correction values for 24 h between December 13 and 14, 2019, for GPS satellite G05; it can be seen that the clock correction values dropped by around 0.40 m within seconds. Figure 10 shows 30 min of clock corrections: the black line represents the latency-free solution, the gray dashed line the forced-latency solution, and the thick gray and black dashed lines the prediction models. Table 8 presents the mean value of the maximum difference between successive clock correction values. The results show that the jumps could reach 11.68 m for satellites of GPS block IIF in 2013. Table 8 also shows that the consecutive clock correction jumps have become smaller in recent years, especially for GPS satellites. From the GNSS real-time user's point of view, the implementation should be as follows. If the SVR method is selected, the user should store observations for the first 2 min to generate the first valid SVR model; an initial search for the hyperparameters is carried out and the model is fitted. Next, the fitted model can predict the next clock correction according to the stored latency value, keeping the size of the rolling sliding window constant at 1 min; new observations are stored and the oldest ones are automatically dropped from the RSW. In addition, GNSS users need to update the C and gamma parameters of the SVR model every hour to ensure accuracy, so the user must store one hour of latency values continuously.
For the first hour of observations, the hyperparameters can be updated every 15-20 min to keep the predictions accurate. If the ARIMA method is selected, the user should likewise store the observations of the first 2 min, on which the first hyperparameter search is executed. These hyperparameters are used for the first 8 min of observation; afterward, an RSW of 8 min can be employed, including the search-and-fix of the hyperparameters. With both models, the GNSS user should use the clock corrections as received, with the latency effect, during the first 2 min; thereafter, the prediction model and its results can be used.

Conclusions

The SVR and ARIMA prediction models were applied to IGS03 clock corrections; both were used to estimate the clock corrections in order to overcome the latency effect. The latency effects on orbital corrections were not studied in this research, as they vary at a low rate compared with the clock corrections and are not severely affected by latency. Three days of real-time data were utilized initially to obtain the correct time dimension of the rolling sliding window and the correct updating rate for the ARIMA and SVR hyperparameters. Subsequently, one day of real-time data for each of the years 2013, 2014, 2015, 2019, and 2021 was used to confirm the validity of the proposed methods. The results prove that both models can be used to overcome the latency. According to Tables 6 and 7, the ARIMA model reduces the standard deviation by 28% and 13% for the GPS and GLONASS constellations, respectively, compared with the forced-latency solution; the SVR model reduces it by 28% and 23%, respectively. Both models showed robust behavior during clock correction jumps because they rely on short windows of clock correction observations: 1 min for the SVR model and 8 min for the ARIMA model. The use of RSWs mitigates the effect of jumps, improves the availability of the RT-PPP solution derived from the BNC software, and maintains coordinate convergence during jump periods. Finally, based on the average 3D standard deviations of Table 5, both models successfully eliminate the latency effect, especially the ARIMA model. However, the SVR model is approximately eight to nine times faster than ARIMA in processing time. It could therefore be an excellent solution to overcome the latency thanks to its simplicity and computational speed, which matters because GNSS receivers need to predict clock corrections for the approximately 10-20 satellites above the horizon when using GPS and GLONASS signals. The proposed prediction models could also forecast clock corrections during periods of data loss or discontinuity. The range analysis was included in this research to investigate outlier predictions; both prediction models produced such outliers in a limited number of predictions, affecting both the clock corrections and the computed 3D residuals. To eliminate possible outliers, a threshold detector should be implemented for both prediction models. Additionally, the proposed methodology could improve the accuracy of dynamic GNSS receivers such as self-driving vehicles and mobile users, who rely on mobile data connections whose internet links are more affected by latency.
Changes in soil oxidase activity induced by microbial life history strategies mediate the soil heterotrophic respiration response to drought and nitrogen enrichment

Drought and nitrogen deposition are two major climate challenges that can change soil microbial community composition and ecological strategies and affect soil heterotrophic respiration (Rh). However, the combined effects of microbial community composition, microbial life strategies, and extracellular enzymes on the dynamics of Rh under drought and nitrogen deposition remain unclear. Here, we conducted an experiment in an alpine swamp meadow simulating drought (50% reduction in precipitation) and multilevel nitrogen addition to determine the interactive effects of microbial community composition, microbial life strategy, and extracellular enzymes on Rh. The results showed that drought significantly reduced the seasonal mean Rh by 40.07% and increased the ratio of Rh to soil respiration by 22.04%. Drought significantly altered microbial community composition: the ratios of K- to r-selected bacteria (BK:r) and fungi (FK:r) increased by 20 and 91.43%, respectively. Drought increased hydrolase activities but decreased oxidase activities. Adding N, however, had no significant effect on microbial community composition, BK:r, FK:r, extracellular enzymes, or Rh. A structural equation model showed that the effects of drought and added nitrogen, acting via microbial community composition, microbial life strategy, and extracellular enzymes, explained 84% of the variation in Rh. Oxidase activities decreased with BK:r but increased with FK:r. Our findings show that drought decreased Rh primarily by inhibiting oxidase activities, induced by a bacterial shift from the r-strategy to the K-strategy. Our results highlight that the indirect regulation of the carbon cycle by drought, through the dynamics of bacterial and fungal life history strategies, should be considered for a better understanding of how terrestrial ecosystems respond to future climate change.

Introduction

Soil respiration (Rs) is one of the largest carbon (C) effluxes between terrestrial ecosystems and the atmosphere and plays an important role in regulating the atmospheric CO2 concentration (Davidson et al., 2002; Kuzyakov, 2006; Baldocchi et al., 2018). Soil heterotrophic respiration (Rh) is derived mainly from the decomposition of litter and soil organic matter (Kuzyakov, 2006). However, the underlying mechanisms of the Rh response to climate change are uncertain (Ye et al., 2019). Soil microorganisms, the main decomposers in terrestrial ecosystems, are critical in terrestrial C cycling (Fierer, 2017; Banerjee et al., 2018), and environmental changes can alter C cycling via microorganisms (Davidson and Janssens, 2006; Romero-Olivares et al., 2017). Thus, disentangling the roles of extrinsic factors and microbial mechanisms in driving Rh is imperative for predicting C cycling under future global change scenarios (Hashimoto et al., 2015; Hursh et al., 2017).
Climate change-induced extremes in precipitation patterns are becoming more severe and frequent (Huang et al., 2017; Zhao and Dai, 2022), disrupting biogeochemical cycling in terrestrial ecosystems (Sippel et al., 2018; Xu et al., 2019; de Vries et al., 2020). A meta-analysis reported that moderate and extreme decreases in precipitation have significant negative effects on Rh in grasslands (Du et al., 2020). Previous studies have shown that drought changes the soil microbial community structure (Evans et al., 2014; Bastida et al., 2017; de Vries et al., 2018; Ochoa-Hueso et al., 2018), and many have shown that the soil microbial community shifts significantly with higher or lower aridity (Neilson et al., 2017; Tu et al., 2017; Yao M. J. et al., 2017; Song et al., 2019; Xu et al., 2020). Inputs of reactive nitrogen (N) from human activities, including combustion-related NOx and industrial and agricultural N fixation, which are predicted to reach 600 Tg N yr−1 by 2100, strongly affect ecosystem C cycling (Fowler et al., 2015). A global meta-analysis revealed that adding N reduces Rh and that increasing the rate of N addition enhances this reduction (Chen and Chen, 2023). N inputs eutrophicate and acidify soil, leading to an altered microbial community structure and reduced Rh (Erisman et al., 2013). Moreover, adding N significantly shifts the microbial community structure, particularly in N-limited ecosystems (Sun et al., 2019; Zhou et al., 2022). However, N deposition simulations have generally applied rates much higher than the critical threshold of 10 kg N ha−1 yr−1 (Dentener et al., 2006; LeBauer and Treseder, 2008; Xia and Wan, 2008; Bobbink et al., 2010; Janssens et al., 2010; Peng et al., 2017). Therefore, the interactive effect of drought and multilevel nitrogen addition on Rh should be fully investigated. The life history strategies of soil microbes determine their metabolic potential and responses to environmental changes, thus driving changes in Rh (Piton et al., 2023). The r-selected species (copiotrophic) have a fast growth rate, respond rapidly to available C and nutrient inputs, and typically flourish in environments enriched in labile C.
In contrast, K-selected species (oligotrophic) are slow-growing, more efficient species adapted to recalcitrant C and lower resource availability (Fierer et al., 2007; Trivedi et al., 2013). A subtropical forest experiment showed that reducing throughfall increases the relative abundance of r-strategy bacteria but decreases that of K-strategy bacteria (Yang et al., 2021b). A previous study revealed that the dominant microbial growth strategy shifted from a K-strategy to an r-strategy in degraded grasslands after adding N (Zeng et al., 2021). The r-strategy-dominated soils generally have higher microbial respiration than K-strategy-dominated soils (Malik et al., 2016; Tosi et al., 2016). The rRNA operon (rrn) copy number correlates with the bacterial reproduction rate and the rate of response to resource availability (Roller et al., 2016; Wu et al., 2017), thus reflecting the ecological strategy of bacteria, as a higher rrn copy number is associated with faster-growing copiotrophic or r-selected bacteria (Roller et al., 2016; Samad et al., 2017). Drought increases the mean rrn copy number, indicating a higher proportion of r-selection and a higher average potential growth rate (Veach and Zeglin, 2020). A previous study showed that adding N promotes the abundance of bacteria with a higher rrn copy number, providing evidence that increased N input favors copiotrophic taxa (Liu et al., 2020; Ma et al., 2022). N deposition may intensify water limitation by promoting plant growth, which in turn reduces the availability of substrates for soil microbes and limits microbial growth and population size (Chen et al., 2021). In addition, drought in grasslands may further restrict nutrients and thus affect microbial communities (Yang et al., 2021a). However, to what extent the microbial life strategy contributes to the shift in Rh in response to the interaction between drought and added N remains unclear. Soil extracellular enzyme activities are central to Rh, as they control the decomposition and mineralization of soil organic matter (Schimel and Bennett, 2004; Bengtson and Bengtsson, 2007). A previous alpine meadow study demonstrated that drought increased the activities of some hydrolases in some years but non-significantly decreased the activities of oxidases (Yan et al., 2020); in another study, drought did not affect hydrolase activities but increased oxidase activities (Yan et al., 2021). Adding nitrogen also affects soil extracellular enzyme activities. A previous study on a steppe showed that N deposition decreases peroxidase activity by affecting environmental factors (Liu et al., 2018), while in a 5-year field experiment on a meadow steppe in northern China, adding N increased the activities of α-glucosidase and β-glucosidase (Ma et al., 2020). The lignocellulose index (LCI) can be calculated from hydrolytic and oxidative activities and represents soil substrate C quality: the higher the LCI value, the more vulnerable the substrate C is to decomposition (Moorhead et al., 2013). A recent study reported that Rh and the LCI are positively correlated (Jiang et al., 2023). The definite pattern of the soil extracellular enzyme activity response to drought and N deposition must be identified to predict Rh more accurately under global change.
The Zoige alpine wetland is located on the eastern edge of the Qinghai-Tibet Plateau and is the largest plateau peat swamp wetland in the world (Wu et al., 2020). Due to its altitude, it is highly sensitive to climate change (Yu et al., 2010; Chen et al., 2014; Zeng et al., 2017). The Zoige plateau plays an important role in the global C cycle (Wang et al., 2012) and could therefore have a significant impact on regional climate change (Kang et al., 2014). Here, we examined the effects of drought- and N-induced changes in microbial community composition, microbial life strategies, and extracellular enzyme activities on Rh under drought and multiple levels of added N in the Zoige alpine swamp meadow. We hypothesized that (1) the soil microbial community would shift to copiotrophic-taxon dominance under the drought and added N conditions; (2) drought and adding N would decrease soil extracellular enzyme activities; and (3) drought and adding N would decrease Rh. This study aimed to explore the underlying mechanisms of the Rh responses to drought and adding N from the perspectives of microbial life strategy and extracellular enzymes.

Site description and experimental design

This study was performed in a typical swamp meadow ecosystem at the Drought and Nitrogen Deposition Interaction Experiment Platform in Xiangdong village (33°37′16″N, 102°52′21″E, 3,400 m above sea level), Zoige county, Sichuan Province (northeastern Qinghai-Tibetan Plateau). The mean temperature in the region ranges from −0.7 to 1.1°C, with the coldest temperature in January at −10.5°C and the hottest in July at 11°C. The mean annual precipitation is 650-750 mm, falling mainly from June to September. The soil type is peat and swamp soil. The dominant plant species are Poa poophagorum, Elymus nutans, Carex atrofusca, and Potentilla anserina. Soil samples from the study site were collected and analyzed before the experiment; the properties of the top 0-20 cm layer were: pH 6.04, total organic C 66.8 mg g−1, total N 5.09 mg g−1, and total P 0.90 mg g−1. In June 2019, a 90 × 90 m plot was enclosed with a 1.6-m-high fence to keep out herbivores. An interactive experiment between drought and N enrichment was established using a randomized complete block design with two levels of drought (CK and drought) and six levels of N enrichment (0, 2, 4, 8, 16, and 32 g N m−2 yr−1). Four replicate blocks were established, and in each block the 12 treatments were randomly assigned to 4 × 4 m plots, each 2 m away from its neighbors. In September 2019, rainout shelters were installed on the drought plots; each shelter had a roof made of curved bands of transparent acrylic that intercepted 50% of the rainfall while having minimal impact on other environmental factors (Zhang et al., 2019a). Iron sheets were installed around the plots to a depth of 40 cm belowground to prevent lateral water movement between plots (Zhang et al., 2019b). The six levels of N enrichment spanned the range from plant N limitation to saturation according to a previous study conducted near the experimental site (Song et al., 2017). At the end of the 2020 growing season, the effect of the drought treatment on soil water content was tested and found to be significant, so we initiated the N addition treatment in May 2021. Coated slow-release urea was spread by hand onto the soil surface of the plots at the beginning of the growing season each year.
Soil respiration measurements

Before the experimental treatments were initiated, soil respiration collars (PVC pipe, 20 cm inner diameter) were installed in the ground in each plot: one shallow collar (5 cm) for measuring Rs and one deep collar (40 cm) for measuring Rh. This trench method has been used successfully in previous studies (Hanson et al., 2000; Keeler et al., 2009; Sayer and Tanner, 2010). Soil respiration (Rs) and heterotrophic respiration (Rh) were measured with a portable soil carbon flux automatic measurement system (PS-9000, LICA, Beijing, China) once every 2 weeks during the growing season. The above-ground parts of newly growing plants were cut off at the surface in both collars in advance. Soil temperature (ST) and soil water content (SWC) were measured simultaneously in the 10 cm soil layer with the temperature and humidity probe of the flux measurement system during each carbon flux measurement.

Soil property determination

At the end of August 2022, 48 soil samples were collected by drilling the 0-20 cm soil layer in the 48 plots; stones, roots, and other impurities were removed, and each sample was divided into two subsamples. Subsample 1 (10 g) was wrapped in foil, immediately placed in liquid nitrogen, transported to the laboratory, and stored at −80°C for soil DNA extraction. Subsample 2 (200 g) was transported to the laboratory and stored at −4°C for determination of the soil physicochemical properties. Soil-dissolved organic carbon (DOC) was extracted by adding 50 mL of 0.5 M potassium sulfate to 12.5 g of homogenized soil and agitating the sample on an orbital shaker at 120 rpm for 1 h; the filtrate was analyzed with a TOC analyzer (multi N/C 3100, Analytik Jena, Germany). Soil microbial biomass carbon (MBC) and microbial biomass nitrogen (MBN) were estimated using the chloroform fumigation extraction method (Brookes et al., 1985). Soil NH4+ and NO3− concentrations were determined by extraction with 2 M KCl solution followed by colorimetric analysis on a FIAstar 5000 Analyzer (Foss Tecator, Hillerød, Denmark). Soil pH was determined in a 1:2.5 soil:water suspension (w/v).

Amplicon high-throughput sequencing and bioinformatics analyses

The paired-end raw sequences were merged using USEARCH (v11), and low-quality sequences and primers were removed following the UPARSE pipeline (Edgar, 2013). The UNOISE3 denoising algorithm was used to generate zOTU representative sequences, and zOTUs with fewer than 9 sequences were removed (Edgar, 2016). The zOTU table was generated by mapping the zOTU representative sequences to the merged sequences with the otutab script. Taxonomic annotation of the prokaryotes and fungi was based on the SILVA 138 and UNITE 8.2 databases in QIIME2 (Quast et al., 2012; Nilsson et al., 2018; Bolyen et al., 2019). Finally, 24,131 and 4,004 zOTUs were obtained for prokaryotes and fungi, respectively, and the prokaryotic and fungal sequence numbers in each sample were rarefied to 71,712 and 65,914, respectively, for subsequent analysis. The rrn copy number of each OTU was retrieved from the rrnDB database and estimated from its closest relatives with a known rrn copy number (Stoddard et al., 2015). We then calculated the abundance-weighted average rrn copy number for each soil sample (Wu et al., 2017): the product of the estimated rrn copy number and the relative abundance of each OTU was computed, and these values were summed over all OTUs for each sample.
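Written out, the abundance-weighted average described above is simply (notation ours):

\[
\overline{rrn} \;=\; \sum_{i=1}^{n} p_i \, rrn_i ,
\]

where $p_i$ is the relative abundance of zOTU $i$ in the sample ($\sum_i p_i = 1$) and $rrn_i$ is its rrnDB-estimated copy number.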
The soil suspensions for measuring the hydrolases were prepared by homogenizing a 1.0 g soil sample in 100 mL of 50 mmol L−1 sodium acetate buffer. A mixture of soil homogenate, methylumbelliferyl (MUB), and a MUB-linked substrate was then placed in a black polystyrene 96-well microplate and incubated in the dark for 4 h at 25°C. Hydrolytic enzyme activities were expressed as nmol g−1 h−1. The soil suspensions for oxidases were prepared by homogenizing a 1.0 g soil sample in 10 mL of 1% pyrogallol solution; after shaking, the mixture was incubated at 30°C for 2 h. Oxidative enzyme activities were expressed as mg g−1 h−1. The LCI was calculated as the ratio of lnPPO to the sum of lnPPO and lnBG (Duan et al., 2023); the units of the BG and PPO activities were converted to nmol g−1 MBC h−1 before calculating the LCI.

Statistical analyses

All statistical analyses were performed in R version 4.1.3 (R Development Core Team, 2022). Repeated-measures analysis of variance was employed to examine the effects of drought, added N, and date on seasonal Rs and Rh using linear mixed-effects models and the R package nlme. The drought and nitrogen addition treatments were set as fixed effects, block was set as a random effect, and a corAR1-type time-autocorrelated covariance matrix was used to avoid violating the sphericity assumption for the repeated Rs and Rh measurements. To examine the effects of drought and added N on soil properties, seasonal mean Rs and Rh, rrn copy number, and soil extracellular enzyme activities, we used linear mixed-effects models with the lme4 and lmerTest packages, setting drought and the nitrogen addition treatments as fixed effects and block as the random effect (Bates et al., 2015; Kuznetsova et al., 2017; Pinheiro et al., 2022). Multiple comparisons following the linear mixed-effects models were performed with the R package lsmeans (Lenth, 2016). The standardized regression coefficient and marginal R2 were used to assess the effect size of fixed factors on Rh (Nakagawa and Schielzeth, 2013); the marginal R2 was calculated with the partR2 package (Stoffel et al., 2021). Pearson's correlation coefficients between factors were examined and visualized with the R package ggcor. Principal coordinates analysis (PCoA) and permutational multivariate analysis of variance (PERMANOVA) were performed with the vegan package (Oksanen et al., 2018) to reveal the effects of drought and added nitrogen on soil prokaryotic and fungal community composition; all community composition distances were calculated based on Bray-Curtis dissimilarities. Actinobacteriota, Acidobacteriota, and Chloroflexi were classified as K-selected (oligotrophic-associated) bacterial phyla, and Proteobacteria, Bacteroidota, and Firmicutes as r-selected (copiotrophic-associated) bacterial phyla (Phung et al., 2004; Fierer et al., 2007; Nemergut et al., 2010; Francioli et al., 2016). Basidiomycota was classified as a K-selected fungal phylum, and Ascomycota and Mortierellomycota as r-selected fungal phyla (Yao F. et al., 2017; Wu et al., 2021). The bacterial and fungal ratios of K- to r-strategists (BK:r and FK:r) were calculated from these relative abundances.
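For reference, the two derived indices defined above can be written compactly as follows (notation ours; the K- and r-selected groups are the phylum sets listed in the preceding sentences):

\[
LCI = \frac{\ln(PPO)}{\ln(PPO) + \ln(BG)}, \qquad
B_{K:r} = \frac{\sum_{k \,\in\, K\text{-selected}} p_k}{\sum_{r \,\in\, r\text{-selected}} p_r},
\]

with $F_{K:r}$ defined analogously for the fungal phyla, and BG and PPO expressed in nmol g$^{-1}$ MBC h$^{-1}$ before the LCI calculation.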
The responses of the relative abundance of the bacterial and fungal lineages (from phylum to class) to drought were determined using the linear discriminant analysis effect size (LEfSe) method (Segata et al., 2011). LEfSe was performed on the online Huttenhower Galaxy server (huttenhower.sph.harvard.edu/galaxy) with an LDA score threshold of 4.0. A structural equation model (SEM) was built with the R package piecewiseSEM (Lefcheck, 2016) to examine the causal pathways by which drought and added N affected Rh. Based on our knowledge of the effects of drought and added N on Rh, we developed an a priori model to allow a hypothesized causal interpretation of the linkages between SWC, DOC, the LCI, prokaryotic community composition, fungal community composition, the abundance-weighted rrn copy number, the ratio of K- to r-selected bacteria (BK:r), the ratio of K- to r-selected fungi (FK:r), soil hydrolase activities, soil oxidase activities, and Rh. Prokaryotic and fungal community composition was represented by PC1 from the Bray-Curtis distance-based principal coordinates analysis. Soil hydrolase activities were calculated as the sum of the AG, BG, and CB activities, and soil oxidase activities as the sum of the PEO and PPO activities.

Soil properties

After the drought treatments, SWC decreased significantly from 33.02 to 20.89% (p < 0.001), while ST increased significantly from 13.65°C to 14.53°C (p < 0.001; Supplementary Figure S1 and Supplementary Tables S1, S2). DOC increased significantly under drought and added N (p < 0.001; Supplementary Figure S1 and Supplementary Tables S1, S2). MBC did not change significantly under the drought and added N treatments (Supplementary Figure S1 and Supplementary Table S1). MBN decreased significantly under drought (p = 0.0024) and increased under added N (p < 0.001; Supplementary Figure S1 and Supplementary Tables S1, S2). Soil inorganic N concentrations, NH4+ and NO3−, increased under the added N treatment (p = 0.0003, p < 0.001; Supplementary Figure S1 and Supplementary Tables S1, S2). Soil pH decreased significantly, by 0.03 from 5.68 to 5.65, under the added N treatment (p = 0.00081; Supplementary Figure S1 and Supplementary Tables S1, S2).

Soil respiration and heterotrophic respiration

The temporal dynamics of soil respiration were consistent with those of its heterotrophic component, and all maximum rates occurred in July (Figures 1A,B; Table 1). Drought significantly decreased the mean growing-season Rh and Rs values by 40.07 and 52.24%, respectively (p < 0.001 for both; Figures 1C,D; Supplementary Tables S1, S2), whereas adding N had no effect on the mean growing-season Rh and Rs values (Figures 1C,D; Supplementary Tables S1, S2). The mean Rh/Rs ratio over the growing season increased significantly by 22.04% under drought (p < 0.001; Figure 2B; Supplementary Tables S1, S2), but did not change significantly along the N addition gradient (Figure 2A; Supplementary Table S1). Notably, the effects of adding N on Rs and Rh were not significant for either the seasonal dynamics or the mean values (Figures 1A,B; Supplementary Table S1). Drought significantly increased the Rh to Rs ratio from 52.91 ± 1.17 to 64.57 ± 1.70% (p < 0.001), whereas adding N did not affect the Rh to Rs ratio (Figure 2A; Supplementary Tables S1, S2).
Soil microbial community composition

As shown in Figure 3A, the dominant prokaryotic phylum was Proteobacteria, followed by Acidobacteriota, Verrucomicrobiota, Actinobacteriota, and Bacteroidota. As shown in Figure 3B, the dominant fungal phylum was Ascomycota, followed by Basidiomycota and Mortierellomycota.

Figure 2. The ratio of heterotrophic respiration (Rh) to soil respiration (Rs) under the different treatments (A), with or without the drought treatment (B).

As revealed by the LEfSe analysis of bacteria at the phylum level, the relative abundance of Acidobacteriota and Proteobacteria decreased under drought, while that of Actinobacteriota increased. At the class level, the relative abundance of Thermoleophilia increased under drought, while that of Blastocatellia and Gammaproteobacteria decreased (Figure 4D). The LEfSe analysis of fungi revealed that drought significantly reduced the relative abundance of the phylum Mortierellomycota, increased the relative abundance of Dothideomycetes, and decreased that of Eurotiomycetes, Sordariomycetes, and Mortierellomycetes (Figure 4E).

Correlations between the environmental factors and heterotrophic respiration

Pearson's correlations showed that Rh and Rs were significantly positively correlated with SWC, MBN, rrn, PEO, PPO, and the LCI, and significantly negatively correlated with DOC, 16SPC1, ITSPC1, and BG (Supplementary Figure S3). Rh was significantly positively correlated with MBC, while Rs was significantly negatively correlated with AG and BK:r (Supplementary Figure S3). We analyzed the relationships between potential drivers (i.e., soil properties, microbial properties, and enzyme activities) and Rh to explore the controls of Rh. Rh was significantly associated with most of these factors (Figure 6A). Briefly, among the soil properties, ST and DOC significantly attenuated Rh (Figures 6C,D), whereas Rh was significantly facilitated by increases in SWC, MBC, and MBN (Figures 6B,E,F); Rh was not correlated with NH4+-N, NO3−-N, or pH (Figure 6A). Among the microbial properties, prokaryotic community composition, fungal community composition, and BK:r were negatively correlated with Rh (Figures 6G,H,I), while rrn was positively correlated with Rh (Figure 6J). Among the enzyme activities, Rh decreased with AG and BG but increased with PEO, PPO, and the LCI (Figures 6K-O). The SEM explained 84% of the variation in Rh (Figure 7). Standardized total effects from the SEM showed that drought had a significant negative effect on Rh, while adding N did not (Figures 7, 8). After drought, oxidases were the most important factor affecting Rh, exerting a positive effect (Figures 7, 8). The standardized direct effect sizes of drought, hydrolases, and oxidases on Rh were −0.5834, 0.0679, and 0.4279, respectively (Figure 8), but only the drought and oxidase pathways were significant (Figure 7). Drought, SWC, the LCI, rrn, ITSPC1, BK:r, FK:r, and hydrolases exerted important indirect effects on Rh (Figures 7, 8). The LCI, FK:r, and hydrolases contributed to increasing oxidases, the second most important factor affecting Rh, while BK:r contributed to decreasing them.
Discussion

4.1 The shift in oxidase activity induced by microbial life history strategies mediated the Rh response to drought and added nitrogen

Previous studies have shown that drought and adding N change the microbial community composition (Allison et al., 2007; Treseder, 2008; Gao et al., 2021). However, our results show that only drought altered the microbial community composition (Figures 3C,D), which partially supports our hypothesis 1. A study in semiarid grasslands showed that reducing precipitation increased oligotrophs and decreased copiotrophs, consistent with our BK:r results (Li et al., 2022). A lower proportion of r-selection and a lower average potential growth rate under drought are indicated by a lower abundance-weighted average rrn copy number (Roller et al., 2016). Consistent with a previous study (Bu et al., 2018), the abundance of Gammaproteobacteria, often associated with copiotrophic bacteria (Kurm et al., 2017), was suppressed by drought (Figure 4C). Here, the increased BK:r and FK:r and the decreased rrn together indicate that drought favors oligotrophic taxa. Water affects microbial dynamics as a transport medium (Tecon and Or, 2017). Our drought treatments significantly reduced SWC (Supplementary Tables S1, S2) and therefore decreased the diffusion of dissolved nutrients (Tecon and Or, 2017). The breakdown of hydrological connectivity is thought to be the main reason why drought affects soil community composition (Carson et al., 2010). Pearson's correlation analysis revealed that SWC was significantly correlated with BK:r, FK:r, and rrn (Supplementary Figure S3). Thus, the drought-induced decrease in SWC drove the microbial community to shift toward oligotroph dominance. The extracellular enzyme activity results did not accord with our hypothesis 2. In this study, drought increased hydrolase activities, including AG and BG. Previous studies have reported that the potential activities of AG and BG increase under dry conditions, indicating a decrease in hydrolase turnover (Alster et al., 2013; Ochoa-Hueso et al., 2020). Furthermore, partly consistent with our third hypothesis, only drought decreased Rh in this study (Figure 1C), which agrees with previous research on terrestrial ecosystems (Zhou L. Y. et al., 2016; Zhou X. et al., 2016; Veach and Zeglin, 2020; Zheng et al., 2021).
Our SEM showed that drought reduced oxidases via the LCI, BK:r, FK:r, and hydrolases, and subsequently reduced Rh (Figure 7). Drought affected fungal community composition by decreasing SWC and the LCI, thereby increasing FK:r and decreasing oxidase activities. The increases in BK:r and FK:r and the decrease in rrn in this study indicate that drought shifted the microbial community from r-strategist-dominated to K-strategist-dominated (Figures 4A-C) (Roller et al., 2016; Duan et al., 2023). K-strategists are more associated with oxidases than r-strategists and effectively utilize recalcitrant C, including lignin (Chen et al., 2022; Morrissey et al., 2023). Our SEM showed that the increase in BK:r and the decrease in rrn promoted hydrolase activity; however, hydrolases had no significant effect on Rh (Figure 7). Soil hydrolases and oxidases are related to labile and recalcitrant C, respectively (Sinsabaugh et al., 2008; Burns et al., 2013). Here, drought reduced the LCI (Figure 6F), indicating that the C substrate was more vulnerable to decomposition (Moorhead et al., 2013). According to the optimization of the cost/benefit ratio (Allison et al., 2011), a more labile soil C substrate should increase hydrolase activities and decrease oxidase activities. Microbial community structure may be less important for the turnover of more labile C because a broad phylogeny of taxa is capable of metabolizing simpler compounds (Berg and McClaugherty, 2014). In that case, the rate-limiting variables for Rh may depend more on the oxidase than on the hydrolase activity of the microbial community, even if the microbial life history strategy shifts. The r-strategist-dominated soils generally have higher respiration rates than K-strategist-dominated soils, so the shift toward K-strategists may decrease Rh by reducing oxidase activities (Bailey et al., 2002; Six et al., 2006; Fierer et al., 2007; Waring et al., 2013; Malik et al., 2016). Oxidase genes have been identified in γ-Proteobacteria, indicating that this group could be a potential oxidase producer (Tian et al., 2014). In this study, the relative abundance of γ-Proteobacteria decreased under drought (Figure 4D), contributing to the reduction of oxidase activities. According to the LEfSe, Mortierellomycota contributed to the change in FK:r at the phylum level (Figure 4E). Moreover, members of Mortierellomycota are sensitive to reduced precipitation (Han et al., 2024), and a previous study found that their relative abundance is lower in dry ecosystems (Tedersoo et al., 2014). The phylum Mortierellomycota mostly comprises soil saprotrophs (James et al., 2006; Tedersoo et al., 2018), and saprophytic fungi perform the initial steps in the decomposition of cellulose, lignin, and other complex macromolecules (Gessner et al., 2010; Berg and McClaugherty, 2014). Mortierellomycota is involved in the decomposition of recalcitrant C (Větrovský and Baldrian, 2013; Fang et al., 2018; Shi et al., 2020), which may be why FK:r increases oxidase activities. From the correlation analysis and the structural equation model (Figures 7, 8), the shift in bacterial community composition exerted a greater effect on oxidase activities than the shift in fungal community composition, ultimately leading to the decline in oxidase activity and thus the decrease in Rh.
Uncertainties

The estimate of Rh in this study may have some limitations. First, the Rh value may have been overestimated because the trenched subplots allowed root ingrowth underneath the collar (>0.6 m depth) into the subplots (Sayer and Tanner, 2010). Second, trenched subplots may exhibit a microbial community composition different from that of non-trenched subplots, which would change the Rh value (Chen et al., 2016). Third, long-term collar deployment leads to bias in soil respiration measurements, contributing to higher soil bulk density and lower microbial biomass inside the collars; measurements inside long-term collars can thus underestimate Rh (Ma et al., 2023). Finally, the conclusions were drawn from Rh during the growing season; the seasonal pattern of Rh in response to drought and added N, including both the growing and non-growing seasons, should also be considered. In addition, we assayed the potential activities of soil extracellular enzymes. An assay of potential enzyme activities usually provides an unlimited and relatively simple soluble substrate and is usually performed at a constant temperature, which is inconsistent with field conditions and may not reflect in situ activities (Wallenstein and Weintraub, 2008).

In this study, adding N had no significant effect on Rh. Adding N has been reported to reduce Rh by decreasing microbial biomass (Treseder, 2008; Liu and Greaver, 2010); here, MBC did not change significantly after adding N (Supplementary Table S1). A previous global meta-analysis revealed that MBC decreases with increasing experimental duration, indicating that the negative effects of adding N on microbes become more pronounced over time (Zhang et al., 2018). We therefore expect that adding N will decrease Rh in the long term as soil microbes suffer progressive inhibition. However, drought increases the availability of nitrogen, which can reduce phenols, and reductions in phenols can increase Rh. Therefore, experiments on the effects of long-term drought and added nitrogen on Rh remain full of uncertainties and should be continued.

Conclusion

In summary, this study revealed the regulatory mechanisms underlying the Rh responses to drought and adding N by integrating soil properties, microbial life history strategies, and extracellular enzyme activities. Our findings show that drought decreased Rh primarily by inhibiting oxidase activities, induced by a bacterial shift from the r-strategy to the K-strategy. The changes in extracellular enzymes highlight the importance of the dynamics of the K- to r-selected ratios in bacterial and fungal communities in regulating Rh. Adding N, however, did not affect Rh, which emphasizes the need for long-term observations. The dynamics of bacterial and fungal life history strategies should therefore be fully considered for a better understanding of the responses of terrestrial ecosystems to future climate change scenarios.

Ma, X., Jiang, S., Zhang, Z., Wang, H., Song, C., and He, J.-S. (2023). Long-term collar deployment leads to bias in soil respiration measurements. Methods Ecol. Evol. 14, 981-990. doi: 10.1111/2041-210X.14056

Ma, W., Li, J., Gao, Y., Xing, F., Sun, S., Zhang, T., et al. (2020). Responses of soil extracellular enzyme activities and microbial community properties to interaction between nitrogen addition and increased precipitation in a semi-arid grassland ecosystem. Sci. Total Environ. 703:134691. doi: 10.1016/j.scitotenv.2019.134691
Figure 1. The seasonal dynamics of heterotrophic respiration (Rh, A) and soil respiration (Rs, B). The mean seasonal values of Rh (C) and Rs (D) under the different treatments. Data are mean ± S.E. (n = 4).

Figure 3. Relative abundance of the dominant (A) prokaryotic and (B) fungal groups at the phylum level under the different treatments. Principal coordinates analysis (PCoA) based on the (C) prokaryotic and (D) fungal communities.

Figure 4. The ratio of K- to r-strategist (A) bacterial phyla (BK:r) and (B) fungal phyla (FK:r); (C) abundance-weighted average rRNA operon (rrn) copy numbers; and linear discriminant analysis effect size (LEfSe) of the (D) bacteria and (E) fungi. Data are mean ± S.E. (n = 4).

Figure 7. Structural equation model considering the plausible pathways through which drought and added nitrogen affect heterotrophic respiration (Rh). Before the SEM analysis, the prokaryotic and fungal OTU tables were subjected to principal coordinates analysis (PCoA) to generate the PC1 values representing prokaryotic and fungal community composition. The activities of AG, BG, and CB were summed to represent the hydrolases, and the activities of PEO and PPO were summed to represent the oxidases. Red and blue arrows represent positive and negative pathways, respectively; solid and dashed arrows indicate significant and nonsignificant pathways, respectively. Numbers at arrows are standardized path coefficients, and arrow width is proportional to the strength of the relationship. *0.01 < p ≤ 0.05; **0.001 < p ≤ 0.01; ***p ≤ 0.001. Conditional R2 and marginal R2 values near response variables indicate the proportion of variation explained with and without the random effect. Final model fit: Fisher's C = 92.44, p = 0.47, df = 92, n = 48, Akaike information criterion (AIC) = 220.44.

Figure 8. Standardized direct, indirect, and total effect sizes of factors on Rh.

Table 1. Results from linear mixed models for the effects of date, drought, added nitrogen, and their interactions on Rh and Rs.
ENVIRONMENTAL POLICY INTEGRATION AND ITS SUCCESS ON SETTLEMENT LEVEL IN HUNGARY

This paper aims to present environmental policy integration and its success on settlement level in Hungary. To do so, the author first gives a historical overview of the rise of environmental protection and environmental policy, looking at both the international level and Hungary. Secondly, a summary of the author's empirical research on the topic in the past decades draws attention to the impact of the recent recentralization process in Hungary on environmental actions at the urban level, and also highlights the role of settlement size in environmental policy. The analyses show that the lack of information is a crucial factor in the failure to take positive environmental actions. On the other hand, in larger, urbanised settlements, the role of environmental assessment related to planning activities is considered more important due to their higher development and investment capacity and risk.

INTRODUCTION

Urbanisation (currently 55% of the global population live in cities (Rácz, 2019)), the increase of investments, and the rapid expansion of artificial surfaces, especially in metropolitan regions, have caused significant conflicts between nature and society, creating challenges for sustainability (Lennert et al., 2020) and for the dynamic equilibrium of the ecosystem (Nagy, 2006), and creating the need for a more environmentally integrative (urban) development policy. Hence, 'cities around the world face many environmental health challenges including contamination of air, water and soil, traffic congestion and noise, and poor housing conditions exacerbated by unsustainable urban development and climate change' (Vardoulakis et al., 2016, p. 1). In the modern era, Carson's (1962) 'Silent Spring' initiated thinking on the connection between humans and nature and triggered the emergence of widespread environmentally conscious discussion (Kozma, 2019), despite several attempts made earlier (e.g. Leopold, 1949). Today, the interpretation of the notion varies and can diverge from the original depending on the views of the author (Vujko et al., 2018). From the 1980s, the redistribution system of the European Community (EC) resulted in the implementation of major investment projects and plans. In parallel, the idea of Sustainable Development and the Environmental Programmes of the EC were launched, along with the resulting environmental policy tools (such as Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA)). While the tool of Environmental Assessment emerged in the late 1960s in the USA and in the 1970s in European countries (e.g. France, the Netherlands) (Szilvácsku, 2003), its institutionalisation was postponed to the late 1990s and early 2000s. (The implementation deadline of directive 2001/42/EC on Strategic Environmental Assessment was June 2004 for the European member states.) SEA, as a new tool, can integrate environmental policy concerns into spatial planning and urban development (Varjú, 2011). This paper is a summary of the author's work in the past decade on environmental policy integration (EPI) at settlement level. After the presentation of the methods and materials used, and applying a multiyear and multilevel approach, the aim of this article is twofold.
Firstly, as a theoretical background, the paper gives an overview of the integration of environmental policy into settlement/urban policy; the focus then shifts to Hungarian spatial policy and environmental policy integration. The second part, in the results and discussion section, contains a time-series empirical investigation of environmental policy integration and its urban size differences in Hungary and shows how settlements of different sizes could learn and integrate environmental policy.

DATA AND METHODS

Using a systematic literature review, the paper provides a historical overview of the integration of environmental policy into urban/spatial policy from the 19th century, first internationally and then in Hungary, emphasising the milestones of the integration. The author conducted an online survey among local governments in Hungary. The first wave was sent out in 2008 to all local governments; another wave (with the same questions) was sent out in 2011. (Similar surveys were conducted in Slovakia and Romania.) These questions focused on the appearance and use of SEA and environmental programming at settlement (NUTS 5/LAU2) level. In 2014, under the umbrella of the ÁROP project, surveys for the local governments investigated, among other questions relating to public services, the orientation of settlement leaders towards environmental policy and its integration into spatial/urban planning (e.g. waste management, environmental planning, nature protection). Parts of this survey are also used here.

THEORETICAL BACKGROUND - AN OVERVIEW OF ENVIRONMENTAL POLICY INTEGRATION

The well-known idea of 'sustainable development' (WCED, 1987) has played an increasingly important role in policy making since 1987. With the strengthening and far-reaching effect of environmental policy, the idea of Environmental Policy Integration (EPI) came to the fore in the last decades (Lenschow, 1997). By the 2000s, EPI had become an unavoidable element of regional and urban planning policy (Varjú, 2013b), and it requires a systems-thinking approach (Németh & Péter, 2017). However, how did we get here? And what is the situation like in Hungary?

Integrating environmental policy into settlement policy - international outlook

In the complex sense, the notion of 'environmental protection' is a product of the second half of the 20th century, becoming widespread in scientific publications from the 1970s. In this sense, 'environmental protection' can be considered a new issue, but there are components of the notion that appeared in legislation long before the middle of the 20th century:

− the aim of protecting certain objects of the natural environment against human damage or pollution,
− the elimination of the damages of civilization that endanger or disturb humans within the settlement environment (Kilényi, 1978, p. 91).

In order to protect certain elements of the human environment, human activity has been regulated by individual societies for centuries. England's water protection laws can be traced back to 1848, and the birth of laws to protect air quality to 1863. In France, the first law dealing with environmental damage caused by industry was passed in 1810. If we look not only at the laws that comprehensively protect certain elements of the environment, but also at the sporadic provisions related to environmental protection, we can find a number of regulations.
This includes the first-century Lex Julia, which banned heavy-duty vehicles from Rome, a city with a population of one million at the time (Julesz, 2008). The need to protect the natural environment intensified with industrial development. The medieval British economy already had to account for smoke and air pollution, and royal decrees punished coal burning in the open. The first regulations concerned mainly forest areas, as forests were precisely the resource most exposed to everyday industrial activity at the time, and their overuse could be a danger (not only in connection with firewood extraction but also due to hunting; the forest was an important food-industry base even in the early 20th century). This is how the Austrian Imperial Forest Act (Reichsforstgesetz) was created in 1852, and the Swiss Forest Police Act (Forstpolizeigesetz) in 1902. The Dutch Hinderwet (formerly known as the Fabriekwet) (Law on Disturbance and Environmental Impact), enacted in 1875, defined 'environmental permitting' as a predominant task of municipalities. Thus, the delegation of environmental issues, which affected not only the natural environment but also the settlements, to the appropriate territorial level took place early on (Varjú, 2015). In countries where industrial development started relatively later, the notion is mainly referred to as 'environmental conservation'. For example, in the Soviet Union and Bulgaria, the term environmental protection did not become established for a long time, because the term 'nature conservation' was used to denote it (Kilényi, 1978). The history of Swedish environmental law also dates back to the 19th century, with water and neighborhood regulations; additionally, by 1907 the country had passed its first conservation law. Due to its geographical location, Denmark also established its first environmental regulation in the 19th century, to protect its coastline (Julesz, 2008). Environmental policy has been actively appearing in urban planning and urban development policies since the early 1930s and came to the forefront with the early suburbanization processes (Enyedi, 1984) rearranging the urban social structure, accompanied by environmental and sociological problems (Varjú, 2015). One of the early responses to this phenomenon was the Athens Charter, elaborated by the Fourth Congress of the International Organization of Modern Architecture and adopted in 1933. The proclamation containing the new principles of urban planning (Egyed, 2018) was published in 1941 by Le Corbusier under the title Charter of Athens. For decades, even after the Second World War, this document was a definitive statement of the ideas of urban development and urban planning (Varjú, 2015). The Athens Charter (1933) emphasized the notion of functionality and proposed that the creation of urban areas and the arrangement of cities take place along their homogeneous functions. In doing so, the document urged planners to ensure a healthy environment in residential areas and emphasized the importance of green spaces. It also stated that the separation of industrial areas from residential sectors is a basic requirement and that the distance between the place of work and the place of residence is to be reduced to a minimum. The findings and resolutions of the Athens Charter (1933) were extremely up-to-date and are largely still relevant today.
It evaluates the natural, social, political and economic whole of the city and its surroundings in a systemic approach and attaches great importance to the physiological and psychological nature of the human being in relation to urban planning (Hajnal, 2006). In Europe, the environmental policies of Sweden and Denmark played an important role in the expansion of environmental action: the first major international environmental conference was held in Stockholm in 1972, and the European Environment Agency is based in Copenhagen (Julesz, 2008). 1972 was an important year for ex-ante environmental assessments and for the strengthening of environmental protection in general. It was then that the first report of the Club of Rome was published, entitled The Limits to Growth, which sought to draw attention to the consequences of the overuse of natural resources (Moser & Pálmai, 1992). Since the 1970s, the Commission has also been doing more and more to ensure that the integration of environmental protection and environmental policy is a guiding principle in its basic and other development documents. In 1987, the European Commission integrated the most important principles of environmental protection into the Treaty of Rome, many of which are among the general principles of the European Union. These principles are: the principle of prevention, the principle of integration of environmental considerations, the polluter pays principle, the principle of state responsibility and commitment, the principle of international cooperation, individual and collective participation, and the principle of subsidiarity (Nagy, 2008, p. 309; Varjú, 2013b). The Maastricht Treaty, signed in 1992, expanded the principles of the Union with the principle of sustainable development (Nagy, 2008) and enshrined the integration of environmental objectives into economic and sectoral policies (Kerekes & Kiss, 2003). Certainly, over the last two decades, environmental considerations have been integrated into several other EU policies, including development policies. Both the ESDP (1999) and the Gothenburg Declaration - adopted in 2001 as a supplement to and renewal of the Lisbon Strategy (2000) - identify as a priority the consideration of the principle of sustainable development and the iterative inclusion of environmental interests in development policy (Varjú, 2011). The idea of social, economic and environmental sustainability also had an impact on urban development. The New Athens Charter was published by the European Council of Mayors in 1998, after nearly four years of preparation, recognizing new types of problems in European cities. The new Charter does not return to the theses of previous documents, but aims to 'define a sustainable development program for the city living with its surroundings, define the role of the urban planner in the implementation of the program, and make recommendations to professionals and urban policy makers at various levels' (Hajnal, n.d., p. 9). The Charter articulates the need to prioritize mixed land use over the traditional functionalist approach. It emphasizes that the sustainability of the city depends to a large extent on land use patterns and transport systems, which cannot be managed separately. The main priorities of the Charter are:
− ensuring real civic participation in planning;
− plans must be based on the principles of sustainable development;
− planning must help economic competitiveness and boost employment;
− planning should promote social and economic cohesion (Hajnal, n.d.).
The ESDP (European Spatial Development Perspective) draws up spatial development guidelines for the European Commission and the Member States, based on an assessment of the social, economic and infrastructural spatial structure of the European Union. The document was adopted in 1999 after five years of preparation. The directives drawn up at the Potsdam meeting are not binding, but they play a key role in shaping the institutional system and planning process of European territorial development. The main objective of the document is 'balanced and sustainable territorial development'. One of the key guidelines of the ESDP is the wise and sustainable management of the natural and cultural heritage (ESDP, 1999). A later milestone of urban policy, the Leipzig Charter (2007), is organized around demographic, social exclusion and environmental issues, and the document itself identifies two main priorities. These are the emphasis on integrated urban development policy and the priority given to the treatment of disadvantaged neighborhoods (Varjú, 2015). The priorities of integrated urban development policy were the need to create high-quality public spaces, the need to modernize infrastructure networks and increase energy efficiency, and the emphasis on proactive innovation and education policies (Varjú, 2015). Priority action strategies focusing on deprived neighborhoods were to improve the physical environment, strengthen the local economy and local labor market policies, pursue proactive education and training policies (with a focus on the younger generations), and provide efficient and affordable urban transport (public transport, walking and cycling).

The development of the Hungarian environmental policy and its integration into settlement policy

At the end of the 19th century, the first modern laws related to nature and environmental protection were enacted in Hungary. The Forest Protection Act of 1879 was amended in 1935 to protect nature more widely. The first steps were partly in accordance with Act XXIII of 1885 on water law, which can be linked to river regulation, and with Act XIX of 1888 on Fisheries (Varjú, 2010). The concept of environmental protection first appeared in the Hungarian legal literature in 1971. Since then, legislation has accelerated. In 1972, environmental protection was included in the constitution (Kilényi & Tamás, 1980), and the subsequent framework act (which has since been repealed and replaced by Act LIII of 1995) placed the issue of environmental protection at the highest (sub-constitutional) legal level (Varjú, 2010). In Hungary, the governmental tasks of environmental protection - with the involvement of ministries and central authorities - were performed by the National Environment and Nature Protection Office, established in 1977 and operating until the end of 1987. The Ministry of Environment and Water Management was established on 1 January 1988 (Tatai, 1988) and has since undergone several name changes and changes of responsibilities; it supervises nature protection and environmental protection directly and indirectly (through its national and regional authorities). In the late 1980s, in addition to the growing environmental and scientific considerations, environmental social movements played an increasingly important role not only in attacking the state socialist system in Hungary, but also in introducing ecological aspects into public thinking. Initially, the problems typically appeared at the local level. In some places a given environmental conflict caused greater upheaval among the population (e.g. in Ajka), while elsewhere (e.g.
in Százhalombatta) it provoked less resentment and publicity. However, in the period of state socialism, these social actions typically appeared only locally and made it only to a small extent into the national communication channels (Varjú, 2010). Following the change of regime, the organizational, institutionalized and civil framework conditions and systems of environmental protection improved. Urban strategic planning also became common in the post-socialist Central and Eastern European countries (Bajmóczy et al., 2020). In the 1990s environmental pollution was clearly reduced (Szirmai, 1999). One reason was the post-socialist socio-economic transformation, which also brought unexpected challenges such as brownfields (Dannert & Pirisi, 2017); the accelerating decline of large-scale industry from the mid-1980s was accompanied by a decrease in pollution, so environmental issues were partially 'resolved'. At the individual level, changes in income and existential status diverted attention from environmental issues towards social ones. Following the change of regime post-1990, environmental policy became increasingly important at the political level. The environmental profession has also been an active participant in the strengthening of international environmental protection. In addition to Hungary's representation at the policy level at the previously mentioned international environmental summits, Academician István Láng was an active participant in the Brundtland Commission (Varjú, 2010). There was a change in the dynamics of Hungarian environmental policy in the 2000s, in which the strengthening of international organizations in Hungary (e.g. Greenpeace) and the institutionalization of social actors (such as the establishment of the Civil Consultation Forum or the social consultation procedures) played an important role in spatial planning activities (Glied, 2008). That legislation is in accordance with Act XXI of 1996 on Spatial Development and Spatial Planning, according to which [§ 3 (3)] the task of spatial planning is to explore and evaluate environmental conditions and to take into account the load-bearing capacity when setting development goals. Article 27 of the current legislation states, inter alia, that 'In order to protect the natural and built environment in a coordinated manner, the expected environmental effects of the ideas contained therein must be explored in the spatial development concepts and in the preparation of spatial planning and settlement structure plans ...'. Sections 43 and 44 of the same law already provide for environmental assessments to be carried out, and state that various plans and programs are subject to '... an environmental assessment, which includes an environmental assessment under separate legislation. No plan or program may be submitted without an environmental assessment.' This separate legislation is Government Decree 2/2005 (I.11.) on the environmental assessment of certain plans and programs, which specifies the plans and programs for which it is mandatory to carry out a strategic environmental assessment. Environmental integration has thus been strengthened in the European Union as well. However, the legislation only stipulates that the program must be an integral part of this assessment and that the environmental assessment must be agreed with the competent environmental inspectorate; it specifies the content elements and the need for monitoring, but does not provide more detailed methodological guidance.
It should be noted here that neither the Hungarian legislation nor the EU directive regulates exactly for developments of what size an SEA must be carried out. Thus, Hungarian law does not oblige the developer to prepare an SEA for a regulatory plan prepared for a part of a city. It also gives the planner some leeway in judging the size of the plan and its expected environmental impact. However, by referring to this 'room for maneuver', municipalities may be able to avoid the obligation to carry out an environmental assessment in the case of minor modifications (referring to the otherwise legitimate but, as it turned out, irrelevant argument that the modification is subject to a licensing obligation) (Varjú, 2010). As indicated above, the legislation that requires urban planners to make settlements sustainable is not too strict. Hence, the author's hypothesis is that in Hungary there is a connection between the size and type of settlements and environmental thinking. Settlements with high population numbers are significantly more inclined to carry out environmental assessments, as they face higher risks due to the higher number of developments and investments.

RESULTS AND DISCUSSION

The SEA is a relatively new tool which ensures EPI in regional development policy. When the first SEAs were prepared for regional operational programs - with only the tools proposed by the European Commission available - a methodological framework was developed for the ROP (Varjú, 2010). This SEA has since been followed by several SEAs, mainly prepared for strategic plans, but research experience shows that the process did not reach the urban level (especially the small-settlement level), or reached it only with difficulty. Not only the SEA but also the preparation of the environmental programs of the settlements is missing. Without these, it is difficult to build sustainable, conscious settlements.

The empirical survey in 2008

73% of the responding municipalities had a plan, or an amendment to one, that should have been subject to an environmental assessment (Table 1). Of the settlements that had already heard about the SEA and had an environmental inspection program (54% of the responding municipalities), almost one-third (28%) also prepared an SEA. This means that more than two-thirds of the settlements - even if we put it somewhat starkly and assume that the non-respondents did not prepare an SEA - may have committed a deliberate or unintentional, but negligent, violation of the law (and several municipalities openly admitted this). The reasons for this are largely to be found in the absence of knowledge. In addition to the fact that almost half of the settlements do not have any information about environmental assessment, those who have some knowledge typically suggest that an information network would make it easier for them to find their way in such and similar issues. Lack of resources also appears as a cardinal issue in the settlements. Although half of those who prepared an SEA entrusted its implementation to the developer of the plan and included the cost of the environmental assessment in the budget of the plan, the other half of the SEA municipalities used additional financial resources to carry out the environmental assessment. Those who deliberately did not prepare an SEA mostly justified this on the grounds that it was not necessary, as a means of enforcing environmental interests already existed, namely the EIA. Several identified the SEA with the mandatory environmental program of the municipalities, which is indeed important, but not the same as the SEA.
Among the respondents' reasons there was also an argument - raised at the county level, but with the opposite sign - that the SEA should be prepared not at the municipal level but at the higher county or regional level (Varjú, 2010).

Differences in settlement size

If we examine the answers according to the size of the settlement, we can see that it is mainly the small-population settlements that conducted an environmental study for their plans or their modifications (Figure 1). Figure 1 shows that despite the fact that small settlements were under-represented in terms of access to the questionnaire - i.e. many of them did not have a working e-mail address and are more excluded from information than larger settlements - relatively more SEAs were prepared in this settlement type. However, this does not mean that environmental interests are better enforced or that these settlements are more sensitive. This is supported by the answers given to the other open questions. Here, if we examine the answers according to the types formed by the population of the settlements (Table 2), the average of the 'ratings' given to each question shows that environmental interests are considered most important by settlements of 1,000-5,000 people (4.54 on average). Small settlements appear to be less environmentally sensitive in this respect. This may be because small settlements are less affected by environmental problems: industrial investments in small settlements are few and small, limiting their potential polluting effect. This is supported by the fact that cities with more than 10,000 inhabitants consider environmental issues to be more pronounced than average, as higher traffic load and a higher volume of industrial investments also carry higher environmental risks, which must be taken into account in development planning (Varjú, 2010). The above-mentioned findings were also supported by the assessment of the answers to the first, third and fourth questions. While settlements with fewer than 1,000 inhabitants see the environmental assessment as a less profitable long-run investment than average, cities with more than 10,000 inhabitants tend to see it as an additional, mandatory task (Varjú, 2010).

Empirical investigations in 2012

In 2012, we tried to explore the environmental awareness of local governments and their relationship to environmental policy and sustainability through a repeated local government survey. Accordingly, we sent an online, self-administered questionnaire to the Hungarian municipalities twice, in April and May 2012. An online link to complete the questionnaires was sent to a total of 4,584 email addresses. The address list covered both the 3,153 settlements of Hungary (as of 2012) and the 23 districts of Budapest; some settlements had multiple email addresses. The e-mail addresses available on the internet were combined with the database available on the websites of the 19 county Government Offices, as well as with the list of local government e-mail addresses officially requested from the Ministry of the Interior. We found a total of 15 settlements where none of the available email addresses was live. Most (3) of these 'inaccessible' settlements were in Borsod-Abaúj-Zemplén county. 80% of the settlements not reached by e-mail are settlements with fewer than 1,000 inhabitants. Of the municipalities surveyed, 649 clicked on the link sent and/or started filling in the questionnaire, and 283 municipalities completed it.
After filtering out duplications and territorially unidentifiable municipalities, 272 fully completed municipal questionnaires were evaluated. The questionnaires were filled in anonymously and the identification of the received questionnaires was automatically blocked, so settlements that did not voluntarily provide their name were not included in the final analysis. The municipalities that gave the name of their settlement contributed to the success of the analysis; however, according to the rules of research ethics and the profession (Héra & Ligeti, 2005), these settlements remain unidentifiable when the results are presented. The questionnaires were addressed to the mayors of the settlements, but in some places the questions were answered by the deputy mayor or an employee of the mayor's office authorized to do so. 9% of all settlements in Hungary gave an evaluable answer to the survey. By county, the response rate varied within two percentage points. Exceptions are Borsod-Abaúj-Zemplén county, where the return rate is below 6%, and Békés county, where it is above 14%; Baranya county represents the Southern Transdanubia region with a return of over 11%, Somogy and Tolna counties with a return of 8%. Most of the valid questionnaires came from settlements of 1,001-5,000 people (42%). Based on the replies, it can be said that groups with a settlement size of more than 1,000 people are over-represented, while groups with a settlement size of less than 1,000 people are under-represented compared to the national proportions. In order to make the quantifiable findings representative with respect to settlement size, in the group comparisons against the overall average the answers were weighted according to the ratio of each group's size in the population to its size in the sample (illustrated in the sketch below). During the empirical research, we first probed the attitudes of local governments towards the performance of their tasks. We were interested in how important local governments consider their environmental and nature conservation tasks to be in comparison with their other tasks. Only after that did we deal with the tasks and attitudes related to environmental protection. In the questionnaire, we identified eight groups of tasks (Table 3) that local governments had to rank. 38.6% of the settlements marked basic education tasks, while 25% indicated basic social tasks in the first, most important place. (A significant part of their budget was also spent on these until the change in the municipal structure.) In second place, educational, social and health tasks also appeared (representing a total of 77% of the marks in second place), while the marks in third place gave a similar result. Only 1.5% of respondents ranked tasks related to environmental protection in first place, barely 4% in second place and 7% in third place. (No correlation was found between the rankings and the size of the settlement.) (Varjú, 2013a) When asked whether the municipality had a municipal environmental protection program, almost 9% of the settlements answered 'No'. Similarly, Bányai (2017), in research conducted in 2016, also draws attention to the lack of environmental protection programs in some settlements and to the fact that their absence carries no sanction. The following parts of the questionnaire dealt specifically with environmental activities.
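Returning to the weighting step mentioned above: the following minimal sketch shows one standard way to compute such post-stratification weights. The group labels and all counts are hypothetical placeholders, not the survey's actual figures; only the weighting logic (population share divided by sample share) reflects the procedure described in the text.

```python
# Post-stratification weighting sketch: weight each settlement-size group so
# that group comparisons are representative of the national distribution.
# All counts below are hypothetical placeholders, not the survey's real data.

population_counts = {  # settlements per size group in the country (assumed)
    "<500": 1100, "500-1000": 670, "1001-5000": 980, ">5000": 400,
}
sample_counts = {      # completed questionnaires per group (assumed)
    "<500": 40, "500-1000": 50, "1001-5000": 115, ">5000": 67,
}

N = sum(population_counts.values())
n = sum(sample_counts.values())

# Weight = (group share in population) / (group share in sample).
# Over-represented groups get weights < 1, under-represented ones > 1.
weights = {
    g: (population_counts[g] / N) / (sample_counts[g] / n)
    for g in population_counts
}

def weighted_mean(group_means):
    """Weighted average of group means, e.g. of 1-5 importance ratings."""
    num = sum(weights[g] * m * sample_counts[g] for g, m in group_means.items())
    den = sum(weights[g] * sample_counts[g] for g in group_means)
    return num / den

if __name__ == "__main__":
    print(weights)
    print(weighted_mean({"<500": 4.1, "500-1000": 4.3, "1001-5000": 4.54, ">5000": 4.2}))
```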
Municipalities were also able to rank the task groups they considered most important among their most typical environmental tasks (Table 4) (Varjú, 2013a). 26% of the municipalities indicated wastewater treatment and 23.5% public cleanliness as the most important task. Five tasks appeared in second place with shares between 13% and 17%, in order: stormwater drainage; sanitation; organization of waste management; improvement of green spaces, parks and leisure spaces; and wastewater treatment. These five tasks also appeared most frequently in third place. To the question 'How would you characterize the environmental awareness of the population of your settlement?', 3% of respondents believe that the population is fully environmentally conscious. 37% of the respondents reported a low level of environmental awareness among the population, 58% said that environmental awareness needs to be improved, and 1.5% reported a complete lack of environmental awareness. It also appeared that small settlements judged their own settlement most positively in terms of liveability; as the size of the settlement increases, the favorable perception decreases (Figure 2). (The rise in the city category of 30,001-50,000 people can be attributed to the low number of answers.) (Varjú, 2013a)

Figure 2 The rate of 'yes' answers by settlement category to the question: 'Is your settlement/city liveable, and is its environmental surrounding attractive?' Source: Varjú, 2013a

The ÁROP empirical research shows that local governments had been prepared for one of the key issues of the new waste management law adopted in autumn 2012. In accordance with the regulations of the European Union (Directive 2008/98/EC), by 2015 a separate waste collection system had to be in operation in the settlements of Hungary, in which at least paper, metal, plastic and glass are collected selectively. According to the questionnaire survey, in the summer of 2012, 90% of the settlements already had separate waste collection. 85% of the settlements collected paper and glass separately, and 88% collected plastic. However, the selective collection of metal took place in only 36% of the settlements. Typically (almost exclusively) small settlements were those that had not yet developed their selective waste collection system: 7% of settlements with 500-1,000 and 1,001-5,000 inhabitants, and 38% of settlements with fewer than 500 inhabitants, did not have separate waste collection. (Interestingly, a city with more than 50,000 inhabitants also stated that it does not have separate waste collection.) (Varjú, 2013a) 43% of local governments also encourage selective waste collection in other ways. The most common incentive was the optional smaller collection container, used by 65% of these municipalities. Nearly 10% of them provide the opportunity to use less frequent emptying. It was also common (20% of incentives) to provide multiple options for separate waste collection in order to reduce the amount of municipal waste. 4% of the frontrunner settlements also provide the possibility to pay a fee proportional to the amount of waste transported, measured in some way (e.g. by using a chip system) (Varjú, 2013a). However, it should be noted here that due to the operation of the NHKV from 2016, the waste management system was reorganized - namely centralized - without allowing or promoting separate collection with fee reduction.
The fee for waste collection has been unified and does not reward waste-conscious behavior. Operations were also affected by redefined service areas. When asked whether environmental investments were made in the settlements after the turn of the millennium, 43% of the respondents answered 'yes'. 63% of respondents reported investments related to wastewater treatment and 19% reported the construction of a landfill. These investments were financed by the ISPA/Cohesion Fund. Renewable energy investment was reported by 18% of respondents (typically through environmental development tenders) (Varjú, 2013a).

Some findings of follow-up research

The OTKA research - conducted by the University of Debrecen under the leadership of László Fodor between 2015 and 2018 - also examined the issue of settlements, cities, environmental protection and local environmental policy. As can be seen from the volume summarizing the research (Fodor & Bányai, 2017), the focus was primarily on legal aspects, approaching the topic from the point of view of environmental law. In the analysis of the changes in legislation, Pump (2017) draws attention to the problem that although local governments have numerous opportunities and obligations in shaping their sustainability and environmental policies, 'local governments cannot develop long-term environmental policies because the content of environmental policies is constantly changing. It cannot be foreseen what the division of tasks between the state and the local government will be in each area, and in what ways it will affect the decision-making freedom of the local government' (Pump, 2017, p. 48). Pump also pointed out that in waste management the partial centralization of the system had several spill-over effects: 'However, the radical change in the division of responsibilities between the local government and the state has changed not only their system of relations, but also everyone who was and has been involved in the provision, use and control of public services' (Pump, 2017, p. 49). Fodor (2017, p. 71) also formulates the foundation of a sustainable settlement in such a way that 'one of the keys to environmental protection is the integration of environmental considerations into various decisions'. That is, environmental policy integration is also needed at the local level. On the regulatory side, he draws attention to the fact that the phenomenon of integration can be well illustrated at the level of local government regulation, even though the scope of regulatory issues at this level is far from complete, as many issues concern only central regulation and the scope of local government is limited in both space and time. Fónai and Pénzes (2017) - in their 2016 empirical municipal data collection - found that half of the local governments cooperate with the regional environmental authority in the performance of official tasks, and 39.6% with the national park directorate. In the transformed institutional system, the cooperation of local governments with the territorial environmental protection authority has not fundamentally changed, meaning that local governments have quickly adapted to the new institutional methods of managing environmental issues. Incidentally, 38% of local governments have carried out an environmental impact assessment during local decree-making and strategy-making, and municipalities that carried out such an activity also take its results into account.
8.8% of the settlements experienced local conflicts arising from municipal decisions. In environmental regulations, local governments strive to take into account the perspectives of local society and the state of the environment (Fónai & Pénzes, 2017, pp. 80-85). Fónai and Pénzes (2017) also state that, overall, local governments pursue a largely reactive local environmental policy, in which the determining actor is the local government itself, somewhat influenced by local society and much less by professional organizations.

CONCLUSION

Not surprisingly, the analyses showed that lack of information is a crucial factor in the failure to carry out environmental assessments. Three-quarters of Hungarian settlements had no knowledge of the notion of SEA in 2008. Even more problematic is the fact that the analyses presented here, together with the OTKA research mentioned above, showed that around 9% of Hungarian settlements do not have an environmental programme, which has been compulsory since 1995. Respondents to the empirical surveys also lacked practical experience in the field, especially in smaller settlements, where the lack of (human and financial) capacity appears to be the most challenging issue. The bottlenecks and frequent shortcomings of the institutional infrastructure (e.g. local civic interest groups; bureaucratic, often overburdened green authorities) do not provide an adequate basis for 'urban environmental consciousness'. The past decade of recentralization processes has also affected the potential of cities and rural settlements to be active in environmental improvement. The hierarchical institutional setting and the dominance of institutional knowledge have set back the emergence of local, territorial interests, and hence the emergence and integration of local environmental thinking. In larger, urbanised settlements, due to their higher development and investment capacity and risk, the role of environmental assessment related to planning activities is considered more important.
High-efficiency wide-band metal-dielectric resonant grating for 20 fs pulse compression

More than 95% average efficiency TE-polarisation diffraction over a 200 nm wavelength range centred at 800 nm is obtained by a metal-dielectric grating structure with a non-corrugated mirror. 98% maximum -1st order diffraction efficiency and a wide-band top-hat spectrum are demonstrated experimentally, opening the way to high-efficiency Chirped Pulse Amplification of femtosecond pulses as short as 20 fs. [DOI: 10.2971/jeos.2007.07024]

INTRODUCTION

Flux resistance of the pulse compressor gratings is the most critical stage of a Chirped Pulse Amplification (CPA) scheme [1]. State-of-the-art femtosecond compression gratings [2] exhibit a low damage threshold as they usually consist of a gold layer deposited onto an undulated organic film [3]. A sinusoidal metal grating is known to exhibit high diffraction efficiency, by means of a fabricable shallow corrugation, for the TM polarisation only; high diffraction efficiency for the TE polarisation requires very deep grooves. When this incidence configuration is used with high-energy pulses, problems may arise since any surface imperfection may excite local plasmons. Originally it was suggested to use an all-dielectric structure composed of a multilayer mirror with a dielectric corrugation on top [4] as an alternative. This motivated further developments [5,6] which, in the mid-nineties, led to the demonstration of large dielectric gratings at 1050 nm wavelength capable of withstanding significantly higher flux [7]. Since around 2000 the company Jobin-Yvon has also developed commercial elements [8]. With the development of Ti:sapphire industrial lasers for machining in the 800 nm wavelength range [9], new grating specifications have emerged, such as high average power. This is a consequence of higher repetition rates, as well as of the restricted bandwidth over which an ultimate diffraction efficiency can be reached in the 4-pass schemes currently used, which makes the overall system efficiency very loss-sensitive. An all-dielectric solution exists which exhibits a flat-top diffraction efficiency spectrum over up to 40 nm [10]. The search for ever shorter pulses in advanced high-energy physics gives rise to new bandwidth specifications [11] which all-dielectric gratings cannot fulfil. The rationale prevailing in the solution of references [12,13] will be used to obtain a diffraction efficiency as close as possible to 100% over a very broad wavelength range; a plane metal mirror will be used instead of a multilayer mirror since the former ensures an almost constant reflection phase shift, which permits broadening in the spectral domain around 800 nm where the leaky mode resonance can be satisfied [13]. The present paper describes the operation of the proposed metal-dielectric grating and gives experimental results, in agreement with the expected characteristics, for a structure fabricated on small substrates by means of an adapted process.

GRATING DESIGN

The basic grating structure is represented in Figure 1. It comprises a metal mirror and a dielectric multilayer with a corrugation in the last layer at the air side. The diffraction is produced by the sole -1st order in a contradirectional scheme away from the Littrow condition. As shown in Ref.
[13], the condition for high efficiency is given by the dispersion equation of a TE leaky mode excited by refraction of the incident wave in the corrugated layer. The rationale behind this is the following: the reflection of an incident wave by a mirror-based dielectric film is composed of two contributions: the reflection from the top of the dielectric film, and the reflection from the mirror of the wave having penetrated into the film by refraction through its top interface. These two contributions sum vectorially in the incident medium in the direction of the Fresnel reflection. If the condition for constructive self-interference of the refracted wave in the dielectric film is met, its field is partially trapped in the film in the form of a leaky mode. Under this condition, the two contributions to reflection are of opposite sign. This has the important consequence that the Fresnel reflection can be cancelled if the moduli of the two contributions can be made equal. The presence of a periodic grating corrugation at the film-air interface decreases the field reinforcement in the film and diffracts the incident wave in the direction of the -1st order. There is a certain grating strength for which the Fresnel reflection is cancelled by destructive interference between the two components, and consequently 100% of the incident energy is diffracted. Such a phenomenological understanding of resonant diffraction does not depend on the polarisation. However, the capability of the dielectric film to trap the incident field is not the same for the TE and TM polarisations. The presence of the Brewster effect for the TM polarisation restricts the possibility of achieving the cancellation of the Fresnel reflection; therefore the TE polarisation is usually preferred. Such resonant diffraction effects are not limited to a single-film dielectric structure. If the leaky mode propagating structure on top of the mirror is composed of several layers, the same rationale applies (see Figure 2). The leaky mode dispersion equation is more complex, but it is easy to derive as shown in Ref. [12].
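The cancellation argument above can be illustrated numerically. The sketch below is a toy two-wave model, not a rigorous diffraction calculation: it treats the two reflection contributions as complex amplitudes and shows that the net reflection vanishes exactly when their moduli are equal and their phases differ by π. All amplitude values are illustrative assumptions.

```python
import numpy as np

# Toy complex-amplitude model of the reflection cancellation described above:
# the total reflection is the sum of the top-interface Fresnel contribution
# and the field re-radiated by the leaky mode. The amplitudes are illustrative
# assumptions, not values computed from the actual layer stack.

r_top = 0.45                          # modulus of the top Fresnel reflection (assumed)
for a_leaky in (0.30, 0.45, 0.60):    # modulus of the re-radiated leaky-mode field
    # At resonance the two contributions are in antiphase (phase difference pi).
    r_total = r_top + a_leaky * np.exp(1j * np.pi)
    print(f"a_leaky = {a_leaky:.2f} -> |r_total| = {abs(r_total):.3f}")
# Output: 0.150, 0.000, 0.150 -- the reflection vanishes only when the grating
# strength makes the leaky-mode contribution equal in modulus to the top
# reflection, which is the 100% diffraction condition discussed in the text.
```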
In the present spectral broadening problem there is not much benefit in having a large number of dielectric layers on top of the metal mirror, because the grating would then excite a number of true guided modes of the multilayer waveguide bounded by the metal substrate and the air medium. This is especially so if the femtosecond pulses are very short (for instance 20 fs), meaning the equivalent bandwidth is very broad. A spectral band without waveguide mode excitation is needed, which can easily be achieved with a restricted number of dielectric layers. The minimum number of layers is actually one: that in which the grating corrugation can be etched. A metal thin film may however have to be protected by a specific dielectric coating, and it is also advantageous to make the grating in a high-refractive-index layer so as to increase its strength without having to etch the corrugation too deeply. A two-layer system above the metal mirror thus seems to be a suitable configuration: a thin protection layer of index $n_p$ and thickness $t_p$, and a corrugated high-index layer of index $n_g$ and thickness $t_g$. For this simple two-layer system the leaky mode resonance condition (Eq. (1)) can be written analytically in terms of the transverse wavenumbers $\kappa_p = k_0\sqrt{n_p^2 - n_c^2\sin^2\theta_i}$ and $\kappa_g = k_0\sqrt{n_g^2 - n_c^2\sin^2\theta_i}$, with $k_0 = 2\pi/\lambda$ at vacuum wavelength $\lambda$, where $\theta_i$ is the incidence angle in the medium of index $n_c$. The phase terms $\varphi_m$ and $\varphi_a$ are the reflection phase shifts at the metal boundary and at the air side, respectively, with incidence from the leaky-mode propagating layer side. $\varphi_a$ is zero since the transmission medium (air) has lower index, and $\varphi_m$ is given approximately by a closed-form expression (Eq. (2)) in the case of a low-loss optical metal (gold, silver or aluminium) of complex permittivity $\epsilon_m = \epsilon_{mr} - j\epsilon_{mj}$. The resonance condition must be satisfied in the actual structure comprising the corrugation. The latter is accounted for by considering it as an equivalent homogeneous layer having the same thickness $t_g$ (i.e. the corrugation depth) and an equivalent index $n_{eq}$ given, in the case of the TE polarisation, by

$$n_{eq} = \sqrt{\frac{w_L\,\epsilon_l + w_s\,\epsilon_s}{\Lambda}},\qquad(3)$$

where $w_L$ is the width of the grating lines of permittivity $\epsilon_l$, $w_s$ is the width of the grooves of permittivity $\epsilon_s$, and $\Lambda$ is the period. A design example will now be given with a silver mirror ($\epsilon_m = -28 + j1.5$ at $\lambda$ = 800 nm), a protective layer of Al2O3 ($n_p$ = 1.65) and a HfO2 corrugated layer ($n_g$ = 2.12 at $\lambda$ = 800 nm). The incidence angle in air is 50 degrees. From expression (3) the hafnia layer has an equivalent index $n_{eq}$ = 1.657, assuming a 50/50 corrugation line/space ratio.
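The equivalent-index estimate and the transverse wavenumbers can be checked numerically. The following minimal sketch, assuming the effective-medium expression of Eq. (3) and the design values quoted above, reproduces the stated n_eq ≈ 1.657 and evaluates κ_p and κ_g for 50-degree incidence.

```python
import numpy as np

# Design values quoted in the text (silver mirror at 800 nm assumed elsewhere).
lam = 800e-9                       # vacuum wavelength [m]
n_c, n_p, n_g = 1.0, 1.65, 2.12    # incidence medium (air), Al2O3, HfO2
theta_i = np.deg2rad(50.0)         # incidence angle in air
k0 = 2 * np.pi / lam

# TE effective-medium index of the corrugated layer, Eq. (3),
# for a 50/50 line/space ratio (lines: HfO2, grooves: air).
f = 0.5                            # fill factor w_L / Lambda
eps_l, eps_s = n_g**2, 1.0
n_eq = np.sqrt(f * eps_l + (1 - f) * eps_s)
print(f"n_eq = {n_eq:.3f}")        # ~1.657, matching the text

# Transverse wavenumbers in the protective and grating layers.
kappa_p = k0 * np.sqrt(n_p**2 - n_c**2 * np.sin(theta_i)**2)
kappa_g = k0 * np.sqrt(n_g**2 - n_c**2 * np.sin(theta_i)**2)
print(f"kappa_p = {kappa_p:.3e} 1/m, kappa_g = {kappa_g:.3e} 1/m")
```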
Inputting these data into an exact code based on the modal method [14] gives the diffraction efficiency spectrum of Figure 3, curve a). Close to 90% diffraction efficiency is obtained, with 6.6% in the zero order. The missing 3.8% is lost in the metal substrate. The resonance is broad, as usual with a metal mirror; however, the top-hat character is not yet present. Using Lyndin's optimisation code [14] to refine the phenomenologically designed structure gives a final structure which achieves the highest diffraction efficiency over the requested spectral width, which is here 200 nm centred at 800 nm wavelength. The optimisation code uses a standard multivariate search procedure where the core of the code is a direct problem analysis based on the "true-mode method" [15]. This method forms the electromagnetic field from a basis of the modes of the corrugation, as the physical actuality, instead of decomposing the corrugation into Fourier harmonics. The objective function to be optimised is the -1st order diffraction efficiency over a given spectral range. With the refractive indices of the layer materials and the metal permittivity known, and the incidence angle imposed, the optimisation starts when two layer thicknesses approximately satisfying the dispersion Eq. (1) are input. The optimisation code must be somewhat assisted to deliver a structure which is still fabricable. Left to itself, the optimisation process tends to suppress the necessary protective layer of the metal film and/or to produce an aspect ratio of the hafnium oxide lines which is too large. To account for technological limitations, the relevant critical parameter(s) are removed from the set of optimisation variables and are instead controlled directly by the user of the code. Figure 3, curve b) is the optimised diffraction efficiency spectrum produced by a corrugation of adjusted line/space ratio (smaller than 1) and 580 nm period. The same code also permits tolerances to be set.

EXPERIMENT

As compared with standard metallised gratings, the fabrication of the above designed structure is quite a challenge. Silver was chosen for its slightly lower losses. Aluminium was ruled out due to the large losses it suffers. It was feared that the adhesion of the dielectric layers on a gold film would be too weak. The silver protection layer is aluminium oxide. There are two big technological difficulties: the first is to perform the lithography on such a highly reflecting surface and to achieve a small line/space ratio; the second is the rather deep dry etching required. The hafnia layer must be etched down to the protective layer without physically or chemically damaging the silver surface. Yet another difficulty is the removal of the resist residues. These difficulties were however provisionally solved. Instead of using a thick ARC to isolate the resist layer from the highly reflecting silver layer [16,17], a 30 nm-thick layer of CuO was used. This decreased the reflection to below 40%, which was sufficient to adjust the two lithographic steps leading to a small line/space ratio. Figure 4 is the AFM scan of a resist grating obtained under adequate exposure conditions. The resist ridges are slightly rounded due to the presence of a standing-wave field node close to the top of the resist layer.

FIG. 4 AFM scan of a small line/space ratio resist grating on top of the CuO layer.
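As a side note to the optimisation procedure described above (before the EXPERIMENT section), the skeleton below illustrates the general idea of a constrained multivariate search over the layer thicknesses and fill factor. The efficiency function here is a hypothetical placeholder: a real implementation would call a rigorous grating solver (true-mode method or RCWA), which is not reproduced here. Only the optimisation scaffolding is meaningful.

```python
import numpy as np
from scipy.optimize import minimize

def mean_band_efficiency(params, wavelengths):
    """Objective: mean -1st order efficiency over the band (to be maximized).

    PLACEHOLDER physics: a smooth dummy model standing in for a rigorous
    solver (true-mode method / RCWA).
    """
    t_p, t_g, fill = params
    # Clip to fabricable ranges instead of hard bounds (Nelder-Mead friendly),
    # mirroring the paper's point that the search must be kept fabricable.
    t_p = np.clip(t_p, 10.0, 60.0)       # protective layer thickness [nm]
    t_g = np.clip(t_g, 100.0, 600.0)     # corrugation depth [nm]
    fill = np.clip(fill, 0.2, 0.5)       # line/space fill factor
    detune = (wavelengths - 800e-9) / 100e-9
    eff = 0.95 * np.exp(-detune**2 / (2 + t_g / 200)) * (1 - abs(fill - 0.35))
    return eff.mean()

band = np.linspace(700e-9, 900e-9, 21)   # 200 nm band centred at 800 nm
x0 = np.array([30.0, 300.0, 0.5])        # initial t_p, t_g, fill (assumed)

res = minimize(lambda p: -mean_band_efficiency(p, band), x0, method="Nelder-Mead")
print(res.x, -res.fun)
```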
The thin CuO layer at the bottom of the resist grooves was opened by wet etching in vinegar. The RIBE etching conditions were adjusted to further reduce the thickness of the resist walls down to the hafnia layer. Figure 5 is the AFM scan of a typical corrugation obtained in the hafnia layer. The reactive component of the etching process is large enough to require only a short etching time, meaning resist residues are easy to remove by wet chemistry. The 25 mm diameter, 6 mm thick corrugated wafers were tested by means of a CW tunable Ti:sapphire laser under an incidence angle of 50 degrees between 710 and 840 nm wavelength, which was the available tuning range. As shown in Figure 6, the -1st order diffraction efficiency is 95.7% on average and is remarkably flat. The 0th order diffraction efficiency is 1.7% on average. Although the diffraction efficiency is already quite high, these results do not represent a limit. The first reason is that there were not enough samples to enable a screening of different line/space ratios; it is not excluded that a slightly different ratio could lead to a better extinction of the 0th order. The second reason is that the silver layer was slightly damaged during deposition by a mechanism which was later identified and which can be corrected, with an expected decrease of the losses. Summarising the outcome of the experimental section, it can be stated that the difficult lithography and etching processing of a silver-film-containing structure has been basically solved, that the wafer-scale uniformity of the diffraction efficiency is within a few percent, and that the damage caused to the silver mirror can be avoided, which is likely to lead to an increase of the diffraction efficiency by a few percent.

FIG. 6 Experimentally measured -1st order diffraction efficiency and 0th reflected order spectra under 50 degrees TE incidence.

CONCLUSION

The evidence has shown that a compression grating of high efficiency and very wide band, associating a flat metal mirror, can be fabricated to match the demands of femtosecond laser CPA down to a 20 fs pulse duration. The most critical future step is the testing of the flux resistance, which will be undertaken after the full CW characterisation has taken place. Regardless of the outcome of the flux resistance tests, this new grating technology is already bound to find applications in domains where high efficiency and broad bandwidth are required without strong demands on the damage threshold. The fabrication technology is difficult; however, the present work has also allowed identification of fabrication steps which can lead to a major reduction in the production costs.

FIG. 1 Cross-sectional view of the metal-mirror based multilayer with binary corrugation in the last layer. TE incidence is at angle θ_i, resulting in diffraction only along the -1st order.

FIG. 2 Representation of the cancelling of the Fresnel reflection by balanced destructive interference between the top-reflected field and the re-radiated leaky mode field. The circles with cross and dot represent the orientation of the electric field.

FIG. 3 a) -1st order diffraction spectrum with corrugated two-layer structure of unity line/space ratio satisfying the leaky mode dispersion equation. b) -1st order diffraction spectrum with optimised binary corrugation etched through the hafnia layer.

FIG. 5 AFM scan of a 580 nm period grating after RIBE through the hafnia layer and resist, and CuO removal.
Effect of mass distribution on curving performance for a loaded wagon

Abstract The location of the wagon gravity center for a loaded wagon is underestimated in a vehicle-track coupled system. The asymmetric wheel load distribution due to loading offset significantly affects the wheel-rail contact state and seriously deteriorates the curving performance in conjunction with the height of the gravity center and cant deficiency. Optimizing the location of the gravity center and the cruising velocity, therefore, is of interest to prevent derailment and promote the transport capacity of railway wagons. This study aims to reveal the three-dimensional influencing mechanism of mass distribution on vehicle curving performance under different velocities. The wheel unloading ratio is regarded as the evaluation index. A simplified quasi-static model is established, with essential assumptions, to highlight the influence of lateral and vertical offset on curving performance. For a more accurate description, MBS models with various locations of the wagon gravity center are built and then negotiate curves in different simulation cases. The simulation results reveal that the distribution of the wheel unloading ratio determined by loading offset resembles the contour lines of a 'basin'. Based on the conclusions of the quasi-static analysis and the dynamics simulations, a regression equation is proposed and the fitting parameters are calculated for each simulation case. This paper demonstrates the necessity of optimizing the location of the wagon gravity center according to the running condition and offers a novel strategy for loading and transporting cargo by railway wagons.

Keywords Railway wagon · Mass distribution · Curving performance · Quasi-static analysis · Dynamics simulation · Regression equation

Introduction

Symmetric distribution is the basic criterion for the loading of cargo on wagons.
It has been a consensus that the optimal location of the cargo gravity center is at the center of the vehicle laterally and longitudinally. Thus, general wagon-rail models assume that the wagon gravity center (WGC) coincides with the geometric center in the horizontal plane [1][2][3][4][5]. Since uneven mass distribution can result in an obvious imbalance of wheel load and seriously deteriorate the curving performance, symmetric loading is the basic prerequisite in studies concerned with guaranteeing wagon running safety [6][7][8][9][10][11]. However, according to practical experience, skew loading cannot be avoided completely. For the sake of vehicle running safety, the loading guidelines of several rail organizations specify the allowed offset values (summarized in the compliance sketch below):

(1) International Union of Railways (UIC) [12]: The ratio of masses per bogie should be less than 3:1, and the ratio of load between the wheels (left/right) of a given axle should be less than 1.25:1. Moreover, the mass per axle should not exceed the maximum axle load.

(2) The Association of American Railroads (AAR) [13]: Longitudinally, the center of load weight should keep a certain distance from either truck center, which depends on the ratio between load weight and load limit. Laterally, the load must be located to equalize the weight.

(3) Chinese Railways (CR) [14]: The transversal distance between the cargo's gravity center and the carbody's geometric center should be within 100 mm. The difference between the masses per bogie should be no more than 10 t, and the mass of cargo on either bogie should not exceed half of the loading capacity of the wagon.

From these three representative examples, we can recognize that there are no universal requirements on the mass distribution for a loaded wagon. Moreover, the allowed offset values stated in the loading guidelines are sketchy and empirical to some extent. As a further supplement to the loading guidelines, dynamics simulation has been used to search for the safe range of the wagon's gravity center. Shatunov and Shvets [15] proposed that, for a type of 4-axle flat wagon, the maximum lateral offset could reach 150 mm and the longitudinal offset could be expanded too. Bao et al. [16] focused on a common open-top wagon in China and demonstrated that the CR criteria are conservative. The loading guidelines and the limited number of former studies are based on the assumptions that the best location of the cargo is at the center of the wagon and that the height of the WGC should be as low as possible. These documents and their prerequisites raise two concerns:

(1) The location of the combined center of gravity is a three-dimensional variable. The decision on skew loading should be made in combination with the height of the gravity center, which is neglected when defining the allowed offsets. Matsumoto et al. [17] and Bekele [18] pointed out that lowering the height of the gravity center is an obvious advantage for running safety. Zhang et al. [19] attempted to figure out the three-dimensional constraint for the combined center of gravity of a loaded wagon but did not reach a quantitative conclusion.

(2) The assumption of the optimal choice is not convincing. Suda et al. [20,21] proposed that an asymmetric truck may be better for curving performance. Keropyan et al. [22] demonstrated that a longitudinal offset is necessary for a locomotive to promote its traction ability.
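To make the quantitative guideline limits above concrete, the sketch below encodes the UIC and CR checks as simple predicates. The numeric thresholds are those quoted in the text; the function signatures, the simplification of "cargo mass per bogie" to total bogie mass, and the example figures are illustrative assumptions, not values from the cited standards.

```python
# Sketch of the loading-guideline limits quoted above as simple checks.
# Thresholds come from the text; the example numbers are hypothetical.

def check_uic(m_bogie_front, m_bogie_rear, p_left, p_right, m_axle, m_axle_max):
    """UIC: bogie mass ratio < 3:1, wheel load ratio < 1.25:1, axle load cap."""
    bogie_ratio = max(m_bogie_front, m_bogie_rear) / min(m_bogie_front, m_bogie_rear)
    wheel_ratio = max(p_left, p_right) / min(p_left, p_right)
    return bogie_ratio < 3.0 and wheel_ratio < 1.25 and m_axle <= m_axle_max

def check_cr(lateral_offset_mm, m_bogie_front, m_bogie_rear, loading_capacity):
    """CR: lateral offset <= 100 mm, bogie mass difference <= 10 t, and mass on
    either bogie <= half the loading capacity (simplified to bogie totals)."""
    return (abs(lateral_offset_mm) <= 100.0
            and abs(m_bogie_front - m_bogie_rear) <= 10.0
            and max(m_bogie_front, m_bogie_rear) <= loading_capacity / 2.0)

if __name__ == "__main__":
    # Hypothetical loaded wagon: masses in tonnes, offset in millimetres.
    print(check_uic(38.0, 32.0, 11.0, 9.5, 17.5, 23.0))   # -> True
    print(check_cr(80.0, 36.0, 34.0, 80.0))               # -> True
```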
Since the advantage of asymmetric loading has been proved in other areas, we can infer that symmetric distribution might not be the perfect plan for all loading cases. Since the loading guidelines and former studies have obvious limitations, this paper analyzes the stereoscopic influence mechanism of the mass distribution of a loaded wagon on its curving performance. It has been demonstrated that the safety indices become severe when the wagon negotiates the transition curve (TC) [19]. Optimizing the location of the WGC on the TC is necessary to improve the curving performance. However, because of the geometry and changing elevation of the TC, it is difficult to reveal the quantitative relationship between the location of the WGC and the curving performance on the TC. Thus, the process of negotiating the circular curve is the objective of this study. In Sect. 2, a simplified quasi-static analysis of an unevenly loaded wagon is conducted under a set of assumptions. Section 3 establishes the MBS model and carries out simulations to discuss the qualitative relationship between the location of the WGC and the selected safety criterion. Based on the conclusions of Sect. 3, Sect. 4 proposes the regression equation and calculates the fitting parameters for each simulation case.

2 Analysis of the vehicle curving performance with a simplified quasi-static model

Quasi-static model

This paper focuses on the wagon with two-axle three-piece bogies. There is no doubt that the suspension system plays a key role in vehicle running performance by weakening the impact between the carbody and the wheelsets. Nevertheless, for an unevenly loaded wagon, the suspension devices have little effect on inhibiting the unbalanced wheel load, which results from the asymmetric mass distribution and is closely related to vehicle running safety. Thus, the quasi-static model is established to highlight the influence of mass distribution on the wheel load. The wagon in uniform circular motion is deemed to be in force equilibrium, including the centrifugal force, so that it can be regarded as a whole. In order to simplify the model, the following essential assumptions are made: (1) All the parts of the model are rigid bodies. (2) The track is smooth and has no rail cant. (3) The wheelset is symmetric with respect to the center line of the track. (4) The load of a truck is distributed evenly to each of its wheelsets. (5) The carbody is at its original position. The mass distribution is directly related to the vertical contact force. Consequently, by definition, the wheel unloading ratio is more sensitive than the derailment coefficient for this issue; this conjecture was demonstrated by Zhang et al. [19]. Thus, the quasi-static analysis focuses on the vertical contact force of each wheelset and regards the wheel unloading ratio as the evaluation index. In the X-Z plane of the coordinate system of the track centerline, the wagon can be regarded as a simply supported beam, as shown in Fig. 1, where P_i (i = 1, 2, 3, 4) is the vertical contact force of wheelset i; l is half the length between bogie pivot centers; G_z and L_z are the vertical components of the wagon's gravity and centrifugal force, respectively; G_zq (q = 1, 2) is the portion of G_z distributed to each truck; L_zq (q = 1, 2) is the portion of L_z distributed to each truck; M and N are the geometric center of the carbody and the gravity center of the wagon, respectively; a is the longitudinal offset; and c is the vertical offset.
Since the wagon is in vertical force equilibrium, the sum of the vertical contact forces is

$$P_1 + P_2 + P_3 + P_4 = G_z + L_z. \qquad(1)$$

Based on the torque balance, taking moments around the rear truck center and measuring the longitudinal offset a as positive towards the front truck, we obtain

$$2l\,(P_1 + P_2) = (l + a)\,(G_z + L_z). \qquad(2)$$

According to the assumptions, P_1 = P_2 and P_3 = P_4. Therefore, the vertical contact forces can be calculated from Eqs. (1) and (2):

$$P_1 = P_2 = \frac{(l + a)(G_z + L_z)}{4l}, \qquad P_3 = P_4 = \frac{(l - a)(G_z + L_z)}{4l}. \qquad(3)$$

Furthermore, as for a simply supported beam, the gravity force and centrifugal force are distributed to the front and rear center plates in the same proportions:

$$G_{z1} = \frac{(l + a)\,G_z}{2l}, \quad G_{z2} = \frac{(l - a)\,G_z}{2l}, \quad L_{z1} = \frac{(l + a)\,L_z}{2l}, \quad L_{z2} = \frac{(l - a)\,L_z}{2l}. \qquad(4,5)$$

The mass of the wagon is thus distributed to each truck longitudinally as already stated. The lateral offset is then considered in the Y-Z plane, where G_q and L_q are the gravity and centrifugal force of the mass loaded on the truck of order q; subscript q denotes the front truck when its value is 1 and the rear truck when its value is 2; P is the vertical contact force; Q is the lateral contact force; subscript i denotes the order of the wheelset as shown in Fig. 1; subscripts l and r denote the left and right wheels of the wheelset, respectively; α is the angle resulting from the superelevation; b is the lateral offset; c is the vertical offset from N to M; h is the vertical distance from M to the top of rail; and d is half of the tape circle distance. In this paper, we take the right-hand curve as an example, for which the geometric relations between these quantities follow directly. For the truck of order q (q = 1, 2), its distributed vertical load is borne by the corresponding wheelsets (Eq. (6)), and the resultant moment of the truck can be calculated accordingly (Eq. (7)). Based on Eqs. (6) and (7), the vertical contact forces of the left and right wheels can be obtained (Eqs. (8)-(10)). For any wheelset, the wheel unloading ratio (UN) is defined as the wheel-load difference over the wheel-load sum,

$$\mathrm{UN} = \frac{P_{ir} - P_{il}}{P_{ir} + P_{il}}. \qquad(11)$$

Combined with the geometric relationship, Eq. (11) can be developed as

$$\mathrm{UN}_q = \frac{(h + c)\,(L_{yq} - G_{yq}) + b\,(L_{zq} + G_{zq})}{d\,(L_{zq} + G_{zq})}. \qquad(12)$$

Since we have assumed that each truck allocates its load evenly to its wheelsets, the wheelsets of the same truck have identical UN. According to Eq. (10), it seems that the mass loaded on the truck has an effect on the UN. To reveal the effect of the mass distribution explicitly, Eq. (5) is substituted into Eq. (12) to obtain

$$\mathrm{UN}_q = \frac{(h + c)\left(\dfrac{v^2}{gR}\cos\alpha - \sin\alpha\right) + b\left(\cos\alpha + \dfrac{v^2}{gR}\sin\alpha\right)}{d\left(\cos\alpha + \dfrac{v^2}{gR}\sin\alpha\right)}, \qquad(13)$$

where v denotes the vehicle running velocity, R the curve radius, and g the acceleration of gravity.

Analysis of derived results

Equation (11) illustrates that, apart from the lateral and vertical offset, the cant deficiency affects the value of UN significantly. The values of (h + c), d and (L_zq + G_zq) are definitely positive. Thus, the signs of (L_yq − G_yq) and b determine the trend of change of the absolute value of UN, as follows. The absolute value of UN is positively correlated with |b| and c. In general, however, it is complicated to describe the trend of the absolute value of UN, since it is the sum of a positive expression and a negative expression containing the variables; numerical computation is needed to reveal the distribution rules of UN. The difference between the lateral component forces of the centrifugal force and gravity of the distributed mass reflects the cant deficiency, which is used to derive Eq. (13). Based on Eq. (13), we can demonstrate the relationship among the loading offset, the velocity and UN clearly. As an example, we assume that tan α equals 0.1045, the gauge equals 1435 mm, and d equals 0.75 m. For a better description, we use the variable z to denote the sum of h and c and set the ranges of z and b. When the loaded wagon negotiates curves whose radii are 350 m and 600 m, respectively, with velocities from 10 m/s to 25 m/s, the absolute value of UN can be calculated as Fig. 3 illustrates.
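The following sketch evaluates Eq. (13) numerically over a grid of lateral offset b and gravity-center height z = h + c, using the parameter values quoted in the text (tan α = 0.1045, d = 0.75 m). Since Eqs. (12)-(13) were reconstructed here from the surrounding derivation, treat this as illustrative of the 'basin'-shaped |UN| distribution rather than as the authors' exact code.

```python
import numpy as np

G = 9.8                      # gravitational acceleration [m/s^2]
D = 0.75                     # half of the tape circle distance d [m]
ALPHA = np.arctan(0.1045)    # superelevation angle, tan(alpha) = 150/1435

def unloading_ratio(b, z, v, radius):
    """Quasi-static wheel unloading ratio per Eq. (13) as reconstructed above.

    b: lateral offset of the gravity center [m]
    z: height of the gravity center above rail top, z = h + c [m]
    v: running speed [m/s]; radius: curve radius [m]
    """
    ratio = v**2 / (G * radius)                    # uncompensated acceleration / g
    lat = ratio * np.cos(ALPHA) - np.sin(ALPHA)    # cant-deficiency term
    ver = np.cos(ALPHA) + ratio * np.sin(ALPHA)    # vertical load factor
    return (z * lat + b * ver) / (D * ver)

# Scan |UN| over loading offsets for a 350 m curve at the balancing speed (~19 m/s).
b_grid = np.linspace(-0.10, 0.10, 5)   # lateral offset [m]
z_grid = np.linspace(1.5, 2.5, 3)      # gravity-center height [m] (assumed range)
v0 = np.sqrt(G * 350 * np.tan(ALPHA))  # balancing velocity, Eq. (14)
for z in z_grid:
    print([round(abs(unloading_ratio(b, z, v0, 350)), 3) for b in b_grid])
# At v0 the cant-deficiency term vanishes and |UN| reduces to |b| / d,
# which is why the 'basin' is flattest at the balancing velocity.
```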
Figure 3 reveals the influence of the loading offset and velocity on UN, which can be summarized as follows: (1) For a given velocity and curve radius, the distribution of UN resembles the contour lines of a 'basin.' With increasing velocity, the location of the basin moves toward the right of the wagon. (2) For most locations, the absolute value of UN increases with increasing vertical offset. However, if there is a large negative lateral offset, the absolute value of UN decreases as the vertical offset increases when the velocity is low. (3) The balancing velocity (in m/s) can be calculated by Eq. (14); for curves with radii of 350 m and 600 m, the balancing velocities are 19 m/s and 25 m/s, respectively (a numerical check is given after this subsection). Figure 3 demonstrates that when the wagon negotiates the curve at the balancing velocity, UN can be constrained efficiently despite fluctuations of the lateral and vertical offsets. By means of quasi-static analysis, the roles of the lateral and vertical offsets in UN are revealed. In order to verify the conclusions of the quasi-static analysis and to study the effect of the longitudinal offset on UN, dynamics simulations should be implemented. Dynamic equations of carbody In this paper, we adopt the C70H, one of the commonly used open-top cars in China, as the analysis object. The MBS model of the C70H is made up of the cargo, the carbody and two three-piece bogies; the schematic diagram is shown in Fig. 4, where H_j (j = 1, 2, 3) denotes the distances between the gravity centers of different components; r denotes the rolling radius of the wheel; i denotes the angle resulting from the superelevation; a denotes the tilt angle of the carbody; and s denotes half of the tape circle distance. Because of the loading offset, the dynamic equations of the carbody play a primary role in building the MBS model of the C70H. Because the bolster is deemed to be fixed to the carbody in every degree of freedom except roll, the carbody can be regarded as being acted on by the lateral and vertical forces of the secondary suspension, as shown in Fig. 5, where O denotes the geometrical center of the carbody; C denotes the gravity center of the carbody; m_C denotes the mass of the carbody; m_B denotes the mass of a bolster; P_i (i = 1, 2, 3, 4) denote the equivalent acting points of the secondary suspensions; F_yi (i = 1, 2, 3, 4) and F_zi (i = 1, 2, 3, 4) denote the lateral and vertical forces of the secondary suspensions, respectively; M_Bi (i = 1, 2) denote the moments around the Z-axis of the center plates; M_zi (i = 1, 2, 3, 4) denote the moments around the Z-axis of the side bearers; M_yi (i = 1, 2, 3, 4) denote the moments around the Y-axis of the bolsters; F_LC denotes the inertial force on the carbody; F_LBi (i = 1, 2) denote the inertial forces on the bolsters; Y_C and Z_C denote the lateral and vertical displacements of the carbody, respectively; and φ_C, θ_C and ψ_C denote the roll, pitch and yaw angles, respectively. In order to simplify the issue, we define the vector from O to C as OC = (x, y, −z). We assume that the curve radius is R and the speed of the wagon is v; the curve radius corresponding to C is then (R − y). The rotational inertias of the carbody are denoted by I_Cx, I_Cy and I_Cz, and those of a bolster by I_Bx, I_By and I_Bz. Using the symbols shown in Figs. 4 and 5, the dynamic equations of the carbody in the track coordinate system can be expressed as Eq. (15). Based on Eq. (15) and former studies on the dynamic equations of freight bogies [23,24], the MBS model of the C70H can be established.
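As a quick numerical check of the balancing velocities quoted above (Eq. (14) is not reproduced here), the standard zero-cant-deficiency relation v_0 = sqrt(g·R·tan α) is assumed; it reproduces the stated 19 m/s and 25 m/s values.

```python
import math

g, tan_alpha = 9.81, 0.1045
for R in (350.0, 600.0):
    v0 = math.sqrt(g * R * tan_alpha)   # assumed form of Eq. (14)
    print(f"R = {R:.0f} m -> v0 = {v0:.1f} m/s")
# R = 350 m -> v0 = 18.9 m/s (~19); R = 600 m -> v0 = 24.8 m/s (~25)
```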
Description of MBS model The cargo and the carbody are both modeled as rigid bodies and connected by a fixed joint so as to be regarded as a whole. The wagon has two bogies, and each bogie is made up of one bolster, one center plate, two wheelsets, two side bearers, two side frames and four axleboxes. These bodies are connected by forces, joints and constraints, including the primary and secondary suspension forces. The wheelsets adopt the LM tread profile, which matches the UIC60 rail profile, as Fig. 6 shows. Each axlebox is connected to its wheelset by a revolute joint. The primary suspension, modeled by bi-stops that represent the contact forces between the adapter and the guiding frame, links the axlebox to the side frame, which in turn is connected to the bolster by the secondary suspension. In the C70H, the secondary suspension is also linked to two wedges located between the bolster and the side frame, which provide a normal force and planar friction on the inclined and vertical planes of each wedge in order to reduce vibration [7]. Between the carbody and the bolster there are the side bearers, each represented by a spring with a gap, and the center plate, which provides a contact force, a friction force and a torque component around its normal. The key parameters of the MBS model are listed in Table 1. Loading cases and simulation cases As stated above, the cargo and the carbody are connected by a fixed joint and regarded as a whole. The procedure of the dynamics simulation is to update the model by varying the location of the cargo and to run each model over tracks of different radii at various velocities. Loading cases The location of the WGC comprises the longitudinal offset (x), the lateral offset (y) and the vertical distance from the WGC to the top of rail (z). According to the loading guidelines enumerated in Sect. 1, there is no uniform requirement defining the maximum value of x. We consider the most basic requirement to be that the mass carried by either bogie should not exceed half of the load limit [13,14]. We use M_limit to denote the load limit and M_empty to denote the mass of the empty wagon. As shown in Fig. 1, where the value of x is represented by a, the load distributed on each bogie should be no more than (M_limit + M_empty)/2. Considering the parameter values shown in Table 1, the maximum value of x is calculated as 1.2 m (see the sketch below). The values of x and the corresponding locations of the cargo's gravity center are then designed as Table 2 illustrates. By comparison, the determination of the values of y and z is empirical and coarse; they are presented in Tables 3 and 4. The definitions of y_c and z_c are similar to that of x_c, describing the distance between the reference position and the cargo's center of gravity. Simulation cases In this paper, the MBS model runs through right-hand curves of different radii. Generally speaking, a small curve radius is an unfavorable factor for vehicle curving performance; we set the curve radii to 350 m and 600 m [25]. According to EN 14363 and EN 13803, we set the superelevation, the maximum cant deficiency and the maximum cant excess to relatively high values of 150 mm, 130 mm and 130 mm, respectively [25,26]. The gauge is 1435 mm. The maximum running velocity (V_max), the minimum running velocity (V_min) and the balancing velocity (V_0) can then be calculated. The simulation cases, consisting of different curve radii and cruising velocities, are listed in Table 5.
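A hedged sketch of the x_max bound of the loading cases above. The equation behind the quoted 1.2 m is not reproduced and the Table 1 values are unavailable here, so both the lever rule and the masses below are assumptions for illustration.

```python
def x_max(m_limit, m_cargo, l):
    """Largest longitudinal cargo offset satisfying the bogie-load bound.

    Assumed derivation: with the cargo (mass m_cargo) offset by x, the
    front bogie carries M_empty/2 + m_cargo*(l + x)/(2l); imposing the
    (M_limit + M_empty)/2 bound cancels M_empty and gives
    x <= l*(m_limit - m_cargo)/m_cargo.
    """
    return l * (m_limit - m_cargo) / m_cargo

# Hypothetical values (not the Table 1 data): 70 t load limit,
# 60 t cargo, half pivot spacing l = 4.25 m.
print(x_max(m_limit=70e3, m_cargo=60e3, l=4.25))  # ~0.71 m
```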
In this section, we adopt the standard track irregularity spectrum of FRA class 5 (the 5th class of track defined by the US Federal Railroad Administration) as the excitation [27-29]. Simulation result and analysis In this paper, the wheel unloading ratio (UN) is the criterion used to evaluate the vehicle curving performance. When the updated MBS model, endowed with its loading case, negotiates the 120-m-long curve of a simulation case, the maximum absolute value of UN (UN_max) over all wheelsets is monitored. We regard 0.9 as the limit value of UN [24]. For a given simulation case, deleting all loading cases that would result in UN_max larger than 0.9 yields the safety range of the WGC. The distributions of UN_max for the safe loading cases in each simulation case are illustrated in Fig. 7, where R denotes the curve radius and v the cruising velocity. Figure 7 supports the following conclusions: (1) Under the premise of a constant z, the distribution of UN_max resembles the contour lines of a 'basin.' In the lateral direction, the location of the 'basin' moves toward the right side of the wagon when the wagon runs on the same curve at a higher velocity. In the longitudinal direction there is no obvious law; generally, the 'basin' lies around the lateral center line or toward the front of the wagon. (2) The results largely support the consensus that a higher WGC leads to worse curving performance. However, as illustrated in Fig. 7b, g, UN_max decreases with increasing z when there is a large lateral offset to the left of the wagon. This unexpected trend occurs when the cant excess is small; the mechanism of this phenomenon can be demonstrated based on Eq. (11) and is revealed in Fig. 3 in Sect. 2. Regression analysis of simulation data The dynamics simulation results in Fig. 7 demonstrate the effect of the mass distribution on the curving performance in each simulation case. Because every subgraph in Fig. 7 exhibits a highly unified law, we believe there may be a common fitting equation linking the location of the WGC and UN_max. Based on the horizontal distribution characteristics illustrated in Fig. 7, we use an oblique ellipse equation to represent the role of the loading offset in UN_max. In addition, we draw on Eq. (13) of Sect. 2 to represent the role of the vertical distance in UN_max, because the conclusions of the quasi-static analysis are consistent with the vertical distribution characteristic of UN_max illustrated in Fig. 7. For a clearer expression, we use p to denote UN_max. The regression equation is proposed as Eq. (16), where the value of tan α is 0.1045 in this paper, g is the gravitational acceleration, and v and R are constants depending on the simulation case. Based on the data of each simulation case, the values of the parameters can be calculated. To reflect the fitting quality of Eq. (16), two evaluation indicators are adopted: the root mean squared error (RMSE) and the coefficient of determination (R²); their definitions are shown in Table 6. The parameter values and the corresponding indicators for each simulation case are listed in Table 7. For an intuitive comparison with the simulation data, the fitted results are shown in Fig. 8. According to the evaluation indicators shown in Table 7, the regression equation fits the simulation data of each case well.
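For reference, a minimal sketch of the two fitting indicators of Table 6, assuming their standard definitions (the table itself is not reproduced); the sample values are hypothetical.

```python
import numpy as np

def rmse(y_obs, y_fit):
    """Root mean squared error between observed and fitted values."""
    y_obs, y_fit = np.asarray(y_obs), np.asarray(y_fit)
    return np.sqrt(np.mean((y_obs - y_fit) ** 2))

def r_squared(y_obs, y_fit):
    """Coefficient of determination R^2."""
    y_obs, y_fit = np.asarray(y_obs), np.asarray(y_fit)
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical UN_max samples and fitted values:
p_sim = [0.42, 0.55, 0.61, 0.48, 0.70]
p_fit = [0.44, 0.53, 0.63, 0.47, 0.68]
print(rmse(p_sim, p_fit), r_squared(p_sim, p_fit))
```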
Based on the regression results, we can directly assess the vehicle curving performance with the mass distribution taken into account, or adjust the cruising velocity in order to improve the curving performance. Conclusion Loading offset significantly threatens the running safety of railway wagons. Several major railway organizations have formulated loading guidelines to limit the mass distribution of a loaded wagon, and a few studies have been conducted to verify the accuracy of these rules. Both the existing regulations and the existing studies focus on the limit value of the loading offset without demonstrating the influence mechanism of the mass distribution on the curving performance three-dimensionally. The rigid regulations meet the basic requirements for running safety, but they have major limitations. On the one hand, the vehicle curving performance cannot be improved by optimizing the location of the WGC or the running velocity. On the other hand, the limit value of the loading offset is set according to the worst running condition, so the transportation capacity of the railway wagon is reduced under normal circumstances. Based on the above considerations, this paper explores the relationship between the location of the WGC and the safety criterion under different cruising velocities. A simplified quasi-static model is established under several assumptions to reveal the roles of the lateral offset, the vertical offset and the velocity in the maximum absolute value of the wheel unloading ratio (UN_max). MBS models with different loading offsets then negotiate curves in various simulation cases. By means of dynamics simulations, the results of the quasi-static analysis are validated, and the distribution of UN_max for each simulation case is presented as the contour lines of a 'basin.' According to the conclusions drawn from the quasi-static analysis and the dynamics simulations, the fitting equation is established, and the fitting parameters are calculated for each simulation case from the simulation data. This paper thereby proposes a novel strategy for loading the cargo and optimizing the cruising velocity. Unfortunately, we have not yet been able to establish the connection between the values of the fitting parameters and the curve radius or velocity; further dynamics simulations may be needed to derive a fitting equation applicable to various simulation cases. Moreover, track irregularity is the only oscillation source (OS) considered in this paper. More OSs should be considered to make the conclusions more practical; the changing elevation of the transition curve, the action of switches and the wind can be modeled as supplementary OSs in future studies.
200 Gb/s Optical-Amplifier-Free IM/DD Transmissions Using a Directly Modulated O-Band DFB+R Laser Targeting LR Applications We experimentally demonstrate an O-band single-lane 200 Gb/s intensity modulation direct detection (IM/DD) transmission system using a low-chirp, broadband, and high-power directly modulated laser (DML). The employed laser is an isolator-free packaged module with over 65-GHz modulation bandwidth enabled by a distributed feedback plus passive waveguide reflection (DFB+R) design. We transmit high baud rate signals over 20-km standard single-mode fiber (SSMF) without using any optical amplifiers and demodulate them with reasonably low-complexity digital equalizers. We generate and detect up to 170 Gbaud non-return-to-zero on-off-keying (NRZ-OOK), 112 Gbaud 4-level pulse amplitude modulation (PAM4), and 100 Gbaud PAM6 in the optical back-to-back configuration. After transmission over the 20-km optical-amplifier-free SSMF link, up to 150 Gbaud NRZ-OOK, 106 Gbaud PAM4, and 80 Gbaud PAM6 signals are successfully received and demodulated, achieving bit error rate (BER) performance below the 6.25%-overhead hard-decision (HD) forward-error-correction (FEC) code limit. The demonstrated results show the possibility of meeting the strict requirements towards the development of 200 Gb/s/lane IM/DD technologies, targeting 800 Gb/s and 1.6 Tb/s LR applications. Index Terms-Direct modulation, distributed-feedback laser, on-off keying, pulse amplitude modulation. I. INTRODUCTION Driven by the rapid traffic growth and the corresponding bandwidth-scaling pace of switches for data center networks, the technology roadmap indicates that the upgrade from the currently deployed 400 Gb/s optical modules to the next-generation 800 Gb/s or 1.6 Tb/s optical modules will soon take place. Consequently, upgrading the single-lane data rate from 100 Gb/s to 200 Gb/s is desirable to reduce the lane count and footprint [1]. Compared with 50/100 Gb/s lane rates, 200 Gb/s/lane technologies face both fundamental and practical development challenges, including system bandwidth limitations from both the electronics and the optoelectronics; footprint and energy-efficiency constraints on the components and the digital signal processing (DSP) application-specific integrated circuit (ASIC); and the bounded latency requirement on the forward error correction (FEC) encoder and decoder [2].
Moreover, the power budget requirements become very stringent, and the power penalties induced by chromatic dispersion (CD) become non-negligible on the side channels of the 20-nm spacing 4-channel coarse wavelength division multiplexing (CWDM4) or even the 800-GHz spacing LAN-WDM4 configurations. Therefore, it is more likely that the intensity-modulation and direct-detection (IM/DD) technologies will first be extended to DR (500 m) or DR+ (2 km) coverage at 200 Gb/s/lane with parallel single-mode fiber (PSM) configurations. Further extending the IM/DD technologies to support FR (2 km CWDM4) and even LR applications (6 km/10 km LAN-WDM4) is exceptionally challenging considering the power budget required to compensate for the CD-induced power penalties, and it therefore remains to be explored [3]. However, most of the reported beyond-200 Gb/s IM/DD transmission results are achieved with the assistance of optical amplifiers or complex DSP algorithms, particularly for transmission distances over 10 km. Digital equalizers with several tens or even hundreds of taps, complex nonlinear equalizers based on 2nd- or 3rd-order Volterra series, or artificial neural networks (ANN) are often employed to combat the bandwidth (BW) limitation and other linear and nonlinear system impairments. Moreover, many of these demonstrations were benchmarked against high-coding-gain soft/hard-decision (SD/HD) FEC limits with large overhead (OH). Although the use of concatenated SD-FEC schemes has recently been discussed in IEEE 802.3 to improve the overall coding gain [33], it is also suggested that large OH should be avoided, as it may introduce unrealistically high complexity and latency for datacom applications. A concatenated FEC framework adopting staircase code variants was proposed by the 800G Pluggable MSA for the FR specification, which appears to be a reasonable option balancing coding gain and latency [34]. As standardization and practical development approach, it is extremely challenging to simultaneously meet the stringent requirements of high bandwidth and sufficient power budget while maintaining low cost, low complexity, and low latency. Lately, several vendors have reported and demonstrated 200 Gb/s EML modules, mainly targeting DR/DR+ and FR applications; their capability of supporting LR applications remains to be seen [35], [36], [37]. Meanwhile, it is worth mentioning a state-of-the-art 200 Gb/s transmission demonstration using a high-power DFB+R laser over 10-km single-mode fiber (SMF) with 800G-compliant DSP, achieving bit error rate (BER) performance below the 7% HD-FEC limit [31]. Yet the system-level performance limit of such lasers remained to be explored with higher-speed electronics and longer transmission distances. In this article, we extend our recent report of a 200 Gb/s O-band optical-amplifier-free IM/DD system using a directly modulated DFB+R laser [38] with additional results on higher-baud-rate signal generation and detection in the optical back-to-back configuration and with more detailed discussions. Compared with the previous DFB+R demonstration [31], we further improve the generated and received signal quality by using higher-sampling-rate test equipment, i.e., an arbitrary waveform generator (AWG, Keysight M8199A) and a real-time digital storage oscilloscope (DSO, Keysight UXR1104A), both operating at 256 GSa/s, and by carefully optimizing the impedance matching at all connections.
We show that the demonstrated system can meet the strict power budget requirements of 20-km standard single-mode fiber (SSMF) transmission without the use of any high-complexity nonlinear digital equalization, achieving BER performance below the 6.25%-overhead (OH) hard-decision (HD) FEC limit [39]. Note that the adopted FEC limit is used only for benchmarking the system performance; the practical FEC implementation for the targeted scenarios remains to be defined. To the best of our knowledge, this work is among the first experimental demonstrations of 200 Gb/s optical-amplifier-free IM/DD systems simultaneously fulfilling these practical requirements, carrying on the momentum of developing high-baud-rate IM/DD solutions towards 800 Gb/s or 1.6 Tb/s LR applications. II. EXPERIMENTAL CONFIGURATION In this section, we first introduce the key enabling component of the high-speed transmission demonstration, i.e., the high-bandwidth directly modulated DFB+R laser module; we briefly describe its features and show the measured characteristics. We then describe the experimental configuration of the transmission system in detail, enabled both by this laser module and by state-of-the-art high-speed test equipment. A. Low-Chirp Broadband DFB+R Laser In this experiment, the enabling component is a packaged 65-GHz-class DFB+R laser module with low chirp and high output power, fabricated on a time-tested, reliable InP buried heterostructure (BH) platform based on a recently reported design [30]. A photo of the laser module is shown in Fig. 1(a). In this design, two essential effects are utilized to enhance the modulation performance of the laser: the photon-photon resonance (PPR) effect [40] and the detuned-loading (DL) effect [41]. For the PPR, a passive waveguide is integrated with the DFB laser to provide optical feedback to the DFB section, forming an external cavity mode in the vicinity of the DFB mode that resonantly amplifies the modulation sideband of the DFB laser, thereby enhancing the modulation bandwidth. The DL effect is realized by forming an in-cavity etalon filter between the DFB grating and a 3% mirror on the facet of the passive waveguide, which enhances the differential gain and reduces the laser chirp; consequently, the modulation bandwidth is further improved. The P-I curve of the DFB+R laser, measured at 17 °C, is shown in Fig. 1(b). Two kink points are observed with increasing laser drive current, at which the output power drops due to mode hops. Both the DL and the PPR effects are maximized just before the kink points; thus, the laser is optimally operated close to the kink points to maximize the modulation bandwidth. In the experiment, optimal bias points were found around 8-9 mA before the kinks, as biasing the laser too close to the kinks may cause instability and lead to unwanted mode hops. Therefore, in practical transceiver configurations where automatic power control is required, it is advisable to use an external power regulator rather than regulating the power by adjusting the laser bias. It is worth noting that more than 20 mW of output power is obtained when driving the laser at the optimal operating point close to kink 2. Fig. 2 shows the experimental setup of the IM/DD transmission system. We use a 256 GSa/s AWG with 65 GHz bandwidth to generate the modulation signals.
Three modulation formats are employed for the system performance evaluation: non-return-to-zero on-off-keying (NRZ-OOK), 4-level pulse amplitude modulation (PAM4), and PAM6. Their symbols are Gray-coded from a random binary sequence of more than 1 million unrepeated bits generated using the Mersenne Twister with a shuffled seed. The symbol sequence is then filtered with a root-raised-cosine (RRC) pulse-shaping filter with roll-off factors between 0.1 and 0.2, optimized for each baud rate and modulation format. The AWG output signal is amplified to around 2 Vpp by a 65-GHz electrical amplifier (EA). An external bias-tee with 60-GHz bandwidth is used to deliver the combined bias current and modulation signal to the DFB+R laser. The operating temperature of the laser is stabilized at 17 °C with a thermoelectric controller (TEC). One should note that a cavity-enhanced DML bandwidth can also be achieved at a higher temperature, e.g., 50 °C, under semi-cool operation to reduce the TEC power consumption. However, such operation may slightly degrade the bandwidth and output power due to the thermal reduction of the material differential gain; this degradation should, in practice, be compensated for by improved driving signal quality and higher modulation depth when developing the transceiver. The DFB+R output is directly launched into a 20-km G.652 SSMF link. At the receiver, the received optical power (ROP) is adjusted by a variable optical attenuator (VOA), and the signal is detected by a 70-GHz photodiode (PD). After the PD, the signal is electrically amplified by another 65-GHz EA and captured by a 256 GSa/s real-time DSO with 110 GHz bandwidth. No optical amplifiers are used before or after the fiber transmission. For offline demodulation, the received signal is first upsampled to 8 samples per symbol and then decimated to 1 sample per symbol based on the maximum-variance method. Symbol-spaced data-aided feedforward equalizers (FFE) or decision feedback equalizers (DFE) are then employed for equalization. The equalizers are trained on the first 2^13 symbols for convergence and afterwards tested on the remaining sequence of more than 1 million symbols with blind adaptation. After equalization, hard decisions are performed on the symbols for symbol-to-bit demapping, and the BER is counted for each modulation format. Fig. 3(a) and (b) show the characterized end-to-end amplitude and phase responses of the system in the optical back-to-back (B2B) configuration, including the cascaded responses of the AWG, the DFB+R laser, the PD, the DSO, and all the electrical components in between, measured close to the two kink points, respectively. A flatter amplitude response and a smoother phase response are observed when biasing the laser close to kink 2 rather than kink 1. Moreover, the DFB+R laser delivers almost 3 dB more output power around kink 2 than around kink 1, as shown earlier in Fig. 1. Therefore, the bias point of the laser was set close to kink 2, at around 71 mA, during the transmission measurements to obtain an optimal balance between bandwidth and stability. Finally, based on the pre-calibrated amplitude and phase responses shown in Fig. 3, we perform static pre-equalization at the AWG to flatten the response up to 45 GHz.
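To illustrate the offline equalization described above, the following is a minimal sketch of a symbol-spaced data-aided FFE+DFE with LMS adaptation. The tap counts and the 2^13-symbol training length follow the text, while the adaptation step size, the channel and the signal values are illustrative assumptions.

```python
import numpy as np

def ffe_dfe_lms(rx, tx, n_ffe=13, n_dfe=3, mu=1e-3, n_train=2**13,
                levels=np.array([-3.0, -1.0, 1.0, 3.0])):
    """Symbol-spaced data-aided FFE+DFE with LMS adaptation.

    rx: received symbol-rate samples; tx: transmitted symbols, used as
    the reference for the first n_train symbols; afterwards the
    equalizer runs blind on its own hard decisions.
    """
    f = np.zeros(n_ffe)          # feedforward taps
    b = np.zeros(n_dfe)          # feedback taps
    f[n_ffe // 2] = 1.0          # center-spike initialization
    past = np.zeros(n_dfe)       # past decisions for the DFE
    out = np.empty(len(rx))
    pad = np.concatenate([np.zeros(n_ffe // 2), rx, np.zeros(n_ffe // 2)])
    for k in range(len(rx)):
        x = pad[k:k + n_ffe][::-1]           # FFE input window
        y = f @ x - b @ past                 # equalizer output
        d = tx[k] if k < n_train else levels[np.argmin(np.abs(levels - y))]
        e = y - d                            # error vs. reference/decision
        f -= mu * e * x                      # LMS tap updates
        b += mu * e * past
        past = np.roll(past, 1); past[0] = d
        out[k] = y
    return out

# Hypothetical PAM4 test: a short ISI channel plus noise, then the
# 13-tap FFE + 3-tap DFE configuration mentioned in the text.
rng = np.random.default_rng(1)
tx = rng.choice([-3.0, -1.0, 1.0, 3.0], size=2**16)
rx = np.convolve(tx, [0.15, 1.0, 0.25], mode="same") + 0.1 * rng.standard_normal(tx.size)
eq = ffe_dfe_lms(rx, tx)
```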
Fig. 4 shows the optical spectra of the signals measured after the 20-km SSMF link for the three modulation formats at different baud rates, with the laser biased at 71 mA. The central wavelength of the DFB+R laser at this bias point is around 1313.8 nm. III. EXPERIMENTAL RESULTS After characterizing the power and frequency performance of the DFB+R laser, we evaluate the system transmission performance with the three modulation formats mentioned above, i.e., NRZ-OOK, PAM4, and PAM6. For each modulation format under test, we explore the highest bit rates with achievable BER against the 6.25%-OH staircase HD-FEC limit of 4.5 × 10^-3 [39]. One should note that this FEC threshold is adopted only for performance benchmarking, and more specific FEC codes should be applied in practice. Moreover, the complexity of the digital equalizers is bounded to symbol-spaced FFE or DFE with a total number of taps below 50 in order to emulate practically implementable configurations. A. Transmission Performance of NRZ-OOK Fig. 5(a) shows the BER performance as a function of the received optical power for 150 Gbaud NRZ-OOK, the highest achievable baud rate after transmission over the 20-km SSMF. In this case, due to the significant inter-symbol interference (ISI) induced by the bandwidth limitation of the system, BER floors above the 6.25%-OH HD-FEC limit are observed when equalizing with FFE alone (up to 33 taps), as the FFE enhances the high-frequency noise. In all cases, clear performance improvements are observed when adding only 3 decision feedback taps. We show that a 13-tap FFE + 3-tap DFE is sufficient to compensate for the ISI and suppress the high-frequency noise enhancement, achieving BER performance below the KR4-FEC threshold of 2 × 10^-5 in the B2B case and below the 6.25%-OH HD-FEC threshold after 20-km SSMF transmission. Moreover, a negligible power penalty is observed with the fiber transmission, as the laser wavelength is close to the zero-dispersion point of the SSMF. We further explore the highest achievable NRZ-OOK baud rate in the optical B2B configuration only. In this case, up to 170 Gbaud is successfully generated and received, and the results are shown in Fig. 5(b). Due to the increased signal bandwidth and the consequently more severe ISI, up to a 21-tap FFE + 3-tap DFE is required to achieve BER performance below the 6.25%-OH HD-FEC threshold. We also observed signal demodulation failures at low received optical powers (below -2 dBm) when using the DFE, as shown in Fig. 5(b), resulting from severe error propagation. Fig. 5(c), (d), and (e) show selected eye diagrams for the 150 Gbaud NRZ-OOK signal in the optical B2B and 20-km SSMF cases, as well as for the 170 Gbaud NRZ-OOK signal in the optical B2B case. The eye diagrams are plotted after a 33-tap FFE + 3-tap DFE at the highest received optical power of each case. Clear eye openings can be observed in all three cases. B. Transmission Performance of PAM4 We then switch to the PAM4 signal format with the same system configuration to explore the supported data rates against the benchmarking FEC threshold. For PAM4, we achieve up to 106 Gbaud after the 20-km SSMF, corresponding to a gross data rate of 212 Gb/s. Fig. 6(a) shows the measured BER results with different equalizer configurations. Compared with NRZ-OOK, the performance gap between the FFE-only and the FFE+DFE cases becomes smaller due to the reduced signal bandwidth.
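The following sketch illustrates the Gray-coded PAM4 mapping, hard decision and BER benchmarking used throughout this section; the exact bit-to-symbol convention and the noise model are assumptions, not the experiment's implementation.

```python
import numpy as np

# Assumed Gray mapping: adjacent amplitude levels differ by one bit.
GRAY_PAM4 = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}
INV = {v: k for k, v in GRAY_PAM4.items()}
LEVELS = np.array(sorted(INV))

def pam4_mod(bits):
    """Map pairs of bits to Gray-coded PAM4 amplitudes."""
    return np.array([GRAY_PAM4[tuple(p)] for p in bits.reshape(-1, 2)])

def pam4_demod(symbols):
    """Nearest-level hard decision followed by Gray demapping to bits."""
    decisions = LEVELS[np.argmin(np.abs(symbols[:, None] - LEVELS), axis=1)]
    return np.array([INV[d] for d in decisions]).reshape(-1)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * 10**6)        # >1 million unrepeated bits
tx = pam4_mod(bits)
rx = tx + 0.35 * rng.standard_normal(tx.size)    # hypothetical noisy channel
ber = np.mean(pam4_demod(rx) != bits)
print(f"BER = {ber:.2e}, below 6.25%-OH HD-FEC limit (4.5e-3): {ber < 4.5e-3}")
```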
Thus, the impact of the DFE taps becomes less significant as the FFE-induced high-frequency noise enhancement is reduced. Nevertheless, the decision feedback taps are still necessary to achieve below-threshold performance. With a 21-tap FFE + 3-tap DFE, the BER reaches below the KP4-FEC limit of 2.2 × 10^-4 for optical B2B and below the 6.25%-OH HD-FEC threshold after 20-km SSMF transmission. Again, we explore the highest achievable symbol rate in optical B2B only, showing that 112 Gbaud PAM4 can be generated and received with a BER floor below the 6.25%-OH HD-FEC threshold. Selected eye diagrams for the three cases are shown in Fig. 6(b)-(d); they indicate that the DFB+R laser is driven within its linear modulation region, with no eye compression observed in the upper or lower amplitude levels. C. Transmission Performance of PAM6 Lastly, the transmission performance of PAM6 is evaluated with the same experimental setup. Up to 100 Gbaud and 80 Gbaud are achieved in the optical B2B and 20-km SSMF cases, corresponding to gross data rates of 250 Gb/s and 200 Gb/s, respectively. Fig. 7(a) shows the measured BER results. At 80 Gbaud, a 13-tap FFE + 3-tap DFE is needed to achieve BER performance below the KP4-FEC limit for optical B2B, while only a 13-tap FFE is required to achieve BER below the 6.25%-OH HD-FEC limit after 20-km SSMF transmission. The performance gap between the FFE-only and the FFE+DFE cases becomes even smaller, and the two almost overlap at the highest received optical power after the 20-km SSMF transmission; this is due to the reduced signal bandwidth, which decreases the high-frequency noise enhancement after the FFE. At 100 Gbaud, at least a 33-tap FFE + 3-tap DFE is required to achieve BER below the 6.25%-OH HD-FEC threshold in the optical B2B case. Fig. 7 also shows selected eye diagrams for all tested cases at their respective highest received optical powers. Again, the excellent modulation linearity and noise characteristics of the directly modulated DFB+R laser are verified, as there is no amplitude compression for the higher-level modulation formats even though no nonlinear equalizers are employed. In summary, up to 200 Gb/s IM/DD transmission over 20-km SSMF is achieved with both PAM4 and PAM6 signals. It is worth noting that we demonstrate 20-km transmission with the purpose of showing the power budget margin available to compensate for the power penalties of the side channels in CWDM4 or LAN-WDM4 configurations supporting 6 km or 10 km LR applications. The actual implementation of WDM with multiple DFB+R lasers at dedicated wavelengths will be the next-phase target. IV. CONCLUSION We experimentally demonstrate up to 200 Gb/s IM/DD transmissions over 20-km SSMF in the O-band without using any optical amplifiers or complex nonlinear digital equalizers, benchmarked against the 6.25%-OH HD-FEC limit. The key enabling component is the high-power, low-chirp, and broadband directly modulated DFB+R laser. We show that the modulation and power characteristics of the tested laser module can potentially support 200 Gb/s/lane IM/DD transmission for 6 km or 10 km LR applications, which is considered highly challenging for other types of integrated optical transmitters without optical amplification. We consider this demonstration a solid case for carrying on the momentum of IM/DD technologies towards the next-generation 800 Gb/s and 1.6 Tb/s data center applications.
Wind Turbine Power Curve Modelling with Logistic Functions Based on Quantile Regression Featured Application: The proposed method can be used in forecasting, condition monitoring and energy assessment of wind turbines. Abstract: The wind turbine power curve (WTPC) is of great significance for wind power forecasting, condition monitoring, and energy assessment. This paper proposes a novel WTPC modelling method with logistic functions based on quantile regression (QRLF). Firstly, we combine the asymmetric absolute value function from the quantile regression (QR) cost function with logistic functions (LF), so that the proposed method can describe the uncertainty of wind power through the fitting curves of different quantiles without assuming a prior distribution of wind power; three optimization algorithms are selected for comparative studies. Secondly, an adaptive outlier filtering method is developed based on QRLF, which eliminates outliers using the symmetry of the power distribution. Lastly, supervisory control and data acquisition (SCADA) data collected from wind turbines in three wind farms are used to evaluate the performance of the proposed method, and five evaluation metrics are applied for the comparative analysis. Compared with typical WTPC models, QRLF achieves better fitting performance in both deterministic and probabilistic power curve modelling. Introduction The wind turbine power curve (WTPC) is defined as the relationship between the electrical power output and the hub-height wind speed of a wind turbine [1], and it is important for energy assessment, wind power forecasting and condition monitoring [2]. As mentioned in [3], the manufacturer provides a design power curve to describe the power generation characteristics of a wind turbine. However, affected by the variability of the local environment and adjustments of the turbine's internal parameters, the design power curve is unable to meet the requirements of wind farm operators. To enhance the fitting accuracy, many published studies have used supervisory control and data acquisition (SCADA) data to establish data-driven WTPC models, which are generally divided into parametric and nonparametric methods [4]. Parametric methods are based on solving mathematical expressions, including the determination of the expressions and parameter estimation. According to [5], the linearized segmented model has been widely used in practical production. In [4,6], polynomial regression (PR) of different orders was used for WTPC modelling, and the results show that 6th-order and 9th-order PR have better fitting accuracy. In addition to PR, other parametric forms such as exponential functions have also been applied. Moreover, the power distribution at a given wind speed is approximately symmetric about the mean [33]; according to that, we propose a novel outlier filtering method that utilizes this symmetrical relationship of the power distribution. It can effectively filter both sparse outliers and stacked outliers, and it adapts the number of iterations to the number of power outliers. To further evaluate the performance of the proposed method, both deterministic and probabilistic evaluation metrics are applied to the proposed WTPC model. The results show that the QRLF-based power curve model is able to provide both accurate deterministic fitting results and an appropriate predicted CI. The rest of this paper is organized as follows. Section 2 presents the mathematical principle of QRLF. Section 3 introduces the WTPC modelling process based on QRLF. The case study is shown in Section 4. Conclusions are drawn in Section 5.
Logistic Functions Logistic functions (LF) have been successfully applied in WTPC modelling due to their good nonlinear mapping ability. Among them, the 4-parameter logistic function (4PL) has been commonly used (Eq. (1)) [9], where P(v, θ) is the predicted power output, v is the wind speed, and θ = [a, m, n, τ]. The estimated parameters θ̂ of the 4PL can be obtained by minimizing the cost function of Eq. (2), where N is the number of samples in the training set and y_i is the observed power output. Fitting curves obtained by the 4PL, however, are point-symmetric on the semi-log axis about the midpoint, and thus cannot accurately fit power curves with asymmetric features [34]. Accordingly, researchers proposed a 5-parameter logistic function (5PL) to enhance the mapping ability for asymmetric data (Eq. (3)), where θ = [a, b, c, d, g] with c, g > 0; parameters a and d determine the positions of the horizontal asymptotes of the fitting curve; g is the asymmetry factor; and the curvature of the fitting curve is jointly controlled by b, c and g. Although the 5PL has good nonlinear mapping ability in power curve modelling, it can only provide deterministic fitting results. Quantile Regression Quantile regression (QR) provides an effective method for estimating models of conditional quantile functions [29]. The uncertainty of wind power can therefore be described using the fitting curves of different conditional quantiles without imposing stringent parametric assumptions. Generally, QR can be regarded as an extension of a linear model (Eq. (4)), where P(v, β(τ)) is the predicted power output at the τth conditional quantile and β(τ) = [β_0(τ), β_1(τ), ..., β_n(τ)] is the model parameter vector at the τ-quantile, obtained by minimizing the cost function of Eq. (5) with ρ_τ(u) = τu for u ≥ 0 and ρ_τ(u) = (τ − 1)u for u < 0, τ ∈ (0, 1), (6) where ρ_τ(u) is the asymmetric absolute value (check) function [29]. However, QR-based methods have limitations in complex nonlinear curve fitting. Previous studies attempted to combine QR with neural networks and support vector machines to enhance its nonlinear mapping ability [35], but the fitting accuracy of the predicted CI was still unable to meet the requirements of wind farms. Logistic Functions Based Quantile Regression In this paper, we combine the asymmetric absolute value function from the QR cost function with the 5PL and propose a novel probabilistic logistic function for WTPC modelling (Eq. (7)), where θ(τ) = [a(τ), b(τ), c(τ), d(τ), g(τ)] is the model parameter vector at the τ-quantile, which can be estimated by minimizing the cost function of Eq. (8). Adding ρ_τ(·) to the cost function of QRLF increases the complexity of the parameter optimization. In order to obtain the optimal estimate of θ̂, two meta-heuristic optimization algorithms and a gradient-based optimization algorithm are utilized in this paper for comparative studies. Parameter Optimization Algorithms Particle swarm optimization (PSO) has been successfully applied in deterministic power curve modelling, including LF with different model parameters [8]. Considering the similarity between logistic functions and the proposed QRLF, this paper selects PSO as one of the optimization algorithms. According to [36], the whale optimization algorithm (WOA) is a meta-heuristic algorithm that can be utilized for optimizing complex nonlinear problems; during the optimization process, a spiral equation is added to enhance robustness and to prevent the results from falling into a local optimum.
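Before detailing the optimizers, a minimal sketch of the objective they minimize: a 5PL curve and the pinball (QR) cost of Eqs. (7)-(8). The 5PL parameterization below is a common form assumed here, since Eq. (3) is not reproduced; the data are hypothetical.

```python
import numpy as np

def logistic_5pl(v, theta):
    """Assumed 5PL form: P(v) = d + (a - d) / (1 + (v / c)**b)**g."""
    a, b, c, d, g = theta
    return d + (a - d) / (1.0 + (v / c) ** b) ** g

def pinball_cost(theta, v, y, tau):
    """QR cost: sum of the asymmetric absolute value of the residuals."""
    u = y - logistic_5pl(v, theta)
    return np.sum(np.where(u >= 0, tau * u, (tau - 1.0) * u))

# Hypothetical SCADA-like data (wind speed in m/s, power in kW); with
# b < 0, a is the rated-power asymptote and d the low-wind asymptote.
rng = np.random.default_rng(0)
v = rng.uniform(3, 20, 500)
y = logistic_5pl(v, (2000.0, -6.0, 9.0, 0.0, 1.0)) + 50 * rng.standard_normal(500)
print(pinball_cost((2000.0, -6.0, 9.0, 0.0, 1.0), v, y, tau=0.5))
```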
In addition, we attempt to use a gradient-based algorithm to optimize the model parameters; among these, the Adam optimization algorithm is selected for comparative studies with PSO and WOA. Particle Swarm Optimization Particle swarm optimization (PSO) solves optimization problems by defining particles and moving them around the search space according to their positions and velocities [37]. Reference [38] added an inertia weight parameter to improve the performance of PSO. In the corresponding update expressions, v_i and x_i are the velocity and position vectors of particle i; p_i is the best position vector found by particle i; g is the best position vector of the entire swarm; ω is the inertia weight; c_1 and c_2 are acceleration constants; and t is the number of iterations. After numerous iterations, the global optimal solution of the estimated model parameters is obtained. Whale Optimization Algorithm The whale optimization algorithm (WOA) is inspired by the social behavior of humpback whales and consists of search-for-prey, encircling-prey and bubble-net foraging mechanisms [36]. In its update expressions, w_i is the position vector of search agent i; w_rand is the position vector of a randomly selected search agent; w* is the best position vector; r is a random vector in [0, 1]; l and p are random numbers in [0, 1] and [−1, 1], respectively; and A is a coefficient vector that depends on the iteration count t and the maximum number of iterations t_max. If |A| ≤ 1, w_i is updated toward w* (encircling prey), whereas if |A| > 1, w_i is updated toward w_rand (search for prey). With increasing iterations, the maximum value of |A| gradually decreases from 2 to 0. In addition, WOA randomly switches the movement mode of the search agents so as to mimic the behavior of humpback whales: if p ≥ 0.5, the positions of the search agents are spirally updated (bubble-net foraging). Adam Optimization Algorithm The Adam optimization algorithm combines the advantages of AdaGrad and RMSProp and has been proven able to solve nonconvex optimization problems in the field of deep learning [39]. In its update expressions, θ is the parameter vector to be estimated; f(·) is the objective function; γ_1 and γ_2 are exponential decay rates; t is the number of iterations; m is the first-order moment vector with m_0 = 0; and u is the second-order moment vector with u_0 = 0. After initialization, θ is updated with learning rate η and a small constant ε ≈ 0, where m̂ and û are the moment vectors after bias correction; detailed information on the bias correction is given in [39]. When solving nonconvex optimization problems, falling into a local minimum is a common issue for both meta-heuristic and gradient-based methods. Therefore, this paper runs PSO, WOA and Adam five times each and selects the run with the lowest fitting error, which improves the stability of the aforementioned optimization algorithms.
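Returning to the parameter optimization, the following is a minimal PSO sketch (with inertia weight) minimizing the QRLF pinball cost. It reuses logistic_5pl, pinball_cost and the data v, y from the previous sketch; the swarm settings and parameter bounds are illustrative assumptions, not the Table 4 values.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO with inertia weight w and acceleration constants c1, c2."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # particle positions
    vel = np.zeros_like(x)                            # particle velocities
    p_best = x.copy()
    p_cost = np.array([cost(xi) for xi in x])
    g_best = p_best[p_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        vel = w * vel + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + vel, lo, hi)                  # keep within bounds
        c = np.array([cost(xi) for xi in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[p_cost.argmin()].copy()
    return g_best, p_cost.min()

# Fit the 5%-quantile curve PC_q5 for the hypothetical data above:
bounds = [(1500, 2500), (-12, -2), (5, 15), (-100, 100), (0.2, 5)]
theta_q5, cost_q5 = pso_minimize(
    lambda th: pinball_cost(th, v, y, tau=0.05), bounds)
```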
Outlier Filtering Affected by the harsh environment and various restrictive factors, power outliers are inevitable in the collected dataset. According to [32], power outliers can be divided into sparse outliers and stacked outliers, as shown in Figure 1. Sparse outliers are usually caused by random noise or by a transition period in which the turbine goes from shutdown to startup, while stacked outliers are mainly caused by wind curtailment, shutdown or data transmission failures (such as anemometer data errors). In this paper, an outlier filtering method is proposed based on QRLF, and Figure 2 shows its flow chart. In Figure 2, PC_q5, PC_q50 and PC_q95 are power curves fitted by QRLF (Eq. (7)) at the 5%, 50% and 95% quantiles, with PSO applied for parameter optimization; λ is the hyperparameter of the proposed data filtering method; d_1 is the sum of the distances between PC_q5 and PC_q50; and d_2 is the sum of the distances between PC_q50 and PC_q95. The proposed outlier filtering method consists of preliminary data processing, power curve fitting, threshold setting and outlier filtering, as detailed in the following steps.
1. Preliminary data processing. Firstly, we use the state parameters to filter the stacked outliers caused by shutdown or other abnormal operating states, and then limit the value ranges of the collected data using the design parameters of the target wind turbines; Table 1 lists the detailed filtering conditions. Secondly, we calculate the power coefficient (C_P) of each power point and filter the power points that exceed the Betz limit (16/27) [1], i.e., C_P = 2P/(ρ_0·A·v³), where P is the power output, v is the wind speed, A is the swept area of the impeller, and ρ_0 = 1.225 kg/m³ is the reference air density. This step eliminates outliers whose power output is higher than that of normal power points, e.g., the data transmission failures in Figure 1. However, limited by the types of monitored parameters, only a few kinds of outliers can be eliminated by preliminary data processing (a sketch of this Betz-limit check follows below). 2. Power curve fitting. After data preprocessing, this paper uses QRLF (optimized by PSO) with the 5%, 50% and 95% quantiles to build three power curves, and then eliminates the remaining outliers according to the relative positions of these fitting curves.
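A short sketch of the Betz-limit check of step 1, assuming the standard power coefficient definition written above; the rotor area and the samples are hypothetical.

```python
import numpy as np

RHO_0 = 1.225          # reference air density (kg/m^3)
BETZ = 16.0 / 27.0     # Betz limit

def betz_filter(power_w, wind_ms, rotor_area_m2):
    """Keep only points whose power coefficient does not exceed the Betz limit."""
    cp = 2.0 * power_w / (RHO_0 * rotor_area_m2 * wind_ms ** 3)
    return cp <= BETZ

power = np.array([1.2e6, 2.4e6, 0.4e6])   # hypothetical samples (W)
wind = np.array([9.0, 10.0, 5.0])          # m/s
mask = betz_filter(power, wind, rotor_area_m2=5000.0)
print(mask)   # False marks points above the Betz limit
```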
3. Threshold setting. We calculate the distances between the different fitting curves (d_1 and d_2 in Figure 2) and use their ratio (d_1/d_2) to quantify the relative positions of the fitting curves. For example, in Figure 3a, when the wind turbine operates normally, the distribution of power at a given wind speed is approximately symmetric about the mean and d_1/d_2 = 1.14. In Figure 3b, the stacked outliers increase the distance between PC_q5 and PC_q50, while the distance between PC_q95 and PC_q50 is basically unchanged, because the outliers with higher power output than the normal power points have already been eliminated in the preliminary data processing; in this case d_1/d_2 = 5.90, which is much larger than 1 (the ideal case). Therefore, we can determine whether there are outliers in the raw data by setting a specific threshold based on d_1/d_2: if d_1/d_2 > 1 + λ, the outlier filtering process is executed. The hyperparameter λ is a margin added to the ideal case, which determines the end condition of the filtering process. If λ is too large, it is difficult to eliminate the power outliers, but if λ is too small, some normal data points will be regarded as outliers. In this study, λ is set to 0.3 by cross-validation over multiple wind turbines; in some cases, however, λ needs to be fine-tuned according to the actual conditions before deployment. 4. Outlier filtering. On the basis of step 3, when d_1/d_2 > 1 + λ, we eliminate the power points lower than PC_q5 and then repeat steps 2 to 4 until d_1/d_2 ≤ 1 + λ. Figure 4 shows the intermediate results of the iterative process and the final results of the outlier filtering, and the relationship between d_1/d_2 and the number of iterations is shown in Figure 5.
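Putting steps 2-4 together, the following hedged sketch implements the iterative filter loop. An empirical binned quantile stands in for the PSO-fitted QRLF curves, and the summed curve distances d_1 and d_2 are evaluated on a wind-speed grid; both choices are assumptions for illustration.

```python
import numpy as np

def fit_quantile(v_data, y_data, tau, n_bins=20):
    """Stand-in for the QRLF quantile curve of Eq. (7): a binned
    empirical quantile interpolated over wind speed."""
    edges = np.linspace(v_data.min(), v_data.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    qs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (v_data >= lo) & (v_data <= hi)
        qs.append(np.quantile(y_data[sel], tau) if sel.any() else np.nan)
    qs = np.asarray(qs)
    ok = ~np.isnan(qs)
    return lambda v: np.interp(v, centers[ok], qs[ok])

def qrlf_outlier_filter(v_data, y_data, lam=0.3, max_iter=20):
    grid = np.linspace(v_data.min(), v_data.max(), 100)
    for _ in range(max_iter):
        pc5 = fit_quantile(v_data, y_data, 0.05)
        pc50 = fit_quantile(v_data, y_data, 0.50)
        pc95 = fit_quantile(v_data, y_data, 0.95)
        d1 = np.sum(pc50(grid) - pc5(grid))   # summed PC_q50-PC_q5 gap
        d2 = np.sum(pc95(grid) - pc50(grid))  # summed PC_q95-PC_q50 gap
        if d1 / d2 <= 1.0 + lam:              # symmetry restored: stop
            break
        keep = y_data >= pc5(v_data)          # drop points below PC_q5
        v_data, y_data = v_data[keep], y_data[keep]
    return v_data, y_data

# Hypothetical data with stacked low-power outliers (e.g., curtailment):
rng = np.random.default_rng(2)
vw = rng.uniform(3, 20, 2000)
pw = 2000 / (1 + (vw / 9) ** -6) + 60 * rng.standard_normal(vw.size)
pw[:200] = rng.uniform(0, 100, 200)           # stacked outliers
v_clean, p_clean = qrlf_outlier_filter(vw, pw)
```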
In Figure 4, during the iteration, PC_q5 gradually approaches PC_q95, while the position of PC_q95 is basically unchanged. After seven iterations, d_1/d_2 falls below the threshold; at this point, most outliers have been filtered while the normal power points are effectively preserved. In Figure 5, the iteration process stops automatically once d_1/d_2 is lower than the threshold. It can be inferred that the proposed method has a certain adaptive processing capability, determining the number of iterations according to the number of outliers. WTPC Modelling with the Proposed QRLF After outlier filtering, we determine the width of the CI by setting the confidence level α. Once α is confirmed, the upper and lower boundaries of the CI are obtained using QRLF with quantiles 1/2 ± α/2. If α = 0, a deterministic power curve is obtained, i.e., the width of the CI is equal to zero. Finally, the probabilistic WTPC model is established by combining the confidence intervals of different quantiles. Data Sources In this paper, SCADA data collected from three wind farms are applied to evaluate the performance of the proposed method. All wind turbines are horizontal-axis wind turbines equipped with an active yaw system and electrical variable-pitch blades. Wind farm 1 (WF1) and wind farm 2 (WF2) are on-shore wind farms located in Hunan province, China (around 25° N), and the data acquisition period is from 07/01/2018 to 09/30/2018. All raw data are recorded at 1 Hz, and 10-min averages are used in this paper according to [4]. Table 2 lists the detailed information of each wind farm. The first 70% of the measured data are used for training, and the remaining data are used for testing. Deterministic Evaluation Metrics Mean absolute percentage error (MAPE) and root mean square error (RMSE) are the most commonly used indicators for point prediction [8]. In order to better compare power curves between wind turbines with different installed capacities, this paper uses the normalized root mean square error (NRMSE) instead of RMSE. In the corresponding expressions, N is the size of the test set, P_pre is the predicted power output, P_mea is the measured power output, and C is the installed capacity of the wind turbine.
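A minimal sketch of the deterministic metrics, assuming their standard definitions (the displayed formulas are not reproduced); the sample values are hypothetical.

```python
import numpy as np

def mape(p_mea, p_pre):
    """Mean absolute percentage error (%)."""
    p_mea, p_pre = np.asarray(p_mea), np.asarray(p_pre)
    return 100.0 * np.mean(np.abs((p_mea - p_pre) / p_mea))

def nrmse(p_mea, p_pre, capacity):
    """Root mean square error normalized by installed capacity C."""
    p_mea, p_pre = np.asarray(p_mea), np.asarray(p_pre)
    return np.sqrt(np.mean((p_mea - p_pre) ** 2)) / capacity

p_mea = [500.0, 1200.0, 1800.0]   # measured power (kW)
p_pre = [520.0, 1150.0, 1850.0]   # predicted power (kW)
print(mape(p_mea, p_pre), nrmse(p_mea, p_pre, capacity=2000.0))
```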
Probabilistic Evaluation Metrics
Prediction interval coverage probability (PICP) and prediction interval normalized average width (PINAW) are important indicators for evaluating the performance of interval predictions and have been successfully applied to probabilistic wind power forecasting and electrical load forecasting [30,40]. The expressions are as follows:
PICP_α = (1/N) Σ_{i=1..N} c_i, where c_i = 1 if y_i ∈ [L_i, U_i] and c_i = 0 otherwise,
PINAW_α = (1/(N·C)) Σ_{i=1..N} (U_i − L_i),
where PICPα and PINAWα are the PICP and PINAW at confidence level α; N is the size of the test dataset; y_i is the observed power output; and L_i and U_i are the lower and upper boundaries of the ith predicted CI. According to [40,41], a good CI prediction should have both a high PICP and a low PINAW. Therefore, we combine PICPα and PINAWα into a single ratio index for relative comparisons with several state-of-the-art probabilistic WTPC methods:
NC_α = PINAW_α / PICP_α.
Although there is no single index that evaluates the overall fitting effect, according to [41], the smaller the NCα, the more appropriate the predicted CI.
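A compact Python version of these interval metrics, written directly from the definitions above (the normalization of PINAW by installed capacity follows our reconstruction and is an assumption), could be:

```python
import numpy as np

def picp(y, lower, upper):
    # Fraction of observed powers covered by their predicted interval.
    return np.mean((y >= lower) & (y <= upper))

def pinaw(lower, upper, capacity):
    # Average interval width, normalized by installed capacity.
    return np.mean(upper - lower) / capacity

def nc(y, lower, upper, capacity):
    # Ratio index: smaller NC indicates a narrower interval that
    # still covers the observations well.
    return pinaw(lower, upper, capacity) / picp(y, lower, upper)
```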
Experimental Results
This part first makes a comparative analysis of QRLF with different model parameters and optimization algorithms to determine the optimal model structure. Then, measured data containing power outliers are used to verify the effectiveness of the outlier filtering method introduced in Section 3.1. Lastly, we compare the QRLF-based WTPC model with 5PL, RVM and QRNN to further evaluate the model performance.
Results for Parameter Selection and Optimization
Before power curve fitting, we eliminate the outliers using the method introduced in Section 3.1 to reduce the interference of power outliers on model structure determination. The fitting results of three selected wind turbines in WF3 are listed in Table 3. In Table 3, NC90% is NCα at the confidence level of 0.9, and 4P-QRLF and 5P-QRLF are QRLF with four and five model parameters, respectively; the deterministic fitting curve corresponds to the median quantile (0.5). For each type of QRLF-based method, the PSO, WOA and Adam optimization algorithms are used to estimate the model parameters. As mentioned in Section 2.4.3, we repeat each optimization algorithm five times to reduce the fitting error caused by local minima. Table 4 lists the detailed control parameters of the optimization algorithms (for Adam, the exponential decay rates (γ1, γ2) = 0.9 and the learning rate = 0.0002), and Figure 6 shows the values of the QR cost function (expressed in Equation (8)) of WT02 during the training process.
Figure 6. The values of the quantile regression (QR) cost function optimized by particle swarm optimization (PSO), whale optimization algorithm (WOA) and Adam algorithms during the training process.
As shown in Figure 6, WOA has the fastest convergence speed for both 4P-QRLF and 5P-QRLF, followed by PSO. However, the Adam algorithm has difficulty converging, especially when optimizing 4P-QRLF. After 1000 iterations, the values of the QR cost function of 4P-QRLF optimized by PSO, WOA and Adam are 72.6, 72.7 and 98.3, respectively, and the corresponding values for 5P-QRLF are 60.6, 62.2 and 83. Although the results may vary across repeated experiments, they generally show the same trend. We can draw the following conclusions from the results in Table 3 and Figure 6. (1) Similar to the conclusions of previous studies on 4PL and 5PL [11], 5P-QRLF can reduce the lack-of-fit error of 4P-QRLF in asymmetric curve fitting. The results show that 5P-QRLF has better performance in both deterministic and probabilistic WTPC modelling. (2) WTPC models optimized by PSO and WOA have higher fitting accuracy than those optimized by the Adam algorithm, mainly because Adam has difficulty converging during the optimization process. (3) Although the convergence speed of PSO is lower than that of WOA, it achieves the lowest fitting error among the three optimization algorithms, especially in probabilistic WTPC modelling (listed in Table 3). In addition, similar conclusions can be obtained when the confidence level α is set to other values, such as 0.95 or 0.85. According to the experimental results, this paper adopts 5P-QRLF optimized by PSO as the optimal model structure of the proposed QRLF.
Results for Outlier Filtering
As in Section 4.3.1, three wind turbines in WF3 are selected to verify the effectiveness of the QRLF-based outlier filtering method. The scatter plots of wind speed and power output before and after outlier filtering are shown in Figure 7. Before outlier filtering, we can clearly observe the stacked outliers caused by wind curtailment for WT08 and WT10, and a few sparse outliers in the scatter plot of WT02. After outlier filtering, both sparse and stacked outliers are eliminated, while most normal data points are preserved.
Among the eliminated points, outliers caused by zero power output are filtered using the monitoring parameters of the SCADA system (the first step of the proposed outlier filtering method), and the remaining outliers are eliminated via the iterative calculations based on 5P-QRLF (steps 2 to 4). The proposed method has a certain adaptive processing capability, determining the number of iterations according to the number of outliers. As shown in Figure 7, the outlier filtering algorithm reaches its end condition after seven iterations for WT08, whereas only one iteration is needed for WT02. This feature can significantly reduce the computing cost: under the same conditions, the computing time for WT08 is 181.7 s, more than seven times that for WT02 (23.8 s).
For in-depth analysis, both GP-based and DBSCAN-based outlier filtering methods are selected for comparative studies with the proposed method. The former filters outliers by removing measurements that deviate from the expected value by more than a certain σ-dependent threshold [15], and the latter eliminates outliers by clustering [32]. Before filtering, we first apply the same data preprocessing (listed in Table 1) to all filtering methods under test to reduce the interference of other factors. Then, the model parameters of each method are fine-tuned through cross validation in order to achieve the best filtering effect. Figure 8 shows the filtering results of one of the test wind turbines under wind curtailment.
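For reference, the σ-dependent rule used by the GP-based filter can be sketched in a few lines of Python. The callables gp_mean and gp_std, standing for the posterior mean and standard deviation of a fitted Gaussian process, are assumptions here, since the implementation of [15] is not reproduced in this paper.

```python
import numpy as np

def gp_filter(v, p, gp_mean, gp_std, k=3.0):
    # Keep points within k posterior standard deviations of the
    # GP-predicted power at each wind speed; k = 3 matches the
    # 3-sigma threshold used in the comparison of Figure 8.
    keep = np.abs(p - gp_mean(v)) <= k * gp_std(v)
    return v[keep], p[keep]
```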
In Figure 8, the threshold of the GP-based filtering method (GP-Filter) is set to 3σ, and the eps and minimum sample number [32] of the DBSCAN-based filtering method (DBSCAN-Filter) are set to 1.2 and 20, respectively. The results show that GP-Filter is not able to effectively eliminate the stacked outliers caused by wind curtailment. Although DBSCAN-Filter can filter the abnormal power points, its filtering results are sensitive to the setting of the model parameters, which makes it difficult to deploy in an actual wind farm; e.g., the eps and sample number must be fine-tuned for each wind turbine (even for the same wind turbine in different time periods) to obtain correct filtering results. Compared with DBSCAN, the proposed QRLF-Filter has only one hyperparameter, λ, that needs to be adjusted, and its filtering results are more robust. Once λ is determined, the same value of λ can be used across the whole wind farm.
Results for WTPC Modelling
This part compares the proposed QRLF method (5P-QRLF optimized by PSO) with 5PL, RVM and QRNN to comprehensively evaluate the model performance. In order to avoid the impact of individual cases, we randomly select five wind turbines from each wind farm for testing. On the one hand, MAPE and NRMSE are used to evaluate the deterministic fitting accuracy of the aforementioned methods; on the other hand, we use PICP, PINAW and NC to test the performance of the predicted CI. Table 5 lists the average fitting results for each wind farm, and Figure 9 shows the detailed results of one of the test wind turbines. In Table 5, 5PL is selected as the benchmark for deterministic WTPC modelling because it has been shown to have good fitting accuracy in previous studies [9]. As mentioned in Section 2.1, 5PL can only be used for deterministic curve fitting; therefore, the PICP90%, PINAW90% and NC90% of 5PL cannot be obtained. RVM is selected because it can significantly increase the calculation speed while maintaining the fitting accuracy of GP [27]. We can draw the following conclusions from the test results listed in Table 5. (1) Both RVM and QRLF have good nonlinear mapping ability in deterministic power curve fitting, and their average MAPE and NRMSE are lower than those of the benchmark (5PL). (2) For interval predictions, QRLF can significantly reduce the width of the predicted CI while maintaining high coverage probabilities. As listed in Table 5, the proposed method has almost the highest PICP90% and the lowest PINAW90% compared with RVM and QRNN. As a result, the proposed QRLF provides the most suitable predicted CI and has the lowest NC90%. (3) From the fitting results in Table 5, there is no obvious correlation between fitting accuracy and the installed capacity of the wind turbine. Moreover, the performance rankings of the aforementioned fitting methods do not change with installed capacity. More details can be obtained from Figure 9.
In Figure 9, during the training process RVM assumes that the wind power follows the same Gaussian prior distribution, and thus the predicted CIs in different wind speed ranges have similar widths. However, the actual power output does not follow a single specific distribution, which leads to a deviation between the predicted CI and the measured power output, especially when the wind speed is around the cut-in wind speed or exceeds the rated wind speed. Although QRNN can provide interval predictions without assuming a prior distribution of wind power, the predicted CI calculated by QRLF is more suitable, especially in the wind speed range near the rated wind speed.
Discussions
At present, the proposed WTPC modelling method still has some limitations and needs to be improved. (1) The wind farm operators have not yet provided us with the detailed installation location of each wind turbine; therefore, it is difficult to avoid the impact of turbine wakes on WTPC modelling. If the training set contains a large amount of measured data under wake effects, the established power curve will be "lower" than the real power curve (without turbine wakes). (2) The fitting accuracy of QRLF is sensitive to the initial settings of PSO. On the one hand, as mentioned in Section 2.4.3, we can enhance the reliability of the fitting results by repeating the optimization algorithm, i.e., PSO, multiple times. On the other hand, for the same type of wind turbine, we can use the model parameters of an already trained wind turbine as the initial model parameters of a wind turbine to be trained, decreasing the probability of falling into a local optimum. In future work, we plan to use a full year of SCADA data for model training and then study the seasonal effects on power curve modelling. In addition, we will optimize the QRLF-based WTPC model for specific application scenarios, such as probabilistic wind power forecasting and blade icing detection.
Conclusions
This paper combines the asymmetric absolute value function from the QR cost function with the LF and proposes a new method for WTPC modelling. We use the PSO, WOA and Adam optimization algorithms, respectively, to optimize the proposed QRLF with different numbers of model parameters. The results show that 5P-QRLF optimized by PSO generally has the best fitting performance. Based on QRLF, an adaptive outlier filtering method is developed that exploits the symmetry of the power distribution. After filtering, both sparse outliers and stacked outliers are eliminated while normal power points are effectively preserved. Compared with DBSCAN-Filter, the filtering results of the proposed QRLF-Filter are more robust and easier to deploy in actual wind farms. Finally, we carry out comparative studies of QRLF and three typical WTPC modelling methods using SCADA data collected from three wind farms. The results demonstrate that QRLF can provide both accurate deterministic fitting curves and appropriate interval predictions in different wind speed ranges. Compared with RVM and QRNN, it can reduce the width of the predicted CI while maintaining high coverage probabilities.
Conflicts of Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Mason: a JavaScript web site widget for visualizing and comparing annotated features in nucleotide or protein sequences Background Sequence feature annotations (e.g., protein domain boundaries, binding sites, and secondary structure predictions) are an essential part of biological research. Annotations are widely used by scientists during research and experimental design, and are frequently the result of biological studies. A generalized and simple means of disseminating and visualizing these data via the web would be of value to the research community. Findings Mason is a web site widget designed to visualize and compare annotated features of one or more nucleotide or protein sequences. Annotated features may be of virtually any type, ranging from transcription factor binding sites or exons and introns in DNA to secondary structure or domain boundaries in proteins. Mason is simple to use and easy to integrate into web sites. Mason has a highly dynamic and configurable interface supporting multiple sets of annotations per sequence, overlapping regions, customization of the interface, and user-driven events (e.g., clicks and the text to appear in tooltips). It is written purely in JavaScript and SVG, requiring no 3rd party plugins or browser customization. Conclusions Mason is a solution for dissemination of sequence annotation data on the web. It is highly flexible, customizable, simple to use, and is designed to be easily integrated into web sites. Mason is open source and freely available at https://github.com/yeastrc/mason. Introduction Annotating regions or features within nucleotide and protein sequences (such as locations of binding sites, conserved residues, transmembrane regions, protein domain boundaries, or protein secondary structure) is a ubiquitous part of biological research. Previous annotations are an essential component of experimental design and interpretation, and new sequence annotations are often the goal of new studies-themselves becoming part of subsequent experimental design and interpretation in future studies. Given the growth of sequence annotation data and the importance of these data in research, it is becoming increasingly important to effectively disseminate and visualize these data. Of particular importance is the ability to merge separate sequence annotations into a single view that allows for the interpretation of new data in the context of known annotations. Aligning and displaying multiple sequence annotations is already a core feature of genome browsers-software designed for navigating whole genomes and capable of visualizing a very wide array of annotations for genetic loci. Prominent examples of genome browsers include the UCSC genome browser [1], GBrowse [2], the Ensembl genome browser [3], and JBrowse [4]. While these tools are well-designed, mature, and feature-rich, they are not designed to disseminate feature annotations for individual sequences outside the context of a broader genome. Other websites have developed web pages for displaying aligned feature annotations of individual protein sequences, including the UCSC Proteome Browser [5], the Protein Data Bank (PDB) [6], InterPro [7], WormBase [8], and the Saccharomyces Genome Database (SGD) [9]. While well-designed and informative, these views are optimized for the particular features they are displaying. Additionally, they are only available as parts of their respective web sites and not as a generalized distributable tool that may be integrated into other websites.
Recently, tools have started to emerge that are designed to visualize protein sequence feature annotations from any source on any web site. FeatureViewer [10], a component of BioJS [11], is a JavaScript library that uses SVG to render feature annotations. FeatureViewer is very customizable, but consequently complicated to set up. To simplify the setup, two extensions are provided: DasFeatureViewer and SimpleFeatureViewer. However, DasFeatureViewer requires the availability of a server-side Distributed Annotation System (DAS) resource, and SimpleFeatureViewer has no support for overlapping feature annotations. pViz.js [12] is a JavaScript library that uses SVG and CSS to provide a dynamic interface for visualizing feature annotations in protein sequences. pViz is simpler to set up and requires no server-side component. However, pViz has only very basic support for overlapping annotations (annotations appearing on separate tracks). Additionally, pViz utilizes pre-defined CSS classes to assign different colors to different features, which limits pViz's ability to achieve true data-driven coloring or shading schemes (such as shading based on confidence scores of the annotations), as all possible cases must be defined in advance. A comparison of the features offered by FeatureViewer, pViz and the work described here is presented in Table 1. These differences and their significance are further explored in the context of actual applications in "Findings", under "Current Implementations." Here we present Mason, a generalized web site module designed to display sequence feature annotations on any web site. Mason aligns and displays many sequence feature annotations in a single, dynamic view and is particularly well-suited for many overlapping annotations. Mason is independent of any specific source or type of annotation and is highly customizable, supporting true data-driven tooltips, click events, and coloring. It is written purely in JavaScript and SVG, requiring no 3rd party plugins. Mason is designed to be simple to use, easy to set up, and requires no server-side component. Mason is open-source and freely available at https://github.com/yeastrc/mason. Software architecture Mason is designed to be flexible and customizable with regard to the type and source of sequence annotations. All of the code that is independent of a specific type of data (such as building the viewer itself or detecting user events) is contained in the Mason core. All of the code that is specific to a particular type of data is passed into the core when the viewer is instantiated, as a set of JavaScript callback functions that adhere to a specific interface. This set of callback functions, which may collectively be considered a module, is then used by the core to provide custom behavior for a specific instance of the Mason viewer. The Mason core expects the input data to be provided at the time of instantiation and to adhere to a specific JavaScript object structure. This provides Mason with independence from any particular source of data and allows the code for processing the data to be a part of the Mason core, but requires that the source data be converted to this structure before being passed to Mason. Further customization is achieved by providing simple customization parameters to the Mason core at the time of instantiation.
These parameters include items such as row heights, border colors, or font sizes. Full implementation details, including examples and documentation of the interfaces for callback functions, the input data format, and the customization options, are provided at the Mason GitHub site at https://github.com/yeastrc/mason. Additionally, this site includes several pre-built modules for common sources of sequence annotations. These are discussed in more detail in the Results section. Installation The simplest method for installing Mason is by using one of the pre-built modules that supports the output of a specific sequence annotation program (described below) or by using the more flexible generic JavaScript Object Notation (JSON) module that may be used for data from any source. Along with the pre-built modules, the generic JSON module requires no knowledge of JavaScript to implement and requires no server-side component. It only requires that the data be formatted as JSON text using a relatively simple pre-defined schema (available at our web site). The generic JSON module supports tooltips, linking annotations to external URLs, expanding overlapping annotations, and row-level coloring. To install the generic JSON module, first include the necessary JavaScript files on the page using standard HTML. Then, create a DIV on the page with the pre-defined class ("generic-json-mason-viewer") that references the location of the data. The data will be read in from the indicated file location and a Mason viewer will be automatically created at the location of the DIV. (Note: because of web browser security models, the JSON file must be accessed via a web server, and it must be the same web server address as that of the HTML file referencing it.) Alternatively, the JSON text may be included within the page itself by leaving out the file-location attribute and assigning the "masonData" variable the text contents of the file inside of a <script> element. For full documentation, including the syntax of the JSON, examples, and download files for the generic JSON viewer, visit the Mason demo page at http://www.yeastrc.org/mason/. To apply Mason to sequence annotation data that is beyond the scope of the pre-built modules, it is necessary to write code to convert the annotations to the expected input format and to write a series of callback functions to customize the look and behavior of Mason (see "Software architecture", above). Note that a working proficiency with JavaScript is necessary for this step. Once the data is formatted and the callback functions are written, Mason may be instantiated on the page with a single JavaScript function call whose arguments specify the location on the page at which to build the viewer (a jQuery variable), the data to be displayed, the configuration parameters, and an object containing the customized callback functions that constitute a module for a given type of sequence annotation. Note that multiple Mason viewers may be added to the same page by making multiple such calls.
Detailed documentation for installation, the input data format, configuration parameters, and the callback functions is available at the Mason GitHub site at https://github.com/yeastrc/mason. Graphical user interface Basic functionality The Mason viewer graphically represents a sequence horizontally, with position 1 on the left and the final position on the right. Each set of feature annotations is represented as a separate row, where each annotation includes a starting and ending position in the sequence. These annotations are represented as blocks in that row that start and end at the specified positions (Figure 1). Mason is capable of displaying multiple rows of annotations per viewer, which allows multiple sets of annotations of the same type from separate sources to be displayed (e.g., sets of secondary structure predictions from different programs or protein coverage from multiple proteomics experiments) (Figure 2). Because sequence positions are consistent between multiple rows in the Mason viewer, the positions of the annotations may be directly compared between the different rows. Additionally, multiple Mason viewers containing data of different types may be available on the same page (e.g., one viewer for secondary structure predictions and one viewer for disordered regions) (Figure 2). The positions in the sequences between different viewers also line up and may also be directly compared. Furthermore, Mason is aware of multiple instances of the Mason viewer on the same page, and provides a visual indication of how annotations in distinct viewers line up when the user moves the mouse pointer over an annotation of interest (or taps on mobile devices) (Figure 2). Overlapping feature annotations Feature annotations may sometimes overlap in the sequence. For example, annotation A may describe positions 2-10 and annotation B may describe positions 8-19-creating overlapping annotations for positions 8-10. Visually, this will appear as a single block from positions 2-19; however, a clickable icon will appear to the left of the row label that indicates overlapping annotations are present. When clicked, that row will expand such that overlapping features are displayed in multiple rows, ensuring all distinct annotated features may be displayed (Figure 3). Tooltips and click events Text to appear in a tooltip when the user mouses-over (or taps) on any annotated feature may be defined in a callback function passed into the Mason viewer creator (see Implementation). Examples include displaying the starting and ending positions and the confidence scores associated with the annotation. Likewise, the result of clicking (or double tapping) on any of the annotated features may be similarly defined via another callback function. This may be useful as a means for users to click through to another web page with more information about the specific annotation. Colors and shading The color of the blocks in the Mason viewer may be customized via a callback function that has access to the data associated with the annotations. This enables a very broad range of capabilities regarding data visualization. Coloring schemes may range from simple (all blocks are the same color) to more sophisticated schemes that use shading to indicate annotation confidence scores or separate colors to indicate annotation properties (such as different colors for alpha-helices or beta-sheets in secondary structure predictions).
Lines noting positions of interest Mason may also display vertical lines at specific positions in the rows to note positions of interest that aid in interpretation of the data. Examples would include noting cleavage sites in DNA sequences or trypsin cut sites in protein sequences (Figure 4). The positions at which to draw lines are passed into the Mason creator, the colors of the lines are defined via callback functions, and the visibility of the lines may be toggled via a simple function call to the Mason viewer.
Figure 3 An illustration of how Mason handles overlapping feature annotations. The small box with the plus sign to the left of the row label indicates that overlapping feature annotations are present in that row. Clicking that box will expand the row such that all distinct feature annotations are displayed. In this case, a single Mason viewer with multiple rows is shown. The user has clicked the box next to "Run: 471" and the row expanded to show all distinct annotations for that row in shades of magenta. The mouse pointer has been placed over a distinct annotation, resulting in the display of vertical lines showing the boundaries of that annotation across all rows and a tooltip describing that annotation.
Summary bars Mason may optionally show a summary bar on the right-hand side of the rows to visually indicate some type of summary statistic associated with the entire row of sequence annotations. Examples include showing protein quantitation data or protein sequence coverage for a given mass spectrometry run. Multiple rows containing summary bars effectively provide a horizontal bar graph for comparing summary statistics between rows. Custom colors, shading, tooltips, and click handlers may be defined for the summary bars using callback functions. Current implementations The Mason viewer has been integrated into two upcoming (not yet published) large-scale proteomics data resources (Figure 5). In the first case (Figure 5A), Mason is used to visualize the relative abundance of a protein and the relative abundance of the individual peptides used to identify that protein across many different conditions. This implementation of Mason makes use of the summary bar feature (to the right of the rows) to show overall relative protein abundance, makes use of data-driven coloring and shading to provide an indicator for relative abundances of the peptides, and makes use of Mason's ability to disambiguate overlapping annotations to show relative abundances of distinct peptides that were identified and to provide links for viewing the underlying mass spectrometry data collected for each peptide. FeatureViewer and pViz.js would not be suitable solutions for this visualization, as the row-level summaries and dynamic disambiguation of overlapping annotations are essential aspects of this view of the data. Additionally, coloring and shading that describe underlying values in the data (such as quality or quantity of identifications) would be difficult to accomplish by predefining classes of colors using CSS, which is the default coloring model used by pViz.js. In the second case (Figure 5B), Mason is again used to visualize the coverage of a protein in many different conditions-but in this case, those conditions are separate proteomics experiments where the protein was identified.
The top viewer in the figure visualizes the coverage of this protein in many different runs, where the shading of the red blocks indicates the strength of the identification, and the row-level summary bars to the right indicate the overall protein coverage in that run. In this type of data, identifying overlapping peptides for a protein in an experiment is very common, so the ability to handle many overlapping annotations for the protein is essential to effectively disseminating the data. Attempting to show all disambiguated peptides from all runs at once in multiple tracks would result in a much more cluttered and non-informative view. To provide context, the remaining viewers on the page display annotations for this protein from other sources, and make use of Mason's ability to communicate between instances of the viewer to show precisely how annotations in one viewer map to the others.
Figure 4 An illustration of lines noting positions of interest and how an options menu can be used with the Mason viewer. In this example, a single Mason viewer depicting the peptide coverage for a single protein from multiple mass spectrometry proteomics experiments is shown. In the experiments, the proteins have been digested with trypsin and the green vertical lines represent positions in the protein's sequence that contain the trypsin cut motif. The expectation is that all peptides should be terminated on both ends by a trypsin cut site. The presence of the green lines is controlled via an options menu, which is not itself a part of Mason, but can interact with the Mason viewer via JavaScript function calls. In this case, checking the "Show Trypsin cut points" checkbox toggles the green vertical lines on and off by calling functions in the Mason core.
Pre-built examples Several pre-built code examples are available for displaying data from common sources of sequence annotations. Working demos and downloads are available at http://www.yeastrc.org/mason/. Generic JSON module The Mason site includes code for reading and displaying data formatted as JSON adhering to a simplified schema (available on the web site). This module is suitable for providing a simple view of sequence annotation data from nearly any source, especially data that has many overlapping annotations. This module supports overlapping features, tooltips, links to external URLs, and row-level coloring. Transmembrane and signal peptides The Mason site includes code for displaying transmembrane and signal peptide predictions from the Philius prediction server [14]. The code accepts a protein sequence directly, submits this to the Philius prediction server, and displays the results in the newly-built Mason viewer. Only the protein sequence is required, and there is no need to install or run Philius on the part of the web site operator. Secondary structure The Mason site includes code for displaying predicted protein secondary structure as generated by the psipred program [15]. This is accomplished by pointing the code to the URL for a .ss2 file (PSIPRED VFORMAT) that is generated by the psipred program-the code for accessing the data and converting it to JSON is provided. Consequently, psipred must be run in advance and the resulting file made available on a web server. Coiled-coil regions The Mason site includes code for displaying predicted coiled-coil regions generated by the Paircoil2 program [16].
This is accomplished by pointing the code to the URL for a .pc2 file that is generated by the Paircoil2 program-the code for accessing the data and converting it to JSON is provided. Consequently, Paircoil2 must be run in advance and the resulting file made available on a web server. This module also includes a custom options menu that allows the user to filter the data based on the P-score generated by Paircoil2. Disordered regions The Mason site includes code for displaying predicted disordered regions generated by the DISOPRED program [17]. This is accomplished by pointing the code to the URL for a .diso file that is generated by the DISOPRED program-the code for accessing the data and converting it to JSON is provided. Consequently, DISOPRED must be run in advance and the resulting file made available on a web server.
Figure 5 Screenshots from two implementations of the Mason viewer on data-driven web applications. (A) Mason is used to show protein sequence coverage, relative protein abundance, and relative peptide abundance across many conditions. The top viewer compares the data across multiple developmental stages of the model organism C. elegans, and the bottom viewer compares the data across multiple mass fractions. The summary bar to the right of the viewer indicates overall relative protein abundance (as compared between conditions in the respective viewers). The protein and peptide abundance is shown using shades of red, where black represents the least abundance and bright red represents the most. The rows with red boxes to the left of the labels may be expanded to disambiguate the observed peptides. Each disambiguated peptide may be clicked on to view the underlying mass spectrometry data. (B) Mason is used to show protein sequence coverage (viewer with red bars) among many mass spectrometry runs. The bars to the right represent total sequence coverage for the protein in the respective run. Shades of red in the rows indicate the quality of scores the peptide identification received, and the shade of red in the row-level summary bar serves as a secondary indication of protein sequence coverage. Each row with the red box to the left of the label may be expanded to disambiguate overlapping peptides, and each peptide may be moused-over to view summary information and clicked on to view underlying mass spectrometry data. The other viewers (purple, green, cyan, and black) show annotations for this protein from other sources.
Conclusions The Mason viewer is a generalized, flexible, and portable web site module capable of displaying DNA or protein sequence annotations for single sequences. Mason is designed to be integrated with existing 3rd party web applications, though some familiarity with JavaScript is required. Although Mason has a highly dynamic interface, it uses only standard web technologies, requires no 3rd party web browser plugins, and is designed to be simple to use and intuitive for end users. Mason is open-source and is freely available at the Mason GitHub site at https://github.com/yeastrc/mason. The site includes extensive documentation and examples, including pre-built code for displaying sequence annotations from several existing sources.
Availability and requirements
Project name: Mason
Project home page: https://github.com/yeastrc/mason
Operating system(s): Platform independent
Programming language: JavaScript, HTML, SVG
Other requirements: None
License: Apache 2.0
Any restrictions to use by non-academics: None
Loss of AMP-Activated Protein Kinase Induces Mitochondrial Dysfunction and Proinflammatory Response in Unstimulated Abcd1-Knockout Mice Mixed Glial Cells X-linked adrenoleukodystrophy (X-ALD) is caused by mutations and/or deletions in the ABCD1 gene. Similar mutations/deletions can give rise to variable phenotypes ranging from mild adrenomyeloneuropathy (AMN) to inflammatory fatal cerebral adrenoleukodystrophy (ALD) via unknown mechanisms. We recently reported the loss of the anti-inflammatory protein adenosine monophosphate activated protein kinase (AMPKα1) exclusively in ALD patient-derived cells. The X-ALD mouse model (Abcd1-knockout (KO) mice) mimics the human AMN phenotype and does not develop the cerebral inflammation characteristic of human ALD. In this study we document that AMPKα1 levels in vivo (in brain cortex and spinal cord) and in vitro in Abcd1-KO mixed glial cells are similar to those of wild type mice. Deletion of AMPKα1 in the mixed glial cells of Abcd1-KO mice induced spontaneous mitochondrial dysfunction (lower oxygen consumption rate and ATP levels). Mitochondrial dysfunction in ALD patient-derived cells and in AMPKα1-deleted Abcd1-KO mice mixed glial cells was accompanied by lower levels of mitochondrial complex (I-V) subunits. More importantly, AMPKα1 deletion induced proinflammatory inducible nitric oxide synthase levels in the unstimulated Abcd1-KO mice mixed glial cells. Taken together, this study provides novel direct evidence for a causal role for AMPK loss in the development of mitochondrial dysfunction and proinflammatory response in X-ALD. Introduction X-linked adrenoleukodystrophy (X-ALD) is an inherited neuroinflammatory demyelinating peroxisomal disorder [1]. The underlying defect is a mutation/deletion in the ABCD1 gene that encodes the peroxisomal integral membrane transporter adrenoleukodystrophy protein (ALDP) [2]. ALDP is responsible for importing very long chain fatty acids (VLCFA; C > 22:0) into the peroxisomes for degradation, a function exclusive to peroxisomes. As a result, VLCFA accumulate in the tissues and body fluids of X-ALD patients, a biochemical hallmark of the disease [3]. The disease has two major phenotypes: severe inflammatory and often fatal cerebral adrenoleukodystrophy (ALD), and mild, relatively benign adrenomyeloneuropathy (AMN) [1,3]. ALD patients develop spontaneous neuroinflammatory responses and demyelination, which result in death within 2-5 years from the onset of symptoms [1]. AMN patients, on the other hand, live into adulthood with mild axonopathy [1]. However, about 30% of AMN patients progress spontaneously to the fatal ALD phenotype in adulthood [1]. The mechanism(s) for the differential phenotypes (AMN or ALD) or the progression of AMN to the ALD phenotype remain unknown [1,4]. Intriguingly, the ABCD1 mutation and VLCFA levels are common to the two major phenotypes [4]. In fact, both phenotypes are detectable within a family with similar ABCD1 mutations; thus, there is no phenotype-genotype correlation [1]. An animal model of X-ALD, a classical knockout of Abcd1 (Abcd1-KO), accumulates VLCFA in tissues and body fluids but fails to develop the neuroinflammatory response [5][6][7]. In the late stage of life (>15 months), the mice develop axonopathy in the spinal cord, and thus the mouse model at best resembles the human AMN phenotype [8]. We recently documented the first evidence of loss of a metabolic gene, AMP-activated protein kinase (AMPK), in ALD but not AMN patient-derived cells [9].
In vivo and in vitro studies have shown that AMPK signaling and proinflammatory responses are mutually coupled via negative feedback [10][11][12]. Activation of AMPK suppresses proinflammatory mediators [13,14], while stimulation with inflammatory cytokines promotes dephosphorylation and hence inhibition of AMPK [14]. ALD patient-derived cells lacking AMPK demonstrated increased proinflammatory gene expression [9]. Mitochondrial dysfunction (measured as oxygen consumption rate (OCR)) was also observed in ALD patient-derived cells [9]. This was not surprising considering that AMPK is the principal upstream regulator of mitochondrial function, and loss of AMPK induces spontaneous mitochondrial dysfunction and a proinflammatory response both in vivo and in vitro [12,15]. A direct causal role for AMPKα1 in the X-ALD neuroinflammatory response, however, remained to be investigated. The status of AMPKα1 in the Abcd1-KO mouse central nervous system is unknown. Since Abcd1-KO mice mimic the human AMN phenotype [5][6][7] and do not develop the cerebral inflammation characteristic of human ALD [5][6][7][8], in this study we investigated the status of AMPKα1 in the brains and spinal cords of Abcd1-KO mice. Furthermore, the expression and levels of AMPKα1 and mitochondrial complex subunits, and mitochondrial OCR, were compared between wild type (WT) and Abcd1-KO mice mixed glial cells. To investigate a causal role for AMPKα1 in the development of the neuroinflammatory response in X-ALD, we used a lentiviral vector carrying mouse AMPKα1-shRNA to delete AMPKα1 in Abcd1-KO mouse primary mixed glial cells. Mitochondrial function (OCR) and induction of the proinflammatory response were compared between WT, Abcd1-KO, and AMPKα1-deleted Abcd1-KO mice mixed glial cells. Cells derived from ALD (ALD1, GM04934; ALD2, GM04904) and AMN (AMN1, GM07531; AMN2, GM17819) patients were obtained from the National Institute of General Medical Sciences Human Genetic Cell Repository (https://catalog.coriell.org/) and cultured as described previously [9]. Primary Mixed Glial Cells. Mouse primary mixed glial cells were prepared from 2-day-old WT and Abcd1-KO pups, as described previously [16]. Abcd1-KO mixed glial cells were cultured in DMEM with 10% fetal bovine serum, and viral particles (AMPKα1 and control) were added at a multiplicity of infection of 2.5. Transduced cells were selected using puromycin (3.0 μg/mL). AMPKα1 silencing was verified by western blot and mRNA quantification. Western Blot Analysis. Samples for western blot were prepared and run as described previously [9]. The membranes were probed with AMPKα1, PGC-1α (Santa Cruz Biotechnology, Dallas, TX), or MitoProfile Total OXPHOS Rodent WB Antibody Cocktail. The membranes were detected by autoradiography using ECL-plus. RNA Extraction and cDNA Synthesis. Following total RNA extraction using TRIzol (Invitrogen), per the manufacturer's protocol, single-stranded cDNA was synthesized from 5 μg of total RNA using the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA). Real-Time Polymerase Chain Reaction. Real-time polymerase chain reaction (PCR) was conducted using Bio-Rad's CFX96 Real-Time PCR Detection System. The primer sets were synthesized by Integrated DNA Technologies (Coralville, IA). IQ SYBR Green Supermix was purchased from Bio-Rad. The normalized expression of each target gene with respect to L27 was computed for all samples. Measurement of Mitochondrial Oxygen Consumption and Extracellular Flux.
Oxygen consumption in intact adherent WT and Abcd1-KO mixed glial cells was measured using a Seahorse Bioscience XFe96 Extracellular Flux Analyzer (North Billerica, MA), as previously described [9]. Mixed glial cells were seeded at 1.5 × 10^4 cells/well in an XFe96-well cell culture microplate (Seahorse Bioscience) in 200 μL of DMEM and cultured at 37°C in a 5% CO2 atmosphere. The growth medium was replaced with 175 μL of bicarbonate-free DMEM, and the cells were incubated for 1 hour for degassing before starting the assay procedure. Basal and carbonyl cyanide p-trifluoromethoxyphenylhydrazone (FCCP)-linked OCR were measured as described by us previously [9]. Determination of ATP Levels. Primary Abcd1-KO mice mixed glial cells (2 × 10^4 cells/well) were seeded in a 96-well cell culture plate in complete medium and deleted for AMPKα1 as described above. Cells were lysed in 20 μL lysis buffer, and 10 μL of lysate was used to measure ATP levels using an ATP determination kit (Molecular Probes, Invitrogen). 1 μL of the cell lysate was used for normalization of protein levels. Statistical Analysis. Using the Student-Newman-Keuls test and analysis of variance, p values were determined for the respective experiments from three identical experiments using GraphPad software (GraphPad Software Inc., San Diego, CA). The criterion for statistical significance was p < 0.05. AMPKα1 Levels Are Similar between WT and Abcd1-KO Mice Brains and Mixed Glial Cells. The mechanism(s) of induction of the neuroinflammatory response in X-ALD remains unknown [1,4]. More intriguing is the observation that the inflammatory response and demyelination have no genotype correlation [1,4]. Individuals with the same ABCD1 mutation (even monozygotic twins) may develop strikingly opposite phenotypes: fatal neuroinflammatory ALD (and hence a shortened life span) or the relatively benign AMN phenotype, which exhibits only mild spinal cord pathology late in adulthood [1,3]. We recently documented the first evidence of differential loss of a metabolic and anti-inflammatory protein, AMPKα1, in patient-derived fibroblasts and lymphocytes of the severe (ALD) phenotype of X-ALD [9]. AMPKα1 levels between healthy control and AMN patient-derived cells were largely unchanged [9]. Our laboratory previously reported that loss of AMPKα1 is associated with more severe neuroinflammation and neurodegeneration in an animal model of multiple sclerosis [17]. Abcd1-KO mice fail to develop the neuroinflammation and demyelination characteristic of human ALD and only mimic the mild human AMN phenotype [5][6][7][8]. It was, therefore, of interest to investigate the status of AMPKα1 in Abcd1-KO mice brains and spinal cords. In line with our recent report in human AMN patient-derived cells (compared to healthy controls) [9], there was no significant difference in AMPKα1 protein and mRNA levels in the brain cortexes or spinal cords of age-matched (3-month-old) Abcd1-KO mice compared with WT mice (Figure 1(a)). Also, AMPKα1 levels in the primary mixed glial cells of Abcd1-KO mice were not decreased and were, in fact, higher than those in WT mice mixed glial cells (Figure 1(b)). In both cell and animal models with AMPK deletion, there is spontaneous mitochondrial dysfunction and a proinflammatory response [10,12,18,19].
To investigate whether AMPKα1 loss in Abcd1-KO mice mixed glial cells can induce mitochondrial dysfunction and inflammation reminiscent of the ALD phenotype, Abcd1-KO mixed glial cells were silenced for AMPKα1 using lentiviral shRNA (Figure 1(b)). Following transduction and selection of transduced cells with puromycin, lentiviral-mediated silencing of AMPKα1 in Abcd1-KO mice mixed glial cells was highly successful (Figure 1(b)). Western blot analysis showed complete loss of AMPKα1 protein in Abcd1-KO mixed glial cells silenced for AMPKα1 (Figure 1(b)(i)). Real-time PCR with primers against mouse AMPKα1 showed significant (p < 0.001) downregulation of AMPKα1 expression (Figure 1(b)(ii)). Silencing with a nontargeting scrambled sequence (Scr) had no effect on the levels of AMPKα1, suggesting that the lentiviral vector-mediated knockdown in Abcd1-KO mixed glial cells was specific for AMPKα1. Mitochondrial Complex (I-V) Gene Expression and Levels Are Reduced in ALD Patient-Derived Cells and in Abcd1-KO Mixed Glial Cells Deleted for AMPKα1. There is increasing evidence that mitochondrial dysfunction is important in the pathophysiology of X-ALD [20,21]; however, the exact upstream mechanism(s) remain hypothetical at this point [22]. AMPK is the principal upstream regulator of mitochondrial biogenesis and function [15]. We documented the first report that AMPKα1 levels and mitochondrial function (OCR), a measure of oxidative phosphorylation (OXPHOS), were decreased in ALD patient-derived cells [9]. The OXPHOS system is composed of five complexes, four of which, complexes I-IV, cooperate to generate a proton gradient across the mitochondrial inner membrane. Complex V generates ATP, the universal energy currency, coupled to the proton flow [23]. Quantitative PCR for the individual subunits comprising complexes I-V (Figure 2(a)) and an antibody cocktail against all five complexes (Figure 2(b)) showed that mitochondrial complex subunit expression and levels were also significantly decreased in ALD patient-derived cells when compared to AMN patient-derived cells. Two subunits of complex I (NDUFS8 and NDUFB1) and one of complex II (SDHA) were significantly (p < 0.05) reduced in AMN fibroblasts when compared to healthy control patient-derived fibroblasts, while the remaining subunit expressions were unchanged between AMN and healthy control patient-derived cells (Figure 2(a)). Similar to human AMN cells (compared to healthy controls) (Figure 2(a)), mitochondrial complex subunit expression and levels in Abcd1-KO mice mixed glial cells were comparable to control WT mice mixed glial cells (Figure 3(a)). This could be expected, since AMPKα1 levels were unchanged between Abcd1-KO mice central nervous systems and mixed glial cells (Figure 1(b)). Moreover, Abcd1-KO mice mimic the human AMN phenotype [8]. This provided us with an opportunity to test whether deletion of AMPKα1 in Abcd1-KO mixed glial cells could mimic changes in mitochondrial complex expression similar to those found in ALD patient-derived cells (Figure 2). Abcd1-KO mice mixed glial cells deleted for AMPKα1 indeed had significantly reduced expression (Figure 3(a)) and protein levels (Figure 3(b)) of mitochondrial complex subunits. Although their individual roles in X-ALD remain to be investigated, underexpression of multiple mitochondrial subunits has been associated with neurodegenerative conditions [24]. For instance, Leigh's syndrome is a severe neurodegenerative disease associated with mutations and underexpression of mitochondrial complexes [25][26][27][28].
Mutations and underexpression of multiple complex I subunits (including NDUFS1 and NDUFS8, observed here in ALD fibroblasts/AMPK α1-deleted Abcd1-KO cells) are associated with severe Leigh's syndrome [25][26][27]. Homozygous mutations in the complex II subunit SDHA gene are also associated with Leigh's syndrome [28]. Complex II is critical to mitochondrial function since it lies at the intersection of the OXPHOS and Krebs cycle pathways. SDHA levels were decreased in ALD patient-derived fibroblasts (Figure 2) as well as in AMPK α1-deleted Abcd1-KO mixed glial cells (Figure 3). Despite reduced SDHA expression (Figure 2), AMN patient-derived cells had mitochondrial OCR and ATP levels comparable to healthy controls [9]. This may be attributed to the fact that expression of the complex V subunit (ATP synthase subunit) was not significantly altered between AMN and healthy controls (Figure 2). Complex V subunit expression was decreased in ALD patient-derived cells (Figure 2) and in AMPK α1-deleted Abcd1-KO mouse mixed glial cells (Figure 3). Deficiency of the complex V subunit ATP5A1 is associated with fatal neonatal encephalopathy [29]. AMPK affects mitochondrial biogenesis principally by acting through peroxisome proliferator-activated receptor gamma coactivator 1α (PGC-1α) [15,30,31]. AMPK regulates both PGC-1α activation (phosphorylation) and its transcription [30,31]. PGC-1α in turn drives mitochondrial biogenesis through multiple mitochondrial transcription factors [30,31]. PGC-1α levels were similar between WT and Abcd1-KO mouse mixed glial cells (Figure 4(a)). Knockdown of AMPK α1 significantly decreased (p < 0.001) PGC-1α expression and levels in Abcd1-KO mouse mixed glial cells (Figure 4(a)). Indeed, PGC-1α levels were also decreased in vivo in AMPK-KO mice [15]. Figure 3: Mitochondrial complex subunit expression and levels in wild type, Abcd1-knockout (KO), and adenosine monophosphate activated protein kinase α1- (AMPK α1-) deleted Abcd1-KO primary mixed glial cells. Wild type (WT) and Abcd1-KO primary mixed glial cells were cultured as described in Section 2. Abcd1-KO mixed glial cells were silenced for scrambled control (Scr) or AMPK α1 as described in Section 2. mRNA (a) and protein (b) levels of complex subunits were significantly reduced in Abcd1-KO mixed glial cells deleted for AMPK α1. (c) Densitometric ratio of mitochondrial subunit levels versus actin western blots. Results represent the mean ± SD of triplicates from two different experiments. *p < 0.05; **p < 0.01; ***p < 0.001. COM: complex; NS: nonsignificant. Whether the underexpression of mitochondrial complex subunits in ALD patient fibroblasts is due to loss of AMPK, mutations in genes encoding the mitochondrial subunits, or a combination of both remains to be investigated. Mitochondrial OCR Is Comparable between WT and Abcd1-KO Mixed Glial Cells and Is Significantly Reduced in Abcd1-KO Mixed Glial Cells Deleted for AMPK α1. We recently documented that mitochondrial OCR, a measure of mitochondrial OXPHOS, was significantly reduced in ALD (but not AMN) patient-derived cells [9]. These ALD patient-derived cells also had a loss of AMPK and decreased levels of mitochondrial complex (I-V) subunit genes and proteins (Figures 2(a)-2(b)).
Having demonstrated that AMPK α1 levels are similar between WT and Abcd1-KO mouse mixed glial cells and that deletion of AMPK α1 significantly reduces mitochondrial complex (I-V) subunit expression and protein levels (Figure 3), we next characterized the bioenergetics of these cells using a Seahorse extracellular flux (XFe96, Seahorse Bioscience) analyzer. In this system mitochondrial respiration (OCR) is used to measure OXPHOS in intact cells [32]. WT and Abcd1-KO mouse mixed glial cells (1.5 × 10⁴/well) were plated in an XFe96-well microplate (Seahorse Bioscience). Abcd1-KO cells were deleted for AMPK α1 within the 96-well plate and OCR was measured 48 hours after AMPK α1 deletion (Figure 4(b)). Basal and FCCP-uncoupled maximal OCR (a measure of mitochondrial integrity [9]) were similar in WT and Abcd1-KO mouse mixed glial cells (Figures 4(b) and 4(c)). Scrambled (Scr) silencing did not alter the basal and FCCP-uncoupled OCR values between control and Scr-silenced Abcd1-KO mixed glial cells (Figures 4(b) and 4(c)). On the other hand, deletion of AMPK α1 significantly (p < 0.001) reduced both the basal and FCCP-uncoupled OCR levels in Abcd1-KO mixed glial cells (Figures 4(b) and 4(c)). This provides evidence of the direct causal role for AMPK α1 in mitochondrial dysfunction in ALD that we recently postulated based on ALD patient-derived cells [9]. Mitochondrial OXPHOS generates the ATP energy for the cell [33]. Downregulation of complex V may affect the synthesis of ATP and ATP-dependent processes [33]. Therefore, the decreased expression of complex V subunits in ALD patient-derived cells observed in this study (Figure 2) supports our recent report of decreased ATP levels in the same patient-derived cells [9]. Abcd1-KO mice mimic the human AMN phenotype, and primary mixed glial cells from Abcd1-KO mice (compared with WT mixed glial cells) did not exhibit a decrease in ATP levels (Figure 4(d)), similar to our observation in human AMN patient-derived cells [9]. However, deletion of AMPK α1 led to a significant decrease (p < 0.001) in ATP levels in Abcd1-KO mouse mixed glial cells (Figure 4(d)), in line with our observation of decreased complex V levels (Figure 3) and mitochondrial OCR (Figures 4(b) and 4(c)) in these cells. These findings, together with our recent report of differential loss of AMPK α1 in ALD [9], indicate a detrimental role for loss of AMPK α1 in inducing mitochondrial dysfunction in severe ALD pathology. Deletion of AMPK α1 Induced a Spontaneous Proinflammatory Response in Unstimulated Abcd1-KO Mouse Mixed Glial Cells. The mechanism of the neuroinflammatory response in X-ALD remains unknown. In vitro loss of the peroxisomal transporters Abcd2 and Abcd3, which have significant homology to the Abcd1 gene, has been shown to induce an inflammatory response in central nervous system cells [4]. However, levels of Abcd2 and Abcd3 are unaltered in the central nervous system of X-ALD patients and are therefore unlikely to be the disease-modifying genes for initiation and progression of X-ALD [34]. Since AMPK α1 levels were reduced only in ALD patient-derived cells, which presented with increased expression of proinflammatory genes [9], this suggests a causal association between AMPK α1 loss and induction of the proinflammatory response in the ALD phenotype. This is expected, since AMPK α1 is crucial for the anti-inflammatory skewing of cells [10,19] and is involved in inhibiting lipid-induced inflammation [18].
Furthermore, AMPK-knockout animal models consistently demonstrate increased proinflammatory skewing [35]. To provide direct evidence for this causal association between AMPK α1 loss and development of the proinflammatory response in X-ALD, we investigated the expression and levels of inducible nitric oxide synthase (iNOS) in Abcd1-KO mixed glial cells deleted for AMPK α1 (Figure 5). In vivo [4] and in vitro [16] evidence implicates iNOS in X-ALD neuropathology. Basal iNOS levels were undetectable in WT and Abcd1-KO mixed glial cells (Figure 5). Silencing with control (Scr) lentiviral particles did not induce iNOS (Figure 5). However, AMPK α1 deletion significantly (p < 0.001) induced iNOS protein levels (Figure 5(a)) and gene expression (Figure 5(b)) in unstimulated Abcd1-KO mixed glial cells. Conclusions. In conclusion, these findings represent the first direct evidence of a link between loss of AMPK α1 and initiation/augmentation of mitochondrial dysfunction and the neuroinflammatory response in X-ALD, especially in mixed glial cells. The central nervous system (brain and spinal cord) is the target organ for the development of X-ALD therapies. AMPK α1, therefore, provides a novel target for the development of therapeutic strategies aimed at ameliorating the initiation and/or progression of the neuroinflammatory response in X-ALD.
4,614.4
2015-03-10T00:00:00.000
[ "Biology" ]
Bis(N-sec-butyl-N-n-propyldithiocarbamato-κ2S,S′)(1,10-phenanthroline-κ2N,N′)zinc(II) Two independent but very similar molecules comprise the asymmetric unit of the title compound, [Zn(C8H16NS2)2(C12H8N2)]. The N2S4 donor set about Zn is defined by two symmetrically chelating dithiocarbamate ligands and a 1,10-phenanthroline ligand. Distortions from the ideal octahedral coordination geometry arise from the restricted bite angles of the ligands. The main feature of the crystal packing is the formation of tetrameric supramolecular aggregates mediated by C—H⋯S interactions. Disorder was found in each of the sec-butyl groups. This was resolved over two positions in each case, with the major components of the disorder having site occupancies in the range 0.551 (6)–0.725 (5). 302 restraints; H-atom parameters constrained; Δρmax = 1.32 e Å⁻³; Δρmin = −0.59 e Å⁻³. Table 1: Hydrogen-bond geometry (Å, °). Symmetry codes: (i) x − 1, y, z; (ii) −x + 2, −y + 1, −z + 1. We thank UKM (UKM-GUP-NBT-08-27-111 and UKM-ST-06-FRGS0092-2010), UPM and the University of Malaya for supporting this study. The presence of C—H⋯S interactions, Table 1, leads to the formation of tetrameric supramolecular aggregates in the crystal structure of (I). The S8 atom is pivotal in these. The molecules comprising the asymmetric unit are connected by the C—H⋯S8 interaction involving the C28—H28a atom. The dimeric aggregates thus formed are linked via the C—H⋯S8 interaction involving the C50—H50 atom, Fig. 3. Globally, the molecules pack into layers parallel to (0 1 1) and inter-digitate, Fig. 4. The solution was kept at 273 K for an hour. Zinc chloride (10 mmol) dissolved in ethanol (50 ml) was added to give a white precipitate. This was collected and redissolved in chloroform (50 ml). The solution was mixed with a solution of 1,10-phenanthroline (10 mmol) dissolved in ethanol (10 ml). The yellow solution was set aside for the growth of crystals. Refinement. H atoms were placed in calculated positions (C—H 0.95 to 1.00 Å) and were included in the refinement in the riding-model approximation, with Uiso(H) set to 1.2 to 1.5Ueq(C). All sec-butyl groups were found to be disordered. For each group, the site occupancies were refined. The major components had site occupancy factors of 0.551 (6) for the C5-containing group, 0.725 (5) for the C13 group, 0.587 (6) for the C33 group, and 0.557 (6) for the C41 group. For both the ordered n-propyl and disordered sec-butyl groups, the 1,2-related C—C distances were tightly restrained to 1.500±0.005 Å and the 1,3-related ones to 2.51±0.01 Å. Within the S2C—NC2 fragment, the S2C—N distances were restrained to 1.35±0.01 Å and the N—Calkyl distances to 1.45±0.01 Å.
Finally, all anisotropic displacement parameters were refined individually, except for the C33' and C36' atoms, which were refined isotropically. Additionally, the anisotropic displacement parameters for the carbon atoms of both the ordered n-propyl and disordered sec-butyl groups were tightly restrained to be nearly isotropic.
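Incidentally, the C—H⋯S contact geometry reported in Table 1 can be recomputed from atomic positions; the following minimal Python sketch assumes orthogonalized Cartesian coordinates in Å, and the coordinates shown are hypothetical placeholders rather than the refined positions.

```python
# Sketch: hydrogen-bond geometry (H...A distance and D-H...A angle) from
# Cartesian coordinates in Å. Coordinates below are hypothetical placeholders.
import numpy as np

def h_bond_geometry(donor, hydrogen, acceptor):
    """Return (H...A distance in Å, D-H...A angle in degrees)."""
    ha = acceptor - hydrogen
    hd = donor - hydrogen
    dist_ha = np.linalg.norm(ha)
    cos_ang = np.dot(hd, ha) / (np.linalg.norm(hd) * dist_ha)
    return dist_ha, np.degrees(np.arccos(cos_ang))

c28  = np.array([0.000, 0.000, 0.000])   # donor C atom (placeholder)
h28a = np.array([0.330, 0.870, 0.330])   # H atom (placeholder)
s8   = np.array([2.100, 2.400, 1.200])   # acceptor S atom (placeholder)

d, ang = h_bond_geometry(c28, h28a, s8)
print(f"H...S = {d:.2f} A, C-H...S angle = {ang:.1f} deg")
```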
840
2010-08-28T00:00:00.000
[ "Chemistry", "Materials Science" ]
Probing Dark Matter Clumps, Strings and Domain Walls with Gravitational Wave Detectors Gravitational wave astronomy has recently emerged as a new way to study our Universe. In this work, we survey the potential of gravitational wave interferometers to detect macroscopic astrophysical objects comprising the dark matter. Starting from the well-known case of clumps we expand to cosmic strings and domain walls. We also consider the sensitivity to measure the dark matter power spectrum on small scales. Our analysis is based on the fact that these objects, when traversing the vicinity of the detector, will exert a gravitational pull on each node of the interferometer, in turn leading to a differential acceleration and corresponding Doppler signal that can be measured. As a prototypical example of a gravitational wave interferometer, we consider signals induced at LISA. We further extrapolate our results to gravitational wave experiments sensitive in other frequency bands, including ground-based interferometers, such as LIGO, and pulsar timing arrays, e.g. ones based on the Square Kilometer Array. Assuming moderate sensitivity improvements beyond the current designs, clumps, strings and domain walls may be within reach of these experiments. I. INTRODUCTION LIGO's measurement of gravitational waves emitted from black hole binary mergers [1,2] has opened novel ways to study the properties of astrophysical objects via their gravitational interactions. After decades of development and technical improvement of interferometric methods, an unprecedented sensitivity in the measurement of gravitational interactions has been achieved. Future ground- and space-based interferometers, such as the Cosmic Explorer [3], the Einstein Telescope [4] and LISA [5,6], are expected to further improve on this sensitivity and to complement it in different frequency ranges. In addition, this may also be supplemented by instruments such as AEDGE [7], AION [8], BBO [9] or DECIGO [10]. To achieve sensitivity to lower gravitational wave frequencies and smaller strains, longer interferometer arms and less background noise are needed. Regarding the former, pulsar timing arrays, such as NANOGrav [11], PPTA [12] or EPTA [13], currently provide additional tests of gravitational physics. In the future, pulsar timing arrays might even use the Square Kilometer Array (SKA) [14] to further improve on sensitivity. In this work, we want to explore the potential of gravitational wave interferometers to probe concentrated structures of dark matter via their gravitational interaction with the apparatus. This provides an additional science goal for these instruments, but also requires different analysis strategies. While previous studies [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29] have focused mainly on clumps and primordial black holes, we consider also topological defects, such as cosmic strings and domain walls. The opportunity to search for the latter two has already been noted in [24,30] (in [30] the main focus is on axion streams arising from the destruction of axion miniclusters, which are, however, rather similar to strings). Here, we aim for a more detailed analysis of the experimental sensitivity and also consider the possibility that the topological defects feature a more general equation of state, e.g. due to a non-trivial behavior of the string/domain wall network. We also take a brief look beyond localized structures and consider the possibility of measuring the local dark matter density fluctuation power spectrum (see also [20]).
The sensitivity to the latter is, however, somewhat limited. Our analysis is based on the simplest effect (originally discussed in [16]), namely, that any massive object or, more precisely, any localized energy density will locally perturb the gravitational field in the vicinity of the interferometer, thereby exerting a different gravitational pull on each node of the detector. Due to the differential gravitational acceleration, this perturbation will, in turn, lead to a measurable Doppler shift in the apparatus. For instance, a similar analysis of such acceleration burst signals at space-based interferometers has been discussed for nearby asteroids and similar objects [31][32][33]. In addition to this, a signal could also arise due to the (changing) gravitational potential when a structure exists within the line of sight connecting the different nodes of the gravitational wave detector [17,20,23,26,28]. It would be interesting to study this Shapiro effect [34] also for cosmic strings and domain walls. We leave this for future work. In principle, this type of dark matter search is very general, as it solely relies on measuring purely gravitational interactions (although signals of dark matter structures in gravitational wave interferometers can also arise from non-gravitational interactions, see, e.g., [24,[35][36][37][38][39][40][41]]). Of course, the downside is that it requires the dark matter to be strongly concentrated in highly localized structures, which, in turn, re-introduces a model dependence. While our study is purely phenomenological and does not rely on the origin of the structures in question, let us nevertheless mention a few possibilities allowing for such scenarios. Localized clumps in the dark matter structure can appear in a variety of models and situations. Perhaps the most obvious scenario for clumps is primordial black holes [42][43][44][45][46] (in the context of gravitational wave astronomy, see also more recently, e.g., [47][48][49][50][51][52] and [53] for a recent review). That said, already standard cold dark matter may feature clumpy structures (see, e.g., [20,22]), although these are likely beyond the reach of near-future gravitational wave detectors [20]. In addition, localized clumps can also be produced if dark matter features strong self-interactions [54][55][56][57]. Another important potential source are initial conditions featuring large inhomogeneities, a prominent example of which are axion miniclusters [58][59][60][61] (see also [62][63][64][65][66] for some more recent work). More recently, such large fluctuations have also been discussed in the context of inflationary production mechanisms [67,68] or as a consequence of a fragmentation of homogeneous fields due to their (self-)interactions [69,70]. Furthermore, macroscopic clumps could arise as solitonic objects such as, e.g., Q-balls [71][72][73][74]. In addition to localized clumps, dark matter structures might also come in the form of topological defects, such as cosmic strings or domain walls. These could have formed in the early Universe [75][76][77]. In a cosmological context, topological defects, in particular dynamical networks of cosmic strings or domain walls (see, e.g., [78][79][80][81][82][83][84]), contribute to the total energy budget of the Universe. However, while their equation of state is usually negative, there is still uncertainty in the behavior of a network of these objects (see, e.g., [85,86]).
Therefore, they may account not only for dark energy but also for dark matter [82][83][84]. This has led to a variety of ideas to investigate their experimental signatures as dark matter candidates [29,35,38,41,[87][88][89][90][91][92][93]]. We will follow this more phenomenological spirit and be agnostic about the equation of state of these networks by treating it as a free parameter. This allows us to investigate their experimental imprint in gravitational wave detectors independently of the dynamics of the network. Our discussion is structured as follows. Section II first reviews the analysis of acceleration burst signals at the LISA gravitational wave interferometer. As an initial test case, we then apply these techniques to obtain the signal power spectrum associated to localized clumps of dark matter passing by the detector. In Section III, we extend our analysis to the case of cosmic strings and domain walls. Next, Section IV gives an overview of how the same technique might also be used to measure stochastic fluctuations of the local dark matter density with LISA. In Section V, we extrapolate our results to other gravitational wave experiments, i.e. ground-based interferometers and pulsar timing arrays, that are sensitive to different frequency bands. As particular examples, we examine LIGO and a future PTA using the SKA. Finally, we summarize our results and conclude in Section VI. II. LOCALIZED CLUMPS OF DARK MATTER Localized dark matter clumps can cause a signal in a gravitational wave interferometer, as shown by a number of previous studies [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. In this section, we will review how such a signal is generated. We focus on the effect where the clump exerts a stronger gravitational acceleration on one of the interferometer nodes and derive the associated signal power spectrum for the LISA detector. This discussion will then serve as a basis and test bed for our investigation of strings and domain walls in the next section. For a general introduction to the physics and measurement techniques of gravitational wave interferometers see, e.g., [94,95]. Although the general strategy in principle applies to any gravitational wave detector, in the present work we will consider LISA as a prototypical example to obtain the experimental signature of gravitational perturbations caused by the above macroscopic astrophysical objects in a gravitational wave interferometer. We will later (rather naively) use the same technique to estimate the sensitivity of other gravitational wave experiments. Here, loosely speaking, we focus on an experimental setup with three distinct nodes arranged in an equilateral triangle (see Appendix A for some more details). Along the three interferometer arms, each with a length of about 2.5 million kilometers, the satellites exchange laser beams. Consequently, a differential acceleration due to a gravitational perturbation caused by macroscopic objects in the vicinity of the interferometer will lead to a measurable Doppler signal. To begin with, let us first consider the case of a single dark matter clump passing by one of the LISA satellites. This is similar to the case of the detection of asteroids treated in [31], which we will follow closely. As a simple coordinate frame, we choose the dark matter clump to be in straight uniform motion with velocity V parallel to the y-axis. The trajectory of the clump is, in addition, confined to the xy-plane.
The closest distance between the clump and the satellite located at the origin, i.e. the impact parameter, is denoted by D. For a schematic illustration of this reference frame we refer the reader to Appendix A 1. The differential acceleration of the interferometer node is then given by the Newtonian pull of the clump at position (D, Vt, 0) [31],

a(t) = GM (D, Vt, 0) / (D² + V²t²)^{3/2} , (II.1)

where G denotes the gravitational constant and M is the mass of the dark matter clump. A schematic form of this acceleration burst was first considered in [16] with regard to the detection of primordial black holes with space-based interferometers. As in [31], in our analysis we will instead consider the velocity shift, v(t), i.e. the integrated gravitational acceleration. However, we will use the more appropriate signal response functions defined in [96,97] that take into account a time retardation of the signals from different nodes (see also the discussion below). In principle, the gravitational field of the dark matter clump passing by the detector will exert a gravitational pull on each interferometer node separately. Due to the differential velocity shifts relative to each other, the laser beams exchanged between the satellites will be influenced by Doppler shifts. That is, we expect a time-dependent response of the detector to these velocity perturbations. This signal is parametrized by a so-called response function X(t). The signal power spectrum that we are interested in is given by the absolute square of the Fourier transform of the detector response,

P(ω) = |X̃(ω)|² . (II.2)

(Technically, the power spectrum of a given signal X(t) is defined as the Fourier transform of the auto-correlation function, P(ω) = F[(X ⋆ X)(τ)] = F[∫ dt X*(t) X(t + τ)]; the latter is, in fact, equivalent to |X̃(ω)|², i.e. to the definition given here. Throughout this work, we denote the Fourier transform of a function f(t) by f̃(ω).) In the case of LISA, this response function is typically a linear combination of the velocity perturbations of all three interferometer nodes. Its exact form depends on which undesirable noise sources one tries to remove from the signal spectrum. Therefore, the detector response function is not unique. For concreteness, throughout this work, we will use the so-called Michelson response function X(t) for the readout of a signal at a single detector node [31,96]. For simplicity, we will not present its exact form here; instead, we give a detailed definition in (A.1) of Appendix A. Nevertheless, let us point out its main features. Naively, the components of the response function are given by projections of the velocity perturbations of the nodes onto the interferometer arms, schematically

X(t) ∼ (1/c) Σ_{i,j} n_i · v_j(t − a_{ij} L/c) . (II.3)

Here, the n_i denote the unit vectors pointing between two nodes, labelled by the opposite side of the triangle, v_i is the velocity perturbation of the i-th node induced by the gravitational pull, and c is the speed of light. The integer coefficients a_{ij} take into account retardation effects along the different signal paths. For more details on this notation see Appendix A and also [96,97] for alternative response functions. In fact, in some situations the detector response function above can be simplified (see, e.g., [31]). For instance, if the velocity perturbations of two nodes are negligible for all practical purposes, e.g., v₂ ≈ v₃ ≈ 0, it reduces to a single-node form,

X(t) ∝ (1/c) n₁ · v₁(t) (up to fixed retardation delays) , (II.4)

where v₁(t) is the velocity perturbation of a single node and we have used that n₁ + n₂ + n₃ = 0. The latter condition holds if all interferometer arms are of equal length.
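For orientation, the acceleration burst (II.1) and the resulting velocity shift can be evaluated numerically in a few lines; the following sketch uses illustrative parameter values only, not those of any particular figure.

```python
# Sketch: gravitational acceleration burst on a single node from a clump at
# position (D, V t, 0), cf. (II.1), and the integrated velocity shift.
# All parameter values are illustrative.
import numpy as np

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M = 1e10               # clump mass [kg]
D = 5e7                # impact parameter [m]
V = 2.7e5              # clump velocity [m/s]

t = np.linspace(-5e4, 5e4, 20001)                    # time around closest approach [s]
r = np.stack([np.full_like(t, D), V * t], axis=1)    # clump position in the xy-plane
r_norm = np.linalg.norm(r, axis=1, keepdims=True)
a = G * M * r / r_norm**3                            # node is pulled toward the clump

dt = t[1] - t[0]
v = np.cumsum(a, axis=0) * dt                        # velocity shift v(t) = int a dt'
print(f"peak |a| = {np.linalg.norm(a, axis=1).max():.3e} m/s^2, "
      f"final |v| = {np.linalg.norm(v[-1]):.3e} m/s")
```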
Carefully note that this form of X(t) is only an approximation of the detector response. Eq. (II.4) is the dominant contribution to the exact response function if the dark matter clump approaches the detector node very closely, or, in other words, if the impact parameter is smaller than the arm length of the interferometer, D ≪ L. This is the so-called close-approach limit (see, e.g., [16,31]). In contrast, if the impact parameter is much larger than the arm length (sometimes called the tidal limit, see [16,31]), D ≫ L, the differential gravitational pull on the nodes can be qualitatively different from the pull on a single node. To see this, imagine a simple situation where two nodes are aligned on an axis perpendicular to the trajectory of the dark matter clump, with their relative distance to each other being much smaller than their distance to the clump's trajectory. In this scenario, the impact parameters of both satellites will differ by the arm length, i.e. they read D and D + L, respectively. According to (II.3), the detector response will be proportional to the differential gravitational acceleration between the two nodes with impact parameters D and D + L,

Δa ∝ GM [1/D² − 1/(D + L)²] ≈ 2GML/D³ for D ≫ L .

Hence, comparing both regimes, we observe the behavior

X_tidal / X_close-approach ∼ L/D ≪ 1 for D ≫ L .

That is, the detector response in the tidal regime, D ≫ L, falls off much faster with distance than in the close-approach limit, D ≪ L. In this regime, we therefore expect the close-approach approximation of the detector response to break down, and one would need to take the exact response function into account. Nevertheless, we will argue a posteriori that, for the purpose of this section, we only have to consider the close-approach case, D ≪ L, for events with a detectable signal-to-noise ratio. Therefore, (II.4) gives a reasonable approximation to the detector response. We note, however, that when the sensitivity of the experiment becomes better, i.e. the noise is reduced, events with larger impact parameters will become detectable and the close-approach approximation might have to be reconsidered. In practice, by means of (II.4) we can compute the response of the interferometer to an arbitrary velocity perturbation. For concreteness, we have considered the velocity perturbation associated to a gravitational pull by a dark matter clump passing by a single LISA satellite in (II.1) in a specific coordinate frame. However, in principle, the dark matter clump can approach the satellite from any direction. In order to account for this, we can equivalently choose an arbitrary orientation of the LISA experiment. That is, we can parametrize the unit vector n₁ in (II.4) by n₁ = (sin ϑ cos ϕ, sin ϑ sin ϕ, cos ϑ). The angles ϑ and ϕ essentially implement the arbitrary orientation of the detector plane relative to the dark matter clump. For a detailed discussion of this, see Appendix A 1. As we are interested in the signal power spectrum (II.2), it is then natural to consider the Fourier transform of the response function, X̃(ω) (in this work, we use the symmetric convention of the Fourier transform, f̃(ω) = (2π)^{−1/2} ∫ dt f(t) exp(iωt)). In our scenario, the latter can be written in closed form in terms of the modified Bessel functions of the second kind, K_i (cf. [31]; adding suitable time delays gives an additional L-dependent factor). In principle, we could now obtain the signal power spectrum P(ω) by squaring this expression. Obviously, the spectrum would then depend on the orientation of the LISA detector relative to the trajectory of the dark matter clump through the angles ϑ and ϕ. As a simple approximation it is reasonable to assume that the dark matter clumps generically move in a random direction.
That means, in principle, any value of ϑ and ϕ is equally likely. To account for this, we can assume a uniform distribution for both, over which we then take the average (see Appendix A 1 for details). Before proceeding we note, however, that a uniform average is only a rough approximation. It assumes that the local dark matter distribution is isotropic. While this may be true in an isotropic reference frame where the detector is at rest, the Sun, together with the experiment, is moving through the dark matter halo at a constant velocity, thereby imposing a preferred direction on the system. Therefore, strictly speaking, the distribution of the angles parametrizing the relative orientation between the dark matter and the detector plane should not be uniform. This, however, is somewhat ameliorated by the motion of LISA itself (see also [31]), which changes direction along its orbit around the Sun (cf., e.g., [98]). While certainly not all directions are attained with equal probability, this nevertheless amounts to at least a partial averaging. For a more detailed discussion of this see Appendix A 4. Finally, with this caveat in mind, we arrive at the angular-averaged signal power spectrum (II.8) associated to the gravitational pull of a dark matter clump on an interferometer node [31]. For a few examples of the typical shape of P(ω) we refer the reader to Fig. 1. Note that there, for reasons that will become clear momentarily, we show the signal power spectral density, 4ωP(ω). Obviously, the signal power spectrum is only useful for an experimental test if the desired signal can be distinguished from the background noise that the experiment is subject to. In general, at a gravitational wave interferometer the background noise is characterized by a noise power spectrum, commonly defined by ⟨ñ(f)ñ*(f′)⟩ = ½ δ(f − f′) S_n(f) (for a comprehensive review see, e.g., [99]). For future experiments such as LISA this is not yet completely settled. In principle, different estimates can lead to quite different results for the detection rate. For concreteness, we will use a recent estimate [100] of S_n(ω) as our benchmark (II.9). Note that here we have added an additional factor of 4(ωL/c)⁴/(1 + (ωL/c)²)² compared to [100]. This essentially acts as an estimate of a transfer function to convert the original strain spectrum to the equivalent of our signal spectrum X̃. (This can be compared to translating the strain signal of a gravitational wave into the channel X(t), which yields a transfer function proportional to 4 sin⁴(ωL/c); our factor corresponds to the envelope of this conversion factor and is correct up to a possible constant factor of order O(1–10).) Furthermore, we have divided by 2π in order to match our (symmetric) convention of the Fourier transform to the convention typically used in signal processing. We further remark that other choices of a noise power spectrum might also be reasonable, for example, the ones presented in [98,101]. Finally, as a measure for distinguishing a signal from the background noise, we can define the signal-to-noise ratio by comparing the signal to the noise power spectrum over the range of all frequencies (see, e.g., [94]),

SNR² = 4 ∫ dω P(ω)/S_n(ω) = ∫ d ln ω [4ωP(ω)/S_n(ω)] . (II.12)

Here, one factor of 2 arises from the definition of the noise power spectral density given above. A second factor of 2 reflects the fact that we are considering single-sided power spectra only [31,94,99]. The form on the very right-hand side of the equation is particularly useful for a quick estimation of the signal-to-noise ratio from the usual logarithmic plots of the sensitivity, as it is obtained as a logarithmic integral over a dimensionless ratio between the signal, 4ωP(ω), and the noise, S_n(ω), power spectral densities.
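To make the evaluation of (II.12) concrete, the following sketch computes the signal-to-noise ratio as a logarithmic integral of the dimensionless ratio 4ωP(ω)/S_n(ω); both spectra here are toy placeholders, not the actual curves of (II.8) and (II.9).

```python
# Sketch: SNR^2 = int dln(omega) [4 omega P(omega) / S_n(omega)], cf. (II.12).
# P and S_n are toy placeholder spectra, not the LISA curves.
import numpy as np

omega = np.logspace(-3, 0, 2000)              # angular frequency above the cutoff
P = 1e-40 / (1.0 + (omega / 1e-2)**4)         # toy signal power spectrum
S_n = 1e-41 * (1.0 + (1e-2 / omega)**2)       # toy noise power spectrum

integrand = 4.0 * omega * P / S_n             # dimensionless ratio from the text
snr = np.sqrt(np.trapz(integrand, np.log(omega)))
print(f"SNR = {snr:.2f}")
```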
Before continuing, let us remark that the signal power spectrum P given in (II.8) remains constant with decreasing frequency due to the constant velocity of the satellite in the asymptotic future. Nevertheless, this does not lead to an infinite signal-to-noise ratio when normalizing to our choice of the noise power spectrum, S_n. Instead, it remains finite, even when integrated over all frequencies. As was already noted in [31], alternative noise power spectra, however, might not share this feature and hence require the introduction of an experiment-specific lower frequency cut-off. For LISA, imposing a cutoff ω_c ∼ 10⁻⁴ Hz due to experimental limitations provides a relatively conservative estimate of the lower end of the frequency band that the detector is sensitive to (see, e.g., [31]). However, in this paper we use the close-approach approximation of dark matter clump encounters with the interferometer. This is valid for impact parameters smaller than the size of the experiment, D ≪ L. Therefore, we will use an even more conservative cut-off, essentially determined by the characteristic time of flight of a dark matter clump through the detector volume, ω_c ∼ 2πV/L ∼ 10⁻³ Hz. In Fig. 1 we show the angular-averaged signal power spectral density, 4ωP(ω) with P(ω) given in (II.8), associated to dark matter clumps traversing the detector volume, and compare it to the experimental sensitivity of LISA. As an example, we consider events with an impact parameter of D = 50,000 km. The coloured lines correspond to the signal, while the black solid line shows the noise power spectrum S_n(ω) given in (II.9), which essentially determines the overall experimental sensitivity of the detector. As indicated earlier, the signal-to-noise ratio is the logarithmic integral over the ratio of the two plotted quantities, and we can roughly read this figure in such a way that we have a chance to distinguish a signal from the detector noise whenever the coloured signal exceeds the black noise spectrum over a sufficient range of frequencies. Also note that the signal power spectral density shown here is uniformly averaged over the angles parametrizing the relative orientation between the interferometer and the trajectory of the dark matter clump (cf. Appendix A 1). As explained above, this approximates the typical size of the signal. The signal shape and strength of individual events will be different. For instance, in extreme cases, we expect a different signal from a clump with normal incidence to the detector plane than from one that traverses the detector volume almost parallel to it. Furthermore, as we have already pointed out before, the uniform average can only be seen as an approximation. This is because there is a preferred direction given by the Sun, together with the detector, moving through the dark matter halo at a constant velocity (see Appendix A 4), as well as the rotation of the detector itself not covering all angles equally. In general, the signal power spectrum receives large contributions from low frequencies, while it quickly drops in the high frequency regime.
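The spectral shape just described can be explored numerically by Fourier transforming the projected velocity shift of a single node; the sketch below uses one fixed orientation and illustrative parameters, and omits both the angular average and the L-dependent retardation factor (so its low-frequency behavior differs from the full angular-averaged result (II.8)).

```python
# Sketch: numerical signal power spectrum |X(omega)|^2 in the close-approach
# limit, X(t) ~ n1 . v1(t) / c, for one fixed orientation. No angular average
# and no L-dependent delay factor; all parameters are illustrative.
import numpy as np

G, c = 6.674e-11, 2.998e8
M, D, V = 1e10, 5e7, 2.7e5
n1 = np.array([1.0, 0.0])                          # arm unit vector in the plane

t = np.linspace(-2e5, 2e5, 2**18)
dt = t[1] - t[0]
r = np.stack([np.full_like(t, D), V * t], axis=1)  # clump position
a = G * M * r / np.linalg.norm(r, axis=1, keepdims=True)**3
v = np.cumsum(a, axis=0) * dt                      # velocity shift of node 1

X = (v @ n1) / c                                   # close-approach response (II.4)
X_tilde = np.fft.rfft(X) * dt / np.sqrt(2 * np.pi) # symmetric FT convention
omega = 2 * np.pi * np.fft.rfftfreq(len(t), dt)
P = np.abs(X_tilde)**2                             # signal power spectrum (II.2)
print(f"P at omega ~ 1e-3 rad/s: {P[np.searchsorted(omega, 1e-3)]:.3e}")
```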
As pointed out earlier, the constant value at low frequencies is due to the constant velocity component of the LISA satellite in the asymptotic future. Consequently, the sensitivity may benefit from background noise that is reduced in the low frequency tail of the spectrum. This, however, will also require going beyond the close-approach approximation. Keeping this caveat in mind, below we investigate the benefits of improving at low frequencies by considering two different low-frequency cutoffs. Having reviewed all necessary aspects of distinguishing a possible signal induced by a localized clump of dark matter traversing the detector volume of LISA from background noise, let us now quantify the discovery potential for these clumps inside the dark matter halo of our Galaxy. The mass of the dark matter clumps, M_DM, determines the characteristic distance between them, d ∼ (M_DM/ρ_DM)^{1/3}. Therefore, it controls the average rate of encounters of a clump with one of the satellites at or below a given impact parameter D,

η̇ ∼ π D² Φ_DM ,

where Φ_DM is the effective dark matter flux at velocity v_DM, given by Φ_DM ∼ ρ_DM v_DM/M_DM. That is, naively, η̇ is the rate at which, on average, a dark matter clump of mass M_DM and velocity v_DM passes through a surface of radius D. Therefore, applied to our scenario, we can use it to estimate the rate at which we expect a dark matter clump to induce a signal in the interferometer. Note, however, that using the average signal strength in the calculation of the required impact parameter, and therefore of η̇, is only an approximation. In general, clumps passing by at a suitable angle may already give rise to a signal at somewhat larger distances than indicated by the average signal strength and, similarly, closer encounters are needed for other angles. However, we expect that this simplistic treatment nevertheless captures the effect reasonably well. Furthermore, we assume the velocity distribution of the dark matter clumps inside the halo to be a (simplified) superposition of the Sun moving through the Galaxy at v ≈ 220 km/s [102] and the dark matter velocity having a uniformly random direction, while its magnitude is Maxwell-Boltzmann distributed with a root mean square of v_rms = √(3/2) v ≈ 270 km/s (see, e.g., [103]). For a discussion of possible caveats of this approximation and how it is implemented in practice, see Appendix A 4. In addition, we fix the dark matter energy density to ρ_DM ≈ 0.39 GeV/cm³ [104]. Note that these values can come with relative uncertainties of up to 25%, which could alter our results by a similar amount. We remark, however, that the approximations we employ, as well as the differences in the noise power spectra, probably cause larger uncertainties. (Figure 2 caption, in part: as long as this is smaller than the size of LISA we expect the close-approach approximation to be valid; we use the same baseline parameters for the dark matter density and velocity as in Fig. 1.) In Fig. 2 we show the average gravitational interaction rate as a parametric function of the signal-to-noise ratio as well as the mass of the dark matter clumps. As pointed out before, here we only take impact parameters into account that are smaller than the arm length of LISA, D ≪ L, i.e. we consider the close-approach regime. This is indicated by the black dashed line in the bottom panel (with the appropriate scale on the right-hand side), which illustrates typical impact parameters at which the signal-to-noise ratio exceeds one, SNR ≳ 1, for a given mass of the dark matter clump; a numerical illustration of the corresponding rate-mass scaling is sketched below.
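As announced, the rate-mass scaling can be checked in a few lines; the sketch below uses the quoted halo parameters together with a fixed, illustrative impact parameter.

```python
# Sketch: average encounter rate  eta_dot ~ pi D^2 rho_DM v_DM / M_DM
# as a function of clump mass, at a fixed illustrative impact parameter.
import numpy as np

# 0.39 GeV/cm^3 converted to kg/m^3 (~7e-22 kg/m^3).
rho_dm = 0.39e9 * 1.602e-19 / (1e-2)**3 / (2.998e8)**2
v_dm = 2.7e5              # typical clump velocity [m/s]
D = 5e7                   # illustrative impact parameter [m]
year = 3.156e7            # seconds per year

for M in (1e8, 1e10, 1e12):                      # clump mass [kg]
    rate = np.pi * D**2 * rho_dm * v_dm / M      # encounters per second
    print(f"M = {M:.0e} kg: one encounter every {1/(rate*year):,.0f} years")
```

With these inputs, the M_DM ≈ 10¹⁰ kg case lands at roughly one encounter per couple of hundred years, consistent with the estimate quoted in the following.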
We can see that this impact parameter is always smaller than the typical size of LISA, and therefore the close-approach approximation of the detector response function (II.4) is valid in the mass regime we show here. This approximation has to be reconsidered if the sensitivity of the experiment is improved. In that case, dark matter clumps at larger impact parameters can still be detected. The impact parameter shown in Fig. 2 also gives an upper bound on the size of the dark matter clumps for which our treatment of the clumps as point-like is valid (most of the signal rate originates from the largest detectable impact parameters). In general, we observe that with increasing mass of the dark matter clumps the signal-to-noise ratio is enhanced. However, at the same time, the event rate of encounters is reduced due to the reduced effective dark matter flux, Φ_DM ∼ M_DM⁻¹. That is, at higher masses there is a balance between the increased signal-to-noise ratio and the reduced interaction rate. In particular, we find an optimal detection potential that passes a minimal detection threshold, SNR ≳ 1, for dark matter clumps of mass M_DM ≈ 10¹⁰ kg, which could be observed at LISA approximately every 200 years on average. However, we note that this threshold of SNR ≳ 1 is rather optimistic. In practice, as the signal itself has to be distinguished from other sources, the detection threshold may be significantly higher, of the order of SNR ≳ 10 (e.g. this is on the lower end of the typical signals considered in the Mock LISA Data Challenges [105,106]). As shown in Fig. 2, this will reduce the average detection rate by an order of magnitude while shifting the optimal detection potential to dark matter clumps of higher masses. Unfortunately, the sensitivity is not yet at a desirable level for a near-future discovery potential. Let us note, however, that the latter value has to be understood as a relatively conservative estimate, which could also be significantly higher. As already mentioned, the signal-to-noise ratio depends on the noise power spectrum, which features a comparably large uncertainty. For the relatively broad signal spectra associated to the dark matter clumps, in particular the low frequency tails of the spectrum may be relevant. Therefore, the signal-to-noise ratio may vary significantly if the overall noise spectrum were smaller, to some degree also at low frequencies. We try to illustrate this through the bars in the top panel of Fig. 2. For these, we push the close-approach approximation closer to the boundary of its validity by introducing a low-frequency cutoff of ω_c ≈ 5 × 10⁻⁴ Hz as compared to the one imposed by the close-approach approximation, ω_c ≈ 10⁻³ Hz. Let us stress, however, that this really has to be understood only as a very first estimate of potential improvements. Finally, we note that, in light of the very rough nature of our estimates, the results presented in Fig. 2 are reasonably in line with, but perhaps somewhat more pessimistic than, earlier works, in particular [24], with regard to the detection of dark matter clumps at LISA. While the obtained rate is still rather low, it nevertheless brings us much closer to a desired level, so that we can hope that further improvements both in the detector as well as in the analysis might allow for a detection in a reasonable time frame. Indeed, one such improvement could be in the analysis.
For example, already the authors of [15] noted that the dark matter clump interaction is inelastic and therefore differs from the elastic interaction of a gravitational wave exploited in current detection strategies. Further improvements are expected from a more detailed analysis of the time structure of potential signals, as discussed in [28]. III. TOPOLOGICAL DEFECTS In this section, we now want to go beyond dark matter clumps and investigate the detection of structures such as cosmic strings and domain walls. As already mentioned in the introduction, these topological defects might have been produced in the early Universe [75][76][77], but their use as dark matter requires a non-trivial behavior, e.g. interacting networks, of such objects. We do not address how such a network is formed or how it can be made to satisfy the constraints imposed by the properties of dark matter (or dark energy) but simply assume its presence with a given density. This is in the spirit of, and follows, the completely phenomenological approach to study their detection also pursued in [29,35,38,41,[87][88][89][90][91][92][93]] in the context of various different detection techniques. We briefly note that our discussion also applies to localized structures of "ordinary" dark matter with a string-like or domain-wall-like geometry, an example of which are the string-like axion streams that were already investigated with regard to LISA in [30]. The gravitational properties of topological objects can significantly differ from those of non-relativistic matter. Cosmologically, this is due to a different equation of state relating the pressure and the energy density of the cosmic fluid, w = p/ρ. This then results in a typical scaling behavior ρ ∼ a^{−3(1+w)} of the average energy density on cosmological scales. For our signals we are also interested in the behavior in the vicinity of individual objects. Very naively, this can be obtained from Poisson's equation for the Newtonian gravitational potential. For a fluid this is sourced by the combination of pressure and energy, (ρ + 3p) (see, e.g., [109]), and the gravitational potential satisfies

∇²φ = 4πG(ρ + 3p) = 4πG(1 + 3w)ρ , (III.1)

i.e. it explicitly depends on the equation of state of the source. However, it is not obvious that the gravitational field of topological objects such as strings and domain walls behaves locally as suggested by their global equation of state. That said, in good approximation this is nevertheless true for static strings as well as domain walls, as shown by the results of [110,111] and discussed below for each case in the respective subsection. To model the local field of more complicated networks of topological structures we therefore simply assume that it can be approximated by Eq. (III.1) with the equation of state parameter given by its cosmological value. The equation of state of cosmic vacuum strings and domain walls, as well as their dimension, leads to significantly different gravitational fields sourced by these objects compared to non-relativistic clumps of matter. In the following subsections, we want to discuss their gravitational properties and the related imprints they might leave at LISA. A. Cosmic strings In order to obtain the gravitational potential of a cosmic string, we can solve (III.1) in a cylindrically symmetric space for an energy density that is distributed along an infinite string, e.g. ρ = µδ(x)δ(y). Here, µ is the tension of the string, i.e. the energy stored per unit length.
This yields a gravitational potential that grows logarithmically with distance, φ ∼ log(r₀/r), and sources the gravitational field

g(r) = −2(1 + 3w) (Gµ/c²) (c²/r) r̂ , (III.2)

where r denotes the radial distance to the string. In general, we can directly use (III.2) to obtain the signal strength at LISA. However, before we proceed, let us mention a noteworthy special case. If we consider a single static vacuum string, the equation of state is given by w = −1/3, and its energy density dilutes as ρ_S ∼ a⁻² [112]. At the same time, this implies that its gravitational field vanishes, indicating that a static cosmic string does not couple to matter [110,113]. Nevertheless, this picture can change if the vacuum string becomes dynamical. For instance, a string moving at a velocity β = v/c has a modified equation of state, w = (2/3)β² − 1/3 [112]. Going beyond such a scenario, the situation can deviate from this even more if multiple strings are considered. In particular, if the strings interact with each other to form a network, the corresponding equation of state can drastically change. Therefore it is possible that they can contribute to the dark matter or dark energy component of the Universe (see, e.g., [82]). The naive reasoning from above suggests that we would not expect any gravitational pull on the interferometer by a static vacuum string at all. However, it was shown that, due to the globally non-trivial conical spacetime geometry sourced by the string, it will still attract massive objects around it [114]. In fact, the gravitational acceleration of a mass m in the vicinity of the string is given by [114], schematically,

a(r) ≈ κ (Gµ/c²) Gm/r² , (III.3)

where κ ≈ 1/32 for small Gµ/c². In this case, the gravitational field falls off as g ∼ r⁻², in contrast to the r⁻¹ asymptotics of the general configuration (III.2). Naively, this can be understood as follows. An infinite, straight and static cosmic string sources a conical spacetime geometry that can lead to double copies, i.e. mirror images, of nearby objects [110,113]. In this sense, (III.3) can be understood as the gravitational field sourced by the effective mass (Gµ/c²)m. Hence, we can interpret the gravitational field of the string as similar to the usual attractive force between two massive objects. Cosmological considerations, such as the isotropy of the cosmic microwave background, provide bounds on the dimensionless string tension, which are typically of the order Gµ/c² ≲ 10⁻⁶ (see, e.g., [115,116]). Therefore, we expect a tiny acceleration of a test mass in the gravitational field of a static cosmic string. In the following, we will distinguish between signals due to the gravitational pull on the interferometer generated by a static vacuum string and by an interacting network of cosmic strings that accounts for dark matter or dark energy. The former will be characterized by the gravitational field (III.3), while the latter is given by (III.2) with a general equation of state. Signal power spectrum. Given the gravitational field of a cosmic string, we can proceed analogously to the case of dark matter clumps and determine the gravitational acceleration that each node of the interferometer experiences in the vicinity of a string. For concreteness, let us choose a coordinate frame in which the (infinite) string is parallel to the z-axis and the LISA satellite located at the origin is, initially at t₀ = 0, at a minimum distance D from the string. We furthermore assume that the string is uniformly moving at velocity V.
Due to the additional internal orientation of the string as compared to spherical clumps, in the frame where the string motion is confined to the yz-plane, its velocity can have a component in the y- as well as the z-direction, i.e. V_y = V sin θ and V_z = V cos θ, respectively. In other words, the string has an additional inclination angle when approaching the interferometer node. For a schematic illustration see the bottom panel of Fig. 12 in Appendix A. In this reference frame, in the gravitational field of a static cosmic string (III.3), a test mass is subject to an acceleration of

a(t) = κ (Gµ/c²) Gm (D, V_y t, 0) / (D² + V_y² t²)^{3/2} , (III.4)

while in the field of an interacting string network (III.2) it reads

a(t) = 2(1 + 3w) (Gµ/c²) c² (D, V_y t, 0) / (D² + V_y² t²) . (III.5)

Note that in both expressions only the velocity component V_y = V sin θ appears. This is because only the radial distance to the string, in this reference frame determined by the x- and y-coordinates, enters the gravitational field. Along the lines of our discussion in Section II, in order to obtain the frequency spectrum of the detector response we now need to determine the Fourier transform of the velocity perturbations associated to the accelerations of the detector nodes. As we will again argue a posteriori, we consider the regime where the strings traverse the detector volume in the close vicinity of a single node, i.e. the close-approach limit, D ≪ L. Therefore, the detector response can be approximated by (II.4), which in the frequency domain again reduces to the projection of ṽ₁(ω) onto n₁ (up to the L-dependent delay factor). In addition to the orientation of the string and its motion relative to the node of the interferometer, the arbitrary orientation of the detector plane has yet to be implemented. In the close-approach approximation, we can take this into account by parametrizing n₁ accordingly, n₁ = (sin ϑ cos ϕ, sin ϑ sin ϕ, cos ϑ). For a detailed discussion of this see Appendix A 2. By squaring the Fourier transform of the response function, |X̃(ω)|², we obtain the signal power spectrum induced by a cosmic string in the vicinity of the interferometer. At this point, the latter still depends on the relative orientation between the string and the detector plane, parametrized by ϑ and ϕ. As already noted for the case of clumps, we assume that any orientation occurs equally likely (a discussion of the validity of this simplistic approximation and additional details are given in Appendix A 4). Therefore, we take the uniform average over both (cf. Appendix A 2). Finally, the angular-averaged signal induced by the gravitational pull of a static cosmic string is given by a closed-form expression (III.6) involving the Meijer G-function G, while for an interacting network of cosmic strings with an arbitrary equation of state it takes the form (III.7), where K₁ denotes the first modified Bessel function of the second kind. In the panels of Fig. 3, we illustrate an example of the angular-averaged signal power spectral densities induced by a static cosmic string (top) and an interacting string network with w = 0 (bottom) and compare them to the experimental sensitivity of LISA. In particular, we show different values of the string tension and consider, as an example, events with a fixed impact parameter of D = 100,000 km. (Figure 3 caption, in part: Here, we have chosen an impact parameter of D = 100,000 km. Therefore, as the strings are taken to account for dark matter, these signal events are very rare and occur every 10,000,000 (blue) to 1,000 (green) years on average. The black lines correspond to the sensitivity of LISA and the vertical dashed lines to two different values of the low-frequency cutoff used. In both panels we have assumed a Maxwell-Boltzmann distribution for the velocity of the strings with root mean square v_rms ≈ 270 km/s in combination with the LISA experiment moving through the galaxy at v ≈ 220 km/s.) Moreover, we have assumed
a (simplified) superposition of a Maxwell-Boltzmann distribution for the velocity of the strings with root mean square v_rms ≈ 270 km/s in combination with the LISA experiment moving through the galaxy at v ≈ 220 km/s. We present the practical implementation of this in Appendix A 4. Similar to the dark matter clumps, we find that the signal receives large contributions from low frequencies, while it quickly drops in the high frequency regime. The signal induced by a static cosmic string is orders of magnitude smaller compared to the signal caused by interacting cosmic strings with w = 0, as appropriate for the dark matter component of the Universe. This is due to the fact that the gravitational perturbation of the former is sourced by the tiny mirror mass, (Gµ/c²)m. In fact, the suppression already indicates that only an interacting network of cosmic strings is within experimental reach of LISA. (Figure 4 caption: Average gravitational interaction rate of cosmic strings with the interferometer as a function of the signal-to-noise ratio. We distinguish between a network of static cosmic strings (top) and an interacting network of dynamical strings (bottom) that constitutes the dark matter or dark energy component of the Universe. In the top panel, the colors denote typical values of the dimensionless string tension, Gµ/c², while we have fixed the energy density to the dark matter one, ρ_S ≈ 0.39 GeV/cm³. In the bottom panel, we have chosen a string tension of Gµ/c² ≈ 10⁻²³ and fixed the energy density and equation of state for dark matter as (ρ_DM ≈ 0.39 GeV/cm³, w = 0) and for dark energy as (ρ_DE ≈ 3.2 keV/cm³, w = −1). In both panels we have assumed a Maxwell-Boltzmann distribution for the velocity of the strings with root mean square v_rms ≈ 270 km/s in combination with the LISA experiment moving through the galaxy at v ≈ 220 km/s.) Given the angular-averaged signal power spectra, let us now estimate the discovery potential for cosmic strings in the vicinity of the gravitational wave interferometer. Assuming that the energy stored in the cosmic string network is determined by a single scale d, that is, the characteristic distance between the strings, the energy density is given by ρ_S ∼ µ/d². In addition, this scale will also determine the average interaction rate of a string with the detector. Since we have averaged over the angles describing the orientation of the string, for simplicity we can assume that the strings are all aligned and moving in the same direction perpendicular to their orientation. Projecting into the plane orthogonal to the strings then effectively reduces the problem of estimating the flux of cosmic strings to computing the flux of point particles through a given surface element. Here, however, we have projected out one dimension and are thus dealing with the flux through a line element. Accordingly, the rate of cosmic strings approaching the detector at velocity V and impact parameter D can be estimated to be

η̇ ∼ 2DV/d² ∼ 2DV ρ_S/µ . (III.8)

Note that, in line with the angular-averaged signal power spectral density shown in Fig. 4, η̇ is an estimate of the average rate at which a cosmic string of velocity V passes by the detector at a distance D.
This rate is therefore only an approximation, as it does not take the shape of the signal into account, which, for instance, can crucially depend on the relative orientation between the string and the interferometer (this is similar to the discussion of dark matter clumps in Section II). Nevertheless, we do not expect our conclusions to change significantly if this complication were taken into account more carefully. The panels of Fig. 4 show the average gravitational interaction rate of a cosmic string network of static (top) and interacting strings for w = 0 and w = −1 (bottom; for the yellow curve we also choose the energy density to be that of dark energy, ρ_DE ≈ 3.2 keV/cm³ [117], which we will use throughout this work) with the LISA interferometer as a function of the signal-to-noise ratio. The bars illustrate the alternative low-frequency cutoff for the noise power spectrum of the detector (cf. Section II for details). For static cosmic strings, we find that, due to their tiny gravitational field, the rate of sufficiently strong interactions with the detector is basically negligible, even for the most optimistic values of the string tension. Naively, one could of course consider situations where one tries to probe the gravitational field with more massive test objects, thereby increasing the source of the field. This, however, is completely determined by the experimental setup, such as LISA in our case, and is not a free parameter. Consequently, the tiny gravitational field renders the signal of a static cosmic string, acting through its gravitational pull on the interferometer, unobservable. The more interesting case is an interacting network with an equation of state w = 0, suitable for being the dominant component of the dark matter. Due to the greatly enhanced gravitational field, the situation is significantly improved, as we demonstrate in the bottom panel of Fig. 4. For instance, we find that a network with strings of tension Gµ/c² ≈ 10⁻²³ could induce a signal in the interferometer with a signal-to-noise ratio of about 10 every 10,000 years on average. Adopting a more optimistic low-frequency cutoff, this could be improved even further, as indicated by the bars in the figure. An overall gain in experimental sensitivity could therefore yield a sizeable factor in the signal event rate, giving hope that already moderate improvements will allow for a detection. As a consequence, similar to the case of dark matter clumps, our analysis strategy would benefit from improvements in the low frequency regime. Note that, in order to obtain a decent interaction rate with a signal-to-noise ratio still greater than one, we have, to some extent, chosen the string tension close to an optimal value, Gµ/c² ≈ 10⁻²³. We will show the overall experimental sensitivity of LISA for different string tensions in Fig. 10 of Section VI. We conclude that the overall discovery potential of a cosmic string network with the LISA interferometer depends on its dynamics, i.e. its equation of state. Our estimates of the signal-to-noise ratio indicate that a network of static vacuum strings appears not to be observable, while an interacting network with an equation of state appropriate for dark matter may be closer to the experimental reach of LISA. Furthermore, our comparison of different low-frequency cutoffs suggests that the overall discovery potential may be increased by improvements in the low frequency regime.
B. Domain walls

In order to obtain the gravitational field sourced by a domain wall, we can solve Poisson's equation (III.1) for an energy density confined to an infinite plane, e.g., ρ = σδ(x). Here, σ denotes the surface tension of the domain wall and we have neglected a possibly finite thickness. In this background, the gravitational potential grows linearly with the distance, φ ∼ r. Therefore, the gravitational field of a domain wall with an arbitrary orientation is constant, g = ∓2πG(1 + 3w)σ n. (III.9) Here, n is the unit vector normal to the plane parametrizing the domain wall and the sign ensures that the field always points towards (or, as we will see momentarily, even away from) the wall⁸. That is, the gravitational field of a domain wall points in the direction normal to it and, in particular, is independent of the distance to the wall. Similar to cosmic strings, a network of domain walls can have different equations of state, depending on its dynamics. For instance, the equation of state of a static domain wall is given by w = −2/3, such that its energy density dilutes as ρ_DW ∼ a⁻¹, while a domain wall moving at a velocity β = v/c obeys an equation of state of w = β² − 2/3 [112]. Intriguingly, according to (III.9), this implies that the gravitational field of a static domain wall is repulsive rather than attractive [110,111]. Nevertheless, the complicated dynamics of an interacting network of domain walls may lead to a very different equation of state. Therefore, it can also serve as an exotic candidate for dark matter or dark energy [82–84] with a corresponding equation of state. In the following, we want to derive the signal due to the gravitational pull (or push, in the case of a repulsive potential) of a domain wall travelling through the LISA interferometer.

Signal power spectrum

The gravitational field of a domain wall is independent of the distance to it. In contrast to the detection of dark matter clumps or cosmic strings, this means that all three nodes of the interferometer will experience the same acceleration due to the presence of a domain wall. Therefore, in order to obtain a differential acceleration and hence a signal in the interferometer, the domain wall has to traverse the space between the different satellites. That is, it has to completely separate one spacecraft from the other two, thereby accelerating them into opposite directions. The velocity perturbation that each node picks up due to the constant acceleration in the gravitational field of the domain wall reads v(t) = ∓2πG(1 + 3w)σ t Θ(t) n, (III.10) where n denotes the normal vector of the domain wall and the different signs follow the conventions of (III.9). Here, for simplicity, we assume that the wall starts traversing the interferometer at a time t₀ = 0. Hence, the Θ-function implements the fact that, due to the equal acceleration of the three nodes, we do not expect a signal if the domain wall does not separate the individual nodes from each other. Similarly, we will assume that the signal ceases to exist when the domain wall has traversed the detector volume completely. In other words, for simplicity, we consider a signal induced by a domain wall that starts travelling through the detector by passing the first node, then passes the second, which thereafter gets accelerated in the opposite direction, and finally traverses the third node, after which the signal ceases to exist. That is, strictly speaking, we view the signal as caused by the acceleration burst instead of the individual velocity perturbations.
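To get a feeling for the size of the effect, the kick in (III.10) can be evaluated for a wall crossing the detector. The sketch below is our own rough estimate; in particular, the surface tension σ = 10⁻⁸ kg/m² is an illustrative choice of ours, since the tensions used in the figures are not quoted numerically in the text:

```python
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
L = 2.5e9       # LISA arm length [m]
v = 3.0e5       # wall velocity, v/c = 1e-3 [m/s]
w = 0.0         # equation of state of a dark-matter-like wall network
sigma = 1e-8    # illustrative surface tension [kg/m^2] (our assumption)

t_cross = L / v                                        # time to cross one arm [s]
dv = 2 * math.pi * G * (1 + 3 * w) * sigma * t_cross   # velocity kick, eq. (III.10)

print(f"crossing time ~ {t_cross:.0f} s")
print(f"velocity kick ~ {dv:.1e} m/s, displacement ~ {dv * t_cross:.1e} m")
```

Even for this tiny kick, the resulting sub-nanometre relative displacement is within a few orders of magnitude of LISA's picometre-level sensitivity, which is why the domain wall case turns out comparatively favourable below.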
When considering the individual velocity perturbations instead, the signal will obviously not cease to exist after the domain wall has passed the last satellite, as there is still a differential velocity shift between all three nodes. In principle, this will, in addition, lead to a persistent deformation of the LISA triangle. In general, the situation is even more complicated when considering a complex network of interacting domain walls which traverse the detector volume at different times and in different directions. A correct and thorough treatment would require an involved numerical simulation of this scenario. Here, we will not consider this layer of complexity. Due to the constant gravitational field, the signal that a domain wall will induce in the interferometer when traversing the detector volume involves each of its three nodes. Therefore, the detector response has to be parametrized by the exact response function given in (A.1), where the v_i are now given by (III.10). [Fig. 5 caption: In contrast to the case of clumps and strings, the lower low-frequency cutoff is chosen at a smaller frequency, as our calculation for the domain walls has no close-approach limitation in the present case. Here, we assume that the domain wall is moving at a velocity of v/c = 10⁻³. If the domain wall network is to cover the dark matter component of the Universe, the signal events shown here are quite infrequent. On average, we expect them every 1.5 (blue), 150 (yellow) and, in the extreme case, 15,000 (green) years.] The terms of the detector response function can be evaluated explicitly, e.g., in a reference frame where each node is located on a coordinate axis, r_i = (L/√2) e_i, and we parametrize the domain wall by the unit vector n = (sin θ cos φ, sin θ sin φ, cos θ). We can then proceed by considering the Fourier transform of the response function, i.e. of the velocity perturbations, and uniformly averaging over the angles (θ, φ) in order to obtain the angular-averaged signal power spectrum. For a detailed discussion of the geometrical aspects of this, see Appendix A 3. We also note that, similar to our discussion of localized dark matter clumps, a uniform average is only an approximation. We discuss its validity and further details in Appendix A 4. In Fig. 5 we illustrate the angular-averaged signal power spectral density due to the differential gravitational acceleration by a domain wall traversing the interferometer and compare it to the experimental sensitivity of LISA. In particular, we consider domain walls of different surface tensions and fix their typical velocity to v/c = 10⁻³. We find that, compared to the case of dark matter clumps or cosmic strings (cf. Fig. 1 and Fig. 4), the signal caused by a domain wall is significantly enhanced in the high frequency regime of LISA's characteristic frequency range. One factor contributing to this is that we assumed an infinitely thin domain wall and point-like nodes of the experiment. In practice, both have a finite thickness that should lead to a faster drop-off at large frequencies. However, this does not play any significant role in our determination of the sensitivity. Furthermore, we can estimate the average rate of gravitational interactions with the LISA detector that we expect for a given network of domain walls. The energy density that is stored in a network is essentially determined by the surface tension of the walls σ and the characteristic distance d between them, ρ_DW ∼ σ/d.
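The node placement just described, r_i = (L/√2) e_i with wall normal n, directly determines the order and times at which the wall crosses the three satellites. A small sketch of this geometry (only the node placement and the t = 0 convention at the origin are taken from the text; the example orientation is arbitrary):

```python
import numpy as np

L = 2.5e9                      # LISA arm length [m]
v = 3.0e5                      # wall speed along its normal [m/s]
theta, phi = 1.0, 0.5          # example orientation of the wall normal [rad]
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

r = (L / np.sqrt(2)) * np.eye(3)   # node i sits on coordinate axis i
t_cross = (r @ n) / v              # crossing time of each node (t = 0 at origin)

order = np.argsort(t_cross) + 1
print("nodes are crossed in the order:", order)
print("crossing times [s]:", np.round(np.sort(t_cross), 1))
# Between the first and the last crossing the wall separates the nodes,
# accelerating crossed and not-yet-crossed satellites in opposite directions.
```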
In order to estimate the rate at which we expect domain walls to approach the detector, we can consider the simplified situation where the domain walls of the network are all parallel to each other⁹. The overall rate is then simply given by the inverse of the time interval between two consecutive walls travelling through the detector volume. That is, if the domain walls move at a certain velocity v, the rate at which we expect them to approach the detector can be roughly estimated by η̇ ∼ v/d. Note that, in line with the angular-averaged signal power spectral density shown in Fig. 5, η̇ is merely an estimate of the average rate at which a domain wall is passing through the detector and is therefore only an approximation. In particular, it does not take into account the shape of the signal caused by a specific domain wall, which, for instance, can crucially depend on the relative orientation between the domain wall and the detector (this is similar to our discussion of dark matter clumps in Section II). As an extreme example, a domain wall that is exactly parallel to the interferometer plane would not leave any experimental imprint in the detector. While this does not make any significant difference for the results we show in this section, we present results using a somewhat improved approach in Fig. 10 of Section VI. There we account for this additional angular dependence. In particular, a thorough treatment requires weighting the signal events according to their relative orientation with respect to the detector. As a simple schematic example, this can be written as in (III.12) (see also Appendix A 3), where s is the required signal-to-noise ratio. That is, naively, only signal events from configurations which lead to a detectable signal at the interferometer are taken into account. Nevertheless, this does not qualitatively alter the results presented in this section. Fig. 6 shows the average gravitational interaction rate of a domain wall network as a function of the signal-to-noise ratio. In the top panel, we consider the case of a static domain wall network, while in the bottom panel, we consider an interacting network of domain walls that constitutes the dark matter or dark energy component of the Universe. Similar to Fig. 2, the bars illustrate an alternative choice of the low-frequency cutoff for the noise power spectrum of the detector given in (II.9). However, as our estimate of the signal induced by a domain wall does not rely on the close-approach approximation, the cutoff can be shifted towards even lower frequencies. For both setups, we observe that with increasing signal-to-noise ratio the event rate decreases according to a linear power law. This is because, here, we consider the angular-averaged signal-to-noise ratio, SNR ∝ σ, as well as the interaction rate, η̇ ∝ σ⁻¹, both of which are solely determined by the domain wall surface tension σ. That is, each data point shown in the figure corresponds to a specific value of the latter. In general, as expected, the overall discovery potential for static and interacting domain walls is comparable, as long as the energy density is fixed to that of dark matter. Intriguingly, it is also somewhat better than in the case of dark matter clumps or cosmic strings. For instance, we expect a signal due to a domain wall traversing the interferometer with a signal-to-noise ratio of about ten every 10 to 100 years on average. Taking into account potential improvements due to a more optimistic noise spectrum, this rate might be even higher.
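The weighting idea behind (III.12) can be illustrated with a short Monte Carlo: draw wall orientations uniformly and keep only those whose signal-to-noise ratio exceeds the threshold s. The SNR model below, which vanishes for walls parallel to the detector plane, is a toy stand-in of our own and not the paper's full angular spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 10.0               # required signal-to-noise ratio
snr_peak = 30.0        # assumed peak SNR for an optimally oriented wall

# Uniform orientations of the wall normal on the sphere
cos_theta = rng.uniform(-1.0, 1.0, 100_000)

# Toy model: SNR proportional to the projection onto the detector normal,
# so a wall parallel to the detector plane (cos_theta ~ 0) gives no signal
snr = snr_peak * np.abs(cos_theta)

efficiency = np.mean(snr > s)       # detectable fraction of crossings
print(f"detectable fraction ~ {efficiency:.2f}")
# The effective rate in the spirit of (III.12) is efficiency * eta_dot.
```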
With such an improved noise spectrum, the same signal may even be expected almost every 1 to 10 years on average. Therefore, in the future, such a domain wall network could certainly be within experimental reach of LISA. Let us close this discussion with a few words of caution. The estimates presented in this section are subject to some simplifications we have made in our derivation. Our results are based on the assumption that the domain wall can be parametrized by an infinite plane of vanishing thickness. This assumption, however, might not always be fully justified in particle physics models that admit domain wall solutions. Moreover, and perhaps more importantly, we have only considered the situation of a domain wall inducing a signal that ceases to exist once the wall has traversed the entire detector volume. In principle, one would also have to include the remnant velocities of the nodes relative to each other once the domain wall has passed, which would most likely modify the signal power spectrum at low frequencies. In addition, this would also change the triangular detector geometry persistently. Nevertheless, our relatively conservative estimates for gravitational interactions of domain wall networks with the LISA interferometer provide hope for a future discovery potential.

IV. STOCHASTIC FLUCTUATIONS OF THE DARK MATTER DENSITY

In addition to strongly localized energy densities, such as dark matter clumps or topological defects travelling through the Universe, the gravitational potential in the vicinity of the interferometer can also be perturbed by stochastic fluctuations of the dark matter density, as already discussed in [20] for the example of pulsar timing arrays. To linear order, these gravitational perturbations satisfy ∇²δφ = 4πGρ̄δ, where δφ denotes the perturbation of the gravitational potential, while δ = δρ/ρ̄ is the relative fluctuation of the mean density ρ̄. In principle, (time dependent) perturbations of the gravitational potential are precisely what gravitational wave experiments try to measure. In this section, we aim to estimate the sensitivity of LISA to these perturbations caused by fluctuations of the dark matter density. For simplicity, we consider a simple density fluctuation, oscillating in space, that travels through the interferometer. That is, after decomposing δ in Fourier space, the gravitational response to a Fourier mode with wave vector k is given by δφ_k = −4πGρ̄δ_k/k², where k = |k|. [Fig. 7 caption: Here, we have fixed the normalization of the dark matter power spectrum to P⋆ = 1. We furthermore assume a velocity of v ≈ 220 km/s at which the detector is moving through the density wave, while for the dark matter density we use the same value as in Fig. 1.] We can now determine the detector response to a specific Fourier mode of the density fluctuation by considering the contribution to the gravitational perturbation of each mode separately, δg(x) = ∫d³k δg_k(x), where δg_k denotes the perturbation of the gravitational field associated to a mode with wave vector k. Naively, this corresponds to a situation where we select a plane wave of a specific wavelength and "freeze" it, as there is no dynamical wave evolution. Instead, we obtain a detector response to this density wave if the interferometer moves through the perturbation of the gravitational field. To some extent, this setup is similar to a network of domain walls travelling through the interferometer, discussed in Section III B. Assuming a detector node is moving through the density perturbation of wave vector k at a constant velocity v, i.e.
x(t) = vt + x₀, the frequency spectrum of the associated gravitational acceleration is given in (IV.4). Note that the rightmost term in that expression denotes the δ-distribution, not to be confused with the density fluctuation. It reflects the fact that, if the detector moves through a plane wave at a constant velocity, only a single frequency contributes, which depends on the angle between the wave vector of the density perturbation and the motion of the detector. The above gravitational acceleration will, in turn, lead to a velocity perturbation of each node. Similar to the discussion of domain walls in Section III B, we can plug each velocity perturbation into the detector response function, X(t), to obtain the response of the interferometer to a given Fourier mode of a density perturbation, X_k(ω). Furthermore, in order to determine the detector response to a superposition of density fluctuations, we can then sum over all Fourier modes. The absolute square of this finally yields the signal power spectrum associated to the linear superposition of density perturbations. Here, similar to the previous sections, the n_i denote the unit vectors pointing between two nodes, labelled by the opposite side of the triangle, and r_i is the initial position of each node. The coefficients c_ij(ω) encode the linear combination of the velocity perturbations of each interferometer node in the detector response (A.1). Therefore, they also include phases of the form exp(iωL/c) due to retardation effects. That is, naively, the signal response of the detector to a superposition of density fluctuations with different wavelengths is weighted according to the dark matter density power spectrum. For the latter we use the conventional definition (see, e.g., [118]). As a simple example of the density power spectrum, we consider a broken power law (IV.7), where P⋆ is a normalization constant and n is the spectral index. (In particle physics models of very light dark matter, such a spectrum and the wavelengths of interest to us may be achieved quite naturally, see, e.g., [67,68,119].) Obviously, P(k) exhibits a peak P⋆ at a characteristic scale k⋆, which we treat as a free parameter. Indeed, for the purpose of this work, we assume that it obtains its maximum within the characteristic frequency band of LISA, at ω⋆. The corresponding wavelength is then of the order λ⋆ ∼ 2πv/ω⋆. Again we average over angles (cf. Appendices A 3 and A 4). We illustrate the angular-averaged signal power spectral density associated to a dark matter density fluctuation traversing the interferometer in Fig. 7 and compare it to the overall experimental sensitivity of LISA. The signal power spectrum shown is normalized to unity, i.e. P⋆ = 1. We also show different choices of the spectral index n as well as different peak positions k⋆ of the dark matter power spectrum. For simplicity, we have fixed the velocity at which the detector is moving through the density wave to v ≈ 220 km/s. For density fluctuations of order one we find that, even for optimistic choices of the peak, the overall signal power spectrum is suppressed by many orders of magnitude compared to the noise. As the signal scales linearly with the power spectrum of the fluctuations, P⋆, we can see that enormous fluctuations are needed for a signal to be detectable. Indeed, this result is, to some extent, expected from our discussion of localized dark matter clumps. Naively, clumps correspond to a fluctuation of significant overdensity compared to the background.
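The broken power law (IV.7) itself is not displayed above. A form consistent with its description, a peak P⋆ at k⋆ controlled by a single spectral index n, is the symmetric ansatz below; this explicit shape is our assumption, not the paper's equation:

```python
import numpy as np

def density_power_spectrum(k, k_star, P_star=1.0, n=3.0):
    """Broken power law with peak P_star at k_star (assumed form of (IV.7))."""
    x = np.asarray(k, dtype=float) / k_star
    return P_star * np.where(x < 1.0, x**n, x**(-n))

k = np.logspace(-2, 2, 5)            # wavenumbers around the peak
print(density_power_spectrum(k, k_star=1.0))
```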
However, as we have shown in Section II, such clumps are scarcely within experimental reach of the LISA detector. In summary, we therefore do not expect any detectable experimental signature of stochastic fluctuations of the dark matter density at LISA, unless the overdensities are very large (at which point they may actually resemble more the localized structures we have already discussed).

V. EXTRAPOLATION TO OTHER GRAVITATIONAL WAVE EXPERIMENTS

The general strategy we have presented in the previous sections is not limited to the LISA interferometer, but in principle applies to any experimental apparatus that is sensitive to gravitational perturbations. In this section, we want to extrapolate our results to other gravitational wave experiments. In particular, we aim to obtain the detectable rate corresponding to the gravitational pulls of dark matter clumps and topological defects in ground-based interferometers, for example LIGO [120,121], and pulsar timing arrays, for instance utilizing SKA [14]. Due to their different characteristic sizes compared to LISA, these detectors are sensitive to gravitational interactions in other frequency regimes and therefore, ultimately, to other energy densities of dark matter objects.

A. LIGO

With its pioneering measurements, LIGO has strongly advanced the field of gravitational wave astronomy [1,2]. For the purpose of this work, we will treat it as a classical Michelson interferometer, which aims to measure phase shifts between laser beams sent across two orthogonal arms. Since the characteristic length of both arms is L ≈ 4 km, it is sensitive to gravitational perturbations at frequencies of approximately 1 Hz to 1000 Hz [120,121]. In general, while the basic idea of our survey for LISA also applies to LIGO, there are a few differences due to the detector geometry. Most importantly, LIGO is a ground-based interferometer. Naively, its detector nodes are freely hanging mirrors suspended from a static laboratory frame, in stark contrast to the freely moving satellites of LISA. Therefore, while the detector response at LISA was determined by relative velocity shifts between the satellites, at LIGO we can only expect a signal due to sufficiently short gravitational acceleration bursts on the mirrors. Consequently, as a simplified detector response to a gravitational perturbation, we consider the differential acceleration between the four mirrors [94], with x and y denoting the arms of the interferometer. In addition, being a ground-based interferometer, LIGO is subject to different sources of background noise, such as seismic motion, for example. To take the different background noise into account, we model the noise power spectrum according to the design sensitivity curve of advanced LIGO [121]. Since we parametrize the detector response X(t) in terms of differential accelerations, we then convert the strain to an acceleration noise by multiplying with (ω²L/2)² (see, e.g., [95]). With these definitions, we can repeat the analysis of the previous sections. We summarize our results in Fig. 8, where we show the average gravitational interaction rate of localized dark matter clumps as well as string and domain wall networks with LIGO as a function of the signal-to-noise ratio. Note that here, due to the small characteristic size of the detector as compared to LISA, we cannot use the close-approach approximation of the detector response, D ≪ L, in every region of the parameter space we are interested in.
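The strain-to-acceleration conversion described above is a simple multiplication in the frequency domain. The sketch below shows the bookkeeping only; the strain PSD is a crude placeholder of ours and would have to be replaced by the published advanced LIGO design curve:

```python
import numpy as np

L_LIGO = 4.0e3   # LIGO arm length [m]

def strain_psd(f):
    """Placeholder for the advanced LIGO design strain PSD [1/Hz]."""
    return 1e-47 * (1.0 + (30.0 / f)**10 + (f / 2000.0)**4)

def acceleration_psd(f):
    """Convert strain noise to acceleration noise via (omega^2 L / 2)^2."""
    omega = 2.0 * np.pi * np.asarray(f, dtype=float)
    return strain_psd(f) * (omega**2 * L_LIGO / 2.0)**2

print(acceleration_psd(np.logspace(1, 3, 5)))
```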
For simplicity, and in order to also take the regime D ≫ L into account, we extrapolate our results by multiplying the gravitational acceleration by a factor L/D in this limit (see (II.6) for a qualitative explanation). In general, for localized dark matter clumps, we observe a behaviour qualitatively similar to the LISA analysis shown in Fig. 2. As one may expect, we find that LIGO with its much smaller size is sensitive to clumps with comparatively low masses. We find that dark matter clumps of masses of 10³ kg could be observed every 2,000 years on average. More optimistically, given that these estimates crucially depend on the noise spectrum as well as the frequency cut-offs used to determine the signal-to-noise ratio (in addition to the other approximations we employed), these values could also be a factor of 10 or even higher. [Fig. 8 caption (fragment): Here the color indicates the equation of state of the cosmic string network. We furthermore assume strings of tension Gµ/c² ≈ 10⁻²⁸. (c) As in (b) but for a domain wall network travelling at a fixed velocity, v/c = 10⁻³ (as noted in Fig. 6, the combination of interaction rate, density and velocity fixes the domain wall tension). All remaining parameters for dark matter and dark energy, as well as for the velocity distribution of clumps and strings, are chosen as in Fig. 4.] Bearing this in mind, our results are in rough agreement with [24]. An overview of the expected signal event rate for different masses of the clumps is shown in Fig. 10. We obtain similar results for networks of topological defects. As an example, we show a cosmic string network with tension Gµ/c² ≈ 10⁻²⁸, for which we find that LIGO could observe a corresponding signal with a signal-to-noise ratio above one approximately every 20,000 years on average. Note that these values depend on the string tension and velocity distribution we assume. The signals induced by a domain wall network traversing the interferometer are similar to the ones we obtained at LISA. Quantitatively, we expect a possible signal with a signal-to-noise ratio above one every 1,000 to 10,000 years on average. In summary, LIGO can measure sufficiently short gravitational acceleration bursts caused by localized clumps of dark matter or networks of cosmic strings and domain walls. Due to LIGO's smaller size and correspondingly different frequency range compared to LISA, it is sensitive to objects of typically smaller masses as well as smaller string and domain wall tensions. Indeed, with the same analysis strategy, further experimental improvements are required to lift the discovery potential of LIGO for these dark matter structures to a level that allows for a detection in a reasonable time frame.

B. Pulsar timing arrays

Another type of experiment aiming towards a measurement of gravitational waves is the pulsar timing array (PTA). PTAs exploit the fact that the times of arrival of radiation bursts from pulsars across the Universe can be precisely predicted. Gravitational waves that traverse the space between a certain set of pulsars and the observer on Earth will perturb the correlations between the different times of arrival, thereby generating a signal in the corresponding detector. Due to the long lines of sight between the pulsars and the telescopes, PTAs are able to probe gravitational wave spectra in very low frequency regimes, typically of the order 10⁻⁹ Hz to 10⁻⁶ Hz (see, e.g., [11–14, 122, 123]).
In this section we want to explore the discovery potential of PTAs with respect to the astrophysical structures discussed in the previous sections, i.e. dark matter clumps (note the previous works [17,18,20,21,23,26–28]) as well as networks of cosmic strings and domain walls. For simplicity, we will do so by viewing a PTA as a gravitational wave interferometer of the same type as common ground- or space-based interferometers, but at very large scales, i.e. as an interferometer with arm lengths of several thousand light-years. As a prototypical example, we will consider a replica of LISA, that is, an interferometer made up of three nodes at equal distance to each other, but with an arm length of L ≈ 1000 ly. However, the sensitivity of PTAs is usually based on the observation of multiple pulsars. To use this, the signal must exist in all Earth-pulsar combinations. For our search strategy this requires that the dark matter structure affects the velocity of the Earth. Compared to LISA, where all nodes can be treated equally, this reduces the detectable rate by a factor of 3. Although this is a crude simplification of the experimental techniques used for gravitational wave measurements with PTAs, we still expect reasonable estimates of prospective signal strengths for the purpose of this work. In particular, in terms of average signal event rates, we expect to benefit from the largely increased detector volume. As we are essentially considering a LISA experiment at large scales, our analysis strategy is similar to the one presented in Sections II and III. The only major difference is the sensitivity of the experiments in different frequency regimes. We implement this by modelling the noise power spectrum according to the typical sensitivity curve of a PTA. As a prototype, we use the noise power spectrum of a PTA utilizing the future Square Kilometer Array (SKA) [14] as shown in [99], and extrapolate it to high frequencies. Here, similar to (II.9), we again introduce a transfer function to adapt the raw strain spectrum to our signal spectrum. Furthermore, we add an additional factor of (2π)⁻¹ in order to match the Fourier conventions. Due to the limited observation time of SKA, we introduce a cut-off for frequencies below f_c ≈ 1.5 × 10⁻⁹ Hz in the computation of the corresponding signal-to-noise ratio. A summary of our results is given in Fig. 9, where we show the average gravitational interaction rate of localized dark matter clumps as well as string and domain wall networks as a function of the signal-to-noise ratio at SKA. Note that here, similar to the analysis of LISA, we consider the close-approach regime for the localized dark matter clumps and cosmic strings. That is, we only consider events with impact parameters smaller than the size of the detector, D ≪ L. Because of the large size of the experiment, this condition is fulfilled in essentially any region of the parameter space we are interested in. For localized clumps of dark matter, we find that we benefit from the increased detector volume, allowing for very massive clumps while still retaining decent interaction rates. As expected, the qualitative behaviour we observe is similar to the detection of dark matter clumps with LISA shown in Fig. 2. Due to the increased detector volume, we expect a discovery potential for signals induced by dark matter clumps with masses of M_DM ≈ 10⁻⁹ M_⊙ at a signal-to-noise ratio of SNR ∼ 1 every 300 years on average.
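The role of the observation-time cutoff f_c in the signal-to-noise computation can be made explicit. The quadratic, matched-filter-style definition below is our assumption about the SNR convention used; the toy spectra merely illustrate the mechanics:

```python
import numpy as np

f_c = 1.5e-9   # low-frequency cutoff [Hz] set by the SKA observation time

def snr(f, signal_psd, noise_psd, f_cut=f_c):
    """Integrate the signal-to-noise density above the cutoff (assumed convention)."""
    mask = f >= f_cut
    return np.sqrt(np.trapz(signal_psd[mask] / noise_psd[mask], f[mask]))

f = np.logspace(-10, -6, 400)               # frequency grid [Hz]
signal = 1e-30 * (f / 1e-8) ** -4           # toy signal power spectrum
noise = 1e-26 * np.ones_like(f)             # toy (flat) noise power spectrum
print(f"SNR ~ {snr(f, signal, noise):.1e}")
```

Shifting f_cut to lower frequencies enlarges the integration domain and, for red signal spectra like those considered here, can raise the SNR substantially, which is the effect indicated by the bars in the rate figures.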
This is apparently a slightly more pessimistic estimate compared to what was found in earlier works. For example, in [18] the original estimate of a detection rate for primordial black hole dark matter with PTAs is somewhat higher. Nevertheless, bearing in mind the crude approximations we have applied in our derivation, our results are in rough agreement. We find a similar discovery potential for networks of cosmic strings as well as domain walls, constituting the dark matter or dark energy budget of the Universe, respectively. In particular, a cosmic string network featuring a larger string tension appears to benefit from the experimental setup at large scales. Here, we show an example of strings with a tension of Gµ/c² ≈ 10⁻¹⁸. For this network we expect a signal with a signal-to-noise ratio of about 1–10 every 1,000 to 10,000 years on average. For a domain wall network with the same equation of state, we find an expected signal with a signal-to-noise ratio of 10⁵ every few million years on average. We clearly do not need such a high signal-to-noise ratio. However, to preserve the validity of our analysis we require that only a single domain wall traverses the detector volume at a time. This, in turn, implies a lower bound on the surface tension of the domain wall network. As each data point shown is uniquely determined by a specific surface tension, this prevents us from going beyond the shown signal-to-noise ratios and corresponding interaction rates. Nevertheless, keeping these caveats in mind, we can extrapolate the given data. This raises the hope that we can have events every 10 to 100 years with a signal-to-noise ratio above one, SNR ≳ 1. In summary, we find that, in a scenario where the dark matter properties optimally fit the experimental sensitivity, dark matter clumps, cosmic strings and domain walls may be observed at SKA with a slightly higher rate than at LIGO, while being comparable to LISA. All categories benefit from the increased detector volume, such that PTAs are sensitive to more massive objects than smaller gravitational wave interferometers. We will give an overview of this feature in Section VI, which then also makes explicit the complementarity that exists between these different types of experiments. However, carefully note that the values we present in this section are all based on the assumption that a measurement of gravitational perturbations with a PTA is the same as with the LISA detector, but at large scales. The discovery potential of PTAs for strings and domain walls using a more accurate treatment of their experimental techniques certainly merits further investigation.

VI. CONCLUSIONS

In this paper, we have studied the potential of gravitational wave interferometers to measure gravitational perturbations caused by the presence of macroscopic dark matter objects in the vicinity of the detector. Any localized energy density passing by sufficiently close to the interferometer will exert a gravitational pull on each of its nodes, and hence cause a differential acceleration. This acceleration leads to a Doppler shift signal in the interferometer that can be measured. The objects we have considered in the present work include localized clumps of dark matter (see also [15–29]), topological defects, such as cosmic strings and domain walls, as well as stochastic fluctuations (cf. [20]) of the dark matter density.
As our baseline example, we have examined the LISA experiment, for which we have given the signal power spectrum associated to the presence of each of these sources in the vicinity of the detector. Based on this, we then looked at LIGO and a pulsar timing array using SKA, which are sensitive to complementary frequency ranges (note that our analysis strategy for LIGO and PTAs corresponds essentially to an extrapolation of LISA to different scales, so these estimates need to be taken with a bit of caution). Our results are summarized in Fig. 10. For each experiment, we show contours of the average gravitational interaction rate of events with a signal-to-noise ratio greater than one, SNR ≳ 1, as a function of the energy density of each dark matter structure. Due to their different characteristic sizes, LIGO, LISA and SKA complement each other very well, in the sense that they are sensitive to different masses and tensions of dark matter clumps and topological defects, respectively. In the most sensitive regimes of each experiment, we find a prospective signal with a signal-to-noise ratio SNR ≳ 1 roughly every 10,000 down to every 100 years on average. Note that the most striking signature is a domain wall traversing LISA, which might even be expected almost annually. However, we also note that a signal-to-noise ratio SNR ≳ 1 is in fact, at best, a minimum requirement for a signal to be detected. As the latter still needs to be distinguished from various other sources, a significantly higher signal-to-noise threshold is probably more realistic. As an example, we illustrate a detection threshold of SNR ≳ 10 by the dash-dotted lines in Fig. 10. While this is not yet on a desirable level for a near-future discovery potential, let us remark that the signal estimates we show here are based on the close-approach approximation, imposing a relatively high low-frequency cutoff. Indeed, crucially, the signal-to-noise ratio can vary significantly if a different low-frequency cutoff is taken into account (cf. Section II for a detailed discussion). This indicates that improvements of the experimental sensitivity as well as of the theoretical analysis, notably in the low-frequency region, may lead to a sizable enhancement of the detection rate. Keeping this in mind, localized clumps of dark matter as well as cosmic strings and domain walls may still be within experimental reach of LIGO, LISA and PTAs. In contrast, as clumps of dark matter naively constitute a critical overdensity of dark matter, stochastic fluctuations of the latter most likely cannot be measured above the background noise. Overall, this clearly requires more sensitive future gravitational wave experiments. In addition to the acceleration burst signals we have studied in this work, a signal could also arise due to the Shapiro effect [34], i.e. from the changing gravitational potential due to a dark matter structure within the line of sight connecting different nodes of a gravitational wave detector (see [17,20,23,26,28]). It would be interesting to investigate the corresponding signal associated to the presence of cosmic strings or domain walls, which we leave for future work. In summary, not only localized clumps of dark matter but also cosmic strings or domain walls are close to the experimental reach of gravitational wave interferometers. Current and future gravitational wave experiments, such as LIGO, LISA and PTAs, are sensitive to gravitational perturbations due to the presence of these objects in the vicinity of the detector.
These experiments are complementary to each other, as the different characteristic sizes and time-scales of the detectors make them sensitive to different parameter regions of the gravitational sources. [Fig. 10 caption (fragment): Where we used a uniform average elsewhere, here the determination of the total rate takes into account the relative orientation between the domain walls and the experiment (see (III.12)). In all panels, a dashed line illustrates a naive extrapolation of our analysis. In (a) and (b) we extrapolate outside the close-approach regime by rescaling the differential acceleration with an additional tidal factor L/D (see (II.6)). In (c) we extrapolate the signal to a regime where, on average, there is more than one domain wall traversing the detector volume at the same time.] Already moderate improvements in the detector noise and analysis may yield an interesting discovery potential for intriguingly exotic dark matter objects such as cosmic strings and domain walls.

Appendix A

The shape and strength of a signal at LISA (or any other gravitational wave interferometer) induced by a gravitational perturbation due to a massive object in the vicinity of the detector, of course, depends on the distance, velocity and orientation of the latter with respect to the experiment. In this appendix, we aim to define the relevant geometrical quantities that enter the derivation of the signal power spectrum associated to these events. In general, LISA is a space-based gravitational wave interferometer involving three distinct nodes, which are arranged in an equilateral triangle of side length L ≈ 2.5 × 10⁶ km. Following the notation of [97], we schematically illustrate the general experimental setup of LISA in Fig. 11. Each pair of nodes exchanges laser beams, such that, in principle, there are six functions, U_{1,2,3} and V_{1,2,3}, that encode Doppler shifts due to gravitational perturbations by massive objects in the vicinity of the experiment. The desired detector response to an acceleration burst is then given by a suitable linear combination of these functions. Here, we use the so-called Michelson response function (A.1) for the readout of a signal at a single detector node [96], where we have assumed that the arms of the interferometer are of equal length L. The components of the response function are given by projections of the velocity perturbations onto the interferometer arms, and cyclic permutations thereof. Here, the n_i denote the unit vectors pointing between two nodes, labelled by the opposite side of the triangle, v_i is the velocity perturbation of the i-th node induced by the gravitational pull, and c is the speed of light. We note that other response functions are also possible, see, e.g., [96,97]. From the response function (A.1) it is clear that any gravitational pull exerted by a massive object in the vicinity of the interferometer nodes has to be projected into the detector plane. For example, in the extreme case where only one satellite is accelerated perpendicular to the detector plane, e.g. n_2 · v_1 = n_3 · v_1 = 0, the object will not leave any signature in the interferometer¹². Therefore, the detector response will depend on how an object traverses the detector volume, i.e. on its orientation relative to the detector plane. In the following, we want to define the relevant geometrical quantities describing this relative orientation for the different macroscopic astrophysical objects we aim to probe with LISA.
For simplicity, we will only consider the close-approach limit, where the object passes by an interferometer node with an impact parameter smaller than the characteristic size of the detector, D ≪ L. As explained in the main text, in this case the gravitational perturbation of two of the three interferometer nodes can be neglected, such that the detector response function can be approximated by adding suitable time delays to the contribution of the dominant node alone, cf. (A.4) [31], where v_i(t) is the velocity perturbation of the i-th node, the one that dominates compared to the two others. In other words, i is the interferometer node with the smallest impact parameter with respect to the object traversing the detector volume. Hence, we do not sum over the indices in (A.4). [Footnote 12: Strictly speaking, this is not true, because the two other nodes will also experience a gravitational pull which is not perpendicular to the detector plane. Nevertheless, at large distances between the source and these nodes this effect is negligible.] [Fig. 11 caption: Schematic overview of the LISA interferometer. Here, n_i are unit vectors connecting pairs of satellites. The U_i and V_i encode the possible Doppler shifts of the laser beams that are exchanged between the detector nodes.]

Spherical clumps

Massive spherical objects, such as localized clumps of dark matter, are in a sense the most symmetric configuration when they pass by one of the LISA nodes. That is, in the close-approach limit, their lack of an internal orientation allows us to parametrize their motion relative to the detector by a velocity vector v and an impact parameter D, i.e. the closest distance in an encounter between the massive clump and a detector node. The former is completely determined by its magnitude v and an arbitrary direction given in terms of two angles, i.e. v = v (sin θ cos φ, sin θ sin φ, cos θ). Clearly, not all of these four parameters will enter the detector response. In fact, since the gravitational force between the spherical clump and the LISA satellite is only determined by their relative distance, we can consider the projection into the plane spanned by the satellite and the trajectory of the clump. This effectively removes two degrees of freedom, such that we are left with the relative velocity v and the impact parameter D. As a particular example discussed in the main text, we can choose the spherical clump to be in straight uniform motion with velocity v parallel to the y-axis at an initial distance D to the satellite. The clump is furthermore confined to the xy-plane. We illustrate this scenario in the top panel of Fig. 12. The above considerations determine the gravitational pull exerted by a massive clump on a single LISA satellite. As pointed out earlier, in order to determine the corresponding detector response, this gravitational acceleration burst has to be projected into the detector plane. The latter can be parametrized by an arbitrary unit normal vector, n_T = (sin ϑ cos ϕ, sin ϑ sin ϕ, cos ϑ). In our analysis, the orientation of the detector plane is implemented in the detector response function, which in the close-approach regime, D ≪ L, is given by (A.4). That is, we can take this orientation into account by parametrizing the "dominant" unit vector of the LISA triangle accordingly, n_i = (sin ϑ cos ϕ, sin ϑ sin ϕ, cos ϑ). Note that, strictly speaking, we are slightly abusing notation here. Obviously, the angles parametrizing the normal vector of the detector plane and the unit vector connecting two nodes of the triangle are not the same.
Nevertheless, since we will average over these angles later, we denote them by the same symbol to avoid an overload of notation. In summary, in the close-approach limit, there are four geometrical degrees of freedom in total that enter the detector response to a localized massive clump travelling through the interferometer. In particular, the clump's velocity v, the impact parameter D as well as the orientation of the detector plane (ϑ, ϕ) completely determine the signal at LISA. That is, the detector response is a function of these parameters, X(t) = X(t, v, D, ϑ, ϕ). Finally, since we assume a locally isotropic situation, i.e. the clumps can approach the interferometer from any direction equally likely, we uniformly average over the orientation of the detector plane in order to obtain the signal power spectrum from the detector response, cf. (A.5). Overall, the signal power spectrum then still depends on the velocity of the clump as well as the impact parameter of the encounter. Let us close this discussion with a few words of caution. Strictly speaking, the uniform average we have employed above is not fully justified. This is because the configuration we consider is not strictly isotropic. Instead, there is a preferred direction in the system, given by the Sun, together with the detector, moving through the Universe. In this sense, a uniform average is only an approximation. We present a more detailed discussion of this in Appendix A 4.

Cosmic strings

In contrast to spherical clumps, cosmic strings are parametrized by line elements, thereby having an additional orientation themselves. Obviously, when determining the detector response to a gravitational acceleration caused by a cosmic string, this orientation has to be taken into account. In general, an infinite string can be parametrized by a straight line, γ(s) = x₀ + s n_γ, where n_γ denotes an arbitrary unit vector. In addition, the string can move in some direction with a certain velocity v relative to the nodes of the interferometer. Again, similar to the case of spherical clumps, not all of these parameters will enter the detector response to a gravitational perturbation in the close-approach regime. In fact, the gravitational field of the string only depends on the radial distance to the source, such that we can consider the projection into the plane perpendicular to the string. As in the main text, we can choose a coordinate frame in which the infinite string is parallel to the z-axis and the satellite located at the origin is, initially at t₀ = 0, at a minimum distance D to the string. We can then assume that the string is uniformly moving in a random direction in the yz-plane with velocity v, i.e. v_y = v sin θ and v_z = v cos θ, respectively. Indeed, this reflects the fact that the string has an additional internal orientation as compared to spherical objects such as clumps. This situation is depicted in the top panel of Fig. 12. Therefore, the gravitational acceleration burst induced by a cosmic string on a single LISA node depends on three parameters, namely the relative velocity v, the impact parameter D as well as the direction of motion relative to the string orientation, parametrized by θ. Note that, equivalently, we could also choose a reference frame in which the satellite is moving uniformly and the string is at rest. Finally, as pointed out in the previous section, the overall gravitational acceleration has to be projected into the detector plane, parametrized by the angles ϑ and ϕ.
Therefore, in summary, LISA's detector response to a gravitational pull by a cosmic string (in the close-approach regime) is a function of five geometrical parameters in total, X(t) = X(t, v, D, θ, ϑ, ϕ). In an isotropic Universe, the signal power spectrum is finally given by a uniform average over all arbitrary orientations involved. In total, the signal power spectrum depends on the velocity of the string as well as the impact parameter of the encounter. However, we note that, similar to the case of dark matter clumps, a uniform average over all possible directions might not be fully justified, see Appendix A 4.

Domain walls

When determining the detector response of LISA to a gravitational potential sourced by an energy density localized on an infinite plane, i.e. a domain wall, additional degrees of freedom compared to a spherical clump or an infinite string have to be taken into account. Geometrically, the plane parametrizing a domain wall can be described by the algebraic equation n · (r − r₀) = 0, where r₀ is an arbitrary point in the plane and n denotes the unit vector normal to it. Nevertheless, as we will see momentarily, the exact signal shape caused by a domain wall involves fewer geometrical parameters than, e.g., spherical objects or cosmic strings. This is due to the fact that its gravitational field only induces a signal at LISA if the domain wall is located in between the detector nodes, thereby separating them from each other¹³. Therefore, we only have to consider a situation where the triangle spanned by the LISA satellites intersects an infinite plane. If the domain wall, or equivalently LISA, is moving at a certain velocity, this line of intersection will move, too, until it has completely passed the detector plane. We illustrate this in the bottom panel of Fig. 12. [Footnote 13: As discussed in the main text, this is because LISA only measures differential accelerations between the satellites. However, the gravitational field sourced by a domain wall does not depend on the distance, but is constant everywhere. Therefore, signals are only generated for configurations where the satellites are accelerated into opposite directions.] The only geometrical quantities that enter the detector response function X(t) in this scenario are, in fact, the orientation of the domain wall with respect to the triangle spanned by the LISA satellites as well as its relative velocity. Without loss of generality, the former can be completely parametrized by, e.g., the unit vector normal to the plane, n = (sin θ cos φ, sin θ sin φ, cos θ), while we can assume the latter to point in the normal direction, v = v n (cf. Fig. 12). Accordingly, the detector response will be a function of these parameters only, X(t) = X(t, v, θ, φ). Finally, similar to the previous sections, in a locally isotropic dark matter distribution, the signal power spectrum associated to the gravitational perturbation by a domain wall traversing the detector volume is given by the uniform average over all possible orientations relative to the detector. That means the overall signal power spectrum will only depend on the velocity of the domain wall relative to the LISA detector. However, we also note that, similar to the case of dark matter clumps, a uniform average over all possible directions might not be fully justified, as we will discuss in the following subsection.
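The uniform angular averages appearing throughout this appendix (e.g. the average over the detector-plane orientation in (A.5)) can be carried out by Monte Carlo. The sketch below draws orientations uniformly on the sphere and averages the squared close-approach projection; the fixed vector v is our stand-in for an actual velocity-perturbation history:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Uniform points on the sphere: cos(theta) uniform in [-1, 1], phi in [0, 2*pi)
cos_t = rng.uniform(-1.0, 1.0, N)
phi = rng.uniform(0.0, 2.0 * np.pi, N)
sin_t = np.sqrt(1.0 - cos_t**2)
n1 = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

# Toy velocity perturbation of the closest node (arbitrary units)
v = np.array([0.0, 1.0, 0.0])

# Close-approach response ~ projection onto the "dominant" arm vector, cf. (A.4)
X = n1 @ v
print(f"<|X|^2> over orientations ~ {np.mean(X**2):.3f}  (analytically 1/3)")
```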
Velocity distribution of dark matter

In the previous subsections we have illustrated how our estimate of the signal power spectrum accounts for the relative orientation between the detector plane of LISA and the source of the gravitational perturbation. In particular, we have assumed that the dark matter can approach the interferometer from any direction equally likely and have hence uniformly averaged over the solid angle which parametrizes the latter (see, e.g., (A.5)). Naively, this partly follows from the Maxwell-Boltzmann distributed velocities of the dark matter structures. However, strictly speaking, this is not fully justified, for the following reason. In a naive approximation, it is reasonable to assume that the dark matter inside the halo surrounding our Galaxy behaves like an ideal gas of non-interacting particles and therefore roughly follows a Maxwell-Boltzmann distribution (A.8) (see, e.g., [124]), for some normalization v₀, which, from a microscopic perspective, is determined by the dark matter mass and the temperature of the gas. The Maxwell-Boltzmann distribution (A.8) of the dark matter inside the halo of our Galaxy yields an isotropic uniform distribution for the direction in which the dark matter structures are moving. That is, dark matter can approach the experiment from any direction equally likely. In the previous sections, this feature is taken into account by a uniform average over the angles parametrizing the relative orientation between the detector plane of LISA and the trajectory of the dark matter structure (see, e.g., (A.5)). Obviously, this is true in an isotropic reference frame where an observer is at rest inside the dark matter halo of the Galaxy. However, in practice, the Sun, together with the detector, is moving through the halo at a constant velocity of v ≈ 220 km/s [102], thereby imposing a preferred direction on the system. That means that not every direction in an encounter occurs equally likely, such that the average over these directions should not be uniform. Instead, to correctly account for this, one would need to weight the velocity in each direction according to the normal distribution (A.8) with the appropriate velocity shift by v ≈ 220 km/s. At first glance the situation looks even worse, because, in addition, LISA is moving on a complicated orbit around the Sun (see, e.g., Fig. 4 in [98]). However, this composite motion of the detector might turn out to be a blessing in disguise [31], as it does not impose a single preferred direction but (at least naively) periodically changes the latter. Hence, in order to account for the relative orientation between the experiment and the dark matter trajectory, a uniform average over the orientation might indeed be closer to the experimental scenario than singling out only one preferred direction [31]. In practice, as an approximation, we therefore take a uniform average over the solid angle accounting for the direction (see, e.g., (A.5)) and weight the detector response according to the correspondingly shifted probability distribution. Here, we approximate the motion of the detector through the dark matter halo with v ≈ 220 km/s [102] and finally normalize to the rms velocity of the dark matter, v₀ = v_rms/√3 with v_rms ≈ 270 km/s (see, e.g., [103]). Let us remark that, for the purpose of this work, we do not expect any large quantitative changes if a more accurate estimation of the dark matter velocity distribution with respect to the detector were performed.
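The weighting procedure just described can equivalently be implemented by sampling. The sketch below draws halo velocities from an isotropic Maxwell-Boltzmann distribution with v₀ = v_rms/√3 and boosts them by the detector velocity; this is our own sampling-based rendering of the described procedure, and the choice of the z-axis for the boost is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
v_rms = 270.0e3                # dark matter rms speed [m/s]
v0 = v_rms / np.sqrt(3.0)      # per-component velocity dispersion
v_det = 220.0e3                # detector speed through the halo [m/s]

# Isotropic Maxwell-Boltzmann velocities in the halo rest frame ...
v_halo = rng.normal(0.0, v0, size=(100_000, 3))
# ... boosted into the frame of the detector moving along z
v_rel = v_halo - np.array([0.0, 0.0, v_det])

speeds = np.linalg.norm(v_rel, axis=1)
print(f"mean relative speed ~ {speeds.mean() / 1e3:.0f} km/s")
# These relative speeds are what the angular-averaged detector
# response is weighted with in practice.
```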
Smart Buildings Enabled by 6G Communications

Smart building (SB), a promising solution to fast-paced and continuous urbanization around the world, includes the integration of a wide range of systems and services and involves the construction of multiple layers. SB is capable of sensing, acquiring, and processing a very large amount of data as well as performing appropriate actions and adaptation. Rapid increases in the number of connected nodes and thereby the data transmission demand of SB have led to conventional transmission and processing techniques becoming insufficient to provide satisfactory services. In order to enhance the intelligence of SBs and achieve efficient monitoring and control, sixth generation (6G) communication technologies, particularly indoor visible light communications and machine learning, need to be incorporated in SBs. Herein, we envision a novel SB framework featuring a reliable data transmission network, powerful data processing, and reasoning abilities, all of which are enabled by 6G communications. Primary simulation results support the promising visions of the proposed SB framework.

Introduction

Urbanization has sharply accelerated in recent decades, and the United Nations Population Fund (UNFPA) has forecast that around 60 percent of the global population will live in urban areas by 2030 [1]. Feasible solutions to settle such a large number of people are being sought in order to provide sustainable and high-quality standards of life and efficient resource management in urban areas. Among a number of potential solutions, smart building (SB) has many advantages. SB is a high-profile concept belonging to the category of smart cities, and has attracted researchers' attention with advances in artificial intelligence (AI) and the Internet of Things (IoT) [2]. It integrates a wide range of systems and services into a unified platform. SBs are able to perceive the environment, acquire and process relevant data, as well as respond to changes of the environment and/or users' needs with a high degree of intelligence and autonomy [3]. The aforementioned abilities allow SB to provide various intelligent indoor services for residents (e.g., tracking, navigating, positioning, and downloading). Moreover, SB can also monitor and control the global operating status. To achieve such complex functionality, the framework of an SB must be constructed over a multi-layer structure consisting of the sensing layer, network layer, semantic layer, software layer, processing layer, reasoning layer, and service layer. Note that herein we intend not to include an interactive interface for user interaction in the multi-layer structure, since this is an independent functional module and can, to some extent, be regarded as a part of the external environment. The multi-layer structure of an SB with an interactive interface is illustrated in Fig. 1. In order to fully exploit the advantages of the SB and provide satisfactory services to its residents, reliable connectivity and an efficient information processing infrastructure for data transmission and distributed processing are indispensable. Consequently, high reliability and intelligence are crucial technical barriers hindering the practical use of SBs [1]. To overcome these barriers, we here propose a novel framework for SBs enabled by sixth generation (6G) communication technologies, harnessing indoor visible light communications (VLC) and machine learning (ML) [4].
An indoor VLC module is implemented for reliable and massive data transmission in order to ensure that the raw data collected by distributed sensors is received and used effectively throughout the entire SB framework. As a by-product, indoor VLC can also satisfy the communication demands of residents living in the SB as a supplement to radio frequency communications (RFC). ML is mainly employed to enhance the intelligence of the SB and enable real-time smart control. The framework presented herein is validated by simulation results and found to be a feasible solution by which the two main barriers currently handicapping the development of SBs may be overcome. Motivations In order to provide SBs with high reliability and intelligence, we aim to equip SBs with two key technologies of 6G communications, indoor VLC and ML, and propose a complete framework with details of all key components. As shown in Fig. 1, the framework of the SB can be split into seven functional layers and an interactive interface, in which the sensing and network layers are partially supported by indoor VLC in combination with other communication approaches, and the semantic, software, processing, and reasoning layers are strongly associated with ML techniques. The service layer and interactive interface are also directly affected by ML techniques, which produce and display the final outputs using a complex reasoning procedure based on ML algorithms. Within such a framework, the SB, ML, and indoor VLC are intricately interconnected and form a dynamic and holistic system. The benefits and motivations of the proposed framework are detailed as follows. Since the most important feature of VLC is the availability of large and unregulated bandwidth, indoor VLC is a promising approach to handle the massive data transmission relevant to SBs in the 6G era, where there are a huge number of sensors for data collection [5]. Because the security of SBs takes priority over other design metrics, indoor VLC is able to offer secure and reliable connections against jamming, eavesdropping, and other cyber attacks via the construction of a physically isolated network. Aside from reasons of security, the reduction of energy consumption is also a key metric for SBs, and, since all data transmissions are piggybacked onto illumination, indoor VLC can help reduce the energy consumption required for data transmission. As a by-product, indoor VLC can also help to offload the cellular and household communication demands of residents and improve the quality of service (QoS) when coexisting with RFC. Table 1 provides a comprehensive qualitative comparison between RFC and VLC. Additionally, in the context of 6G, high intelligence is a key feature of SBs. This indicates that an adaptive mechanism needs to be implemented such that SBs can learn from collected data and improve over time with a high degree of autonomy. Due to real-time control requirements and the vast volume of data collected in SBs, traditional processing techniques are no longer adequate, and ML stands out as having uniquely advantageous capabilities to deal with big data in SBs [7]. ML is also computationally efficient and thus suitable for volatile environments; it is able to extract useful information from massive observed data to make decisions and tune parameters in an iterative manner.
Moreover, ML is able to conduct pattern recognition, prediction, and resource allocation by utilizing historical data, which are necessary for extracting contextual information and providing proactive actions when considering long-term objectives. In the semantic layer, ML can interpret users' demands and allow demands to be input via voice or other customized interactive approaches through pattern recognition. ML is expected to be used throughout the software, processing, and reasoning layers as a kernel from which to construct a complete adaptive mechanism and thus improve the services provided by SBs according to established objectives. In the service layer, ML supports a large number of auxiliary functions (e.g., energy saving, space planning, resource coordination, indoor navigation, positioning, and smart alerting). Although indoor VLC and ML enable SBs in the 6G era, SBs also reciprocally enable the success of indoor VLC and ML. The physical properties of VLC are suited to indoor scenarios, and SBs provide such an application scenario. Furthermore, SBs provide reliable and sufficient transmission power for VLC. In the case of ML, SBs offer sufficient power for computing and provide a large volume of storage space and processors to carry out rapid big data analytics. We depict the interdependent relation among the SB, indoor VLC, and ML in Fig. 2. Proposed Framework In this section, we present details of the multi-layer framework for SBs in the context of 6G together with its interactive interface. The functional layers of the proposed framework integrate state-of-the-art sensing, communications, networking, and processing techniques. Interactive Interface The interactive interface is designed to enable interactions between human users and the intelligent systems embedded within SBs. The interactive interface should be designed in a human-centric manner. Accordingly, two kinds of interactive interfaces, the fixed control panel and the mobile control terminal, may be provided in SBs depending on users' accessibility and usage habits. The former is installed by upgrading pre-existing smart electricity meters or centralized temperature controllers. The latter can be downloaded online as an app to smartphones and/or tablets. Using the interactive interface, residents can monitor the security status of spaces of interest and obtain resource usage profiles, as well as other basic information. As a bidirectional system, users can also provide feedback, submit requests for services, and report issues for attention. The interactive interface is directly linked to the semantic layer. Semantic Layer In the semantic layer, original user inputs are treated as raw data and mapped to machine languages. Using ML in the semantic layer, voice and gesture recognition enable the extraction of contextual information and the accurate interpretation of users' demands. The extracted information and interpreted demands from users are pre-processed and compressed before being sent to the network layer for transmission. Another important function of the semantic layer is to label user feedback such that emergency feedback can be transmitted and processed with priority over other non-emergency messages. Sensing Layer With the exception of a small portion of information sent from the interactive interfaces by users, most data throughput originates from the sensing layer.
Signals generated in the sensing layer contain a variety of observable environmental data, including security, safety, temperature, humidity, space occupancy, electricity usage, water usage, and other optical and acoustic information. To collect such a variety of environmental information and ensure accurate understanding of the surrounding environment, a large number of sensors are indispensable. In order to reduce the implementation cost of the proposed framework, one should try to reuse existing facilities and instruments and only install new sensing modules and devices where necessary. Network Layer The network layer supports transmission and reception among the functional layers. Additionally, because cloud technology and other distributed computing architectures are adopted in the following layers, the network layer requires the construction of a secure and reliable connection among a large number of distributed controllers and processors [8]. Meanwhile, accessibility to the Internet and cellular networks is a basic demand and should also be supported in the network layer. Indoor VLC, due to its advantageous properties, has been employed as the protagonist in the network layer. However, to overcome some of the drawbacks of indoor VLC and optimize the communication service provided, two further supporting roles are required: RFC and power line communications (PLC) [6]. RFC can be employed for mobile data transmission and provides a supplementary transmission mechanism via mode selection. Meanwhile, due to its low cost of deployment, PLC, relying on the existing power supply infrastructure in SBs, is an attractive approach to connect light emitting diode (LED) transmitters and is adopted in the proposed framework as a networking backbone. Software Layer The software layer is employed as an interface to receive raw data from the network layer and provide software platforms to process and store these data. In particular, the software layer should support interactions with the external environment and the service layer by defining I/O interfaces and activating control programs. In order to achieve these functions, first, a powerful database must be constructed and used to store historical data from various sensors and interactive interfaces. Additionally, cloud and distributed computing should be supported, since the hardware architecture adopted in the proposed framework is based on distributed controllers and processors. Processing Layer The processing layer is utilized to pre-process large amounts of raw data, which are presented in different formats and structures, thereby minimizing data redundancy and restoring missing data where possible. Dimensionality reduction is another important function of the processing layer, by which the system aims to maintain the validity of sensory information using a minimum number of variables by means of data redundancy elimination. To achieve this goal, the processing layer must be able to extract the features of different data and perform appropriate selection and projection; a sketch of these steps is given below. ML techniques can also play a role in this functionality. In short, processed data must be ready in unified and appropriate forms for use by ML techniques in the reasoning layer. Reasoning Layer The reasoning layer is the intelligence kernel in the proposed framework, supporting all intelligent functions and services in the SB.
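The pre-processing and dimensionality-reduction steps described above for the processing layer can be sketched as follows. This is a minimal illustration, not part of the proposed framework: the sensor matrix is invented, and the 95 percent variance target is an assumption.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Toy raw sensor matrix: rows = time samples, columns = sensor channels,
# with NaNs standing in for missing readings.
rng = np.random.default_rng(0)
raw = rng.normal(size=(1000, 40))
raw[rng.random(raw.shape) < 0.02] = np.nan  # ~2% missing data

# Restore missing data, normalize, then reduce redundancy via PCA.
filled = SimpleImputer(strategy="mean").fit_transform(raw)
scaled = StandardScaler().fit_transform(filled)
pca = PCA(n_components=0.95)  # keep 95% of the variance
features = pca.fit_transform(scaled)

print("channels in:", raw.shape[1], "-> features out:", features.shape[1])
```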
In this layer, ML is the absolute protagonist and performs diverse intelligent reasoning based on various application requests. In essence, ML in the reasoning layer provides an adaptive mechanism capable of learning from historical data when the learning objectives are specified by the users or system designers. All intelligent functions and services, as well as the status of the entire SB, are controllable by a set of parameters that can be changed in the reasoning layer according to input data containing demands and sensory information from the users. Between the input data and the output parameter set, appropriately designed ML algorithms suitable for different scenarios can adaptively optimize their parameters according to output feedback. After having been trained on several training datasets, the reasoning layer will be capable of producing optimized output parameters for the service layer. Service Layer The service layer consists of actuators controlled by the output parameters from the reasoning layer and can therefore change the status of the SB. For a typical SB, these actuators include, but are not limited to, temperature and humidity controllers in air conditioning systems, smart switches for a variety of electric apparatuses, dimming controllers, smart stereos, safety alarms, video surveillance cameras, and LED transmitters for VLC. Using this smart framework and the smart functions supported by ML, all services in SBs are expected to be continuously improved in the long term by iterative training on new datasets. Promising Visions When indoor VLC and ML are integrated by the proposed framework detailed above, new features emerge. By harnessing these features, more advanced services can be provided for users, and the operational efficiency of SBs can be significantly improved in the 6G era. First, by employing VLC in combination with other communication approaches (e.g., RFC and PLC integrating optical sensing), environmental parameters in SBs can be accurately detected and transmitted to higher layers. ML supports the rapid processing of such large amounts of data and enables the display of real-time monitoring information to users on interactive interfaces. In this way, an accurate profile of the indoor environment can be constructed in real time. When accurate information regarding the indoor environment is provided in real time, by-products include various location-aware services, including localization and navigation. Besides excellent observability, the joint application of indoor VLC and ML in SBs also results in far better controllability, benefiting from the high-rate transmission and powerful reasoning capability provided by both techniques. Consequently, the indoor environment in SBs can be adjusted to be more comfortable for residents in a smart and rapid manner. A pictorial illustration of the anticipated application scenarios of SBs coupled with 6G communication techniques is presented in Fig. 3. Case Study and Validation Classical approaches are either unable to satisfy the requirements of new applications (e.g., natural language understanding and face detection) or unable to match the performance of ML algorithms (e.g., for indoor localization and navigation). To ensure rigor, we use a simple indoor localization example to evaluate the performance of combined VLC and ML algorithms. This involved setting up a simulation platform in a cuboid room with width, length, and height of 10 m, 10 m, and 3 m, respectively.
To simulate the scenario incorporating both VLC and ML, we further assumed the presence of commercial LEDs, which were modeled by point light sources installed on the ceiling separated from each other by 1 m. This configuration is similar to that shown in [9], in which four white spotlight LEDs were installed in a cuboid room. Consequently, 81 LEDs were installed on the ceiling. Moreover, four WiFi access points (APs) were assumed to be installed on the ceiling at a distance of 2.5 m from the walls and separated by 5 m, which is even more than in a usual practical configuration. In order to achieve a comprehensive comparison between the performance of WiFi and VLC, we further assumed another scenario in which four LEDs were installed in the same manner as the WiFi APs. We denote the results for this configuration as VLC-4 and the results when utilizing 81 LEDs as VLC-81. We utilized received signal strength (RSS)-based algorithms to predict the locations of receivers from the emitted signals of all VLC LEDs and WiFi transceivers. Normally, these algorithms require the pre-installation of receivers to collect RSS data and build datasets. Without loss of generality, the pre-installed receivers are assumed to be located at a height of 1 m above the floor, which is typically the height of a phone held by a human. We set a grid of 99 × 99 receivers in which each was separated from adjacent receivers by 0.1 m, with a receiver field of view (ROV) of 0.7854 rad, and the azimuth angle of each receiver was randomly chosen from -60° to 60°. The RSS dataset for VLC was generated by CandLES, a communication and lighting emulation platform [10]. It is worth noting that each entry of the RSS dataset records the signal of all transmitters. To generate datasets for training purposes, we ran the simulation 50 times for both the VLC and WiFi scenarios. Instead of training a single ML model in a two-dimensional space with 99 × 99 classes to predict, we established two ML models to enable separate localization on the x-axis and y-axis, as sketched below. Accordingly, for each axis, we have 50 × 99 × 99 instances to train the ML model and 99 classes to predict. The collected raw RSS data are sent back to the sensing layer and transported to the semantic layer to be converted into countable numerical values. Then the software layer pre-processes the numerical values through data cleaning and data normalization before they enter the processing layer for training the ML models. After training the models for VLC and WiFi, we applied the same settings to generate new datasets for testing the trained models and generated outputs within the reasoning layer during the inference phase. Specifically, we utilized the accuracy rate corresponding to different prediction error distances (PEDs) as a measure to evaluate different localization approaches. The PED is defined as the Euclidean distance between the predicted location and the authentic benchmark, and the accuracy rate is consequently defined as the proportion of predicted locations whose distances to the authentic benchmarks are smaller than the PED. A larger accuracy rate at a smaller PED thus indicates a more accurate localization system. Since we generated the training datasets for each receiver with a 0.1 m separation between adjacent receivers, the precision of the simulated system was 0.1 m. The accuracy rates for different PEDs using various localization methods are presented in Fig. 4.
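A minimal sketch of this RSS-based localization pipeline is shown below, assuming synthetic RSS vectors in place of the CandLES-generated dataset; the grid size, transmitter count, and log-distance channel model are illustrative stand-ins, while the separate x- and y-axis classifiers mirror the two-model setup described above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_tx, grid = 4, 20            # illustrative: 4 transmitters, 20 x 20 grid
xs, ys = np.meshgrid(np.arange(grid), np.arange(grid))
positions = np.c_[xs.ravel(), ys.ravel()] * 0.1   # 0.1 m receiver spacing
tx = rng.uniform(0, grid * 0.1, size=(n_tx, 2))   # transmitter locations

def rss(pos, noise=0.5):
    """Toy RSS model: log-distance path loss plus Gaussian noise."""
    d = np.linalg.norm(pos[:, None, :] - tx[None, :, :], axis=2) + 0.1
    return -20.0 * np.log10(d) + rng.normal(0, noise, size=(len(pos), n_tx))

# 50 simulation runs per receiver location, as in the described setup.
X = np.vstack([rss(positions) for _ in range(50)])
labels = np.tile(np.arange(grid * grid), 50)
x_cls = labels % grid   # x-axis class index
y_cls = labels // grid  # y-axis class index

knn_x = KNeighborsClassifier(n_neighbors=5).fit(X, x_cls)
knn_y = KNeighborsClassifier(n_neighbors=5).fit(X, y_cls)

# Evaluate: prediction error distance (PED) and accuracy rate at PED = 0.3 m.
X_test = rss(positions)
pred = np.c_[knn_x.predict(X_test), knn_y.predict(X_test)] * 0.1
ped = np.linalg.norm(pred - positions, axis=1)
print("accuracy rate at PED 0.3 m:", np.mean(ped < 0.3))
```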
To be comprehensive, we employed three representative ML algorithms, support vector machine (SVM), neural network (NN), and K-nearest neighbors (KNN), to assist in localization data processing. As shown in Fig. 4, different ML algorithms result in different accuracy gains in the localization systems, and which algorithm is superior can change depending on the required PED and the affordable system complexity. It is also evident that accuracy rates increase with increasing PED for all adopted ML algorithms, which is consistent with our expectation. As shown by the results, the performance of VLC-4 with KNN is better than that of WiFi with KNN. However, the performance of VLC-4 is worse than that of WiFi when applying the SVM and NN algorithms. The reason is that VLC has better directionality than WiFi, which means that the received VLC signals at different locations have a higher degree of independence. Additionally, VLC-81 generates the best results with all three algorithms, and the results generated by the SVM and KNN algorithms converge when the PED becomes large. Considering the physical size of a human, 0.3 m is deemed an applicable value of PED for practical indoor localization systems. The corresponding accuracy rates produced by VLC-based localization assisted by multiple ML algorithms are greater than 95 percent, justifying the feasibility and promising future directions of such a joint approach in SBs. Challenges and Potential Future Research Directions As a prototype framework, much can still be done to further promote and improve the framework in practice; this should form the basis of future work for 6G communications. In this regard, we articulate several challenges and potential future research directions. HetNet and Interference Management VLC is compelling compared to RFC regarding security, energy efficiency, and bandwidth. However, it also has severe weaknesses, such as shadowing and noise interference. Therefore, a heterogeneous network (HetNet) architecture should be adopted in the network layer of the proposed framework, consisting of VLC, RFC, and PLC, which inherently increases the efficiency of the entire network layer. However, coordination among heterogeneous communications is not a trivial task [11]. Gateway design and compatibility should be given special attention, and relevant standardization work should also be considered in order to support the HetNet architecture of SBs in practice. From Table 1, we know that interference has a more severe impact on VLC than on RFC. Therefore, to ensure the performance superiority of indoor VLC in the proposed framework, interference mitigation technologies must be applied to maintain interference levels below a certain threshold (a toy threshold check is sketched below). Since LED transmitters are the main source of interference in the indoor environment, appropriate VLC network deployment and LED transmitter placement are crucial for alleviating interference. Moreover, resource allocation and multiple-input multiple-output (MIMO) beamforming could also be promising approaches for mitigating interference and optimizing the overall performance [12,13]. Meanwhile, ML, as an optimization tool, is helpful for coordinating the heterogeneous communications of the HetNet. It can also adjust the light intensity according to the interference level of VLC, hence maintaining an energy-efficient environment in SBs. These approaches are still being researched for SBs.
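As a toy illustration of checking interference levels against a threshold, the sketch below computes a signal-to-interference-plus-noise ratio (SINR) for a few receivers, treating the serving LED as the signal source and all other LEDs as interferers. The geometry, powers, and simple inverse-square channel are invented assumptions (a real VLC link would use a Lambertian model), not values from this article.

```python
import numpy as np

rng = np.random.default_rng(0)
# 25 LEDs on a 2 m grid across a 10 m x 10 m ceiling; 5 receiver positions.
leds = np.array([[x, y] for x in range(0, 10, 2) for y in range(0, 10, 2)], float)
rx = rng.uniform(0, 9, size=(5, 2))  # receiver positions (m)
TX_POWER = 1.0                       # W per LED (assumed)
NOISE = 1e-3                         # W, receiver noise floor (assumed)
SINR_THRESHOLD_DB = 10.0             # assumed quality threshold

d = np.linalg.norm(rx[:, None, :] - leds[None, :, :], axis=2) + 0.5
p = TX_POWER / d**2                  # simple inverse-square channel
signal = p.max(axis=1)               # serve each receiver from the best LED
interference = p.sum(axis=1) - signal
sinr_db = 10 * np.log10(signal / (interference + NOISE))
print("SINR (dB):", np.round(sinr_db, 1), "ok:", sinr_db > SINR_THRESHOLD_DB)
```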
Architecture of ML Algorithms One should note that ML is a generic concept of learning meaningful patterns from complex data distributions, representing a package of different learning algorithms, for example, SVM, NN, KNN, decision tree (DT), and logistic regression (LR). ML algorithms can also be divided into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning, according to whether and how the training datasets are labeled. The life cycle of ML algorithms consists of a training phase and an inference phase. In the training phase, an ML algorithm processes data collected from outside and updates its parameters in the direction of decreasing gradient. The trained ML algorithm then freezes its parameters and is deployed into the smart system for inference when a new data sample arrives. However, these parametric algorithms based on historical records might not produce accurate inferences on new instances, because user preferences may vary over time, causing the distribution of collected data to change; this is called "concept drift." Online learning algorithms, which continue updating their parameters after deployment, can address this issue; a minimal sketch is given below. Multiple SBs may share the same building type (e.g., the same residential buildings in a neighborhood) or may share the same residents (e.g., a residential and an office building). From different scopes, multiple SBs share common data patterns; hence, it is possible to train cooperatively on the collected datasets. Considering the data patterns and residents' privacy, federated learning (FL) [14] would be a well-suited solution. SB-Edge-Cloud Computing Architecture In most cases, data are collected locally, and the corresponding ML models for processing these data are also trained locally. However, as the amount of available data proliferates, local computing power could become insufficient to cope with the increasing complexity of ML models and their energy consumption. For this reason, an SB-edge-cloud computing architecture has been proposed as a potential technique for extracting useful information from complicated datasets in the SB. Local processors in SBs provide limited computing power to deal with the most sensitive information, such as human-related data. With more computing resources and less latency, edge computing can help to satisfy computing tasks with high-reliability and low-latency requirements. Cloud computing can flexibly scale computing resources and could eventually serve as the "trump card" for particularly computation-hungry tasks as demanded. The cloud centers could be built away from urban areas and take advantage of renewable and sustainable energy sources, such as tidal, solar, and wind power, making it possible to reduce the carbon footprint in the future. The SB-edge-cloud computing architecture promises intelligent applications in SBs; however, scheduling the offloading of tasks remains a substantial challenge for researchers. Realistic Factors Affecting the Implementation The above description of the proposed framework demonstrates that high-rate data collection, transmission, and pre-processing/processing over multiple functional layers might result in significant challenges to reliability, stability, and security. These challenges become more severe when collected data are subject to pollution, malicious user behavior, and active network attacks.
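Returning to the concept drift discussed above, the following sketch shows how an online learner can track a shifting data distribution via incremental partial_fit updates. The synthetic stream, whose class boundary drifts over time, is an invented stand-in for real SB data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()
classes = np.array([0, 1])

def batch(drift, n=200):
    """Synthetic stream: the decision boundary shifts with `drift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + drift * X[:, 1] > 0).astype(int)
    return X, y

# Incrementally update the deployed model as the data distribution drifts.
for step in range(10):
    drift = step / 10.0  # concept drift grows over time
    X, y = batch(drift)
    clf.partial_fit(X, y, classes=classes)
    X_eval, y_eval = batch(drift)
    print(f"step {step}: accuracy on current concept = "
          f"{clf.score(X_eval, y_eval):.2f}")
```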
Consequently, strictly regulated anomaly detection mechanisms must be incorporated to ensure the reliability, stability, and security of the entire data network. This topic awaits future research. A large number of sensors constitute the sensing layer. With such a framework implemented, all residents and their living conditions become observable and can be "seen" by higher-layer programs and processing procedures. This risks compromising the privacy of residents. To further promote the proposed framework, further investigation and legislation should be conducted and developed with the aim of ensuring a sufficiently secure data protection mechanism. Conclusions To meet the rapid trend of urbanization, the SB plays an invaluable role. With the aim of equipping SBs with environmental perception and logical reasoning abilities, we envision a novel SB framework in the context of 6G communications. Two key technologies of 6G communications, indoor VLC and ML, were jointly applied to construct a reliable transmission infrastructure and perform big data analytics while adapting the indoor environment of the SB. Within this framework, the SB is envisioned to provide a variety of advanced services to residents in a smart and efficient manner. To promote further research and implement the framework in practice, we also simulated a simplified case to verify its feasibility, and considered the challenges facing such SBs together with potential future research directions to mitigate them.
6,357.4
2021-01-13T00:00:00.000
[ "Computer Science" ]
Identification of candidate diagnostic biomarkers for adolescent idiopathic scoliosis using UPLC/QTOF-MS analysis: a first report of lipid metabolism profiles Adolescent idiopathic scoliosis (AIS) is a complex spine deformity, affecting approximately 1–3% of adolescents. Earlier diagnosis could increase the likelihood of successful conservative treatment and hence reduce the need for surgical intervention. We conducted a serum metabonomic study to explore potential biomarkers of AIS for early diagnosis. Serum metabolic profiles were first compared between 30 AIS patients and 31 healthy controls by ultra high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry. The candidate metabolites were then validated in an independent cohort including 31 AIS patients and 44 controls. The results showed that the metabolic profiles of AIS patients generally deviated from healthy controls in both the discovery set and the replication set. Seven differential metabolites were identified as candidate diagnostic biomarkers, including PC(20:4), 2-hexenoylcarnitine, beta-D-glucopyranuronic acid, DG(38:9), MG(20:3), LysoPC(18:2) and LysoPC(16:0). These candidate metabolites indicated disrupted lipid metabolism in AIS, including glycerophospholipid, glycerolipid and fatty acid metabolism. Elevated expression of adipose triglyceride lipase and hormone sensitive lipase in adipose tissue further corroborated our findings of increased lipid metabolism in AIS. Our findings suggest that the differential metabolites discovered in AIS could be used as potential diagnostic biomarkers and that lipid metabolism plays a role in the pathogenesis of AIS. the best tool to measure truncal asymmetry for scoliosis screening 5 . However, screening for scoliosis is still controversial. The United States Preventive Services Task Force and American Academy of Family Physicians recently recommended against routine scoliosis screening in asymptomatic adolescents, because its low specificity would expose many low-risk adolescents to unnecessary radiographs and referrals 8 . Therefore, there is a pressing need to find new diagnostic biomarkers to facilitate accurate early detection 6,7 . AIS is thought to be a multifactorial disorder, involving genetic factors, the nervous system, hormones and metabolic dysfunction, skeletal growth, biomechanical factors, and environmental and lifestyle factors 9 . Despite considerable efforts, the etiopathogenesis of AIS remains largely unknown. Previous studies have investigated several proteins related to AIS. However, their application prospects as diagnostic biomarkers were poor or not fully evaluated [10][11][12][13][14] . Platelet calmodulin levels were found to correlate closely with curve progression and stabilization by bracing or spinal fusion 10 . However, the lack of normal data and the large variability in baseline levels limited its potential use and necessitated the use of the AIS patients as their own controls 11 . Qiu et al. 12 first reported decreased leptin levels in AIS patients in 2007. Yet, the same research group later found no significant differences in total leptin levels between AIS females and healthy controls, but significant differences in the ratio of leptin to soluble leptin receptor 13 . Recently, abnormal levels of plasma osteopontin, soluble CD44 and serum ghrelin were found in some AIS patients 14 . However, further research and a validated method for early diagnosis are still needed.
The advent and development of metabonomics enables researchers to detect a large number of small-molecule metabolites quantitatively in a single step 15 , providing immense potential for discovering disease-related biomarkers. Metabolic profiles often reflect the consequences of the pathophysiological process and may assist the development of novel diagnostic tests. Thus, we conducted this serum metabonomic study on AIS patients using ultra high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC/QTOF-MS). A two-stage study design was utilized: a discovery stage and a replication stage. The primary aim was to discover potential diagnostic biomarkers of AIS, and the secondary aim was to explore the mechanism of the "presumed abnormal metabolic profiles" of AIS. Results Demographic data of AIS patients and healthy controls. Demographic data and biochemical indices of all participants are shown in Table 1. The proportion of females was 82.8% in the whole population, as AIS affects females more frequently. There were no significant differences in age or sex ratio between the two groups in the discovery and replication sets (P > 0.05). The mean weight of the two groups was also similar in both sets (P > 0.05). The Cobb angles of the main curve, which represented the severity of scoliosis, ranged from 30° to 92° in the discovery set, and 30° to 78° in the replication set, indicating moderate to severe scoliosis in the collected cohorts. Common biochemical indices, including alanine aminotransferase, total bilirubin, direct bilirubin, creatinine and urea, also showed no statistically significant differences between the two groups in both sets (P > 0.05). Serum metabolic profiles of AIS patients in the discovery set. The metabolic profiles of serum samples from 30 AIS patients and 31 healthy controls in the discovery set were acquired using UPLC/QTOF-MS in positive mode. Representative base peak intensity chromatograms from both groups are illustrated in Supplementary Figure S1. PCA was performed using the MarkerLynx 4.1 software to discern the metabolic profiles of AIS patients and healthy controls. The score plot showed that the metabolic profiles of AIS patients generally deviated from the healthy controls, suggesting that significant biochemical changes occurred in AIS patients (Fig. 1A). Validation of diagnostic biomarkers in the replication set through UPLC/QTOF-MS. To validate the findings of the discovery set, serum samples of 31 AIS patients and 44 healthy controls were collected and analyzed with the same analytical procedures as the discovery set. The metabolic profiles of AIS samples also deviated from healthy controls according to PCA (Fig. 3A), mainly along the [t2] axis, suggesting favorable repeatability of the UPLC-MS assays. To validate the value of the potential diagnostic biomarkers for accurate diagnosis, only the seven differential metabolites identified in the discovery stage were used as variables and imported into the SIMCA-P software. A new PCA score plot of AIS and control samples was generated. As shown in Fig. 3B, the metabolite profiles of AIS and normal samples were clearly discriminated from each other along the [t1] axis, representing better separation than along the [t2] axis. In addition, the Q2 and R2 values of the new PCA model were 0.786 and 0.928, which were higher than those of the PCA model containing all the variables (Q2 = 0.14, R2 = 0.26), suggesting greatly improved predictive capacity of the model.
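As a generic illustration of the unsupervised PCA separation described above, the sketch below applies PCA to a simulated peak-area matrix with a small group shift built in. It is not the authors' analysis (the study used MarkerLynx and SIMCA-P); the matrix dimensions and the number of shifted features are invented for the example.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Simulated peak-area matrix: 61 serum samples x 500 metabolite features.
n_ais, n_ctrl, n_feat = 30, 31, 500
X = rng.lognormal(mean=8, sigma=1, size=(n_ais + n_ctrl, n_feat))
X[:n_ais, :7] *= 1.5  # pretend 7 features differ between groups
groups = np.array(["AIS"] * n_ais + ["Control"] * n_ctrl)

# Unsupervised PCA on standardized peak areas, as in the score plots.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for g in ("AIS", "Control"):
    m = scores[groups == g].mean(axis=0)
    print(f"{g}: mean t1 = {m[0]:.2f}, mean t2 = {m[1]:.2f}")
```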
Mean peak areas of the seven candidate diagnostic biomarkers were compared between AIS patients and healthy participants in the discovery set and replication set, respectively. The seven potential diagnostic biomarkers differed significantly in serum samples of AIS patients, with the same tendencies in both the discovery and replication sets (P < 0.01; Fig. 4), indicating that the concentrations of these metabolites in AIS patients were relatively stable. This stability is valuable for the diagnosis of AIS. However, as illustrated in Fig. 4, changes in these metabolites between the two sets of samples could still be observed. Qualification and validation with a larger number of samples are still needed. Correlations between these seven candidate diagnostic biomarkers and the main clinical features were analyzed in the supplementary information (Table S1). Stratification of all AIS patients based on the severity of the Cobb angle was performed and PCA was conducted (Supplementary Figure S2). Expression of lecithin:cholesterol acyltransferase (LCAT) in serum and adipose triglyceride lipase (ATGL) and hormone sensitive lipase (HSL) in adipose tissue. The mechanism of the disturbed metabolic pathways of AIS was further explored. LCAT is a serum enzyme that catalyzes the transacylation of the sn-2 fatty acid of phosphatidylcholine to the free 3-hydroxyl group of cholesterol, generating lysophosphatidylcholine (LysoPC) and cholesteryl esters 16 . LysoPC has a role in lipid signaling by acting on LysoPC receptors. In blood, significant amounts of LysoPC are formed by this specific enzyme system. Two types of LysoPC (LysoPC(18:2) (6) and LysoPC(16:0) (7)) were significantly decreased in serum samples of AIS patients. Thus, serum LCAT activity was measured using fluorescent ELISA (Fig. 5A). The mean ratios of the two emission intensities (460 nm/405 nm), which reflect LCAT activity, were 1.117 ± 0.048 and 1.122 ± 0.05 in AIS patients and healthy participants, respectively. No statistical significance was observed (P = 0.812), indicating that the differential LysoPCs in AIS were not caused by LCAT. Decreased levels of diacylglycerol (DG(38:9), (4)) and monoacylglycerol (MG(20:3), (5)), which are decomposition products of triacylglycerol (TG), were observed in the present study (Fig. 5B). The breakdown of stored TG is largely regulated by ATGL and HSL 17 . mRNA expression levels of ATGL and HSL were analyzed by reverse transcription-polymerase chain reaction (RT-PCR). Compared with healthy participants, the band intensities of ATGL and HSL in AIS patients were significantly increased, suggesting higher expression of ATGL and HSL in the adipose tissue of AIS patients (Fig. 5C). Discussion Early diagnosis of AIS could potentially reduce surgical intervention; however, clinically useful and accurate diagnostic biomarkers have yet to be realized. Metabonomic study enabled us to discover diagnostic biomarkers through an untargeted metabolite-searching approach. Using metabolic profiling, seven differential metabolites characterizing AIS were identified by the OPLS-DA model in the discovery set. To the best of our knowledge, all of these differential metabolites have been identified for the first time as potential biomarkers in serum samples of AIS. The discriminative power of these seven diagnostic biomarkers was validated by UPLC/QTOF-MS analysis in the replication set, thus minimizing false-positive findings.
Glycerophospholipids are important membrane lipids and play a vital role in cellular functions, such as signal transduction, regulation of transport processes and protein functions 18 . In addition, glycerophospholipids are essential components of lipoproteins and influence their metabolism and functions 19,20 . In a recent metabonomic study, abnormal glycerophospholipid metabolism was detected in murine osteoclasts treated with estradiol 21 . Differential LysoPCs were also observed in plasma samples of an osteoporotic rat model 22 . These results are consistent with the osteopenia and deranged bone quality presenting in AIS 23,24 . Interestingly, lower estrogen content or abnormal estrogen action on respective target cells, e.g. bone cells, is supposed to be one of the etiologic factors of AIS 25,26 . Estrogens interact with many pathophysiological factors that are believed to influence the development of scoliosis, such as modulation of growth factors, inhibition of melatonin synthesis, interaction with calmodulin, exacerbation of the response to strain, and bone formation/resorption 26 . We detected a perturbed glycerophospholipid pathway in AIS patients, which might be caused by abnormal biological functions of estrogens in AIS. Thus, PCA was further performed between males and females in AIS patients and healthy controls. However, the metabolic profiles did not generally deviate between the sexes in either AIS patients or controls (Supplementary Figure S3). Other mechanisms, not related to sex, might therefore exist in AIS. LCAT is a key enzyme in the process of LysoPC metabolism (Fig. 5A). However, we did not observe abnormal LCAT activity in AIS patients, suggesting that the decreased LysoPCs were probably caused by other mechanisms. Abnormal glycerolipid degradation is largely regulated by ATGL and HSL, which aroused our interest in exploring their expression (Fig. 5B). Because ATGL and HSL are mainly located and active in adipose tissue, mRNA expression levels in adipose tissue were assayed by RT-PCR. Increased expression of ATGL and HSL was observed in AIS patients, suggesting increased lipolysis in AIS patients. Additionally, increased 2-hexenoylcarnitine (2), a fatty acylcarnitine, was observed in AIS patients. Fatty acylcarnitines act as carriers that assist the transport of long-chain fatty acids into mitochondria, and increased serum fatty acylcarnitine concentrations are reported to reflect long-chain fatty acid β-oxidation, thus indicating increased lipid metabolism [27][28][29] . These findings are consistent with previous reports of lower body mass index and low fat mass in AIS compared with the general population 30,31 . In fact, it has already been postulated that AIS involves a dysfunctional energy balance within a complex system including white adipose tissue, the adipose-tissue derived hormone leptin and other cytokine-hormones, the hypothalamus and neuroendocrine axes 32 . Controversies still exist about the pathogenesis of AIS. However, researchers have reached a consensus on its multifactorial etiology. Hormonal and metabolic dysfunction is thought to be one of the leading theories 9 . In this study, perturbed glycerophospholipid metabolism, glycerolipid metabolism and fatty acid metabolism were discovered for the first time in AIS patients, supporting the postulated disturbance of energy metabolism in AIS 30,32 .
Energy homeostasis is regulated by integrative centers in the central nervous system, which receive and convey signals from peripheral organs and then send efferent neural and hormonal signals to peripheral tissues. We therefore postulate that the perturbed lipid metabolism in AIS could be the manifestation of an abnormal neuroendocrine system caused by genetic variations 9,33,34 . There were some limitations in the current study. Firstly, the serum samples of AIS patients in both the discovery and replication sets came from patients seeking surgical intervention, which represented relatively severe scoliotic deformities. Thus, the results might not extrapolate to AIS patients who present at an earlier stage. The early diagnostic value of the potential biomarkers discovered in our study still needs to be validated in larger clinical trials. In addition, given the nature of the cross-sectional study design, the differential metabolic profiles of AIS might be confounded by other factors, such as different dietary intakes. However, all the serum samples were collected at the same time (fasting morning) in both cases and controls. Most importantly, a two-stage study design was used and the differential metabolites discovered in the first cohort were further confirmed in an independent population-based replication sample, thus enhancing the reliability of our results. Finally, the RT-PCR assays of ATGL and HSL were not analyzed quantitatively due to the small number of adipose tissue samples collected. Further, larger studies focusing on the pathogenesis of AIS are required to confirm this. Conclusions Our study has provided characteristic serum metabolic profiles of AIS patients. Seven differential metabolites were identified by metabonomic analysis in the discovery set. These candidate diagnostic biomarkers were validated by metabonomic analysis in an independent replication set. The differential metabolites suggested disrupted lipid metabolism in AIS, including glycerophospholipid metabolism, glycerolipid metabolism and fatty acid metabolism. Additionally, analysis of proteins related to the perturbed pathways showed elevated ATGL and HSL in the adipose tissue of AIS patients, providing clues for further research into the pathogenesis of AIS. We believe that our study brings us one step closer to finding a clinically useful and validated diagnostic biomarker test for early detection of scoliosis, which could potentially reduce the need for invasive surgical correction. Methods Subjects and sample collection. Both the discovery samples and replication samples were derived from patients seeking surgical treatment in Peking Union Medical College Hospital. They were age and sex matched with healthy controls. Thirty AIS cases and 31 healthy controls were selected in the discovery stage, and 31 AIS cases and 44 healthy controls in the replication stage. The diagnosis of AIS was made pre-operatively by experienced surgeons, mainly based on rotational rib prominence during the Adams Forward Bend Test and a maximum Cobb angle above 10°. All other types of scoliosis, such as syndromic scoliosis and congenital scoliosis, were excluded from our study. Meanwhile, the healthy controls were first screened with the Adams Forward Bend Test to rule out any scoliosis. In case of any uncertainty, radiographs were performed for validation. Fasting blood samples were collected from AIS patients preoperatively as well as from healthy controls.
Collected blood samples were left to clot for two hours at room temperature, then centrifuged at 3000 rpm for 15 minutes at 4 °C. Serum was aliquoted and stored at −80 °C until use in the assay. To assay mRNA levels of ATGL and HSL, subcutaneous adipose tissues of 4 AIS patients and 2 controls (one from a 17-year-old female patient with a diagnosis of genu valgum and the other from a 16-year-old female patient with gluteus contracture) were collected intra-operatively. The study was approved by the institutional review board of Peking Union Medical College Hospital and written informed consent was obtained from all participants. All experimental procedures were carried out in accordance with the approved guidelines. Serum metabonomics study by UPLC/QTOF-MS. Sample preparation. The metabolite extraction process was performed according to procedures outlined in our previously published study with minor modifications 35 . Briefly, 200 μL serum was added to 800 μL methanol-acetonitrile mixture (4:1, v/v) and vortex-mixed for 1 minute. After standing for 5 minutes in an ice bath, the mixture was centrifuged at 13000 rpm for 15 minutes at 4 °C to precipitate the proteins. The supernatant was collected and dried with nitrogen at 37 °C. The dried residue was reconstituted in 200 μL 30% (by volume) acetonitrile in water, then vortex-mixed for 1 minute. After centrifugation at 13000 rpm for a further 15 minutes at 4 °C, 2 μL of supernatant was injected for UPLC/QTOF-MS analysis. Data acquisition. Chromatographic separation was performed on an Acquity UPLC HSS T3 column (2.1 mm × 100 mm, 1.8 μm, Waters Corp., Milford, USA) using a Waters Acquity UPLC system. The column was maintained at 40 °C and eluted at a flow rate of 0.45 mL/min, using a mobile phase of (A) 5% (by volume) acetonitrile in water and (B) 95% (by volume) acetonitrile in water. The gradient program was optimized as follows: 0-5 min, 1%B to 45%B; 5-9 min, 45%B to 70%B; 9-11 min, 70%B to 99%B; 11-13 min, washing with 99%B; and 13-17 min, equilibration with 1%B. The eluent from the column was directed to the mass spectrometer without splitting. A Waters SYNAPT G2 HDMS (Waters Corp., Manchester, UK) was used to perform the mass spectrometry with an electrospray ionization source operating in positive ion mode. The capillary voltage was set to 3.0 kV. The sample cone voltage and extraction cone voltage were 40 V and 4 V, respectively. Using nitrogen as the drying gas, the desolvation gas rate was set at 800 L/h at 450 °C, the cone gas rate at 50 L/h, and the source temperature at 120 °C. The scan time and inter-scan delay were set at 0.3 s and 0.02 s, respectively. Leucine-enkephalin was used as the lockmass in positive ion mode (m/z 556.2771, [M + H]+). Data were collected in centroid mode from m/z 50-1200 Da. To validate the stability of the sequence analysis, 10 μL aliquots were extracted from 10 randomly selected AIS patients and 10 controls and pooled as a quality control (QC) sample. The pooled QC sample was prepared in the same way as the other samples and analyzed randomly throughout the analytical batch. The extracted ion chromatographic peaks of ten ions in positive mode were selected for method validation, as retention time (RT) and m/z pairs (e.g., 0.89_315.0804). Multivariate statistical analysis. The raw spectral data were first analyzed with MassLynx Applications Manager Version 4.1 (Waters, Manchester, UK).
Deconvolution, alignment and data reduction were performed to provide a list of RT and mass pairs with corresponding peak areas for all the detected peaks from each file in the data set. The main parameters were set as follows: RT range 0.5-14 min; mass range 50-1200; XIC window, 0.02 Da; automatically calculate peak width and peak-to-peak baseline noise; use the raw data during the deconvolution procedure; marker intensity threshold (count), 300; mass tolerance, 0.02 Da; RT window, 0.2 s; noise elimination level, 6; retain the isotopic peaks. The resulting UPLC-MS data were then transferred to the SIMCA-P software package (version 12.0, Umetrics, Umeå, Sweden). Principal component analysis (PCA), which maps samples based on their spectral profiles without using prior knowledge of class, was used to explore inherent grouping between AIS patients and healthy controls by visual inspection of score plots. Supervised modeling was subsequently performed using orthogonal partial least squares discriminant analysis (OPLS-DA) to maximize the separation between classes and identify biomarkers associated with AIS. The results were visualized in the form of score plots, and potential biomarkers were selected on the basis of the variable importance in the projection (VIP) value and the S-plot. Fluorescent enzyme-linked immunosorbent assay of LCAT activity. LCAT activity was assayed with an LCAT kit as per the manufacturer's instructions in serum samples of 12 AIS patients and 12 healthy controls (Calbiochem/EMD-Millipore/Merck KGaA, Darmstadt, Germany). Each sample was tested in duplicate. Briefly, 1 μL LCAT substrate was mixed with 200 μL LCAT assay buffer containing 5 μL serum sample. The mixture was then incubated for 4 hours at 37 °C. Then, 100 μL of the mixture was added to 300 μL READ reagent. The fluorescence was read at an excitation wavelength of 355 nm and emission wavelengths of 405 nm and 460 nm. The ratio of the two emission intensities at 460 nm and 405 nm was analyzed between the two groups. RT-PCR for ATGL and HSL in adipose tissue. Total RNA was extracted using Trizol Reagent (Life Technologies AB & Invitrogen, Carlsbad, USA) and then converted into cDNA with the RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific Inc., Waltham, USA). Using β-actin as an internal reference, PCR was performed using FastStart Universal SYBR Green Master (F. Hoffmann-La Roche Ltd., Basel, Switzerland) (Supplementary Table S2). All procedures were performed following the manufacturers' instructions. Agarose gel electrophoresis was then performed and fluorescent bands were recorded. Statistical analysis. Statistical analysis was conducted using SPSS software version 16.0 (Chicago, IL, USA). All numerical variables between AIS patients and healthy participants, including age, weight, biochemical indices, LCAT emission intensities and mean peak areas of representative metabolites, were compared by two-tailed Student's t-test. The sex ratio was compared using Fisher's exact test. P values less than 0.05 were considered statistically significant.
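As a minimal illustration of the univariate comparison described above (a two-tailed Student's t-test on metabolite peak areas), the following sketch uses simulated arrays, not study data; the group means and scales are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated peak areas for one candidate metabolite in the discovery set.
ais = rng.normal(loc=1200.0, scale=150.0, size=30)      # 30 AIS patients
control = rng.normal(loc=1000.0, scale=150.0, size=31)  # 31 healthy controls

# Two-tailed Student's t-test, matching the SPSS analysis described above.
t_stat, p_value = stats.ttest_ind(ais, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4g}",
      "-> significant" if p_value < 0.05 else "-> not significant")
```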
4,958.8
2016-03-01T00:00:00.000
[ "Biology" ]
Infrared optical properties modulation of VO2 thin film fabricated by ultrafast pulsed laser deposition for thermochromic smart window applications Over the years, vanadium dioxide (VO2(M1)) has been extensively utilised to fabricate thermochromic thin films, with the focus on using external stimuli, such as heat, to modulate the visible through near-infrared transmittance for energy efficiency of buildings and indoor comfort. It is thus valuable to extend the study of thermochromic materials into the mid-infrared (MIR) wavelengths for applications such as smart radiative devices. On top of this, there are numerous challenges in synthesising pure VO2(M1) thin films, as most fabrication techniques require post-annealing of a deposited thin film to convert amorphous VO2 into a crystalline phase. Here, we present a direct method to fabricate thicker VO2(M1) thin films onto hot silica substrates (at substrate temperatures of 400 °C and 700 °C) from vanadium pentoxide (V2O5) precursor material. A high repetition rate (10 kHz) femtosecond laser is used to deposit the V2O5, leading to the formation of VO2(M1) without any post-annealing steps. Surface morphology, structural properties, and UV-visible optical properties, including the optical band gap and complex refractive index, as a function of substrate temperature, are studied and reported below. Transmission electron microscopy (TEM) and X-ray diffraction studies confirm that the VO2(M1) thin films deposited at 700 °C are dominated by a highly texturized polycrystalline monoclinic crystalline structure. The thermochromic characteristics in the mid-infrared (MIR) at a wavelength range of 2.5-5.0 μm are presented using temperature-dependent transmittance measurements. The first-order metal-to-semiconductor phase transition temperature and the hysteresis bandwidth of the transition were confirmed to be 64.4 °C and 12.6 °C, respectively, for a sample fabricated at 700 °C. The thermo-optical emissivity properties indicate that these VO2(M1) thin films fabricated by femtosecond laser deposition have strong potential for radiative thermal management or control via active energy-saving windows for buildings, as well as for satellites and spacecraft. Vanadium dioxide (VO2(M1)) is an increasingly important technological metal oxide, owing to its remarkable first-order insulator-to-metal transition (IMT) at a critical temperature of around 68 °C 1,2 . The phase transition of VO2(M1) thin films can be triggered using external stimuli such as thermal, electrical, and ultrafast optical excitations. The induced phase transition of VO2 thin films from the monoclinic insulating to the rutile metallic phase is reversible and accompanied by large changes in electrical, magnetic and optical properties. These characteristics have significant potential for a wide range of modern applications such as actuators, passive smart radiation devices, thermochromic smart (active) windows, modulation of near-infrared (NIR) to mid-infrared (MIR) wavelengths or optical switching to modulate the MIR emissivity, and passive radiative cooling [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] . For instance, the phase transition of VO2(M1) thin films modulates the NIR to MIR spectral transmittance and reflectance as a function of temperature. These properties could be utilised to develop more efficient thermal control systems 2,14 , depending on the IR substrate on which the VO2 film is deposited and its thickness.
Changes in the optical properties of VO2(M1) thin films in the MIR are quite useful for specific applications including spacecraft thermal control, energy-saving buildings and selective camouflage against IR sensors. There have been numerous studies on VO2(M1) thin films (thickness < 0.90 μm) for visible and near-infrared (NIR) thermochromic energy-saving applications [19][20][21][22][23] . Such VO2 films exhibit outstanding NIR (1.0 to 2.5 μm wavelength) transparency (> 70% transmittance) at low temperatures of about 25 °C. However, the transmittance is completely blocked or reduced to nearly zero at temperatures above the 68 °C metal-to-insulator transition. These studies demonstrate good control of the insulator-metal transition switching properties in the NIR wavelength range, but there are a limited number of comparative studies on VO2(M1) films operating in the MIR-to-long-wavelength infrared (LWIR) region. In 2001, Guinneton et al. 15 fabricated VO2 thin films on silica substrates with thicknesses less than 200 nm using a vanadium target and RF reactive sputtering to evaluate controllable optical properties in the infrared. Similarly, Gianmario et al. 16 deposited VO2 thin films on a silicon wafer using the same RF sputtering methods to estimate the optical properties and thermal hysteresis in the MIR sub-spectral ranges. Naturally, both examples required a post-deposition annealing stage. A transition temperature around 68 °C was reported, with a significant difference in the thermal hysteresis bandwidth between the short- and long-wavelength regions. Recently, Dongqing et al. 23 synthesised VO2 thin films of thicknesses 400 nm and 900 nm using a sol-gel process to evaluate thermochromic phase transitions and IR thermochromic properties in the 7.5-14 μm wavelength range. Over the past few decades, VO2(M1) nanostructured thin films have been fabricated using various methods, including sol-gel, chemical vapour deposition, sputtering, atomic layer deposition, and nanosecond (ns) or femtosecond (fs) pulsed laser deposition (PLD) 18 . However, the majority of these deposition techniques are limited to the synthesis of VO2 films thinner than 400 nm and require essential post-annealing processing to convert the various amorphous VOx phases to crystalline VO2(M1). Consequently, there is a need to develop a suitable method capable of synthesising thicker VO2(M1) films, ideally without post-annealing. In this regard, fs-PLD offers the exceptional advantage of producing nanostructures of different particle sizes/thin-film thicknesses, morphology, and chemical composition by fine-tuning the laser parameters (laser energy, repetition rate, and pulse width) and chamber conditions (gas pressure, substrate temperature, and substrate-target distance) in a single-deposition process. Conceivably this can also be done at speed and at large scale. For example, the ablation mechanism of fs-PLD is completely different from that of ns-PLD, with an average ablation rate around 35 times higher than conventional ns-PLD, as reported elsewhere 19 . We have recently demonstrated a sharp and abrupt metal-to-insulator transition (MIT) with a three-to-four orders of magnitude resistivity change in thicker high-quality VO2(M1) films on sapphire substrates using fs-PLD with a laser repetition rate of 10 kHz 1 .
To the best of our knowledge, there has been no report of fs-PLD at a repetition rate this high for fabricating VO2 thin films onto silica substrates; the significance of such a rate is the rapid deposition of high-quality material. In this study, we investigated the optimum conditions for the synthesis of thick VO2 (M1) films on silica substrates using a high repetition rate (10.0 kHz) fs-PLD technique. Significant parameters, including substrate temperature, surface morphology, optical band gap and refractive index in the UV-vis-NIR spectrum, and transition-switching in the MIR, are discussed and reflect the potential application range of such materials. Experimental details Sample fabrication. Two VO2 (M1) thin films were fabricated onto silica substrates using a vanadium pentoxide (V2O5) target, as reported previously by Kumi-Barimah et al. 1. The silica substrates of size 20 mm × 30 mm × 1 mm were cleaned in an ultrasonic bath using acetone, followed by an isopropyl alcohol rinse, then dried with clean lens tissue. The substrate and the target were mounted to respective holders in the PLD chamber, which was pumped down to a base pressure of 10⁻⁷ Torr prior to the process run, and then injected with high-purity process oxygen to a pressure of 70 mTorr. The substrate temperature was held at 400 °C (sample code VT400) and 700 °C (sample code VT700), with a substrate-to-target distance of 70 mm for both. The deposition process used a laser fluence of 0.27 J/cm² to ablate the V2O5 target for a period of 2 h, using a KMLabs Wyvern™ 1000-10 solid-state Ti:sapphire laser/amplifier. Samples VT400 and VT700 have growth rates of 6.25 nm/min and 5.42 nm/min, with thin-film thicknesses of ~ 750 nm and ~ 650 nm respectively, as the deposition rate depends mainly on laser fluence and substrate temperature. Characterisation. The surface morphology and cross-sections of the VO2 (M1) thin films were prepared and characterised using a high-resolution monochromated field emission scanning electron microscope (FEG-SEM) with a precise, focused ion beam (FIB) (FEI Helios G4 CX DualBeam). Furthermore, the VO2 (M1) thin films were analysed for elemental identification, based on cross-sectional compositional contrast of the different atomic numbers, via high-resolution transmission electron microscopy (HRTEM) and scanning (S)/TEM EDX spectroscopy imaging (FEI Tecnai F20 200 kV FEGTEM). Additionally, X-ray diffraction (XRD) pattern analysis of the prepared thin films was performed using a Philips PANalytical X'pert diffractometer with Cu Kα radiation (λ = 1.54056 Å) at 40 kV and 100 mA. Each scan was performed with the diffractometer angle varied between 5° and 80° with a step size of 0.033°. A Perkin Elmer UV/VIS/NIR Lambda 950 spectrometer was also used to gather the transmittance and reflectance spectra at room temperature from 250 to 2500 nm, to determine the optical band gap and complex refractive index of the samples under test. Furthermore, the MIR and LWIR (2500 nm to 25,000 nm) optical transmittance and reflectance were measured by a Bruker Vertex 70v FTIR spectrometer, together with an A513/Q variable angle reflection accessory. The VO2 thin films were mounted on a heated stage to vary the sample temperature from 25 to 100 °C in 10 °C increments during the study. The samples were allowed to reach a steady temperature at each step of the heating stage before the MIR transmittance was recorded, to determine the thermochromic transition temperature and hysteresis width.
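As a quick arithmetic cross-check of the growth-rate units (a minimal sketch; the thickness values and the 2 h deposition time are those reported above):

```python
# Cross-check of the quoted growth rates from the 2 h deposition time and
# the measured film thicknesses; the figures match when expressed in nm/min.
DEPOSITION_TIME_MIN = 2 * 60  # 2 h deposition, in minutes

films_nm = {"VT400": 750.0, "VT700": 650.0}  # film thickness in nm

for name, thickness in films_nm.items():
    print(f"{name}: {thickness / DEPOSITION_TIME_MIN:.2f} nm/min")
# VT400: 6.25 nm/min
# VT700: 5.42 nm/min
```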
The reflectance measurement was performed using a variable angle reflection accessory (A513/Q, Vertex 70v, Bruker) at an incidence angle of 20° and film temperatures of 25 °C, 60 °C, and 100 °C, to determine the MIR emissivity. Results and discussion Surface morphological and structural evolution. The surface morphology of the fabricated VO2 thin film samples was initially characterised by SEM imaging to evaluate the effect of substrate temperature on the VO2 particle or grain size when deposited onto the silica substrate. Figure 1a,b show the top-view SEM images of the samples prepared at substrate temperatures of 400 °C and 700 °C. Sample VT400 exhibits a more uniform particle size distribution and pores, with an average grain size of about 12 nm (according to ImageJ software analysis). On the other hand, raising the deposition temperature to 700 °C yielded larger and denser particles with an average grain size of 460 nm. Sample VT400, by contrast, consists of a particulate film with a coarser, looser, and more porous structure. TEM cross-sections of samples VT400 and VT700 were prepared by focused ion beam etching and mounting, as illustrated in Fig. 2a,d, respectively. These lamellae were cut and mounted on TEM stubs for analysis and had average lamella thicknesses of ~ 750 nm and ~ 650 nm, corresponding to growth rates of 6.25 nm/min and 5.42 nm/min. The TEM cross-section of sample VT700 (Fig. 2d) evidences the homogeneous metastable state of the VO2 film as compared to sample VT400 (Fig. 2a), which is more porous (overly bright and dark areas in the image). These images clearly show that the higher deposition temperature promotes the nucleation and amalgamation of the denser polycrystalline material. Furthermore, the crystallinity of the samples was examined at the atomic scale by means of high-angle annular dark-field (HAADF) STEM imaging and selected area electron diffraction (SAED) patterns. Figure 2b,e illustrate the HAADF-STEM and SAED patterns of samples VT400 and VT700, with VT400 exhibiting a polycrystalline structure with only short-range order. On the other hand, the SAED pattern of sample VT700 confirms an extended long-range polycrystalline structure, owing to discrete spots with a high degree of periodic order in the crystal lattice. To evaluate the crystallographic properties more quantitatively, we carried out a Fast Fourier Transform (FFT) analysis to determine the d-spacing of the HAADF-STEM images. Figure 3a illustrates the HAADF-STEM cross-sectional image obtained from sample VT700 for crystallographic orientation evaluation. The HRTEM image extracted from the red rectangular region in Fig. 3a (inset, Fig. 3b) was employed to visualise the diffraction pattern and d-spacing depicted in Fig. 3c. The interplanar spacing attained matches a d-spacing, or out-of-plane spacing, of 0.324 nm, which correlates with the (110) plane of the VO2 (M1) phase. Similarly, the in-plane spacing was found to be 0.169 nm, corresponding to the (221) plane, with its lattice fringe shown in Fig. 3d. The interplanar spacings obtained from the FFT analyses and the SAED pattern match the monoclinic structure of VO2 (M1). In addition, the diffraction quality of sample VT400 was assessed by examining the lattice crystal with the FFT of the image and comparing it with the SAED pattern (Fig. 2b). Additionally, we also analysed the elemental composition of samples VT400 and VT700 by using HAADF-STEM cross-sectional images.
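As a consistency check between the FFT-derived d-spacing and the diffraction data discussed next, the 0.324 nm spacing can be converted into the Bragg angle expected for Cu Kα radiation (a minimal sketch; the wavelength is the value quoted in the characterisation section):

```python
import math

WAVELENGTH_NM = 0.154056  # Cu K-alpha wavelength (1.54056 Angstrom)

def bragg_two_theta(d_nm: float, order: int = 1) -> float:
    """Bragg angle 2*theta (degrees) for interplanar spacing d."""
    return 2 * math.degrees(math.asin(order * WAVELENGTH_NM / (2 * d_nm)))

print(f"d = 0.324 nm -> 2theta = {bragg_two_theta(0.324):.1f} deg")
# d = 0.324 nm -> 2theta = 27.5 deg, consistent with the intense
# (011)/(110)-type reflection observed near 27.5-27.95 deg in the XRD scans.
```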
The STEM-EDX maps of these samples confirm a uniform distribution of the elemental species vanadium (V) and oxygen (O) across the deposited layer, without any intermixing between the VO2 layer and the silica substrate, as depicted in Fig. 2c,f. Following the FIB, TEM and FFT examination of the thin film samples, XRD was performed to measure the crystallographic structure of the VO2 (M1) thin films deposited on silica substrates. Figure 4 illustrates the XRD patterns obtained from the as-prepared samples VT400 and VT700. Sample VT400 reveals six crystalline peaks centred at 2θ = ∼27.5°, ∼37.1°, ∼42.2°, ∼56.9°, ∼65°, and ∼73.5°, which correspond to the (011), (200), (210), (220), (013), and (231) planes. This confirms that sample VT400 is a polycrystalline material, in good agreement with the SAED pattern observed in the HRTEM analysis. Moreover, as the substrate temperature was increased to 700 °C, the XRD pattern exhibited one intense peak at 2θ = 27.95° and a minor orientation peak at 56.9°. These peaks match the (011) and (220) reflections, indicating a highly texturized polycrystalline film. Vis-NIR optical properties. The optical transmittance and reflectance spectra of the VO2 thin films were measured by a UV-VIS-NIR spectrophotometer (PerkinElmer, LAMBDA 950) equipped with a 60 mm integrating sphere module in the spectral range of 250–2500 nm; they are presented in Fig. 5a,b. As shown in Fig. 5a, the transmittance for both samples, fabricated at different substrate temperatures, remained the same for wavelengths from 250 to 500 nm; however, the absorption edge, which is sensitive to the substrate temperature during fabrication, shifts within the 500 to 1200 nm range [shown in the inset of Fig. 5a]. The absorption edges for samples VT400 and VT700 occurred at ~ 503 nm and ~ 470 nm, respectively. On the other hand, the transmittance decreased slightly with increasing substrate temperature in the NIR spectral range, which could be attributed to the larger particle size and lack of porosity of VT700. Figure 5b displays the reflectance spectra for both samples. The optical absorption coefficient, α, of both samples was derived from the transmittance and reflectance spectra based on a standard relationship 24,25 (see the reconstruction below) involving T and R, the transmittance and reflectance, and t, the thickness of the film. The optical band gap of samples VT400 and VT700 was determined using Tauc's relationship between α and the energy hν of the incident photons exciting electrons from the valence band to the conduction band 24,26, where k is an energy-dependent constant and E g is the optical band gap. The exponent n depends on the nature of the transition responsible for the absorption: n = 1/2, 2, 3/2 or 3, corresponding to allowed direct, allowed indirect, forbidden direct or forbidden indirect transitions, respectively. We initially tested all the possible transition types by plotting (αhν)^(1/n) versus the incident photon energy (hν) for each n-value. It was observed that the n = 1/2 (direct allowed) transition gives the best linear fit, with the tangent to the curve intercepting the energy axis where (αhν)^(1/n) = 0. Figure 5c illustrates (αhν)² versus hν for samples VT400 and VT700, with direct allowed optical band gap values of 1.821 eV and 1.678 eV. The decrease in optical band gap with increasing substrate temperature is attributed to the increase in grain size, as discussed above. These optical band gap values are consistent with those observed by Yu et al.
28, where they synthesised high-quality VO2 thin films on silica substrates via radio frequency sputtering and plasma-enhanced chemical vapour deposition. They reported optical band gap values ranging from 1.54 to 1.74 eV. Similarly, Zhen-Fei et al. 29 reported an optical band gap of 1.81 eV for thermochromic nanocrystalline VO2 thin films fabricated by magnetron sputtering and post-oxidation, which is in good agreement with that of VT400. The imaginary part of the refractive index, or extinction coefficient (k), was also deduced from the absorption coefficient obtained from Eq. (1), using a standard relationship 29 (see the reconstruction below). Following this, the refractive index (n) of the films was determined from the reflectance (R) spectra 27. The real (n) and imaginary (k) refractive indices deduced from the transmittance and reflectance spectra are shown in Fig. 5d. It is noted that the real and imaginary refractive indices for sample VT700 are slightly higher than for sample VT400. However, in both samples the complex refractive index decreases with increasing wavelength from 250 to 2500 nm. These results are consistent with the optical constants n and k obtained from VO2 thin films deposited on silica-soda-lime and silica-potash-soda glass using a UHV magnetron sputtering system, reported by Dai et al. 30. Similarly, Kana et al. 31 fabricated VO2 thin films onto various glass substrates by radio-frequency inverted cylindrical magnetron sputtering and then performed temperature-dependent studies of the optical constants. The refractive index and extinction coefficient measured at 30 °C range from 2.0 to 3.6 and from 1.86 to 0.25, respectively, in the spectral range between 300 and 1600 nm 31. These results suggest that the optical constants of VO2 thin films depend on the fabrication conditions and techniques employed. MIR thermochromic properties and phase transition temperature control. The MIR optical transmittance of the VO2 thin films was measured as a function of temperature from 20 to 100 °C to evaluate their thermochromic properties and insulator-to-metal transition temperatures. Figure 6a,b show the transmittance behaviour in the 2.5 to 25.0 μm wavelength range obtained on heating the thin films. The thermochromic transition efficiency of the VO2 (M1) film is defined in terms of an optical contrast factor, τ(λ), expressed in terms of τ LT and τ LH, the transmittance at low and high temperatures respectively, at MIR wavelength λ 32 (see the reconstruction below). For example, the optical contrast factors attained at the transparency windows peaking at 2.6 μm and 3.2 μm are 66.26% and 48.15% for VT400, and 65.87% and 40.00% for VT700, respectively. According to Guinneton et al. 22, the primary parameters affecting the contrast factor are particle size and morphology; the higher contrast factor of VT400 is attributed to its high porosity combined with its small grain size compared to sample VT700. Figure 6c,d depict the transmittance obtained at 3.2 μm as a function of heating temperature for both samples (VT400 and VT700). Sample VT700 reveals a sharper and more abrupt switching hysteresis in the transmittance curve compared to sample VT400, being steeper in the transition region. This clearly shows that the VT700 sample exhibits a better MIR transmittance switching efficiency than sample VT400. Moreover, the MIR transmittance is reduced to nearly zero above the transition temperature of around 70 °C, as illustrated in Fig. 6a,b.
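The display equations referenced in the preceding passages (absorption coefficient, Tauc relation, extinction coefficient, refractive index and optical contrast factor) are missing from this version of the text. The standard forms consistent with the variable definitions given above are likely the following; treat them as plausible reconstructions rather than a verbatim recovery of the source equations:

```latex
\begin{align}
  \alpha &= \frac{1}{t}\,\ln\!\left[\frac{(1-R)^{2}}{T}\right]
    && \text{absorption coefficient from } T,\ R \text{ and thickness } t \\
  (\alpha h\nu)^{1/n} &= k\,(h\nu - E_{g})
    && \text{Tauc relation; } n = 1/2 \text{ for allowed direct transitions} \\
  k_{\mathrm{ext}} &= \frac{\alpha\lambda}{4\pi}
    && \text{extinction coefficient} \\
  n &= \frac{1+\sqrt{R}}{1-\sqrt{R}}
    && \text{refractive index from near-normal reflectance, neglecting } k_{\mathrm{ext}} \\
  \Delta\tau(\lambda) &= \tau_{LT}(\lambda) - \tau_{LH}(\lambda)
    && \text{optical contrast between low and high temperature}
\end{align}
```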
The differential of the transmittance with respect to temperature, dT r /dT, is displayed in Fig. 6d; it was fitted with a Lorentzian profile to ascertain the metal-to-insulator transition parameters. The phase transition temperatures were determined to be ~ 60.0 °C and ~ 64.4 °C for the VT400 and VT700 samples, respectively. Sample VT700 has a narrow hysteresis width of FWHM = 12.6 °C, compared with FWHM = 33.7 °C for sample VT400. The phase transition temperatures are in good agreement with the resistivity measurements as a function of heating temperature, shown in Figure S2 (a) and (b (i) and (ii)) in the supplementary information. Such significant variation in transition temperature and hysteresis width between the two samples, fabricated at different substrate temperatures, can be attributed to film discontinuity, density, porosity, crystallinity, grain boundaries, defects, film particulates and the thickness of the samples studied here 22,31. For instance, the VO2 grain size increases with increasing substrate temperature. This is because particles agglomerate at high substrate temperature to form a more compact thin film with fewer grain boundaries, as illustrated by the TEM cross-section image in Fig. 2d. Notably, the transition temperature of VT700 is closer to that of bulk VO2 (M) (68.0 °C). The thermochromic parameters attained from sample VT700 are comparable to the results reported by Guinneton et al. 22, who reported a thermochromic optical switching transition temperature of 68.0 °C and a transition range of less than 10 °C for a VO2 film thickness of 120 nm. We, however, report similar performance for a film more than five times thicker, grown using fs-PLD. MIR and LWIR emissivity. Temperature-dependent reflectance measurements at 25 °C, 60 °C, and 100 °C are illustrated in Fig. 7a,b. It can be seen that the VO2 thin films exhibit a significant change of reflectance upon heating, which correlates with the previous report by Guinneton et al. 15. The transmittance and reflectance measured at temperatures of 25 °C, 60 °C, and 100 °C were used to determine the temperature-dependent emissivity of the VO2 thin films. The emissivity as a function of wavelength was estimated by employing conservation of energy for thermodynamic radiation, ε(λ) = 1 − ρ(λ) − τ(λ) 23, where ε(λ), ρ(λ), and τ(λ) represent the emissivity, reflectance and transmittance, respectively. According to Kirchhoff's law of thermal radiation, at equilibrium the emissivity of a material must be equal to its absorptivity, α, at constant wavelength (λ) and temperature (T). Figure 7c,d show the infrared emissivity of the VT400 and VT700 films at different temperatures, revealing their thermochromic properties. It is noted that the emissivity is highest at low temperatures and decreases at higher temperatures, more strongly for sample VT400 than for sample VT700. Such differences in emissivity values are attributed to variance in optical contrast, reflectivity and transmittance. Thus, the rougher thin film surface has a lower reflectivity and higher scattering, owing to more grain boundaries and higher porosity. These initial feasibility studies suggest that variable thermo-optical emissivity can be achieved passively, within a small change in temperature, in the MIR from VO2 (M1) thin films prepared using fs-PLD.
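An illustrative sketch of this analysis is given below (not the authors' code: the heating curve is synthetic and all names are placeholders). It fits a Lorentzian to dT r /dT to extract the transition temperature and width, and evaluates the emissivity balance ε = 1 − ρ − τ used above:

```python
# Illustrative sketch: extract the transition temperature and width from a
# transmittance-vs-temperature curve by fitting a Lorentzian to dTr/dT, and
# estimate emissivity via the Kirchhoff-type balance eps = 1 - rho - tau.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(T, A, Tc, w):
    """Lorentzian profile; Tc is the transition temperature, w the FWHM."""
    return A * (w / 2) ** 2 / ((T - Tc) ** 2 + (w / 2) ** 2)

def transition_parameters(T, Tr):
    """Fit dTr/dT with a Lorentzian; returns (Tc, FWHM)."""
    dTr_dT = np.gradient(Tr, T)
    p0 = [dTr_dT.min(), T[np.argmin(dTr_dT)], 10.0]  # rough initial guess
    (A, Tc, w), _ = curve_fit(lorentzian, T, dTr_dT, p0=p0)
    return Tc, abs(w)

def emissivity(rho, tau):
    """Spectral emissivity from measured reflectance and transmittance."""
    return 1.0 - rho - tau

# Synthetic heating curve loosely mimicking Fig. 6c,d (numbers are made up):
T = np.linspace(25, 100, 76)
Tr = 0.35 / (1 + np.exp((T - 64.4) / 4.0))  # transmittance drop near ~64 degC
Tc, fwhm = transition_parameters(T, Tr)
print(f"Tc = {Tc:.1f} degC, FWHM = {fwhm:.1f} degC")
```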
It is important to mention that the emissivity of the VO2 (M1) thin films decreases as the temperature increases. These results correlate with Gomez-Heredia et al. 33, who synthesised VO2 thin films onto sapphire and silicon substrates using a pulsed laser deposition technique with a KrF pulsed excimer laser. The authors demonstrated a decrease in emissivity as a function of increasing temperature in the MIR wavelength range. Moreover, it has been suggested that the MIR emissivity properties of VO2 (M1) thin films depend mainly on the infrared optical properties of the substrate. For example, Benkahoul et al. 34 synthesised VO2 thin films on various substrates, including quartz, silicon, and polished mirror-like Al, employing RF reactive sputtering of a vanadium target. The authors reported that the temperature dependence of the emissivity of a VO2 thin film deposited onto a highly IR-reflective Al substrate is opposite to that of samples deposited on quartz and silicon substrates. This is attributed to the reflectance of the VO2 film on the quartz substrate increasing with temperature, whereas that of the film on the Al substrate decreases with increasing temperature. Conclusion The fs-PLD technique enables scalable manufacturing of thicker VO2 (M1) thin films within a shorter timescale, from a less expensive V2O5 target material, compared with conventional methods. This technique was employed to deposit VO2 (M1) thin films onto silica substrates at different substrate temperatures. Surface morphology studies using SEM imaging reveal that the sample fabricated at a substrate temperature of 400 °C (VT400) comprises small nanoparticles with grain sizes of about 12 nm. Conversely, as the substrate temperature increased to 700 °C (VT700), the particles agglomerated to form a film of larger particle size, with an average value greater than 360 nm. The TEM and XRD characterisations confirmed that the VO2 thin films deposited on the silica substrates are polycrystalline, with sample VT700 strongly textured along the monoclinic (011) orientation. The increase in substrate temperature (sample VT700) leads to an increase in particle or grain size with reduced grain boundaries and film thickness, and minimal porosity defects in the surface and cross-section. Correspondingly, the optical absorption edge decreases with increasing substrate temperature due to the lack of porosity defects on the thin film surface and cross-section. This leads to a decrease in the optical band gap and a slight increase in refractive index from the visible to the NIR spectrum. Furthermore, sample VT700 exhibits high-quality thermochromic properties, with the best insulator-to-metal transition temperature switch of 64.4 °C and a hysteresis width of 12.6 °C at 3.2 μm wavelength. On the other hand, sample VT400 shows a better modulation of emissivity under heating from 25 to 100 °C. Consequently, these results confirm the tunable optical and thermochromic properties of VO2 thin films fabricated on silica substrates by the fs-PLD technique, with significant potential for developing smart window applications. Data availability All experimental deposition conditions and characterisation procedures, methods and data are provided in the text and supplementary information. Any clarifications will be available by contacting the corresponding author.
5,999
2022-07-06T00:00:00.000
[ "Physics" ]
Multistability of saxophone oscillation regimes and its influence on sound production – The lowest fingerings of the saxophone can lead to several different regimes, depending on the musician's control and the characteristics of the instrument. This is explored in this paper through a physical model of the saxophone. The harmonic balance method shows that for many combinations of musician control parameters, several regimes are stable. Time-domain synthesis is used to show how different regimes can be selected through initial conditions and the initial evolution (rising time) of the blowing pressure, which is explained by studying the attraction basin of each stable regime. These considerations are then applied to study how the produced regimes are affected by properties of the resonator. The inharmonicity between the first two resonances is varied in order to find the value leading to the best suppression of unwanted overblowing. Overlooking multistability in this description can lead to biased conclusions. Results for all the lowest fingerings show that a slightly positive inharmonicity, close to that measured on a saxophone, leads to first register oscillations for the greatest range of control parameters. A perfect harmonicity (integer ratio between the first two resonances) decreases first register production, which adds nuance to one of Benade's guidelines for understanding sound production. Thus, this study provides some a posteriori insight into empirical design choices relative to the saxophone. Introduction A classic endeavor in musical acoustics consists in the systematic study of sound production features of a musical instrument. Early studies use an artificial mouth to replace the musician (on the clarinet [1][2][3][4] or the bassoon [5]) in order to better describe and understand the physical phenomena at play during sound production. Later on, artificial mouths have been robotized to provide a complete mapping of the instrument's behavior, aiming at understanding how the instrument must be acted on to produce different sounds [6,7] or describing the influence of an acoustical parameter of the resonator on sound production [8]. The objective of this last study is shared by other works using a rather different approach to the systematic description of the instrument's behavior: using a physical model. Based on oscillation thresholds for instance [9], some conclusions can be drawn as to the acoustical characteristics facilitating the production of sound. Numerical resolution of the model's equations also constitutes a repeatable way to map the produced sound to the characteristics of the instrument, which has direct applications in instrument making [10,11]. However, from a mathematical perspective, as nonlinear dynamical systems, wind instrument models often admit multiple solutions for a given set of parameters. The question of the stability of each of these solutions holds great importance when aiming to describe or predict the playability of an instrument based on its physical model. But some important questions remain unanswered, even for ideal cases where the stability or instability of each regime would be known. For instance, which regime is produced if two regimes are stable for the same control parameter combination? In the case of such coexistence of stable solutions, denominated multistability hereafter, the convergence towards one or the other solution depends on the initial conditions.
Indeed, each solution is associated with a region of attraction, or attraction basin, defined as the region of the phase space where all initial conditions converge towards this solution [12,13]. For instance, attraction basins are studied in walking models [14,15], where the "walking" (periodic) regime almost always coexists with a stable equilibrium, corresponding to falling. In this case, describing attraction basins informs control strategies in robotics [16,17]. Attraction basins are also studied for classic dynamical oscillators, such as Chua's circuit [18], with experimental explorations of the attraction basins [19] as well as numerical investigations [20]. As strongly nonlinear self-oscillating systems capable of multiple oscillating regimes, wind instrument models are among the systems for which studying attraction basins can shed light on their rich behavior and help understand the control strategies used by musicians. However, to our knowledge, no study on the attraction basins of musical instruments has been produced, although several studies explore their multistability. Experimental work on the clarinet [21] and a numerical study of several idealized woodwind resonators [22] illustrate in particular the hysteresis between regimes, which is a consequence of multistability. On the flute, continuation and synthesis have been used to investigate the hysteresis between regimes, notably depending on inharmonicity [23]. Describing the attraction basins and comparing their sizes is expected to give information on which regime is most likely produced, assuming some probabilistic repartition of the initial conditions in the phase space [24]. However, an exhaustive description is almost impossible for a complete model of an instrument, where the phase space is of very large dimension. In such cases, attraction basins may be partially explored, based on a reduction of the phase space to one or two dimensions. For instance, the infinite-dimensional phase space of a delayed system can be partially described along two dimensions [25]. In the case of musical instruments, a reduction of the phase space is proposed in this paper, based on knowledge of typical musical scenarios. Throughout this work, the case of a saxophone model is considered, and two scenarios are studied: transition from another established limit cycle (scenario number 1), and the first attack transient of a note, where the blowing pressure parameter goes from 0 to a certain final value (scenario number 2). Section 2 presents the physical saxophone model and the two numerical methods used to solve its equations: the harmonic balance method and time-domain synthesis. Next, multistability is introduced by computing the bifurcation diagram with the harmonic balance method and continuation (asymptotic numerical method) and exhibiting hysteresis cycles using time-domain synthesis in Section 3 (control scenario number 1). Then, in Section 4, a simple test case of scenario number two is presented to study sound production, where the blowing pressure increases from 0 to its final value over different durations. We show how this duration can influence the final regime in multistability regions, and explain these results by presenting the attraction basin of each regime. Section 5 demonstrates how awareness of multistability can lead to a better description of the behavior of the model.
Depending on the inharmonicity of the resonator, the size of the control parameter regions where each regime appears in synthesis is described, taking multistability into account. This provides an interpretation of the inharmonicity value measured on the saxophone, by showing that it corresponds to an optimum in periodic regime production. Numerical simulation framework 2.1 Saxophone model The saxophone model used in this study is comprised of three main elements: a one degree-of-freedom oscillator representing the reed, a regularized nonlinear characteristic giving the flow through the reed channel, and a modal description of the measured impedance of the resonator. Similar models solved by time-domain synthesis (Sect. 2.2) are used in conjunction with analytical techniques to study the playing frequency [26] and spectrum [27] of clarinets, as well as their radiated power with a comparison to measurements [28]. The harmonic balance method (Sect. 2.3) can also be applied to this model to study its dynamic behavior, for instance to quantify the effect of neglecting reed contact [29]. Dimensionless [5,30] acoustical Kirchhoff variables (p, u) are used in this work: p = p̂/p_M and u = Z_c û/p_M, where the hat notation indicates the variable in physical units, p_M is the static pressure necessary to close the reed channel completely and Z_c is the characteristic input impedance of the resonator for plane waves. Similarly, the reed displacement from equilibrium is given in dimensionless form as x = x̂/H, where H is the distance between the reed and the mouthpiece lay at rest. With this formalism, the reed channel is closed when x ≤ −1. In this work, the only time-varying control parameter [4] is the dimensionless blowing pressure γ = p_m/p_M, where p_m is the physical value of the pressure in the mouth of the musician. We leave all other control parameters constant in order to limit the dimensionality of the study. The values and names of the parameters are summarized in Table 1 and detailed below through the model description. Their values are drawn from [31] for the reed parameters q_r and ω_r, and from [32] for the order of magnitude of the contact stiffness K_c. The reed model Following [32], the reed is modeled by a single degree-of-freedom oscillator including a nonlinear contact force accounting for the mouthpiece lay, where the two parameters of the reed are its angular eigenfrequency ω_r and its damping coefficient q_r, and the contact force is a function of the dimensionless reed opening x + 1 and is taken from [33], with K_c = 100. Since x + 1 is the distance between the reed and the mouthpiece lay, F_c can be interpreted as a quadratic stiffness activated whenever the reed touches the lay. The ramp function min(x + 1, 0) is regularized using a parameter η = 10⁻³ to avoid non-differentiability at x = −1 (reed closure). The regularization controlled by the parameter η is necessary for the system to fit the quadratic formalism required by the implementation of the harmonic balance method and asymptotic numerical continuation in the MANLAB software, which produces the bifurcation diagrams of the present article. Although the parameter η is not necessary for the time-domain synthesis method to function, it is kept for comparison purposes.
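The display equations for the reed oscillator and contact force are lost in this version of the text. A plausible reconstruction, consistent with the variable definitions above (dimensionless displacement x, blowing pressure γ, contact stiffness K_c, regularization parameter η), is the following; the exact normalization and signs should be checked against the cited references:

```latex
% Hedged reconstruction of the reed dynamics (Eqs. (4)-(6)); the precise
% coefficients may differ in the source.
\begin{align}
  \frac{1}{\omega_r^2}\,\ddot{x}(t) + \frac{q_r}{\omega_r}\,\dot{x}(t) + x(t)
    &= p(t) - \gamma(t) + F_c\big(x(t)\big), \\
  F_c(x) &= K_c\,\big[\min(x+1,\,0)\big]^2,
\end{align}
% with the ramp function regularized as
\begin{equation}
  \min(X, 0) \;\simeq\; \frac{X - \sqrt{X^2 + \eta}}{2},
\end{equation}
% so that the contact force is differentiable at reed closure, x = -1.
```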
The reed channel The flow at the input of the resonator is deduced from Bernoulli's law [34,35] applied to the reed channel, with turbulent mixing into the mouthpiece: u = ζ max(x + 1, 0) sign(γ − p) √|γ − p|, where ζ = Z_c w H √(2/(ρ p_M)) is the dimensionless control parameter accounting for the reed opening at rest, w being the effective width of the reed channel and ρ the density of the medium. We choose to ignore the flow due to the speed of the reed [26,36] in the present model, as it only has a small effect on the playing frequency, which is not discussed here. The absolute value and ramp function in Equation (7) are regularized with the same parameter η as in Equation (6). The resonator The input impedance is used to represent the resonator's acoustical response. The dimensionless input impedance Z(ω) of a Buffet-Crampon Senzo alto saxophone is measured with the CTTM impedance sensor [37]. The saxophone is measured without its mouthpiece, placing the reference plane of the impedance measurement at the input cross-section of the crook. A cylindrical tube is added by the transfer matrix method [38] in post-processing to represent the mouthpiece, before using the impedance in synthesis. The length of the cylinder is 60 mm and its radius is the same as the input radius of the crook, 6 mm. The total volume of the added cylinder approximately fits that of the missing cone apex, as per a classical academic approximation [39]. In order to use this input impedance with the two numerical methods presented above, it is decomposed into modes [26], so that Z(ω) ≈ Σ_{n=1}^{N_m} [C_n/(jω − s_n) + C_n*/(jω − s_n*)] (11), where C_n and s_n are the estimated complex modal residues and poles [40] and N_m is the number of modes retained in the simulation. In this paper N_m = 8 modes are used. This translates into the time domain by describing the pressure as a sum of complex modal components p_n, whose evolution depends on the modal coefficients, such that ṗ_n(t) − s_n p_n(t) = C_n u(t), ∀n ∈ [1, N_m] (12), with p(t) = Σ_{n=1}^{N_m} 2 Re(p_n(t)) (13). The flow u in (12) is given by (7). Figure 1 displays the measured impedance and the associated modal reconstruction according to Equation (11) for the D♯ fingering used throughout the rest of this article. The corresponding modal coefficients C_n and poles s_n are summarized in Table 2. Note that the choice of a modal formalism over a direct resolution of partial differential equations in the resonator is made here because of its lower computational cost and smaller number of variables, which facilitates large-scale numerical studies such as those presented in Section 5. Additionally, the modal formalism involves a limited number of parameters, which are directly tied to the acoustics of the resonator, instead of the complete geometrical description that a wave propagation model would require. Time-domain synthesis Equations (4), (7) and (12) are discretized using finite-difference approximations for the time-domain derivatives, following a discretization scheme first applied to simple waveguides [41] and then adapted to a modal formalism [26]. The reader can find a detailed description of this discretization scheme in a recent document [42]. The sampling rate used in the simulation is F_s = 176 400 Hz, four times higher than the standard audio sampling rate. Table 1. Parameters of the numerical model: musician control parameters γ and ζ, reed parameters q_r and ω_r, contact parameter K_c and parameters inherent to the numerical implementation η, N_m and H.
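A minimal time-stepping sketch of this modal formalism is given below. It is illustrative only, not the authors' implementation: parameter values are placeholders rather than the measured Table 1/2 data, a simple explicit discretization replaces the finite-difference scheme of [41, 42], and the contact force and regularization are omitted:

```python
import numpy as np

# Placeholder parameters (illustrative, not the measured values)
Fs = 176_400                      # sampling rate (Hz)
dt = 1.0 / Fs
gamma, zeta = 0.6, 0.6            # blowing pressure, reed-opening parameter
omega_r, q_r = 2 * np.pi * 4000, 1.0
s = np.array([-50 + 1j * 2 * np.pi * 140,   # complex modal poles s_n
              -80 + 1j * 2 * np.pi * 290])
C = np.array([200.0 + 0j, 250.0 + 0j])      # complex modal residues C_n

p_modal = np.zeros(len(s), dtype=complex)   # modal components p_n
x, v = 0.0, 0.0                             # reed displacement and velocity
out = np.empty(Fs)                          # one second of signal

for i in range(Fs):
    p = 2 * np.real(p_modal).sum()          # Eq. (13): total pressure
    dp = gamma - p
    u = zeta * max(x + 1.0, 0.0) * np.sign(dp) * np.sqrt(abs(dp))  # Eq. (7)
    # Eq. (12): explicit Euler step for each complex modal ODE
    p_modal += dt * (s * p_modal + C * u)
    # Reed oscillator (Eq. (4) without contact force), integrated explicitly
    a = omega_r**2 * (p - gamma - x) - q_r * omega_r * v
    v += dt * a
    x += dt * v
    out[i] = p
```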
Such a high sampling rate is required, given the chosen finite-difference scheme, to give precise results that match those obtained with the harmonic balance method. As an illustrative result, Figure 2 shows an example of the synthesized pressure signal and its spectrogram. Note that the signal shown is a portion of the signal used in Figure 4, with control scenario number one. It corresponds to the first occurrence of the oscillations, at a blowing pressure value γ ≃ 0.45. At this point, the system jumps from equilibrium to the first register, passing through fleeting second register and quasi-periodic regimes. The spectrogram (Fig. 2b) shows the second register to be the octave (double the fundamental frequency) of the first register. The quasi-periodic portion of the signal displays amplitude variations, seen in the envelope of the signal (Fig. 2a) and on the odd harmonic components of the spectrogram. Quasi-periodic regimes are well known in saxophone-like instrument models, documented for instance in [2,8,43]. Harmonic balance and numerical continuation The harmonic balance method is an analysis method particularly adapted to the study of musical instrument models [44], since it focuses on periodic solutions, which correspond to the produced notes. Assuming periodicity of the solution allows expanding all variables in Fourier series [45,46] up to order H, such that the i-th variable X_i expands to X_i(t) = Σ_{h=−H}^{H} X_{i,h} e^{jhω₀t}, where there are 2H + 1 complex Fourier coefficients X_{i,h} per variable, and ω₀ is the fundamental angular frequency of the signal. In this work, the number of harmonics retained is H = 20. Applying the method to a differential system transforms it into an algebraic system whose unknowns are the Fourier coefficients of the variables and the solution's fundamental frequency, of the form R(X, ω₀) = 0. A numerical continuation method such as the asymptotic numerical method can then be applied to the resulting algebraic system [47,48] to find how the solution changes for other constant values of a chosen control parameter, for instance the blowing pressure parameter γ. More precisely, knowing a solution to the system for a given value of γ, the continuation method finds a solution for slightly higher or lower values of γ, and therefore progressively maps out the solutions of the system over a given range of blowing pressures. In this work, simulations were carried out using the MANLAB software (http://manlab.lma.cnrs-mrs.fr/). This yields the values of the Fourier coefficients of the oscillating solutions along several values of a control parameter. The Fourier coefficients can then be used to reconstruct the time-domain solutions. This evolution can be summarized by a bifurcation diagram, which represents the variation of some descriptor, for instance the amplitude, of the solutions of the system with respect to the chosen control parameter. In addition, the stability of the solutions is determined using Floquet theory (for more details refer to [49][50][51]). Multistability This section presents the blowing pressure ranges where the model can produce each regime by studying their stability with the harmonic balance method. This result is summarized in the bifurcation diagram, on which multistability zones appear as intervals where several regimes are stable. Signals are also synthesized with time-domain synthesis to exhibit how multistability leads to hysteresis. The correspondence between the two methods (the harmonic balance method and time-domain synthesis) is also checked.
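To make the balance step concrete, consider a generic first-order system with a quadratic nonlinearity (an illustrative toy example, not the saxophone equations themselves):

```latex
% Illustrative harmonic balance on the toy system \dot{x} = a x + b x^2.
% Substituting the truncated Fourier ansatz
\begin{equation}
  x(t) = \sum_{h=-H}^{H} X_h\, e^{\,\mathrm{j} h \omega_0 t},
  \qquad X_{-h} = \overline{X_h},
\end{equation}
% and collecting the coefficient of each harmonic e^{j h \omega_0 t} yields,
% for every h in [-H, H], one algebraic equation
\begin{equation}
  \mathrm{j}\, h\, \omega_0\, X_h \;=\; a X_h \;+\; b \sum_{k=-H}^{H} X_k X_{h-k},
\end{equation}
% i.e. a closed system R(X, \omega_0) = 0 in the unknowns (X, \omega_0),
% completed by a phase condition, since periodic solutions are defined only
% up to a time shift. This is the system handed to the continuation method.
```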
Overlapping stability zones on the bifurcation diagram The bifurcation diagram is computed for the (written) low D♯ fingering of an alto saxophone [52]. Figure 3 shows the L²-norm of the pressure signal, ||p||₂ = √((1/T)∫₀ᵀ p(t)² dt), where T is the period of the signal, and identifies which regime each branch corresponds to. The blowing pressure parameter γ spans the interval between 0 and 2. This bifurcation diagram contains branches corresponding to the so-called equilibrium, where no sound is produced, for the lowest and highest γ values. The equilibrium at low γ corresponds to the musician not blowing hard enough into the instrument to obtain a sound, while at high γ equilibrium means that the reed channel remains closed due to the large pressure difference between mouth and mouthpiece. For intermediate γ values, the first and second register both appear. The first register is the fundamental pitch obtained with a given fingering, and the second register, sometimes referred to as overblowing, is pitched one octave higher than the first register. Even though the saxophone has an octave key facilitating the production of the second register, musicians know how to produce second register regimes without activating it. Note that contrary to the clarinet, the register key of the saxophone controls the two register holes, opening one or the other depending on all the pressed keys on the instrument. It is therefore not surprising that both regimes appear on the same fingering. In the present case, the γ interval between 0 and 2 contains all the studied limit cycles of the model, and at its bounds only the equilibrium solution exists and is stable. The diagram in Figure 3 displays several zones of coexistence between stable regimes (i.e. multistability). These regions of coexistence are often bounded by bifurcation points, which mark qualitative changes in the oscillating regimes. In the present work, the system encounters several types of bifurcations, which we define succinctly in terms of the regime changes they correspond to. More formal definitions and characterizations of these bifurcations, notably in terms of critical values of the eigenvalues of the Jacobian matrix of the system, can be found in [53,54]. The Hopf bifurcation marks the emergence of an oscillating solution from equilibrium. The fold bifurcation corresponds to a stable and an unstable solution branch colliding and disappearing, which is best seen on the bifurcation diagrams as limit points of the solution branches. Neimark-Sacker bifurcations correspond to a periodic regime becoming unstable and being replaced by a quasi-periodic regime. A degenerate case of the Neimark-Sacker bifurcation is the period-doubling bifurcation, where a periodic solution of halved frequency emerges from an oscillating solution. On saxophone models, period-doubling bifurcations transform a second register regime into a first register. As is discussed in the next paragraph for the saxophone, Hopf and fold bifurcations often delimit coexistence between the equilibrium and an oscillating regime, while Neimark-Sacker and period-doubling bifurcations mark the limits of multistability regions between two oscillating regimes. Starting from low blowing pressure values, the first coexistence zone appears between the first register and the equilibrium. It is delimited by the fold bifurcation F1 of the first register around γ = 0.3 and the inverse Hopf bifurcation H1 at γ = 0.4, where the equilibrium becomes unstable.
The second coexistence zone is between the first and second register, in the interval where the second register is stable between the Neimark-Sacker bifurcation NS1 and the period-doubling bifurcation PD1 (at γ = 0.66 and γ = 0.79, respectively). The Neimark-Sacker bifurcation NS1 marks the destabilization of the second register and the emergence of a quasi-periodic regime (not represented here), sometimes called multiphonics by musicians. The next coexistence zone occurs in the interval between the two period-doubling bifurcations PD1 and PD2 on the second register branch, where a stable double two-step solution [52] emerges. This coexistence zone is not shaded on the figure, as it could represent less of a musical issue, since double two-step regimes have roughly the same frequency as standard first register regimes. The fourth coexistence zone is more complicated: it starts between the first and second register at the period-doubling bifurcation PD2, and then the equilibrium also becomes stable at the Hopf bifurcation H4. The limit of the last coexistence zone is set by the two fold bifurcations F2 and F3, where the first and second register solutions cease to exist. The diagram in Figure 3 shows that coexistence zones between stable regimes span most of the range in γ where oscillating solutions exist, including arguably crucial γ values such as the lowest for which an oscillating regime exists. Multistability is not an isolated phenomenon, but rather corresponds to the general situation, at least for this fingering. 3.2 Time-domain synthesis with blowing pressure ramps (control scenario number one) Once the multistability zones are identified, time-domain synthesis can be used to exhibit their role when playing the instrument. One of the main phenomena multistability entails is hysteresis: for several values of the blowing pressure, a different regime is produced depending on whether the blowing pressure is increasing or decreasing. Various multistable regimes are exhibited using this method in [22] on woodwind models. Figure 4 shows the hysteresis cycles obtained by using ramps of γ: in control scenario number one, the parameter γ is progressively increased from 0 to 2 and then decreased back to 0. Each of the increasing and decreasing phases of the synthesis has a duration of 60 s. This duration was chosen after several trials, sufficiently long to let stable regimes establish while keeping a γ slope steep enough to limit dynamical bifurcation delays [55]. Figure 4 shows that the synthesized signal starts from γ = 0 at equilibrium, its L²-norm being zero. Then, at Hopf bifurcation H1, the equilibrium becomes unstable, which causes the system to start oscillating. At this point, the synthesis goes through a transient represented in Figure 2 (between t = 14.2 s and t = 14.8 s), passing briefly by unstable second register and quasi-periodic regimes before reaching the first register. Once the first register is established, the branch is followed all the way to extinction (around γ = 1.75 in Fig. 4), because the first register does not become unstable until the fold bifurcation F3. At this point, the system returns to equilibrium until the highest value of the ramp, γ = 2. The blowing pressure γ then starts decreasing, and the system stays at equilibrium until Hopf bifurcation H3 is reached, for γ ≃ 1.05. There, the system jumps to the stable second register regime.
The second register branch is followed until the period-doubling bifurcation PD2, where the system briefly follows the double two-step branch. The double two-step branch appears, in terms of L²-norm, very close to the second register branch, and looks like a small bump. The two branches cross again at the period-doubling bifurcation point PD1. Then something rather surprising occurs: the L²-norm of the time-synthesized signal seems to follow the second register branch of the bifurcation diagram beyond the Neimark-Sacker bifurcation NS1, although the second register branch has become unstable. This is because the quasi-periodic regime emerging at NS1 is actually a stable attractor, and the associated L²-norm happens to be close to that of the second register. Branches of quasi-periodic regimes have not been computed so as not to clutter Figure 3, but note that this is possible with the harmonic balance method and MANLAB [48]. When the quasi-periodic regime becomes unstable (around γ = 0.55), the system jumps back onto the first register branch, which is followed until fold bifurcation F1, below which the stable equilibrium is the unique solution of the model. The path described previously is highly hysteretic: the sequences of regimes produced for increasing and decreasing γ are very different. Actually, the two paths only coincide in three regions: the lowest and highest γ intervals, for which only the equilibrium is stable, and a very small region around γ = 0.5 where only the first register is stable. The hysteresis phenomenon observed here in time-domain synthesis can be interpreted as the first step in attraction basin description: once a certain stable regime is reached, it is followed until extinction or loss of stability, even when other regimes are simultaneously stable. This confirms that a stable periodic regime is part of its own attraction basin. Reconstructing stable parts of the bifurcation diagram using time-domain synthesis and comparing them to those obtained using the harmonic balance method also provides validation for the numerical discretization scheme in this context. Here, it shows that time-domain synthesized signals are not perturbed by numerical artifacts due to time discretization and can be trusted to describe properties of the model. Note that even subtle details of the behavior, such as the tiny branch of double two-step solutions between the period-doubling bifurcations PD1 and PD2, are found by the time-domain synthesis. This exploration of the blowing pressure space using a long ramp is very useful to exhibit the hysteresis phenomenon, as well as to test the coherence between the two synthesis methods. However, this kind of sound is extremely artificial and far from anything a musician would use in everyday practice (provided it is even possible for a musician to produce it). Therefore, we frame the conditions of the rest of the study so that they can be interpreted in terms of selection of one regime over another. Regime emergence in a multistable context It is very likely that musicians learn to select between coexisting stable regimes, adjusting their control so that the established regime in a multistability region is the one they desire. This idea provides the layout for control scenario number two: we study the effect of a parametrized transient control or initial conditions on the established steady-state regime that follows when the control is constant.
To provide another way to interpret the selection process between multistable regimes, a third control scenario is used, where only initial conditions are varied and all control parameters are constant. Control scenario number two: increasing blowing pressure One way to study the attraction basins more thoroughly is to run many simulations with initial conditions spanning the whole phase space. However, since the considered model has a 2N_m + 2 dimensional phase space, a complete exploration is not possible. Moreover, many of the possible initial conditions are unlikely to be created by the musician. More interesting is the exploration of the regions of the phase space that are crossed by the system when a given control pattern is applied. Here, we focus on a monotonic increase of the blowing pressure γ at the attack: without using the tongue, the player starts blowing progressively harder into the instrument. Such a scenario was proposed in [56]. Note that instrumented mouthpiece measurements performed on the saxophone, such as those presented in [57], often show a different profile, including a pressure overshoot before the apparition of the oscillations. However, in the present study, we omit that overshoot so that the control scenario is entirely defined by a single parameter. In control scenario number two, the blowing pressure starts from 0 and rises until stabilizing at a certain value γ_f, over a time determined by the parameter τ_g. The temporal variation of γ is given by an expression (Eq. (17)) that is differentiable infinitely many times; a sketch of such an envelope is given below. Figure 5 displays four examples of such transients. Other envelopes (sigmoid, sine branch) were tested, causing only small quantitative changes to the results. Figure 6 shows which established regimes appear in time-domain synthesis depending on τ_g, for final values γ_f belonging to the multistability zones described in Figure 3. Each dot on the figure represents the type of established regime after five seconds of time-domain synthesis. This synthesis duration was chosen to be sufficiently long that the transient is completed and the established regime can be observed. Regime types are estimated using an energy-based criterion for equilibrium (if the energy of the pressure signal in the last ten periods is less than that in the first ten, the regime is classified as equilibrium) and a fundamental frequency estimator to determine the first and second register. To detect quasi-periodic regimes, an attribute that can be observed in Figure 2 is used: the fact that the amplitudes of the harmonics of a quasi-periodic signal vary temporally. For the classification, if the mean variance of the harmonic amplitudes is more than a certain threshold (here set to 10⁻⁶), then the regime is considered quasi-periodic. Figure 6a focuses on the first multistability region (highlighted in gray in Fig. 3), near the first Hopf bifurcation H1. The two stable regimes in this region are the equilibrium and the first register. For final values γ_f between 0.38 and 0.4, the system can converge to both regimes depending on the characteristic rising time τ_g. It is interesting to note that equilibrium is reached for the longest rising times, i.e. the slowest γ variation, whereas the oscillating regime is reached for the shortest rising times. This is understandable, as a quick γ increase tends to drive the system away from equilibrium, and therefore possibly out of its attraction basin.
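The exact expression of Eq. (17) is not reproduced in this version of the text; the sketch below uses a classic C∞ ("bump-function") ramp as a stand-in with the same qualitative properties (smooth, monotonic, reaching γ_f after a characteristic time τ_g), together with a skeleton of the regime classification described above. All thresholds and names are illustrative:

```python
import numpy as np

def smooth_step(u):
    """Classic C-infinity transition: 0 for u <= 0, 1 for u >= 1."""
    def f(v):
        return np.where(v > 0, np.exp(-1.0 / np.maximum(v, 1e-12)), 0.0)
    return f(u) / (f(u) + f(1.0 - u))

def gamma_envelope(t, gamma_f, tau_g):
    """Assumed stand-in for Eq. (17): smooth rise from 0 to gamma_f."""
    return gamma_f * smooth_step(t / tau_g)

def classify_regime(p, Fs, f1, n_harm=8, qp_threshold=1e-6):
    """Skeleton of the classification used for Figure 6.
    p: pressure signal, f1: expected first-register fundamental (Hz)."""
    T1 = int(Fs / f1)                       # samples per first-register period
    if np.sum(p[-10 * T1:] ** 2) < np.sum(p[:10 * T1] ** 2):
        return "equilibrium"                # energy-based criterion
    # Fundamental frequency estimate: spectral peak of the last second
    spec = np.abs(np.fft.rfft(p[-Fs:]))
    f0 = np.argmax(spec[1:]) + 1            # bin index ~ frequency in Hz
    # Quasi-periodicity: temporal variance of the harmonic amplitudes
    frames = p[-Fs:].reshape(10, -1)
    amps = np.abs(np.fft.rfft(frames, axis=1))[:, 1:n_harm + 1]
    if np.mean(np.var(amps, axis=0)) > qp_threshold:
        return "quasi-periodic"
    return "register 1" if abs(f0 - f1) < abs(f0 - 2 * f1) else "register 2"
```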
Note that some of the oscillating regimes near the limit, for the longest attack times, are classified as quasi-periodic. This is due to the transient being extremely long in this particular region: the steady-state regime is not yet established at the end of the synthesized signals. This classification particularity, which could be seen as an error, is not corrected because, from a musician's perspective, a regime still varying five seconds after the start of the attack will arguably not be considered periodic. The second zone of multistability is explored in Figure 6b. The first and second register are separated by some stable quasi-periodic regimes. This is the same quasi-periodic regime that appears in time-domain synthesis in Figure 2, which overlays the unstable portion of the second register branch. There is a particular range of the characteristic time τ_g that seems to produce the first register for a larger range of γ_f. The corresponding values of τ_g are close to the period of the first register, represented by the horizontal line in Fig. 6b. In the last multistability zone (0.9 ≤ γ ≤ 1.25), three regimes may be stable for the same parameter values, as shown in Figure 3. However, Figure 6 reveals that there is no γ_f region where all three are produced. This can be explained by analyzing the attraction basins (see Sect. 4.2, Fig. 8). Control scenario number three: varying initial conditions The results concerning the influence of the blowing pressure parameter can be better understood by examining the regions of the phase space leading to each regime. A point in the phase space represents the current state of the system, meaning the values of the state variables and their derivatives. Since the system is deterministic, a given point in the phase space always leads to the same stable established regime. Therefore, regions of the phase space can be associated with each regime. These regions are called attraction basins. The attraction basins are only rigorously defined in a context where all control parameters are constant. However, to interpret the results obtained with a control parameter transient such as control scenario number two, it is useful to observe the layout of the attraction basins obtained for the final blowing pressure value γ_f. Specifically, at the beginning of control scenario number two, the blowing pressure is subject to fast variations, making a direct approach based on attraction basins ill-defined. However, after the transient, the blowing pressure stabilizes around its final value γ_f. To elucidate the behavior of the system at this moment, Section 4.2 performs a systematic analysis of its convergence with constant control parameters, in which case the attraction basin representations are relevant. Therefore, a third control scenario is devised, where the control parameters are kept strictly constant and only the initial values of certain state variables of the system are modified. Because the phase space is of dimension 2N_m + 2 (all modal components and their derivatives, plus reed position and speed), it is necessary to choose a projection to represent the attraction basins. After some trials, a projection of the phase space onto the two first modal components (see Eq. (12)) and the derivative of the second one, (p₁, p₂, ṗ₂), was chosen as a three-dimensional projection.
These variables were chosen not only because of their physical or mathematical meaning, as they relate respectively to the first and second register, but also because they allow for the clearest visual separation of the limit cycles and attraction basins that could be obtained by the authors. To estimate the attraction basins, time-domain synthesis is launched with initial conditions spanning the projected phase space and constant control parameters (control scenario number three). A total of 256 initial conditions are scattered by Latin hypercube sampling into a rectangular parallelepiped such that p₁ᴵ ∈ [−0.2, 0.2], p₂ᴵ ∈ [−2, 2], ṗ₂ᴵ ∈ [−707, 707]. These bounds should be understood with respect to the amplitude of the limit cycle along each dimension (as can be seen in Figs. 7 and 8). They were chosen so that whenever a regime is stable, it is obtained in synthesis at least once. All the other modal pressure components and their derivatives, as well as the reed speed ẋ, are initially zero. So that there is no discontinuity when starting the synthesis, the initial values of the variables p, then x and u, are computed accordingly through Equations (13), (4) and (7). All control parameters, including the blowing pressure γ, are kept constant during these simulations: recall that the attraction basin analysis is only strictly valid for constant values of the control parameters. Therefore, the deductions made using the projected attraction basins (control scenario number three) about the results of control scenario number two are subject to caution. Figure 7 shows these initial conditions in the plane (p₂, ṗ₂), associated with the regimes they lead to, for three values of the blowing pressure parameter γ. All the phase space points the system passes through during the transient are also part of the attraction basin, so they are represented in the figure as well. The last period of the synthesized signals is plotted as the limit cycles. The three values of γ are chosen near the last Hopf bifurcations (H3 and H4 in Fig. 3), where the equilibrium becomes stable again. Graph 7a is computed at γ = 1, before the Hopf bifurcations, so only registers one and two are stable. Therefore, none of the initial conditions lead to equilibrium (no black dots on the figure). Each attraction basin is located around the corresponding limit cycle. The attraction basins seem to overlap, but this is merely an effect of the projection of the phase space. Graph 7b corresponds to γ = 1.1, right above the Hopf bifurcations H3 and H4. One can see that some initial conditions located near the origin now lead to equilibrium. This plot can seem surprising when compared to Figure 6c, which only shows equilibrium and register one for this value of γ_f using control scenario number two, although the attraction basin of the second register (red points in Fig. 7b) seems larger than that of the other regimes. This is due to two effects. First, the projection spreads the second register but shrinks the first register. Secondly, control scenario number two starts from the origin of the phase space: this is ensured by setting all variables and their derivatives to 0 at the start of the synthesis. Therefore, attraction basins surrounding the origin, such as that of the first register, are more likely to be entered. Here we see that considering only the size of the attraction basins, and ignoring their position, provides only little information.
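A minimal sketch of this initial-condition sampling, using SciPy's quasi-Monte Carlo module (illustrative; `run_synthesis` is a hypothetical stand-in for the time-domain synthesis and classification described above):

```python
import numpy as np
from scipy.stats import qmc

# Bounds of the sampled parallelepiped for (p1, p2, dp2/dt), as in the text
lower = np.array([-0.2, -2.0, -707.0])
upper = np.array([ 0.2,  2.0,  707.0])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=256)              # 256 points in [0, 1)^3
initial_conditions = qmc.scale(unit_samples, lower, upper)

# Hypothetical driver: run one synthesis per initial condition at constant
# control parameters and record which regime the system converges to.
# basins = [run_synthesis(p1, p2, dp2, gamma=1.1, zeta=0.6)
#           for p1, p2, dp2 in initial_conditions]
```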
These data need to be interpreted in terms of musician action, notably pertaining to the ability of the musician to actually reach the corresponding region of the phase space. Here, the fact that the attraction basin is vast is effectively countered by the fact that it is far from the origin, making it hard to reach using the studied control scenario number two. Graph 7c represents the same results for c = 1.2, where Figure 6c showed more occurrences of the equilibrium than for c = 1.1. This is explained by the attraction basin of the equilibrium being larger: there are many more black dots in Figure 7c than in 7b. Notice that the attraction basin of the first register expands (also many more green dots than in Fig. 7b), while the attraction basin of the second register (red dots) shrinks. This process (not represented here) continues until the second register ceases to be stable at fold bifurcation F3 (see Fig. 3). Figure 8 shows the attraction basins and limit cycles, in a three-dimensional projection of the phase space (p_1, p_2, ṗ_2), at particular values of c highlighted in Figure 6. Graphs 8a, 8b, 8d and 8e should be read as further information on the regime map 6b, at the beginning of the multistability zone between the first and second registers. Graph 8a corresponds to c = 0.6, and confirms that the first register is the only stable regime: it was the only one to appear in the regime map 6b (for c_f = 0.6). Then, a quasi-periodic attractor appears in Graph 8b, for c = 0.63. Although the associated attraction basin seems smaller than that of the first register, it seems to almost surround the origin of the phase diagram. It was observed in synthesis that the transient of control scenario number two does not send the system very far from the origin in the phase space, compared, for instance, with the size of the limit cycles of registers one and two. This is linked to it necessarily starting from the origin of the phase space. The fact that control scenario number two tends to lead the system to phase space points near the origin explains why the regime map 6b displays more quasi-periodic regimes than first register. A similar interpretation can be formulated with regard to Graphs 8d and 8e, for c = 0.645 and c = 0.72 respectively. For these values of c in Figure 6b, there is more second register than first register. In Graphs 8d and 8e, it can be seen that although the size of the first register's attraction basin seems comparable to that of the second register, the latter clearly holds a central position around the origin of the phase space. When the second register attraction basin grows in Graph 8e, this translates into the disappearance of the first register from the regime map 6b. Note that Graph 8e confirms that the first register is still stable, as announced by the harmonic balance method in Figure 3. Graph 8c (c = 0.9) illustrates a slightly different explanation of a similar case of only the second register appearing in the regime map 6c: in this case, the attraction basin of the first register is simply too small; it makes up only a few points in Figure 8c. Graph 8f (c = 1.25) is comparable to 8d in that many regimes are stable, but only the one with the most central attraction basin appears in the corresponding regime map in Figure 6c. In this case this regime is the equilibrium, whose attraction basin in Graph 8f surrounds the origin, although it appears smaller than the others.
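For readers who wish to reproduce this kind of representation, the sketch below (reusing the basins dictionary from the previous sketch) draws the projected attraction basins as a 3-D scatter plot; the color convention follows the figure captions (green for first register, red for second register, blue for quasi-periodic, black for equilibrium).

```python
import matplotlib.pyplot as plt

colors = {"equilibrium": "k", "first register": "g",
          "second register": "r", "quasi-periodic": "b"}

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for regime, pts in basins.items():
    p1, p2, dp2 = zip(*pts)
    ax.scatter(p1, p2, dp2, c=colors.get(regime, "gray"), label=regime, s=8)
ax.set_xlabel("p1")
ax.set_ylabel("p2")
ax.set_zlabel("dp2/dt")
ax.legend()
plt.show()
```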
To complete the study, the full evolution sequence of the attraction basins obtained with control scenario number three can be found as an animation in the multimedia file Supp1.mp4. The authors suggest frequently pausing the animation to observe precisely how the attraction basins develop along the multistability zones.

Effect of the resonator's inharmonicity on regime production

The concept of multistability and the attraction basins are presented and explored here because they seem to be a very important part of the observed behavior of a saxophone model. The present section offers a succinct description of the behavior of the model, applied to an instrument design problem. It also illustrates how ignoring multistability can affect the description of the behavior of a model. Before using the analysis of a woodwind physical model to develop new instruments, it can be very informative to apply it to existing instruments, in the spirit of a reverse-engineering procedure. If the analysis method can explain a posteriori some design choices made on instruments with satisfying sound production characteristics, then it might help guide further innovative design choices in the right direction. In the present case, the produced regimes are studied for the seven lowest first register fingerings, and one acoustical parameter is varied artificially: the inharmonicity between the first and second resonances. The original data correspond to the measured impedance for the corresponding fingerings of a Buffet-Crampon Senzo professional alto saxophone. According to the so-called Bouasse-Benade prescription [58][59][60], near-perfect harmonicity between the resonances is cited as a condition for good playability of the instrument. This prescription is also discussed in recent studies [9,61]. On the saxophone, experimental studies using an artificial mouth have shown that varying inharmonicity greatly affects regime production [2,8]. In this work we define inharmonicity as the ratio between the second and first resonance frequencies, f_2/f_1, or Im(s_2)/Im(s_1) in terms of the parameters of Equation (11). On a saxophone this ratio is close to two. Many definitions of the inharmonicity can be devised, possibly taking into account more resonances. The present definition has the advantage of being very simple, and easy to modify in the modal formalism by adjusting only one modal frequency. Specifically, for the purpose of the following study, optimal regime production conditions are defined crudely in terms of how often each regime appears in synthesis. This definition differs from that employed in [9]. On the lowest fingerings, in this study, we simply consider that optimizing regime production means maximizing the appearance of the first register while minimizing that of the second register and of quasi-periodic regimes. Indeed, one of the challenges many beginner saxophone players face on the lowest fingerings is controlling the instrument so that the first register is produced, and not another regime. Quasi-periodic regimes are largely considered undesirable in common musical practice; however, they are a common issue on the lowest fingerings of the saxophone.

Regime production regions

Expanding on the idea of Figure 6, one can study the produced regime across the two-dimensional parameter space (c_f, f), while still varying the characteristic time s_g of control scenario number two. Figure 9 shows the classification of the obtained regimes for several combinations (c_f, f) and several characteristic times.
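As a rough sketch of how such a cartography could be organized (the grid bounds below are illustrative assumptions, not the values of the study, and run_scenario_two is again a placeholder for the model's synthesis under control scenario number two):

```python
import numpy as np

def run_scenario_two(c_f, f, s_g):
    """Placeholder: time-domain synthesis with the blowing pressure rising
    toward c_f with characteristic time s_g (control scenario two)."""
    return np.zeros(44100)

def classify_regime(signal):
    """Placeholder regime classifier, as in the earlier sketch."""
    return "equilibrium"

c_f_values = np.linspace(0.3, 1.2, 8)   # assumed bounds, for illustration
f_values = np.linspace(0.2, 1.4, 8)     # assumed bounds, for illustration
s_g_values = [0.1e-3, 3e-3, 100e-3]     # rise times, in seconds

regime_map = {
    (c_f, f, s_g): classify_regime(run_scenario_two(c_f, f, s_g))
    for c_f in c_f_values for f in f_values for s_g in s_g_values
}
print(len(regime_map))  # 8 * 8 * 3 = 192 synthesized signals
```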
Note that for a musician using their lower lip to control the instrument, it is difficult to control the reed opening parameter f without also varying the properties of the reed x_r and q_r. Leaving the reed parameters constant in this study amounts to a simplification of the musician's control. For readability reasons, the resolution of the cartography presented here is rather coarse: only eight values of c_f and of f and three characteristic times s_g, for a total of 192 synthesized signals. The range of the parameter f is inspired by the range measured in [8] for artificial mouth experiments, using the method proposed in [36]. This method relies on measuring the flow rate as a function of the mouth pressure, and estimating f based on the maximum flow using the nonlinear characteristic of Equation (7). As a case study, two maps computed with different inharmonicity values are presented in Figures 9a and 9b, so that they can be compared. In the modal formalism, the inharmonicity is changed very simply by modifying the value of the second modal frequency. Note that changing the length of the saxophone mouthpiece constitutes another method to vary the inharmonicity. However, it modifies all the modal frequencies simultaneously, which makes some interpretations less robust. Therefore, only the results obtained by varying the second modal frequency are presented here. Similar results can be obtained by varying the length of the mouthpiece. Two typical values of inharmonicity are chosen: one that could be called null, f_2/f_1 = 2; and the value measured on the saxophone, which is slightly higher, f_2/f_1 = 2.065. Focusing on Figure 9a, several features can be described and recognized from the situations explored in Section 4 with a fixed f. Coexistence regions can be noticed on most of the map, with a given (c_f, f) couple leading to different regimes depending on the characteristic time. This further demonstrates that multistability is a very common phenomenon across the control parameter space in woodwind models. A particular case of coexistence occurs on all regime maps, where long attack times lead to the system remaining at equilibrium, while fast attacks can trigger oscillations. This is the same phenomenon as in Figure 6a. These phenomena are located, as for Figure 6a, at the boundaries between equilibrium and oscillation regimes, which correspond to the Hopf bifurcations of the model (where the equilibrium becomes unstable). On the regime maps, points containing both equilibrium and oscillating regimes are seen on the vertical threshold near c = 0.4, the horizontal threshold near f = 0.2 and the extinction threshold around c = 1.1. Coexistence situations similar to Figure 6b can also be seen in Figure 9a, for example at f = 1.2 and c_f ≈ 0.8, where the short and long characteristic times lead to the second register, while the medium time leads to the first register. This possibly indicates that the second register attraction basin almost surrounds the origin, as seen in Figure 8, in all the multistability zones between the first and second registers for that fingering. Contrary to what could be expected, a null inharmonicity, where the second resonance frequency is exactly twice the first, does not lead to more first register production. Since an exact integer ratio between resonances does not facilitate the production of the first register, one can ask whether the model shows a particular value of inharmonicity which favors the production of the first register.
Figure 9. Classification of the regimes produced (equilibrium, first register, second register, quasi-periodic) depending on the control parameters c_f and f. Each rectangle corresponds to a couple (c_f, f) and the points inside indicate the regime for each characteristic time s_g (bottom 0.1 ms, middle 3 ms, top 100 ms). One rectangle on graph (b) is annotated as an example. The graphs correspond to two inharmonicities for the low written D♯ fingering: (a) f_2 = 2.065f_1, the value measured on a real saxophone, and (b) f_2 = 2f_1.

Rate of produced regimes: influence of the rise time on global regime production

To study the question of the inharmonicity favoring first register production, regime maps are computed for all the fingerings of the saxophone that should produce first register regimes, meaning those where the register holes are closed. For each fingering, the second modal frequency f_2 is varied from 1.96f_1 to 2.15f_1, in steps of 0.01f_1. A regime map containing N_p = 192 points, as for Figure 9, is then computed for each value of inharmonicity. The produced regimes are counted over the whole map, and a rate is computed for each of them with respect to the total number of oscillating regimes as

r_i = N_p,i / N_p,o, (19)

where regime i can be either the first register, the second register or quasi-periodic regimes, N_p,i is the number of points corresponding to regime i in the regime map and N_p,o is the total number of points corresponding to any oscillating regime (i.e., all regimes but equilibrium). Note that this description ignores non-oscillating regimes. Figure 10 depicts the rate of each oscillating regime produced depending on inharmonicity for the lowest fingering of the first register, the written B♭, which produces the heard note C3 at 131 Hz. In this figure the regimes are counted separately for each characteristic time s_g before being combined over the whole map to produce an averaged rate. On the averaged rate, optimal points are highlighted by triangles, corresponding to the maximum of first register and the minimum of second register produced, respectively. Both appear for values slightly above f_2/f_1 = 2. The inharmonicity values maximizing first register production do not correspond to exactly harmonic resonances, but to a second resonance slightly higher than the octave of the first. The proportion of quasi-periodic regimes is also displayed on the figure. Note that inharmonicity values around two lead to fewer quasi-periodic regimes, which is corroborated by existing results [8,43]. In Figure 10, it can be seen that the region of a low rate of production of the second register, for f_2/f_1 between 2.05 and 2.09, almost coincides with a region of a high rate of production of quasi-periodic regimes, for f_2/f_1 between 2.06 and 2.1. In this case, minimizing second register production seems to favor the production of quasi-periodic regimes. Figure 10 shows that ignoring multistability, by using only one value of the characteristic time, could have led to conclusions similar to those drawn with production rates averaged over several characteristic times: indeed, all the characteristic times yield qualitatively similar production rates. However, this is not always the case, and Figure 11 shows two examples where neglecting multistability and studying only one characteristic time can lead to biased conclusions. For the D♯ fingering (Fig. 11a), one can see that depending on the chosen increase duration s_g the production ratio varies greatly (from 20% to 60%).
If any quantitative interpretation is to be drawn from these results, it can change dramatically depending on the chosen attack time. Notice that the lowest rate of first register corresponds to the longest attack time s_g = 100 ms. Figure 9 shows that the first register regimes are produced on the edges of the zone of oscillation, in a multistability region between the first register and the equilibrium. The longest attack times in this region tend not to lead to a first register, but instead to an equilibrium, due to its attraction basin surrounding the origin (see for instance Fig. 7c or the multimedia file Supp1.mp4). Figure 11b shows the results for the written D fingering. This case exhibits an outlier: the shortest attack time yields an optimal inharmonicity value of 2.08, whereas the others point to 2.04. In this case, considering several attack times is a way to smooth out outliers due to a particular value of the attack time. Note that Graph 9b can also be subject to an interesting interpretation in terms of musician control strategies: the fact that certain attack time values seem to markedly decrease the rate of production of a certain regime could be used by the musician to avoid producing it.

Figure 10. Rate of produced regimes (Eq. (19)) for the written low B♭ fingering. Green: first register; red: second register; blue: quasi-periodic. Line styles indicate the characteristic time. Dotted: s_g = 0.1 ms; dash-dot: s_g = 3.2 ms; dashed: s_g = 100 ms; solid: averaged rate. An upward triangle marks the maximum first register averaged rate, a downward triangle marks the minimum second register rate.

Figure 11. Rate of first register regimes (Eq. (19)) produced for (a) the written low D♯ fingering and (b) the written low D fingering. Line styles indicate the characteristic time. Dotted: s_g = 0.1 ms; dash-dot: s_g = 3.2 ms; dashed: s_g = 100 ms; solid: averaged rate. An upward triangle marks the maxima of the first register rates.

Inharmonicity of the saxophone

In this section, the optimal inharmonicity in terms of regime production is studied for the seven lowest fingerings of the instrument. Higher fingerings are not represented because they add no relevant information: first register production rates are close to 100% for all the studied inharmonicity values. This corresponds to the saxophonists' experience that the high notes of the instrument's first register are often easier to produce than the low notes, and to the fact that the first impedance peak is much higher than the others on the high fingerings [62]. The optima are compared with the inharmonicity value measured on the saxophone on which the model is based. Figure 12 summarizes the production ratios for all the fingerings. The optimal inharmonicity seems to vary across the fingerings. It is always greater than two: null inharmonicity does not favor first register production on the low fingerings of the saxophone. The two optima are close to the measured inharmonicity. Additionally, the trend is respected, with optimal and measured inharmonicities increasing for higher fingerings. Note that the optimum for the E fingering is very far from the measured inharmonicity, but the production ratios are almost constant. Overall, the simulation shows that the most first register and the least second register are produced by the model for values of inharmonicity near those measured on the saxophone. This result sheds some light on the empirical choice of the acoustical properties of the saxophone.
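The rate computation of Eq. (19) and the search for the optimal inharmonicity can be sketched as follows; compute_regime_map is a hypothetical wrapper around the regime-map computation of the previous sketches, returning the 192 regime labels obtained for one inharmonicity value.

```python
import numpy as np

def compute_regime_map(f2_over_f1):
    """Placeholder: the 192 regime labels of one regime map computed with
    the second modal frequency set to f2_over_f1 times the first."""
    return ["first register"] * 192

oscillating = ("first register", "second register", "quasi-periodic")
inharmonicities = np.arange(1.96, 2.1501, 0.01)  # f2/f1, in steps of 0.01

rates = {}
for inh in inharmonicities:
    labels = compute_regime_map(f2_over_f1=inh)
    osc = [r for r in labels if r in oscillating]  # the N_p,o oscillating points
    rates[inh] = {r: osc.count(r) / len(osc) for r in oscillating}  # Eq. (19)

best = max(rates, key=lambda i: rates[i]["first register"])
worst = min(rates, key=lambda i: rates[i]["second register"])
print(f"first register maximized at f2/f1 = {best:.2f}, "
      f"second register minimized at f2/f1 = {worst:.2f}")
```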
If the inharmonicity were far from the values observed on saxophones, the model predicts that more second register would be produced. This effect is arguably undesirable. However, this choice comes with a compromise, as it also favors the production of quasi-periodic regimes (see Fig. 10), which are a known issue on the low fingerings of the saxophone.

Conclusion

In the studied saxophone model, stable regimes coexist throughout large regions of the musician control parameter space. Thus, even an exhaustive description of the stability or instability of each oscillating regime in the control parameter space is only an incomplete answer, as it does not suffice to predict which regime emerges in these multistability zones. In particular, in the event that the attraction basin of a stable regime proves too small or unattainable by the musician, this regime should be considered unreachable, almost like an unstable regime. This work demonstrates that saxophone regime stability is a nuanced topic. An adapted combination of two radically different numerical methods provides a more complete description of the model's behavior: the stability study performed by the harmonic balance method is completed by time-domain synthesis, which quantifies multistability by outlining the attraction basin of each regime. A varying control scenario explores which regime is produced in the multistability zones, with the advantage that it can be tied to plausible musician actions. However, a varying control scenario provides only a very partial view of the attraction basins, and its results deserve to be made explicit by representing the attraction basins in the phase space using different initial conditions. Dedicated experimental work, out of the scope of this paper, could help design more realistic control scenarios. Accounting for multistability, the study of synthesized regimes may explain an acoustical choice made by instrument makers: the inharmonicity of the saxophone. An integer ratio between the first and second resonance frequencies does not favor the production of the first register. Note that this result adds nuance to one of Benade's guidelines, which states that an oscillation is favored if the impedance is large at its fundamental and its harmonic frequencies [59]. This work shows that competition between registers also comes into play, depending on more than solely the impedance magnitude at the playing frequency and its harmonics. Instead, an integer ratio between the first and second resonance frequencies tends to favor the production of the second register, which is arguably undesirable for a first register fingering. Carefully tuned inharmonic resonances, where the second frequency is higher than twice the first, can lead to more first register production. The optimal inharmonicity value found on the model is close to the harmonicities measured on saxophone resonators. This result provides an a posteriori interpretation of the acoustical characteristics of the saxophone, as chosen empirically by instrument makers, as those leading to easier production of the first register. Such results are among the first steps towards applying numerical simulations as predictive tools to estimate playability in instrument design.

Figure 12. Rate of produced regimes for the lowest fingerings of the alto saxophone (written pitch). Green: first register; red: second register. An upward triangle marks the maximum first register averaged rate, a downward triangle marks the minimum second register rate.
Vertical lines mark the measured inharmonicity values.
Lack of effects on key cellular parameters of MRC-5 human lung fibroblasts exposed to 370 mT static magnetic field

The last decades have seen increased interest in possible adverse effects arising from exposure to intense static magnetic fields. This concern is mainly due to the ever wider applications of such fields in industry and clinical practice; among them, Magnetic Resonance Imaging (MRI) facilities are the main sources of exposure to static magnetic fields for both the general public (patients) and workers. In recent investigations, exposure to static magnetic fields has been demonstrated to elicit, in different cell models, both permanent and transient modifications in cellular endpoints critical for the carcinogenesis process. The World Health Organization has therefore recommended in vitro investigations as an important research need, to be carried out under strictly controlled exposure conditions. Here we report on the absence of effects on cell viability, reactive oxygen species levels and DNA integrity in MRC-5 human foetal lung fibroblasts exposed to a 370 mT magnetic induction level under different exposure regimens. Exposures were performed using an experimental apparatus designed and built to operate with the static magnetic field generated by permanent magnets and confined in a magnetic circuit, allowing cell cultures to be exposed in the absence of confounding factors such as heating or electric field components. A comprehensive review of the bio-effects of static magnetic fields on rodent models has recently been presented by Yu and Shang 10, in which potentially therapeutic benefits of moderate-intensity fields, as well as adverse effects of acute strong SMFs, were evidenced. Concerning in vitro laboratory investigations, the available studies have been carried out on different cell models, adopting a wide range of exposure protocols in terms of magnetic induction levels and exposure duration and timing (continuous or intermittent), and addressing different biological endpoints. In particular, primary human cells as well as cell lines, both healthy and cancer models, were mainly employed, but murine cells were also considered. For SMF exposures, either low (60 μT to 40 mT), moderate (100 to 700 mT) or high (1 to 10 T) induction levels were adopted, and exposure durations ranged from a few minutes to hours or several days. The main biological endpoints considered were oxidative stress, gene expression and genotoxicity, and cell viability and growth 11. In the most recent investigations, exposure to SMF resulted in alterations of the expression of specific genes, with such effects depending on exposure duration and field gradients. Genotoxic effects have been reported under certain conditions, although in most cases they were repaired and not permanent. Contrasting results have been reported on cell viability and growth, as well as on oxidative stress, with some evidence suggesting possible interference of the field with the cell redox status [12][13][14][15]. Overall, despite the fair number of studies published so far, adequate data strongly supporting an appropriate health risk evaluation of SMFs are still lacking, and the World Health Organization (WHO) has recommended that authorities increase the research effort on the study of the health effects of SMF 1.
In this study, we present experimental results obtained in human foetal lung fibroblast cells (MRC-5 cell line) exposed to a 370 mT magnetic induction level under different exposure timings. To this aim, the design and characterization of an experimental apparatus which allows the exposure of cell cultures to SMF under controlled conditions is also reported. The rationale behind the definition of both the magnetic induction level and the exposure protocol was to include exposure conditions which are likely to occur in the framework of MRI clinical procedures.

Results

Intermittent SMF exposure did not affect cell viability and intracellular ROS levels. Figure 1 presents the results on the viability of MRC-5 cells, in terms of metabolic activity (top) and membrane integrity (bottom), as obtained from the resazurin and neutral red assays, respectively. In six independent experiments, control (CTRL), sham-exposed (Sham) and SMF-exposed cells exhibited the same resorufin production and the same ability to take up neutral red. As expected, in both assays, a reduction in cell viability was recorded in ethanol-treated cells (5% ethanol for 15 min, positive control) as compared to the other treatments (P < 0.05).

Figure 1. Intermittent (1 h/day for 4 days) SMF exposure did not affect cell viability in the resazurin (top) and neutral red (bottom) assays. Data are presented as mean ± SD of six independent experiments. Results of ethanol treatment (5%, 15 min) are also presented as a positive control. *P < 0.05, one-way ANOVA multiple comparison followed by Bonferroni test.

In six independent experiments, intracellular reactive oxygen species (ROS) levels were also unaffected, as shown in Table 1, where the percentage of DCF-positive cells in CTRL, Sham and SMF-exposed samples is presented. Increased ROS levels were, instead, detected in MRC-5 cells after a 30 min treatment with increasing concentrations of H₂O₂, demonstrating the sensitivity of the applied method. As a matter of fact, in the DCF fluorescence histograms presented in Fig. 2, the percentage of DCF-positive cells increased upon increasing the final H₂O₂ concentration, the minimum effective concentration being 5 mM.

Continuous SMF exposure for 24 h did not alter DNA integrity. In order to verify whether SMF exposure is capable of inducing primary DNA damage in MRC-5 cells, a 24 h continuous exposure to a 370 mT magnetic induction level was tested. The results obtained in the comet assay are presented in Table 2, where tail DNA %, tail moment (a.u.) and tail length (μm), as measures of DNA migration, and the number of hedgehogs, as a measure of early apoptotic events, are presented. In three independent experiments, CTRL, Sham and SMF-exposed cells exhibited no statistically significant alteration in the comet parameters analyzed, and a comparable number of hedgehogs. Treatment with H₂O₂ (25 μM for 10 min) induced a noticeable increase in the comet parameters and in the number of hedgehogs (P < 0.05). None of the treatments caused a significant decrease in cell viability in the trypan blue dye exclusion assay, which was above 90% in all cases (data not shown).

Discussion and Conclusion

In the evaluation of the potential biological effects that could arise from exposure to SMF, well-controlled in vitro exposures are required. As a matter of fact, the peer-reviewed literature on in vitro studies is far from conclusive, and does not provide any foundation or support for in vivo and epidemiological studies for a proper risk evaluation.
In the very recent literature, great attention has been devoted to studying the effects on selected cellular endpoints related to cancer development, in different cell models. Gene expression and genotoxicity [15][16][17][18][19][20], oxidative stress [21][22][23][24], and cell growth, differentiation and viability 16,22,[25][26][27][28][29] have been investigated. In a considerable portion of the available studies, SMF exposure induced effects in the cellular endpoints that, in some cases, were transient. Some evidence has also suggested that many cell processes can be influenced by the combined application of SMF and drugs 20,30,31. Thus, investigations aimed at clarifying these discrepancies and verifying the evolution of such transient modifications are of crucial importance. Here we presented the design and characterization of an in vitro device based on neodymium-iron-boron permanent magnets. The procedures described here could represent a tool for researchers who intend to expose cell cultures to SMF in the absence of confounding factors like heating or electric field components, which could affect the results in the case of electromagnet-based exposure devices.

Table 1. Intracellular ROS levels of MRC-5 cells subjected to intermittent SMF exposure at a 370 mT magnetic induction level. The percentage of DCF-positive cells in incubator control (CTRL), sham-exposed (Sham) and SMF-exposed (SMF) cells from six independent experiments is reported.

Moreover, the accurate description of such procedures will allow replication of the experiments in independent laboratories, thus addressing one of the most critical points in bioelectromagnetic research. As a matter of fact, one of the sources of failure of replication studies aimed at reproducing already published investigations reporting effects is the absence of a detailed description of the exposure set-ups and conditions. Therefore, it is widely recognized that one of the main requirements for in vitro studies addressing the biological effects of EMF exposure is the employment of detailed procedures for the choice, design and set-up of the exposure system, and the accurate determination of the electric and/or magnetic fields in the exposed samples, in compliance with physiological conditions for cell cultures. Moreover, the presence of sham-exposed controls is also critical in these types of studies, in order to ascertain that any observed effects can be ascribed to the EMF exposure and not to the environmental conditions inside the exposure chamber 14. By using such a device, human foetal lung fibroblasts (MRC-5) were subjected to SMF exposures at an induction level of 370 mT, given as on/off cycles of 1 h/day for 4 consecutive days, for which a more complex biological response with respect to continuous exposures is expected. In this case, possible effects on cell viability and ROS formation were evaluated. As a matter of fact, cell viability is one of the first responses to be investigated under different stress conditions, and can be regarded as a measure of permanent damage. On the other hand, it has been hypothesized that SMF can increase the activity, concentration and lifetime of paramagnetic free radicals, which may cause oxidative stress, genetic mutation and/or apoptosis and alteration of cell viability 31. Further, in order to provide some mechanistic insights into the interaction between SMF and human cells, possible effects on DNA integrity were investigated after 24 h continuous exposures.
In the alkaline comet assay, effects at the level of the DNA molecule that could be repaired, and thus are not necessarily translated into detectable effects on the cellular response, can be captured. In our experimental conditions, neither permanent effects nor transient modifications were elicited by SMF exposures. Indeed, neither intermittent nor 24 h continuous exposures to a 370 mT magnetic induction level were able to alter the viability of MRC-5 cells. Moreover, intermittent and continuous exposures were also unable to evoke effects in the form of ROS formation and primary DNA damage, respectively. ROS levels and the DNA migration pattern, as obtained from the intracellular DCF fluorescence measurement and the comet assay respectively, capture cellular modifications that could be repaired by intracellular repair mechanisms, and are not necessarily translated into an unbalanced oxidative metabolism or permanent DNA damage. These two measurements are thus also able to detect transient effects. Furthermore, our experiments, taking advantage of the alkaline comet assay, also provided some clues about the absence of early apoptotic events under SMF exposure, although effects on apoptosis cannot be ruled out, since the validity of such a measurement in detecting apoptosis is still under debate 32. Some recent studies reported a transient increase in ROS formation in different cell types after exposure to either moderate (35 to 300 mT) 22,23 or very strong (8.5 T) SMF 24, with exposure durations ranging from a few hours up to 24 h. Some investigations also reported a permanent increase of ROS with a weak SMF (5 mT), with exposure durations ranging from a few minutes up to 2 h 21. Therefore, no indications of possible dose-response effects or of a dependence on exposure parameters can be extracted from such positive findings. Both positive and negative findings have been reported on cell viability and DNA damage, under not easily comparable exposure protocols 31. Our study is in partial contrast with those cited above, and it is not possible, on the basis of the current knowledge, to ascribe such discrepancies to either the exposure parameters or the cell type. Nevertheless, negative findings also provide useful information in the framework of the evaluation of possible health effects arising from human exposure to SMFs, which has increased over recent years due to their widespread use in industrial applications and in clinical practice. Among such applications, MRI facilities are the most widespread sources of exposure, involving both patients and workers, although with different modalities. In the proximity of MRI systems, different types of EMFs, from SMFs to rapidly changing gradient magnetic fields and radiofrequency fields (10-100 MHz), are simultaneously present. Among these different EMFs, the SMF from the MRI scanner is always on, and therefore both patients and different categories of workers (technicians, medical doctors, nurses, cleaning personnel) can be exposed during clinical procedures. The magnetic induction level (B) generated by MRI systems typically ranges from 0.2 T to about 3 T in the bore, where the patient lies during the diagnostic examination. The field extends beyond the confines of the scanner bore and decreases with distance from the scanner, generating a spatial gradient of magnetic field (stray field).
Occupational exposure takes place while attending to patients before and after examination, during particular procedures, when the patient needs assistance during the examination, and also while operating the scanner's console 33. Health care staff can on average be exposed to B levels of up to 500 mT 8. Moreover, the current trend is toward the development of MRI systems operating at higher and higher field strengths, which increase the performance of the diagnostic system by improving the signal-to-noise ratio, the sensitivity to soft tissues and the spatial resolution of the images 34. According to the analysis carried out by Schaap and co-workers 8, the exposure levels to the static stray field in a 3 T MRI facility can be approximately 1.5 times higher than in a 1.5 T MRI facility, while switching from 3 T to 7 T would result in exposure levels almost 10 times higher. The magnetic induction level of 370 mT adopted in this study can be traced to a real case of exposure, by considering the numerical analysis and the measurements reported by Crozier and co-workers 35 and Fuentes and co-workers 36. In particular, exposure to 370 mT can occur in MRI plants with either 1.5 T, 4 T or 7 T magnets, at approximate distances from the bore of 1 m, 1.5 m and 2 m, respectively. It is therefore likely that such exposure occurs during routine procedures, like patient assistance and preparation, and involves the professional categories, both technical and medical staff, allowed to enter the MRI suite. Further, the intermittent exposure protocol (1 h/day for 4 days) adopted here represents one of many possible scenarios of occupational exposure, which are strictly related to the specific activities carried out and to the working procedures. In conclusion, here we report on the absence of effects on selected cellular endpoints in MRC-5 cells exposed to an SMF of moderate intensity, but realistic for occupational exposure, by using an ad hoc devised system which allows cell culture exposures and sham-exposures under strictly controlled conditions. Research along the lines adopted in our investigations, across different cell models, warrants further and extensive consideration in order to shed light on possible interactions between SMF and cellular processes. Such research will also help in establishing guidelines for occupational and patient exposures to static magnetic fields in MRI suites.

Materials & Methods

Design, realization and characterization of the exposure device. An exposure device was designed, realized and characterized in order to expose cells to SMF under strictly controlled electromagnetic and environmental conditions. In particular, the design was driven by the following working hypotheses: 1) the possibility of hosting both the exposure and sham-exposure devices inside a cell culture incubator, in order to perform long exposures; 2) a device based on small permanent magnets, so as to avoid confounding factors like electric field components or heating that could arise in the case of electromagnets; 3) the possibility of confining the magnetic field lines generated by the magnets, so as to maximize the magnetic induction level and the field uniformity in the sample area.
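As a rough plausibility check of the targeted induction level, and not a substitute for the magneto-static simulations described below, one can evaluate the textbook on-axis field of an axially magnetized cylinder in free space with the dimensions given in the next paragraph; the iron circuit and pole dishes of the real device change the detailed value, so this is only an order-of-magnitude estimate.

```python
import math

def cylinder_axis_field(Br, L, R, z):
    """On-axis field (T) at distance z from the face of an axially
    magnetized cylinder of remanence Br, length L and radius R (SI units)."""
    return Br / 2 * ((z + L) / math.sqrt((z + L) ** 2 + R ** 2)
                     - z / math.sqrt(z ** 2 + R ** 2))

Br = 1.35    # mid-range remanence of the Nd-Fe-B magnets (1.32-1.37 T)
L = 0.030    # magnet height: 30 mm
R = 0.0225   # magnet radius: 45 mm diameter
z = 0.0165   # face-to-midpoint distance: 1 cm iron dish + half of 1.3 cm gap

# The two facing magnets contribute additively at the midpoint of the gap.
B_mid = 2 * cylinder_axis_field(Br, L, R, z)
print(f"free-space estimate at mid-gap: {B_mid * 1e3:.0f} mT")  # ~417 mT
```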
The design was performed using the CST EM Studio (Darmstadt, Germany) software: magneto-static simulations were run in order to find the best system configuration, allowing a considerable magnetic induction level to be obtained for sample exposure without excessively increasing the dimensions of the whole system. The exposure device was configured as a magnetic circuit able to confine the magnetic field generated by a couple of permanent magnets, placed at the centre of the circuit at a suitable distance from each other, so as to allow the insertion of the sample. For the choice of the permanent magnets, two neodymium-iron-boron (Nd-Fe-B) elements were considered, selected from those available on the market on the basis of their physical and geometrical characteristics. The magnetic circuit was simulated as an iron structure, while two dishes of the same material, placed between the magnets, were included, which increase the field uniformity at the sample position. The magneto-static simulations were configured by setting up a permanent magnet source with axial magnetization, as indicated by the magnets' manufacturer (Webcraft GmbH, supermagnete.com). The final configuration of the exposure device is shown in Fig. 3. Cylindrical magnets, 30 mm high and 45 mm in diameter, with a magnetic remanence of 1.32-1.37 T, were preferred to rectangular ones, which would have required larger dimensions and consequently a heavier structure. The magnetic circuit is made of iron plates, 2 cm thick, and the overall structure is 18 cm high, 28 cm wide and 20 cm deep. The two iron dishes placed at the center (1 cm thick, 6 cm in diameter) are 1.3 cm apart, allowing the insertion of a Petri dish in between. The distribution of the magnetic field in the exposure system, as obtained from the magneto-static simulations, is shown in Fig. 4. The field is perpendicular to the sample and is perfectly confined by the magnetic circuit. The average magnetic induction level in the sample area is 370 mT (±0.09%). The system was then realized by assembling the iron plates and positioning the magnets and the dishes at the center of the structure. Since the final weight of the system was 22 kg, it was equipped with wheels to improve its portability (Fig. 5). After assembly, the system was characterized by mapping the magnetic induction levels in the sample area, at different positions and heights between the magnets, by means of a Hall gauss-meter (F.W. Bell, model 4048, accuracy ±2% of reading) fixed on a manual micro-positioner. The measurements resulted in an average value of 366 mT (±0.27%). The exposure device was hosted inside a standard cell culture incubator (model 311, Forma Scientific, Freehold, NJ, USA) (Fig. 5). An identical structure, with plastic cylinders instead of the magnets, was set up to allow sham-exposure and was hosted in a separate cell culture incubator.

Cell culture and maintenance. The human foetal lung fibroblast cell line (MRC-5) was purchased from the National Institute for Cancer Research (Genova, Italy). Cells were cultured in Dulbecco's modified Eagle medium (DMEM) with 10% heat-inactivated foetal bovine serum, 2 mM L-glutamine, 100 U/ml penicillin and 100 mg/ml streptomycin, at 37 °C in an atmosphere of 95% air and 5% CO₂. For consistency and reproducibility, cell cultures were routinely maintained as monolayers by sub-culturing twice per week by trypsinization.
For the experiments, cells were seeded at different densities, according to the biological endpoint under examination, in 3 ml of complete medium in 35-mm-diameter Petri dishes (Corning, NY).

Experimental procedures. Different experimental procedures were adopted, based on the biological endpoints under examination, as described in the following. To assess cell viability and ROS formation, 4 × 10⁴ cells were seeded in 3 ml of complete medium. After 72 h of cell growth, the culture medium was replaced with fresh medium, and cell cultures were exposed/sham-exposed to the 370 mT SMF for 1 h/day for 4 consecutive days. Cells were harvested 24 h later. To assess DNA integrity, 7 × 10⁴ cells were seeded in 3 ml of complete medium. After 72 h of growth, the culture medium was replaced with fresh medium, and cell cultures were exposed/sham-exposed to the 370 mT SMF for 24 h. Immediately after exposure, cells were processed for the alkaline comet assay. At the same time, cell viability by the trypan blue exclusion dye assay was also recorded. For the sake of clarity, a schematic representation of the two procedures is presented in Fig. 6. For each endpoint, the groups of samples were: sham-exposed cells (Sham), SMF-exposed cells (SMF), control cells, i.e. cells kept in a standard cell culture incubator (CTRL), and positive control cells, i.e. cells subjected to treatments able to evoke damage in MRC-5 cells in the specific biological assay. After any exposure/treatment, cell samples were coded in order to keep the treatment groups unknown to the researcher involved in the analysis. Codes were broken only at the end of the data analysis.

Measurement of cell viability. Resazurin and neutral red assays were employed, which assess the metabolic activity and the plasma and/or lysosomal membrane integrity, respectively, as measures of cell viability. In the resazurin assay, the non-fluorescent compound resazurin is reduced to highly fluorescent resorufin in the growth medium by cell activity, and a direct correlation exists between the reduction of resazurin and the metabolic activity of living cells 37,38. After treatments, cell monolayers were incubated for 20 min at 37 °C with 10 μg/mL resazurin in PBS (assay medium). Resorufin production was analysed in the assay medium with a fluorometer (Perkin-Elmer, LS50B, Perkin-Elmer Instruments, Norwalk, CT) at excitation and emission wavelengths of 530 and 590 nm, respectively, and expressed as Relative Fluorescence Units (RFU). The neutral red assay examines the ability of cells to incorporate the water-soluble dye neutral red into lysosomes in an energy-requiring process. Treatments damaging plasma and/or lysosomal membranes, or interfering with the normal energy-requiring endocytosis process, will decrease the ability of cells to take up neutral red 39. After treatments, cell monolayers were treated with 0.066% (v/v final concentration) neutral red for 3 h, washed in PBS and, after trypsinization, the cell suspensions were treated with cold lysis buffer prepared with 50 mM Tris/HCl, pH 7.4, 150 mM NaCl, 5 mM DTT and 1% Triton X-100, containing 1% acetic acid and 50% absolute ethanol. The optical density of the lysed cells at 540 nm (OD 540 nm) was measured (Microplate Reader 680, Bio-Rad Laboratories, Hercules, CA, USA) and used as an estimation of cell viability. In both assays, ethanol treatment (5% for 15 min) served as the positive control, and six independent experiments were carried out.

Measurement of intracellular ROS levels.
The fluorescent probe 2′,7′-dichlorofluorescin diacetate (DCFH-DA) was used, which is a non-polar compound that easily passes the cell membrane and is hydrolysed by intracellular esterases to the non-fluorescent polar derivative DCFH. In the presence of ROS, DCFH is oxidised to fluorescent dichlorofluorescein (DCF) 40. The assay was carried out as follows: after treatments, cell monolayers were loaded (20 min at 37 °C) in DMEM without serum containing DCFH-DA at a 5 μM final concentration. After washing twice in cold PBS, cell monolayers were trypsinized, and the DCF fluorescence in the cell suspensions was measured by a flow cytometer (FACScalibur, Becton & Dickinson, San Jose, CA) equipped with a 488 nm argon laser. For each sample, 10,000 events were acquired using the CELL QUEST software, and the raw data were quantitatively analyzed using the FlowJo analysis program (TreeStar, OR, USA). Treatments of 30 min with increasing concentrations of H₂O₂ were carried out to trigger ROS formation in MRC-5 cells and test the sensitivity of the method (positive control). Six independent experiments were carried out, and the results were reported as the percentage of DCF-positive cells, i.e. cells expressing DCF fluorescence levels above a threshold value which was set on the basis of the background DCF fluorescence in the control population.

Measurement of DNA integrity. The alkaline version of the comet assay was performed according to the method developed by Singh and co-workers 41, with minor modifications. Optimal lysis, unwinding and electrophoresis conditions were determined in preliminary experiments, to obtain detectable DNA migration in control cells 42 and consequently a higher sensitivity 43 of the method in our hands. The method is basically as follows. After treatments, cells were collected by trypsinization, and cell viability was assessed using the trypan blue exclusion method. For each treatment, 2 slides were set up by suspending aliquots of 5 × 10⁴ viable cells in 100 μl of low-melting-point agarose (0.6% w/v), sandwiched between a lower layer of 1.5% normal-melting agarose at 37 °C and an upper layer of low-melting-point agarose (1.5% w/v) on microscope slides. The slides were then immersed for 60 min in a freshly prepared cold lysing solution (2.5 M NaCl, 100 mM Na₂EDTA, 10 mM Tris, pH 10), with 1% Triton X-100 and 10% dimethyl sulphoxide added just before use, at 4 °C. At the end of the lysis treatment, the slides were drained and placed in a horizontal gel electrophoresis tank with freshly prepared alkaline electrophoresis buffer (300 mM NaOH, 1 mM Na₂EDTA, pH 13) and left in the solution for 20 min at 4 °C to allow equilibration and DNA unwinding to express alkali-labile damage. Using the same buffer, electrophoresis was carried out at 4 °C for 40 min at 30 V, using an Amersham Pharmacia Biotech power supply (Uppsala, Sweden) and adjusting the current to 300 mA by modulating the buffer level. Then, the slides were rinsed three times with Tris (400 mM, pH 7.5), rinsed again in distilled water, and air-dried in the dark. All the steps described were conducted under dimmed light to prevent additional DNA damage. Immediately before analysis, the slides were stained with 12 μg/ml ethidium bromide.
For each treatment, images of 500 randomly selected nuclei (250 from each duplicate slide) were analyzed, to detect small but significant effects 44, using a computerized image analysis system (Delta Sistemi, Rome, Italy) fitted with a Leica DMBL fluorescence microscope (Leica Microsystems, Mannheim, Germany) at 200× magnification. DNA integrity was evaluated by calculating the percentage of migrated DNA, the tail length and the tail moment 42. On the same slides, the number of "hedgehog" comets, characterized by almost all of the DNA in the tail and a very small head, was recorded to give a gross estimation of possible early apoptotic events. Treatment with H₂O₂ (25 μM for 10 min) was employed to evoke DNA damage in MRC-5 cells, and served as the positive control. Three independent experiments were carried out.

Statistical analysis. Statistical comparisons among the groups of samples (Sham, SMF, CTRL and positive control) were conducted with a one-way analysis of variance (ANOVA) for multiple comparisons at the 95% confidence level. A post hoc Bonferroni test was performed, and P < 0.05 was considered statistically significant.
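A minimal sketch of this statistical pipeline in Python is given below; the viability values are made-up placeholders for illustration only, not data from the study.

```python
from itertools import combinations
from scipy import stats

# Hypothetical viability values (%) for six independent experiments.
groups = {
    "CTRL": [98.1, 97.5, 99.0, 96.8, 98.4, 97.9],
    "Sham": [97.8, 98.2, 96.9, 98.8, 97.1, 98.0],
    "SMF":  [97.3, 98.6, 97.0, 98.1, 98.9, 96.7],
    "EtOH": [62.4, 58.9, 65.1, 60.2, 63.7, 59.5],  # positive control
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-corrected significance level
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha else "n.s."
    print(f"{a} vs {b}: P = {p:.4g} ({verdict})")
```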
Lexical Bundles in Contract Law Texts: A Corpus-Based Exploration and Implications for Legal Education

This paper reports on a study which explores lexical bundles in Contract Law, a key subdivision of the legal discourse. Based on a corpus of full-length texts, a total of 117 patterns are retrieved, refined and further subjected to structural as well as functional analyses. The results show that text authors make use of a wide range of lexical bundles, most of which are structurally phrasal and functionally research-oriented. Text-structuring sequences and participant-oriented bundles appear in the corpus, but are comparably far less employed. Also, the analysis of the data established the domain-specific nature of patterns which revolve around the concept of contract. This paper concludes by discussing these findings and their implications for language learning, teaching and ESP/EAP pedagogy.

Introduction

Several studies maintain that academic speech and writing involve the use of a large number of recurrent multiword constructions which can be located, retrieved and analyzed for their structural forms and discourse functions (Biber & Barbieri, 2007; Biber, Conrad, & Cortes, 2004; Biber, Johansson, Leech, Conrad, & Finegan, 1999; Cortes, 2004; Hyland, 2008a, 2008b). These patterns are studied under a range of terms, the most common of which is that of lexical bundles (e.g., Breeze, 2013; Durrant, 2017; Esfandiari & Barbary, 2017). Lexical bundles are perceived as "words which follow each other more frequently than expected by chance, helping to shape text meanings and contributing to our sense of distinctiveness in a register" (Hyland, 2008b, p. 4). The pervasive use of such lexical bundles is not restricted to a particular genre, register or discipline, as evidence shows that domains of various types and with dissimilar communicative purposes employ a wide range of structurally different and functionally distinct bundle types.

Academic Writing and Lexical Bundles Research

Written academic texts have been the subject of several studies aimed at unveiling their rhetorical structures, linguistic features and communicative purposes. Biber et al. (2004, p. 374) argue that textbooks and classroom teaching are "arguably the two most important registers in the academic lives of university students". Hyland (2009) maintains that "textbooks are indispensable to academic life, facilitating the professional's role as a teacher and constituting one of the primary means by which the concepts and analytical methods of a discipline are acquired" (p. 68). Surveying the differences as well as the similarities that exist across registers, genres and styles, Biber and Conrad (2009) concluded that written textbooks are produced to inform and educate rather than to disseminate fresh ideas. The communicative focus of textbooks is usually placed on laying out well-established facts, rather than announcing previously unknown findings. Academic textbooks have been studied for the use of academic bundles in a range of different contexts. Grabowski (2015) conducted a study in which a corpus of information leaflets, product summaries, clinical trial protocols and chapters from textbooks is examined for both keywords and lexical bundles. The results indicate that textbooks have the greatest number of keywords but the fewest lexical bundles compared with the three other sub-corpora.
The concentration of a large number of keywords in this corpus, the author argues, is related to the discipline-specific nature of textbooks. The paucity of lexical bundles in textbooks, however, is attributed to the nature of the texts, which are lexically dense and less formulaic. In a similar study, Breeze (2013) explored the distribution of lexical bundles across four legal sub-registers: academic law, case law, legislation and documents. Drawing on a corpus of two million words, the researcher carried out a structural and functional analysis of the repeated formulaic patterns unveiled as a result of the corpus analysis. The legislation and documents corpora manifest the widest range of bundles, whereas academic law and case law include the fewest bundles. Structurally, the author adopts a lexico-grammatical approach, thus dividing bundles into four categories: content noun phrases, prepositional phrases, adjectival phrases and fragments containing a verb phrase. With the exception of case law, the greatest number of bundles in the three other register types involves content noun phrases denoting agents, institutions and documents. Most bundles in academic law refer to either abstract or action entities. In a similar fashion, the corpus incorporating academic textbooks has the smallest range of bundles in a study conducted by Biber and Barbieri (2007), who contrasted the presence of such bundles across a wide range of registers and academic domains. Functionally, the textbook corpus is dominated first by referentials and then by discourse organizers. Stance expressions are the least employed bundle type. The use of lexical bundles by nonnatives/novices has been contrasted against the use of the same bundles by native/expert writers, with inconsistent, and to some extent contradictory, results (Ädel & Erman, 2012; Bychkovska & Lee, 2017; Chen & Baker, 2010; Cortes, 2004; Esfandiari & Barbary, 2017; Llanes & Muñoz, 2009; Pan et al., 2016). While some studies maintain that native and professional writers demonstrate a thorough understanding of a wider range of different recurrent patterns than do nonnatives and less experienced writers (e.g., Ädel & Erman, 2012), some other studies point to the opposite, that is, student- or novice-produced writings incorporate a greater number of lexical patterns when compared with writings produced by natives or professionals (e.g., Bychkovska & Lee, 2017). These discrepancies arise as a result of differences in study design, the discipline under study and the type of genre that is investigated.

Overview of the Legal Discourse

Legal language has been the subject of several research studies throughout the past decades. Much research into the legal discourse revolves around the syntax and semantics of legal prose, with particular attention given to the challenges facing novices and non-experts in understanding legal content. Statements of a legal nature are relatively long, densely nominal and distinctly complex, as they comprise archaic and semi-archaic forms (e.g., hereinafter), rare expressions (e.g., annul) and opaque formulae (e.g., corporate veil). Legal texts, furthermore, incorporate a great number of familiar terms carrying unfamiliar meanings (e.g., distress and find), passivized constructions, odd prepositional phrases, performative markers and a wide range of law-specific Latin-origin concepts (Cao, 2007; Haigh, 2015; Trosborg, 1997).
Another reason that makes legal text difficult to decode lies in the fact that legal language is "system-bound", in which "terms denoting concepts derive their meanings from a particular legal system" (Northcott, 2012, p. 218). In this case, a widely used legal term in a specific judiciary system may not have an equivalent term in another system. Vass (2017) adds another layer of difficulty, which concerns the increasing number of law students and professionals who come from a non-English-speaking background in which legal concepts, terms and rhetorical conventions are learned and delivered in the students' native language. The inherently complex nature of legal writing has given rise to what is now known as the Plain English Movement (Hartig & Lu, 2014), which calls for embracing a far clearer, less archaic and more reader-friendly writing style accessible to a wider readership. Over the past few years, there has been a significant amount of research on topics related to the legal discourse from an ESP perspective. While the study of Vass (2017) focuses on verb hedges in a one-million-word corpus of journal articles, supreme court agreements and supreme court disagreements, concluding that lexical verbs serving a hedging function are more pervasive in journal articles than in the other two genres, the research by Cheng and Cheng (2014) investigates epistemic modality in a corpus of civil cases in Hong Kong and Scotland, revealing no differences between the two legal systems with respect to the distribution of epistemic expressions serving to signal a degree of probability and possibility. In a survey of existing pedagogical resources relevant to legal education, Candlin, Bhatia, and Jensen (2002) conclude that the writing materials available to students on how to approach legal prose do not fulfil a clear pedagogical purpose: they fail to meet learners' writing needs, ignore advances in linguistic theory and practice, and are mainly delivered in an inaccessible manner. In a corpus-based attempt to draw a line between disciplines, Durrant (2017) maintains that law is closely aligned with history, politics and English, as the distribution of patterns shows that they share a great number of similar lexical bundles. Law, however, uses a rather distinctive set of recurrent patterns when compared with other disciplines such as physics, food sciences and chemistry.

Methodology

In this section, I will outline the corpus upon which this study draws. A discussion of the bundle selection and refinement will follow, focusing primarily on the criteria applied while extracting bundles from the corpus and the measures taken to refine the set of bundles resulting from the corpus analysis.

Study Corpus

A study corpus was created to elicit lexical bundles meeting the predetermined frequency and distribution parameters outlined in the Bundle Selection Criteria and Refinement section below. The texts making up the corpus are pooled from a variety of contract law subtopics, such as mistakes in contract law, theory of contract law, the modern law of contract and Chinese contract law (see Appendix A for a full list of books). Sections removed prior to corpus treatment include the publication information, copyright warnings, acknowledgements, appendices, references, footnotes, endnotes, and tables of figures, cases and statutes.
Although there is no way to ascertain the language background of the authors, the fact that the texts are published by key publishers attests to these authors' expertise and scholarship. Table 1 gives a comprehensive description of the corpus used in this study. Bundle Selection Criteria and Refinement Although the criteria for selecting bundles from a corpus of naturally occurring language differ from one study to the other, there seems to be a general consensus among researchers that a target lexical bundle should contain a specific number of words, recur beyond a particular frequency threshold, and appear across a predetermined number of the texts making up the corpus under scrutiny. Given the exploratory nature of this study, the length of the bundle, its frequency of occurrence and its distribution across the corpus subparts determine the process of locating and extracting bundles. A further step to distill the data follows, removing overlapping and subsumed bundles. The Cluster Function in the software program Wordsmith Tools (Scott, 2016) is used to extract four-word bundles from the corpus, and the Concordance Function is employed to retrieve the concordance lines needed to determine the meanings as well as the functions of selected bundles. As for the length of the bundle, it is common practice in previous research to focus on four-word bundles, as three-word bundles are unmanageably greater in number and are often embedded in four-word bundles. Lexical bundles of greater length, such as five-, six- and seven-word bundles, do exist, but the rarity with which they occur makes them of little interest to researchers (Cortes, 2013; Esfandiari & Barbary, 2017). With respect to frequency of occurrence, bundles are selected if they occur at least 40 times per million words, a normalized score corresponding to a raw frequency of 133 in this corpus. This conservative threshold (see Esfandiari & Barbary, 2017; Pan et al., 2016) ensures that only bundles which recur frequently are selected for the analysis. The total number of bundles meeting the frequency criterion amounts to 150, all of which were copied into an Excel sheet for further distilling of the data. The third step involves removing bundles that fail to occur in at least five texts (25% of the texts in the corpus). The impetus behind using such a minimum range score is to avoid patterns that are idiosyncratically typical of a single text or author; since this study draws on a limited set of full-length texts, it is methodologically appropriate to include for analysis only the types of bundles with a greater tendency to occur across a range of such texts. A total of six bundles occurring in less than 25% of the texts are removed, reducing the overall number of bundles to 144. By looking at the list of bundles resulting from applying the frequency and range criteria, it becomes clear that there is much overlapping between bundles. Chen and Baker (2010) identified two types of overlapping: complete overlapping and complete subsumption. The bundles principles of international commercial and of international commercial contracts are two parts of the extended bundle principles of international commercial contracts. The two bundles share the same frequency and dispersion profiles. Both bundles are combined in a single string, with the word contracts enclosed in parentheses: principles of international commercial +(contracts).
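To make the selection and refinement procedure concrete, the following minimal Python sketch reproduces its logic under stated assumptions: a folder of plain-text files (one per book), a crude regex tokenizer, and the thresholds reported above (raw frequency ≥ 133, range ≥ 5 texts). It is an illustration of the criteria, not the Wordsmith Tools implementation; subsumed bundles (discussed next) would be filtered analogously by comparing the frequency profiles of overlapping strings.

```python
import re
from collections import Counter, defaultdict
from pathlib import Path

MIN_FREQ = 133   # raw frequency ~ 40 per million words in this corpus
MIN_RANGE = 5    # bundle must appear in >= 25% of the 20 texts

def tokenize(text):
    # crude word tokenizer; Wordsmith's tokenization will differ slightly
    return re.findall(r"[a-z']+", text.lower())

freq = Counter()
ranges = defaultdict(set)

for path in Path("corpus").glob("*.txt"):   # hypothetical corpus folder
    tokens = tokenize(path.read_text(encoding="utf-8"))
    seen = set()
    for i in range(len(tokens) - 3):
        bundle = " ".join(tokens[i:i + 4])  # four-word bundles only
        freq[bundle] += 1
        seen.add(bundle)
    for bundle in seen:
        ranges[bundle].add(path.name)

candidates = {b for b, f in freq.items()
              if f >= MIN_FREQ and len(ranges[b]) >= MIN_RANGE}

# merge completely overlapping bundles (shared trigram, identical frequency),
# e.g. "principles of international commercial" + "of international commercial contracts"
merged = set()
for b in sorted(candidates):
    for c in candidates:
        if b != c and b.split()[1:] == c.split()[:3] and freq[b] == freq[c]:
            merged.add(f"{b} +({c.split()[-1]})")
```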
Complete subsumption occurs when "two or more 4-word bundles overlap and the occurrences of one of the bundles subsume those of the other overlapping bundle" (Chen & Baker, 2010, p. 33). Examples include patterns such as to the terms of and the terms of the, which differ only in their first and last words. Another procedure involves removing bundles which refer to specific judiciary entities, such as the British House of Lords or the supreme court of a particular state (e.g., Supreme Court of Michigan). Three such bundles are eliminated because they are extremely context-dependent (Chen & Baker, 2010). Bundles removed due to overlapping and context-dependency amount to 27, reducing the number of bundles to 117. Results A general overview of the items on the final list (see Appendix B) reveals some interesting aspects of the legal vocabulary characteristic of contract law. Lexical expressions co-occurring with the word contract unsurprisingly dominate the list, reflecting the topic-specific nature of this register. The recurrent use of sequences such as for breach of contract, in breach of contract and the breach of contract mirrors a serious concern among the legal community about a possible failure of one or both parties to honor the binding nature of contractual agreements. Other patterns co-occurring with the term contract concern what constitutes a contract as a legal document: term(s) of the contract, terms in consumer contract, contents of the contract and matter of the contract. Another interesting pattern emerging from the data concerns the use of lexical bundles which transcend register boundaries, occurring in distinct contexts. Expressions such as in the case of, on the other hand, in the context of, in respect of the and on the basis of do not seem to be tied to a specific register. In the following two sections, the structural forms of bundles as well as their discourse functions will be discussed, with examples taken from the corpus. Structural Patterns of Lexical Bundles One objective of the current study is to account for the grammatical structures of the lexical patterns emerging from the corpus analysis. Drawing on the framework developed by Biber et al. (1999), lexical bundles can be broadly classified into noun-based, preposition-based and verb-based groups, each of which can be further divided into subgroups (see Table 2). Noun-based bundles fall into two groups: a noun phrase followed by an embedded of-phrase fragment, or a noun phrase which takes other post-modifier fragments. There are thirty-five bundles beginning with a noun or noun phrase followed by a post-modifying of-phrase fragment. A second noun-based subcategory involves a noun phrase with either a post-nominal clause fragment (e.g., the fact that the, the way in which) or a prepositional phrase fragment (e.g., party to the contract, remedies for breach). Yet a third noun-based subcategory consists of a noun head premodified by nouns, adjectives or both (e.g., the parol evidence rule, the unfair contract terms). The bundle the contract and the is the final pattern, which does not seem to belong to any of the subgroups outlined above and is thus considered a fragment. The second major category of lexical bundles in the collection of texts on contract law contains forty-three preposition-headed lexical bundles, nearly half of which take an of-phrase fragment as a post-modifier (e.g., for breach of contract, at the time of, in the case of).
Yet a third major structural group consists of lexical bundles comprising a verb component. Three verb-based bundles begin with anticipatory it followed by a copular verb and an adjective (e.g., it is clear that). In some cases where functional boundaries blur, an inductive approach (Biber & Barbieri, 2007) is pursued, relying on the concordance lines in order to determine the function served by the target lexical bundle. Research-Oriented Bundles As can be seen in Figure 2, bundles serving a research-oriented function can be divided into sub-groups, each of which contains a number of distinct recurrent expressions. The greatest number of bundles is found in the topic-based category, whereas the smallest range of bundles occurs in the description-based category. Time, Entity and Agent Markers According to Hyland (2008b), research-oriented bundles "help writers to structure their activities and experiences of the real world" (p. 13). Within this category, bundles can be used to mark time, place or entity. Reference markers alluding to time include two patterns: at the time of and in the course of. Bundles referring to a particular judiciary entity are represented by five bundles: the court of appeal, the house of lords, by the house of, by the court of and of the court of. The widest range of bundles in this sub-category refer to agents. Examples include patterns such as one of the parties, party to the contract, the other party to and the parties to the. Here are examples from the data representing bundles serving to refer to time, entity and agent. • "The contract was illegal at the time of its formation." (time marker) • "The Court of Appeal held that the creditor was bound to be consistent." (entity reference marker) • "It is possible for either both or only one of the parties to intend illegal performance." (agent marker) Procedure Several bundles in the list help account for a specific procedure, such as the ruling of a court or the intention of parties to enter into a contractual agreement. These include patterns such as it was held that, the court held that and to create legal relations. • "It was held that it was unreasonable for the defendant to exclude liability for breach of both express and implied terms." • "The status of 'intent to create legal relations' has become disputed." Description The third research-oriented sub-category includes bundles used for describing a particular law-related action or legislation. • "It would perhaps have been different had the purpose of the hire been specifically advertised in these terms." • "The mistake about the application of the Rent Acts was not a ground for declaring the lease void." Intangible Framing Attributes Some bundles within the research-oriented group tend to highlight the real or abstract nature of an entity. Bundles such as the nature of the, the way in which and the value of the help to exemplify the characteristics and qualities of a specific entity: • "The lawyer must explain the nature of the transaction." • "The repairs would have cost twice the value of the ship." Topic-Oriented Bundles The largest group of bundles are domain-specific; that is, they are used to convey meanings typical of contract law. Most of these domain-specific bundles revolve around the word contract: for breach of contract, terms of the contract, the law of contracts, performance of the contract and matter of the contract.
A second set of domain-specific bundles serves to highlight specific legislation, such as the statute of frauds, of the civil code, the uniform commercial code and the parol evidence rule. • "The terms of the contract stated that the contract could be performed by the use of either of two named vessels." • "The law of contract is fundamental to any legal study." The third, and by far the smallest, functional group consists of participant-oriented bundles, which can be further divided into stance and engagement markers, representing approximately 10% of all expressions unveiled in this study. The limited number of stance and engagement patterns in the list seems to give further credence to Bhatia's observation that legal language is "highly impersonal and decontextualized, in the sense that its illocutionary force holds independently of whoever is the 'speaker' (originator) or the 'hearer' (reader) of the document" (Bhatia, 1993, p. 188). The paucity of participant-oriented bundles can also be interpreted from a genre perspective, as written texts involve minimal interaction between the author and the reader (Biber & Conrad, 2009). Implications This study has key methodological and pedagogical implications. On a methodological level, future researchers will find the analytical frameworks adopted here easy to emulate when designing studies with similar goals. The steps for corpus compilation, extraction and refinement are described in detail in a way that allows for easy replication. Items detailed here may also be compared against similar ones elicited from texts of other disciplines (e.g., history, English) or of similar sub-disciplines (e.g., common law, labor law). Such studies are expected to deepen our understanding of the rhetorical practices shaping arguments in distinct as well as similar disciplines. Pedagogically, this study has two important implications. Although the purpose of the current study is not to generate a definitive list of bundles in contract law, it is hoped that language instructors, materials authors and textbook compilers will find some patterns in the list of benefit to their ESP/EAP students. In a short classroom activity, for example, students can be asked to examine the language of a legal contract with the help of the recurrent items in the list in order to determine how these items are functionally used to serve key communicative purposes. Another pedagogical implication is that instructors can draw on the corpus-derived examples outlined in the Results Section while explaining the meanings as well as the functions of the patterns in the list. In this case, learners not only have the opportunity to experience patterns as they occur in real contexts, but can also identify the different senses conveyed by each pattern based on real examples. Conclusion In conclusion, the role played by language in various academic settings is indisputably great, as is neatly encapsulated by Hyland, who maintains that "educating students, demonstrating learning, disseminating ideas and constructing knowledge rely on language" (Hyland, 2009, p. 1). The research reported here is an attempt to explore contract law, a key subdivision of the legal register, with the aim of unveiling recurrent multiword patterns. These patterns are then subjected to structural and functional analyses based on approaches and frameworks from corpus linguistics and genre studies.
It is hoped that the findings as well as the discussion of these findings will increase our knowledge of the legal discourse in general and the law of contracts in particular.
5,285
2019-02-24T00:00:00.000
[ "Linguistics" ]
MXene-based electrochemical devices applied for healthcare applications The initial part of the review provides an extensive overview of MXenes as novel and exciting 2D nanomaterials, describing their basic physico-chemical features, methods of their synthesis, possible interfacial modifications, and techniques which can be applied to the characterization of MXenes. The unique physico-chemical parameters of MXenes make them attractive for many practical applications, which are briefly discussed. The use of MXenes for healthcare applications is a hot scientific discipline which is discussed in detail. The article focuses on the determination of low molecular weight analytes (metabolites), high molecular weight analytes (DNA/RNA and proteins), or even cells, exosomes, and viruses detected using electrochemical sensors and biosensors. Separate chapters are provided to show the potential of MXene-based devices for the determination of cancer biomarkers and as wearable sensors and biosensors for monitoring a wide range of human activities. Graphical Abstract Introduction Many solution-processed two-dimensional (2D) materials were quite small in flake size owing to low mechanical strength, leading to the fracture of 2D sheets during delamination [1]. A number of early 2D materials were also hydrophobic [2] and unstable when exposed to air [3][4][5]. Hence, the discovery of a family of 2D carbides and nitrides with metallic conductivity, hydrophilicity, ease of processing, relatively high yields, and large flake sizes had a profound effect on the entire field of materials science. Ever since then, the realm of 2D materials [6] has become much larger and a very dynamic and exciting research field. The fact that MXenes emerged early meant that they attracted significant attention to the field of 2D nanomaterials besides graphene. Soon thereafter, 2D nanomaterials made of Si, Ge, Sn, and several other elements with weakly bonded layered precursors were demonstrated [7]. The main initial practical applications of 2D nanomaterials were in microelectronics [8][9][10]. Early transition metal carbides and nitrides are characterized by high metallic electrical conductivity, hardness, and excellent chemical stability, and they were used for decades as bulk ceramic materials, mostly for high-temperature applications and as cutting tools. Reducing the dimensionality of metal carbides and nitrides turned out to be a daunting task, mainly due to the strong bonds between the transition metal and carbon/nitrogen atoms (mostly covalent/metallic bonds). In 2011, it was shown that by simply immersing Ti₃AlC₂ in hydrofluoric acid (HF) at room temperature, one could selectively etch the Al layers, leaving behind, for the first time, a 2D nanomaterial made of titanium carbide (Ti₃C₂) [6]. At some point, it became clear that the synthesis of 2D nanomaterials does not necessarily require van der Waals bonded layered precursors, and hence a number of new materials have been discovered, including different types of MXenes (Fig. 1, upper image) [11].
In fact, Ti₃C₂ was the first MXene, reported in 2011 [6], and it was shortly followed by the synthesis of other MXenes, e.g., Ti₂C and Ta₄C₃, from their MAX phase precursors, demonstrating three types of possible structures (M₂X, M₃X₂, and M₄X₃). The MAX phases are layered hexagonal (P6₃/mmc space group) materials and can be described as transition metal carbide/nitride sheets of octahedral blocks, where the X atoms sit in the centers of the octahedra, glued together by pure A layers. Back in 2011, there were approximately 70 MAX phases known; today, their number exceeds 150, with new ones discovered on a routine basis, providing a large pool of precursors for MXene synthesis. Currently, more than 40 MXene compositions exist, with the ultimate number being far greater [12]. The field of MXene-based applications is a very active scientific field, as documented by the number of publications that have appeared in the 11 years since the first publication in 2011 (Fig. 1A, lower image). (Fig. 1 caption: upper image, structural models of MXenes; in [b], the three different sites considered for the T-groups on the MXenes' surface are given: FCC (green), HCP (purple), and bridge (cyan); only one surface group is sketched for ease of identification, but all calculations were performed on fully functionalized surfaces, i.e., Mₙ₊₁XₙT₂ compositions (with T = -O, -OH, -F, or -Cl), see the SI, part S2; note that the Mo₂Ga₂C structure differs from those of the MAX phases, with a double A-element layer between the octahedral layers; structural models drawn with the VESTA software [13]; reproduced with permission from ref. [14], copyright 2023 American Chemical Society. Lower image: publication dynamics expressed as the number of publications for the term "MXene" (A) and for the combination "MXene AND (healthcare OR medicinal OR medical OR biomedicinal OR biomedicine OR medicine)" (B); the search was performed using the Web of Science database.) The application of MXenes in healthcare lagged slightly behind, since the first publication appeared only in 2015, but the field has been very dynamic ever since (Fig. 1B, lower image). Thus, in this review paper, our aim is to provide an overview of the advancements achieved by using MXenes for healthcare applications. A brief literature survey of MXene nanomaterials is shown in Fig. 2 [15]. When the "A" atoms of the MAX phase are etched, the freshly exposed and unsaturated transition metal atoms are immediately coordinated by anions present in the etchant, forming the surface terminations Tₓ, with a chemical formula Mₙ₊₁CₙTₓ [16]. MXenes are defined by their general structure Mₙ₊₁XₙTₓ, where M is an early transition metal (Sc, Y, Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, W), X is carbon and/or nitrogen, Tₓ stands for surface terminations such as O, OH, F or Cl, and n = 1-4 [17]. MXenes' electronic properties range from metallic to semimetallic, semiconducting, and insulating [16]. MXenes' unique properties, such as their metal-like electrical conductivity reaching ≈20,000 S cm⁻¹ and extended surface area, make them an appealing choice for applications in energy storage, biomedicine, communications, and environmental applications. At the same time, such high electrical conductivities combined with the surface terminations allow covalent or electrostatic anchoring of other molecules and nanoparticles to design interfaces with strongly associated (bio)polymers or nanoparticles [17].
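The general formula Mₙ₊₁XₙTₓ is easy to operationalize. Purely to illustrate how quickly the composition space grows, the sketch below enumerates nominal single-M, single-X stoichiometries for the listed elements and n values (surface terminations omitted); it makes no claim about which combinations are synthetically accessible.

```python
from itertools import product

M = ["Sc", "Y", "Ti", "Zr", "Hf", "V", "Nb", "Ta", "Cr", "Mo", "W"]  # early transition metals
X = ["C", "N"]        # carbon and/or nitrogen
N = [1, 2, 3, 4]      # n in M(n+1)Xn, i.e. M2X ... M5X4

formulas = [f"{m}{n + 1}{x}{'' if n == 1 else n}" for m, x, n in product(M, X, N)]
print(len(formulas))  # 88 nominal single-metal, single-X stoichiometries
print(formulas[:3])   # e.g. ['Sc2C', 'Sc3C2', 'Sc4C3']
```

Counting solid solutions on the M and X sites and the different termination chemistries would multiply this number many times over, which is why the review speaks of an "ultimate number far greater" than the ~40 compositions realized so far.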
(Fig. 2 caption: the inset of panel c represents the growth of the MXene literature in separation applications alone, along with its percentage of all fields in each year; reproduced with permission from ref. [37], an open access publication, copyright 2023 American Chemical Society.) Thin films of Ti₃C₂Tₓ demonstrate efficient charge transport between the flakes. At the same time, Ti₃C₂Tₓ shows a high (~2 × 10²¹ cm⁻³) intrinsic charge carrier density and a relatively high (~34 cm² V⁻¹ s⁻¹) carrier mobility, while Mo-based MXenes demonstrate lower intrinsic carrier densities (~10²⁰ cm⁻³ for Mo₂Ti₂C₃Tₓ and ~10¹⁹ cm⁻³ for Mo₂TiC₂Tₓ). Ti₃C₂Tₓ has hence attracted attention as a material for making electronic device contacts, electron emitters, transparent conductor layers in perovskite solar cells, and light-emitting diodes (LEDs). Further, Ti₃C₂Tₓ demonstrates negative magnetoresistance, whereas Mo-containing MXenes typically exhibit positive magnetoresistance [38]. Theory predicts that the bandgap and the magnetic properties could be engineered by adjusting the thin-film chemistry and terminations [39]. MXenes are characterized by high electronic conductivity and a wide range of interesting optical absorption properties. These unique properties are the result of the quantum confinement effect in the atomically thin 2D layers and are strongly dependent on layer thickness and composition. Individual titanium oxide nanosheets exhibit a large dielectric constant and electronic permittivity, making MXenes suitable for applications such as electromagnetic interference (EMI) shielding [40][41][42][43], pressure and molecular sensors [44,45], and transparent conductors [46]. The electronic properties of MXenes, such as the metal-to-insulator transition, ultralow work function, topological insulator behavior, large electronic anisotropy, and massless Dirac dispersion near the Fermi level, have previously been extensively investigated computationally. Bare MXenes are metallic, but some become semiconductors upon surface functionalization. The outer transition metal layers (M′ in M′₂M″C₂Tₓ and M′₂M″₂C₃Tₓ) in ordered multi-elemental transition metal MXenes play a more important role in the electronic properties than the inner M″ core metals. OH- and F-terminations were predicted to have a similar effect on MXenes' electronic structure because they can only receive one electron from the surface metal. OH-termination leads to a negative surface dipole moment, and thus a decrease in the work function. Hydroxyl-terminated MXenes are expected to have an ultralow work function and can thus be efficient electron emitters, attractive as field emitter cathodes in field effect transistors. Some MXenes are predicted to be 2D topological insulators, with potential applications ranging from basic spintronic devices to quantum computing. Since strong spin-orbit coupling (SOC) is required for topological insulators, MXenes with heavy 4d and 5d transition metals (Mo, W, Zr, and Hf) are suitable candidates [47,48].
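As an order-of-magnitude consistency check on the transport numbers quoted above, a Drude-type estimate σ = neμ can be evaluated directly from the cited carrier density and mobility of Ti₃C₂Tₓ; the result lands on the same 10⁴ S cm⁻¹ scale as the measured film conductivity. This is a sanity check, not a rigorous transport model.

```python
E = 1.602e-19      # elementary charge, C
n = 2e21           # intrinsic carrier density of Ti3C2Tx, cm^-3 (cited above)
mu = 34            # carrier mobility, cm^2 V^-1 s^-1 (cited above)

sigma = n * E * mu   # Drude estimate sigma = n*e*mu, in S/cm
print(f"{sigma:.0f} S/cm")   # ~10,900 S/cm, same order as the ~20,000 S/cm films
```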
MXenes, being van der Waals materials, exhibit anisotropy of electronic conductivity in the in-plane and out-of-plane directions. It was shown that the in-plane conductivity is an order of magnitude higher than the out-of-plane conductivity. Moreover, the effective masses of electrons and holes in the basal plane were calculated to be quite small (< 0.5 m₀), while those of electrons and holes perpendicular to the layers were estimated to be infinite. Ti₃C₂Tₓ shows optical absorption features at 0.8 eV and 1.7 eV that were previously attributed to surface plasmons, while interband electronic transitions are located below 1.6 eV and above 3 eV. Moreover, Ti₃C₂Tₓ is 93% transparent at thicknesses of about 4 nm, which makes it a great candidate for transparent electrodes [38]. The optical and plasmonic properties of this nanomaterial are attractive for applications in ultrafast lasers [49,50], optical communication [51,52], surface-enhanced Raman spectroscopy (SERS) [53,54], broadband absorbers [55], and light-to-heat conversion [56,57]. Ti₃C₂Tₓ exhibits nonlinear light absorption (saturable absorption); i.e., the transmission increases nonlinearly with increasing illumination intensity. Additionally, nonlinear absorption coefficients of Ti₃C₂Tₓ as high as −10⁻²¹ m² V⁻² have been measured, indicating potential use in optical switching applications, and hence metallic Ti₃C₂Tₓ and Ti₃CNTₓ have been used in femtosecond mode-locked lasers. The nonlinear optical performance of MXenes is comparable, if not superior, to that of other 2D materials such as transition metal dichalcogenides, graphene, and black phosphorus. Ti₃C₂Tₓ exhibits attractive plasmonic properties potentially applicable in SERS. Electron energy loss spectroscopy analysis has shown that multi-layered Ti₃C₂Tₓ has intense surface plasmons with an energy range from 0.3 to 1 eV that dominate over bulk plasmons even at 45-nm layer thickness. The bulk plasmon peak is independent of the layer thickness, unlike in other 2D materials, where the bulk plasmon peak blue-shifts when going from few layers to the bulk state [47,48]. Mechanically, MXenes offer high strength and modulus of elasticity; the Young's modulus of single layers can be as high as 330 and 390 GPa for Ti₃C₂Tₓ and Nb₄C₃Tₓ, respectively, higher than for graphene oxide or MoS₂. At the same time, these numbers are the highest among all solution-processable materials, which further supports the use of MXenes in composite applications [38]. Furthermore, MXenes provide a combination of conductivity with interesting redox properties [16]. Importantly, MXenes show no cytotoxicity, and upon degradation they turn into nontoxic products, such as TiO₂, CO₂, or CH₄. Synthesis of MXenes The first generation of MXene nanomaterials was synthesized by selective etching of metal layers from the MAX phases, layered transition metal carbides and carbonitrides, using hydrofluoric acid [6], but alternative synthesis approaches are accessible now. These include selective etching in mixtures of fluoride salts [110] and various acids [111], non-aqueous etchants [112,113], halogens [114], and molten salts [115], allowing the synthesis of new MXenes with better control over their surface chemistries. MXenes can be produced in a range of forms, from multilayer powders to inks of delaminated flakes [116] in water, which in turn can be printed [117][118][119], sprayed [120][121][122], drawn into fibers [123,124], or filtered into freestanding films [125][126][127][128].
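The transparency figure quoted above can likewise be turned into an effective absorption coefficient via Beer-Lambert attenuation, T = exp(−αd). The sketch below does this for the 93%-transmittance-at-4-nm value; it deliberately ignores reflection losses and substrate effects, so it is a rough estimate only.

```python
import math

T = 0.93     # transmittance of a ~4 nm Ti3C2Tx film (cited above)
d = 4e-7     # film thickness in cm (4 nm)

alpha = -math.log(T) / d            # effective absorption coefficient, cm^-1
print(f"alpha ~ {alpha:.2e} cm^-1") # ~1.8e5 cm^-1, typical of a strongly absorbing 2D metal
```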
MXenes' hydrophilicity and their ability to disperse easily in water without any surfactant simplify their processing. They are prone to oxidation at high temperatures and under oxidizing environments, which can lead to novel architectures of nanohybrid structures of oxides/carbon or oxide/carbon/MXenes with promising use as electrodes for energy storage and conversion. The conversion from MAX to MXene (even in multilayer form) leads to a distinct, visual color change: while MAX phases are usually gray in color, all MXenes have their own distinct colors, which are related to their optical properties and depend on their structure and composition. With delaminated MXenes, concentrated solutions appear black; however, when diluted (< 0.5 mg mL⁻¹), a color specific to each MXene becomes apparent [38]. Early on, the first generations of MXenes were all synthesized by selectively etching the Al layer from different MAX phases, while modification of etching conditions such as acid concentration, temperature, and etching time for each MAX precursor allowed limited control over the process. MXenes are multilayered materials with a morphology that resembles vermiculite clay; these multilayers are held together by a mixture of hydrogen and van der Waals bonds. This configuration allows several chemicals to be intercalated between the layers, e.g., dimethyl sulfoxide (DMSO) in Ti₃C₂Tₓ (note that DMSO is not intercalated into all types of MXenes). When such solutions are sonicated, the result is a colloidal solution of delaminated Ti₃C₂Tₓ dispersible in water. On the other hand, large-scale delamination of various MXenes was achieved through the spontaneous intercalation of large cations from organic-phase solutions, such as tetrabutylammonium hydroxide [160], choline hydroxide, and n-butylamine. Other groups have focused on the intercalation of increasingly large alkylammonium ions and other large structures into MXenes, often leading to unique properties of such nanomaterials [38]. Cation-intercalation engineering allows control of the interlayer distance, which is directly proportional to the hydration size of the intercalated species, and tuning of the mechanical and actuation properties of Ti₃C₂ MXene. This in turn brings an enhancement of the capacitance and tunes the interfacial properties for (bio)sensing purposes [39]. The surface chemistry (which depends on the etching conditions), the intercalated species, and even the flake size significantly affect MXene properties [16,38,161]. Microscopically, the etching behavior of the Ti₃AlC₂ MAX phase with different etchants has been studied at the atomic scale by Naguib et al.
[17] using focused ion beam and electron microscopy. They looked at the structural changes in the Ti₃AlC₂ phase as a function of etching time and etchant type (LiF/HCl, HF, or NH₄HF₂) to reveal the etching mechanism for the first time. The propagation of the etching front occurs in the direction normal to the inner basal plane of the MAX phase for all etchants, and it was revealed that HF and NH₄HF₂ etch the grain boundaries of polycrystalline MAX particles to expose more edge sites to the etchant, which is not observed for the LiF/HCl etching pair. In contrast, for the LiF/HCl etchant, Li⁺ ions spontaneously intercalate between MXene layers, where they increase the interlayer spacing between MXene sheets and weaken their interaction, eventually resulting in delamination of the MXene sheets during the washing process after etching [17]. The overall observed mechanism for etching the monoatomic Al layers from Ti₃AlC₂ MAX, depending on the type of etchant (LiF/HCl or HF), is shown schematically in Fig. 3. The combination of fluoride salts such as LiF with acids more benign than HF, such as HCl, as etchants was a major breakthrough in the field. The in situ formation of HF not only converted the MAX phase to MXene, but the resulting product also behaved like a clay from a rheological point of view and could be processed into different shapes. Other optional etchants include, e.g., ammonium bifluoride (NH₄HF₂), hydrolyzed F-containing liquids, and molten fluoride salts. Other fluoride-free options for Ti₃AlC₂ include aqueous electrolytes of 1.0 M ammonium chloride and 0.2 M tetramethylammonium hydroxide, hydrothermal treatment using 27.5 M NaOH at 270 °C, and iodine dissolved in anhydrous acetonitrile at 100 °C to form Ti₃C₂I₂. Fluoride-free synthesis can also be achieved using a Lewis-acidic molten salt such as ZnCl₂ or CuCl₂ in the 500-750 °C temperature range, depending on the salt. Ti₂SC can be thermally reduced to produce Ti₂CTₓ. A salt-solution-based acoustic synthesis of Ti₃C₂Tₓ from Ti₃AlC₂, which utilized LiF in water with surface acoustic waves, was shown to produce delaminated MXenes in seconds. Variations of etching conditions, such as the ratio of fluoride salt to acid or bubbling nitrogen gas during etching, can change the properties of the resulting MXene significantly. MAX phase chemistry matters as well; e.g., having an excess of Al during the synthesis of Ti₃AlC₂ leads to the formation of highly stoichiometric MAX and MXene. There is a limited number of nitrogen-containing MAX phases, and the synthesis of nitride MXenes is generally difficult, as the nitride layers tend to dissolve in the acids. In summary, when aqueous HF is used, mixed =O, -OH, and -F interfacial terminations are usually found in different ratios, depending on the type of MXene and the etching conditions. When molten chloride salts are used, -Cl terminations dominate; when water-free NH₄HF₂ is used, F-rich surfaces prevail. Moreover, an electrochemical study confirmed a significant difference in the negative charge density on the surface of MXene and also in the electrocatalytic activity, depending on the etchant (HF or in situ-generated HF from a mixture of LiF and HCl) used in the preparation of the MXenes [162].
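For the in situ HF route discussed above (LiF + HCl → HF + LiCl), the fluoride salt sets an upper bound on the HF generated, so reagent ratios can be tallied directly. The quantities in the sketch below (1 g LiF in 20 mL of 9 M HCl per 1 g Ti₃AlC₂) are assumed for illustration from commonly reported MILD-type recipes, not a prescribed protocol.

```python
M_LIF = 25.94   # molar mass of LiF, g/mol
M_MAX = 194.6   # approximate molar mass of Ti3AlC2, g/mol

m_lif, v_hcl, c_hcl, m_max = 1.0, 0.020, 9.0, 1.0  # assumed MILD-type recipe

n_lif = m_lif / M_LIF   # mol LiF -> upper bound on in situ HF (LiF + HCl -> HF + LiCl)
n_hcl = v_hcl * c_hcl   # mol HCl available (large excess)
n_max = m_max / M_MAX   # mol Ti3AlC2, i.e. one Al layer atom per formula unit

# note: in situ-generated HF is still HF; standard HF safety precautions apply
print(f"HF available: {n_lif * 1e3:.1f} mmol (HCl supplied: {n_hcl * 1e3:.0f} mmol)")
print(f"HF : Al molar ratio ~ {n_lif / n_max:.1f}")   # ~7.5, a large excess over Al
```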
MXenes are prone to oxidation at high temperatures and under oxidizing environments, which can lead to novel architectures of nanohybrid structures of oxides/carbon or oxide/carbon/MXenes that are found promising for use in electrodes for energy storage and conversion. It was shown that Ti₃C₂Tₓ begins to transform to a cubic carbide with loss of surface oxygen at ~860 °C in a protective environment, and the thermal stability is somewhat dependent on the etching protocol [38]. A higher coverage by oxygen-containing species in combination with higher processing temperatures results in amorphization of the sheet and/or formation of TiO₂ phases, although the 2D nature of the flake persists. Finally, with extended oxidation at 450 °C, the MXene sheet was structurally transformed into crystalline titanium and amorphous Ti(CO)₂, and while the MXene transforms into the titanium layer, species such as H₂O and CO₂ are desorbed from the surface (reproduced with permission from ref. [38], copyright 2021 American Chemical Society). MXenes are prone to intercalate and physisorb H₂O; however, physisorbed water is weakly bonded and desorbs after heating above 200 °C [163]. MXene processing steps include exfoliation, size selection, concentration, and deposition. Processing begins with liquid-phase exfoliation. The MXene lateral flake size can be measured directly by microscopy methods or indirectly by dynamic light scattering (DLS). Colloidal stability can be assessed via the zeta potential (ζ-potential) through electrophoretic mobility measurements; since MXenes are negatively charged, the value of the zeta potential is expected to be lower than −30 mV over a wide range of pH values. To measure chemical stability, one should determine how much of the material degrades over time. V₂CTₓ and Ti₂CTₓ degrade quickly when dispersed in water and should be used immediately after synthesis [38]. Several studies demonstrated successful surface functionalization of Ti₃C₂Tₓ with carboxyl or glycine groups and silane coupling agents, resulting in improvement of the Ti₃C₂Tₓ stability and charge percolation [39]. It is important to note that dense dry films have a much higher stability and a very long lifetime (years), unlike single-layer flakes in solution. There are multiple methods to deposit MXenes on surfaces from solution, including vacuum-assisted filtration, spray-coating, spin-coating, dip-coating, drop casting, electrophoretic deposition, blade-coating, screen printing, inkjet printing, 3D printing, and electrospinning [38]. A number of techniques are available to determine the composition, structure, and properties of MXenes, including energy-dispersive X-ray spectroscopy (EDS) [167], X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy [168], scanning electron microscopy (SEM), and scanning transmission electron microscopy (STEM) [169]. Oxidation on the surface can be detected with Raman spectroscopy or XPS. Basic characterization of MXenes is frequently carried out by SEM, as shown in Fig. 4,
often complemented by EDS. Additional techniques of choice include pair distribution function analysis, X-ray absorption spectroscopy, and atomic force microscopy (AFM). For the investigation of MXene composition, especially surface chemistry, X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, electron energy loss spectroscopy, and nuclear magnetic resonance (NMR) are often applied. Moreover, secondary ion mass spectrometry (SIMS) has been successfully applied as well, providing mass spectra, 2D images, and depth profiles [170,171]. Since EDS cannot distinguish between O and OH groups on the surface of MXenes, TEM instruments equipped with electron energy loss spectroscopy can be used for elemental analysis of MXenes. On the other hand, XPS has become the popular choice for determining the average material composition, due to its low penetration depth, and thus surface sensitivity, and its ability to acquire information about chemical composition and elemental oxidation states. (Fig. 4 caption, g-h: a cross-sectional SEM image of Ti₃C₂Tₓ films made by vacuum-assisted filtration of a colloidal solution of Ti₃C₂Tₓ in TMAOH (h); g-h reproduced with permission from ref. [172], copyright 2018 John Wiley and Sons.) The regions of interest with respect to MXenes are the metal regions, O 1s and C 1s; depending on the synthesis method, F 1s and Cl 2p regions are present as well. Multiple oxidation states are possible, complex peak splitting can occur, and peaks can be asymmetric; for instance, the Ti 2p region of Ti₃C₂Tₓ is typically fit by multiple components, which represent the various oxidation states of Ti (Ti⁰, Ti²⁺, Ti³⁺, Ti⁴⁺). A problem in XPS analysis can be the loss of water and OH terminations in high vacuum [38]. Application of MXene-modified interfaces The promising MXene nanomaterials, 2D layered carbides and nitrides offering a number of alternative compositions, simple processing, relatively high yields and large flakes, hydrophilicity, metal-like electrical conductivity, rich surface functional groups, and unique optical properties, have a profound effect on the entire field of materials science. Furthermore, MXene Ti₃C₂Tₓ with redox-active centers has proved to be an excellent electrochemical catalyst in, e.g., the electrochemical reduction of H₂O₂, oxygen reduction reactions [170], and the detection of small redox molecules [173]. In recent years, an immense increase in the number of affinity-based biosensors [174] employing MXene interfaces [175] has been observed. However, attention needs to be paid to selecting appropriate strategies for patterning the MXene interface and subsequently immobilizing the target biomolecules. A broad absorption band, favorable energy levels, and plasmon resonance in the visible or near-infrared range make MXenes promising candidates for optical, photothermal, and photoelectrochemical biosensing applications. For example, Ti₃C₂ MXenes serve as fluorescence quenchers and SERS substrates [176]. In order to support the applicability of MXene-modified interfaces in biosensors, interfacial modification of the MXene should be implemented. To achieve this goal and prevent non-specific binding, the modification of Ti₃C₂Tₓ MXene interfaces by aryldiazonium-based grafting with derivatives bearing a sulfo- (SB) or carboxy- (CB) betaine pendant moiety was established [177]. Grafting of aryldiazonium-terminated molecules to MXene was possible due to the presence of free electrons (plasmons) in MXene, allowing spontaneous reductive grafting of the aryldiazonium-terminated molecules [177].
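As an illustration of the multi-component Ti 2p deconvolution described above, the sketch below fits a synthetic spectrum with four Gaussian components standing in for the Ti⁰, Ti²⁺, Ti³⁺, and Ti⁴⁺ contributions. The peak positions, the synthetic data, and the use of symmetric Gaussians without a Shirley background or 2p₃/₂/2p₁/₂ doublets are all simplifying assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, w):
    return a * np.exp(-((x - mu) ** 2) / (2 * w ** 2))

def ti2p_model(x, *p):
    # sum of four Gaussian components: nominal Ti0, Ti2+, Ti3+, Ti4+ (2p3/2 only)
    return sum(gauss(x, *p[i:i + 3]) for i in range(0, 12, 3))

be = np.linspace(452, 462, 400)   # binding energy axis, eV
# synthetic "measured" spectrum, generated from assumed component parameters
true = ti2p_model(be, 1.0, 455.0, 0.4, 0.6, 455.9, 0.5,
                  0.4, 457.1, 0.6, 0.3, 458.8, 0.6)
y = true + np.random.default_rng(0).normal(0, 0.01, be.size)

p0 = [1, 455, 0.5, 0.5, 456, 0.5, 0.5, 457, 0.5, 0.5, 459, 0.5]  # initial guesses
popt, _ = curve_fit(ti2p_model, be, y, p0=p0)

# relative component areas (Gaussian area = amplitude * width * sqrt(2*pi))
areas = [popt[i] * popt[i + 2] * np.sqrt(2 * np.pi) for i in range(0, 12, 3)]
print([f"{a / sum(areas):.0%}" for a in areas])
```

In practice, dedicated XPS software with proper line shapes and backgrounds would be used; the point here is only the decomposition of one envelope into oxidation-state components.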
Glucose Diabetes [178] is a chronic disease that causes high blood glucose levels, which can lead to a variety of serious health issues; therefore, diligent and precise blood glucose monitoring becomes critical in the management and prophylaxis of hyperglycaemia [179]. Electrochemical glucose (bio)sensing is performed by either enzymatic biosensors or non-enzymatic sensors. Non-enzymatic glucose sensing Non-enzymatic glucose sensors are based on the use of noble and transition metals such as Pt, Au, Ni, or Cu. The surface modifications of MXene additionally provide direct ion-exchange sites, and the plasmons within MXene can serve as a stable reductant of metallic ions to form metal nanoparticles (NPs) on the surface of the MXene. The enhanced surface area provides a significant increase in the adsorption rates of the analyte species on the surface of the nanocomposites. To anchor metallic nanoparticles on the surface of MXenes, two strategies have been used: self-reduction, and reduction of the precursor metallic salt in the presence of an external reducing agent such as NaBH₄, HCHO, or CO. The reduction of noble-metal ions without the need for an external reducing agent has attracted a lot of interest for forming nanoparticles made of Au, Pd, Pt, and Ag. Electro-reduction is yet another way of reducing metallic salts to metallic nanoparticles [179]. Cupric oxide (CuO) NPs have been studied in conjunction with mono-, double-, and multilayered MXene nanosheets for non-enzymatic glucose sensing applications. Additionally, due to strong electrostatic interactions, MXene-graphene hybrid composites can be easily synthesized by simple mixing of the components [180]. Alanazi et al. prepared a composite aerogel based on MXene and reduced graphene oxide (rGO) nanosheets through a hydrothermal method and subsequently added Cu₂O by a coprecipitation method, resulting in a 3D ternary composite with a large surface area and a porous structure (aerogel-Cu₂O composite, Fig. 5) [182]. The fabricated electrode patterned by MXene/rGO/Cu₂O as a non-enzymatic glucose sensor achieved an LOD of 1.1 μM with two wide linear ranges of 0.1-14 mM and 15-40 mM [182]. Enzymatic glucose biosensing Ti₃C₂Tₓ MXene nanosheet composites provide a substantial surface area for enhanced enzyme immobilization, rapid electron transfer, and the availability of active redox centers. Generally speaking, MXene composites outperform bare MXenes as electrochemical sensors for glucose quantification. Enzymatic glucose biosensors are constructed using active glucose oxidase (GOx), which catalyzes the oxidation of glucose [160]. The selectivity and sensitivity of enzymatic biosensors are strongly affected by enzyme contamination, inadequate enzyme immobilization, and denaturation [179]. Delamination of MXene with tetrabutylammonium hydroxide (TBAOH) led to the formation of single- and few-layer-thick MXene, which decreases the distance between the enzyme and the electrode as compared to the bulk and exfoliated counterparts. This allowed faster electron transfer between the electrode and the GOx enzyme. Restacking of the MXene layers is also impeded when MXenes and transition metal oxides are coupled, increasing the interfacial interaction between the electrolyte and electrode during electrochemical sensing analysis. An amperometric glucose biosensor with GOx immobilized on a Nafion-solubilized Au/MXene nanocomposite over a glassy carbon electrode (GCE) was developed by Rakhi et al.
[183]. The GOx/Au/MXene/Nafion/GCE biosensor detected glucose with a relatively high sensitivity of 4.2 μA mM⁻¹ cm⁻² and a detection limit of 5.9 μM within a linear concentration range from 0.1 to 18 mM [183]. A 3D porous hybrid film, fabricated from Ti₃C₂Tₓ MXene and graphene sheets (weight ratios of 1:2 and 1:3), supplied an open structure to facilitate GOx entering the internal pores, which probably enhanced the stable immobilization and retention of the GOx in the film (Fig. 6) [184]. As a result, the biosensor exhibited prominent electrochemical catalytic capability toward glucose biosensing and was finally applied for glucose assay in sera. The detection limit of the biosensor in air-saturated and O₂-saturated PBS was calculated to be 0.10 and 0.13 mM, respectively. The proposed biosensor revealed high specificity for glucose analysis over potential interfering species present in biological systems, including amino acids, active biological species, and metal ions [184]. Murugan et al. fabricated an enzymatic biosensor by immobilizing GOx using chitosan onto a composite-modified electrode [185]. The amperometric biosensor determined glucose with an LOD of 22.5 µM within a linear range of 0.5-8 mM. Further, good reproducibility after continuous use of the biosensor for 20 days was demonstrated [185]. Gao et al. boosted the long-term stability of enzyme biosensors by employing sodium hyaluronate as a protective/biocompatible film, MXene-Ti₃C₂/GOx as the reaction layer, and a chitosan/rGO film as the adhesion layer [186]. The practical and simple hyaluronate protective layer offered high biocompatibility and could also be applied to the construction of other types of biosensors. The layered structure could effectively enhance the fixation between the active layer and the electrode, improving electron transfer between the enzyme and the electrode [186]. Laser scribing of porous graphene electrodes on flexible substrates is another option for developing disposable electrochemical biosensors. A CO₂ laser scribing process was performed under ambient conditions to produce porous graphene electrodes from lignin [187]. The obtained nitrogen-doped laser-scribed graphene is binder-free, hierarchical, and conductive, while the interconnected carbon network displayed enhanced electrochemical activity with an improved heterogeneous electron transfer rate. Furthermore, the electrodes were decorated with an MXene/Prussian blue composite via a simple spray-coating process, designed for sensitive detection of analytes. The final electrodes were functionalized with catalytic enzymes for detecting glucose, lactate, and alcohol. The enzyme electrodes exhibited remarkably enhanced electrochemical activity toward the detection of these analytes. Such devices have high potential for applications in personalized healthcare, opening the door toward point-of-care monitoring and personalized sensors [187]. Methods like drop-casting, inkjet printing, screen printing, direct pencil drawing, laser scribing, and wire or fiber attachment were developed to obtain miniaturized electrodes on paper substrates, an alternative to advanced laboratory instruments, especially for use in remote regions, for emergencies, or for home healthcare applications. (Fig. 6 caption: construction of a glucose biosensor; preparation of (a) Ti₃C₂Tₓ nanosheets and (b) a pure Ti₃C₂Tₓ film, a pure graphene film, and a hybrid film for enzyme immobilization; reproduced with permission from ref. [184], copyright 2019 American Chemical Society.)
These are perfect candidates for the analysis of glucose, lactate, and alcohol present in sweat. In order to detect diabetes mellitus, glucose detection from sweat has been performed by immobilizing GOx onto a patterned electrode. Glucose could be detected down to 0.3 μM (sensitivity of 49.2 μA mM⁻¹ cm⁻²) and lactate down to 0.5 μM (sensitivity of 21.6 μA mM⁻¹ cm⁻²). Hence, multianalyte detection was demonstrated from a single sweat sample using a low-cost approach avoiding additional material waste [187]. Wearable glucose (bio)sensors For diabetes treatment, continuous glucose monitoring provides an efficient, real-time, and long-term self-monitoring technique using a wearable device that takes glucose measurements from the interstitial fluid at predetermined regular time intervals. Such a device is usually composed of three parts: a sensor, a transmitter, and a receiver (or a smart device app). The data from the sensor are sent to the transmitter, which then sends them to the receiver or smart device app. The term non-invasive continuous glucose monitoring using MXene-based glucose biosensors describes the measurement of human blood glucose without inflicting tissue damage. The idea comes from the fact that, in addition to glucose in human blood, significant amounts of glucose are also found in other body fluids like saliva, tears, sweat, urine, and interstitial fluids. Wearable sensors can be easily affixed to the skin for real-time, continuous, and out-of-clinic health monitoring. For instance, the development of a stretchable, wearable, and modular multifunctional biosensor has been reported, comprising an MXene/Prussian blue composite for long-term and sensitive detection of the glucose and lactate metabolites in sweat (Fig. 7) [188]. Sweat-based sensing still poses several challenges, including the easy degradation of enzymes and biomaterials with repeated testing, the limited detection range and sensitivity of enzyme-based biosensors caused by oxygen deficiency in sweat, and the poor stability of biosensors using all-in-one working electrodes patterned by traditional techniques (e.g., electrodeposition and screen printing). A novel stretchable, wearable, and modular multifunctional biosensor was developed, incorporating an innovative composite designed for durable and sensitive detection of biomarkers (e.g., glucose and lactate) in sweat. The implemented solid-liquid-air three-phase interface design led to superior sensor performance and stability. Typical electrochemical sensitivities of 35.3 μA mM⁻¹ cm⁻² for glucose and 11.4 μA mM⁻¹ cm⁻² for lactate were achieved using artificial sweat. Terminal groups like -OH can be introduced into MXene structures, offering the possibility of immobilizing biological recognition proteins in an oriented way. The applied MXene increased the immobilization efficiency of the enzyme and the permeability of oxygen into the biosensing layer. These sensors were integrated within flexible polymeric structures and used as wearable biosensing devices for the determination of lactate and glucose in a concentration range of 1-20 mM [188]. Li et al. developed a flexible wearable non-enzymatic electrochemical sensor for personalized diabetes treatment and management via glucose detection in sweat [189]. The sensor consisted of a Pt/MXene nanocomposite immobilized onto a conductive hydrogel and microfluidic patches (Fig. 8)
that were seamlessly integrated to improve the robustness and stability of the electrochemical sensor. Glucose was determined with an LOD of 29.15 μmol L⁻¹ and a sensitivity of 3.43 μA mM⁻¹ cm⁻² in a linear concentration range of 0-1 mM (S/N = 3) by a chronoamperometric method [189]. Biosensors for analysis of other low molecular weight analytes Continuous measurement of a wide range of chemicals/biomolecules in vivo is of great significance, since real-time data are key indicators providing clinicians a valuable window into patients' health and their response to therapeutics. Electrochemical sensors, due to their low cost, easy operation, high sensitivity, etc., are suitable candidate devices for continuous biomarker measurement, wherein modification of the electrodes with other agents is beneficial and even indispensable to enhance and ensure sensing performance. Using an MXene-modified screen-printed electrode (SPE) in a microfluidic chip, continuous measurement of multiple analytes was realized, and the sensor system featured miniaturization and automatization [190]. In one instance, an MXene-Ti₃C₂Tₓ-based SPE incorporated with a dialysis microfluidic chip was constructed for direct and continuous multicomponent analysis of whole blood. The three biomarkers of renal function examination (uric acid, urea, and creatinine) were tested as model analytes using the newly developed sensor. These analytes are also important indicators for patients with severe kidney injury requiring hemodialysis treatment. The chip consisted of four layers: the channel in the top layer is set aside for blood flow, and the second layer is a dialysis membrane that allows penetration of molecules smaller than 1000 Da, like urea, uric acid, and creatinine (Fig. 9). The third layer contains the flow channel for isotonic solutions and the detection chamber. The analytes in blood can be dialyzed into this channel and gathered in the detection chamber, and the sensing electrode located in the bottom layer captures these targets and generates the signals. Urea was detected with an average sensitivity of ~0.34 μA μM⁻¹ and an LOD (S/N = 3) of 5 × 10⁻⁶ M. Creatinine was analyzed in the range of 10-400 × 10⁻⁶ M with an LOD down to 1.2 × 10⁻⁶ M (S/N = 3). Multicomponent detection proved to be an accurate, reliable, and interference-free method, which can perfectly meet clinical and user requirements. Moreover, the microfluidic chip also showed great potential as a promising assay device for point-of-care testing in terms of cost, stability, adaptability to different/adverse detection environments, miniaturization, and automation of the tests [190]. Zhang et al. [191] developed cholesterol oxidase-immobilized MXene/sodium alginate/silica@n-docosane hierarchical microcapsules as a thermoregulatory electrode material for electrochemical biosensors designed to meet the requirement of ultrasensitive detection of cholesterol at high temperature (Fig. 10). The developed biosensor achieved a higher sensitivity of 4.63 µA mM⁻¹ cm⁻² and a low LOD of 0.081 mM at high temperature, providing highly accurate and reliable detection of cholesterol in real biological samples over a wide temperature range [191]. In the work of Xu et al.
[192], a biosensor for the determination of H₂O₂ was prepared using a horseradish peroxidase (HRP)/Ti₃C₂/Nafion film-modified GCE. The biosensor offered a wide linear range (5-8000 μM) and a low LOD of 1 μM (S/N = 3). The biosensor was used to detect H₂O₂ in clinical serum samples of normal controls and of patients with acute myocardial infarction before and after percutaneous coronary intervention [192]. A photoreduction technique was used to increase the surface-enhanced Raman spectroscopy (SERS) activity of MXene and to increase the ability to detect antipsychotic drugs [194]. Due to the cooperative action of chemical and electromagnetic mechanisms, MXene anchored with gold nanoparticles (AuNPs) produced a strong SERS amplification. The platform was used to detect chlorpromazine with an LOD of 3.92 × 10⁻¹¹ M in a wide linear range of 10⁻¹-10⁻¹⁰ M [194]. Chen and co-workers coupled the benefits of colorimetric and electrochemical methods to determine uric acid with an LOD of 0.19 μM in the linear range of 2-400 μM [196]. The peroxidase-like activity and electrocatalytic activity of nitrogen and sulfur co-doped Ti₃C₂ nanosheets (Fig. 12) were successfully proved by the dissociation and adsorption of H₂O₂ and by the protonation of the H₂O₂-containing peroxidase substrate 3,3′,5,5′-tetramethylbenzidine (TMB) [196]. (Fig. 10 caption: schematic fabrication strategy for the construction of a cholesterol biosensor; reproduced with permission from ref. [191], copyright 2023 Royal Society of Chemistry.) A signal amplification sensing strategy relying on an electrode surface modified with an MXene/VS₂ nanocomposite and CeCu₂O₄ bimetallic nanoparticles as a nanozyme was reported by Tian et al. (Fig. 13) [197]. Kanamycin, an aminoglycoside antibiotic that effectively inhibits Gram-positive and Gram-negative bacteria, was detected with high specificity, profiled against five other antibiotics, with an LOD of 0.6 pM (linear range from 5 pM to 5 μM) [197]. An enzymatic biosensor composed of Ti₃C₂Tₓ nanosheets and β-hydroxybutyrate dehydrogenase was able to determine β-hydroxybutyrate, used for the diagnosis of diabetic ketoacidosis/diabetic ketosis, with an LOD of 45 μM and a sensitivity of 0.480 μA mM⁻¹ cm⁻² (linear range of 0.36-17.9 mM) [199]. Further, Elumalai et al. applied a label-free AuNP@Ti₃C₂Tₓ nanocomposite-patterned GCE to detect uric acid and folic acid simultaneously. LODs of 11.5 nM for uric acid (linear range of 0.03-1520 μM) and 6.20 nM for folic acid (linear range of 0.02-3580 μM) were reached, respectively [200]. Biosensors for detection of high-molecular weight analytes As a proof of concept, an MXene@PAMAM-based nanobiosensing platform was applied to develop an immunosensor for detecting human cardiac troponin T [201]. A fast, sensitive, and highly selective response toward the target in the presence of a [Fe(CN)₆]³⁻/⁴⁻ redox marker was realized, ensuring a wide detection range of 0.1-1000 ng mL⁻¹ with an LOD of 0.069 ng mL⁻¹. Moreover, the sensor's signal decreased by only 4.38% after 3 weeks, demonstrating that it exhibits satisfactory stability and better performance than previously reported MXene-based biosensors [201].
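Since nearly every sensor in this section is summarized by a sensitivity (calibration slope per electrode area) and an LOD at S/N = 3, the sketch below shows the standard way both figures are extracted from an amperometric calibration curve. The current values, blank noise, and 0.07 cm² electrode area are invented for illustration.

```python
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])       # glucose, mM (hypothetical)
curr = np.array([0.05, 0.23, 0.44, 0.90, 2.18, 4.35])  # steady-state current, uA (hypothetical)
area = 0.07       # electrode area, cm^2 (hypothetical)
sd_blank = 0.004  # standard deviation of the blank current, uA (hypothetical)

slope, intercept = np.polyfit(conc, curr, 1)  # linear calibration fit
sensitivity = slope / area                    # uA mM^-1 cm^-2
lod = 3 * sd_blank / slope                    # LOD at S/N = 3, in mM

print(f"sensitivity ~ {sensitivity:.1f} uA mM^-1 cm^-2")  # ~6.2
print(f"LOD ~ {lod * 1000:.0f} uM")                       # ~28 uM
```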
A sensitive dual-signal sandwich-type electrochemical immunosensor was designed for neutrophil gelatinase-associated lipocalin detection using square wave voltammetry (SWV) and current-time (i-t) curves [202]. MXene-loaded polyaniline nanocomposites were fabricated and utilized as the sensing platform for anchoring AuNPs and immobilizing primary antibodies. The biosensor exhibited optimal analytical performance in the linear range of 0.00001-10 ng mL−1 with LODs of 0.0074 pg mL−1 (SWV) and 0.0405 pg mL−1 (i-t) for the analyte determination [202].

The abnormal expression of polynucleotide kinase, an enzyme playing a crucial role in phosphorylation-related DNA repair, can lead to cardiovascular disease, central nervous system disorders, Rothmund-Thomson syndrome, etc. For its detection, Wang et al. proposed an electrochemiluminescence biosensor based on Ti3C2Tx nanosheets patterned with AuNPs and Ru(bpy)3 2+ (Fig. 14) [203]. DNA phosphorylated by the enzyme was successfully recognized through chelation between Ti and the phosphate group, with an LOD of 0.0002 U mL−1 and a linear range from 0.002 to 10 U mL−1 [203]. An electrochemical rat liver microsome biosensor employing an Au@MXene nanocomposite determined aflatoxin B1, a metabolite that is carcinogenic, embryotoxic, mutagenic, teratogenic, and hepatotoxic to humans, with an LOD of 2.8 nM in the linear range of 0.01-50 μM [204].

2D MXene combined with bovine serum albumin previously denatured by urea yielded an anti-fouling sensing surface for IgG determination with an LOD of 23 pg mL−1 and a linear concentration range of 0.1 ng mL−1-10 μg mL−1 [205].

Biosensors for detection of cancer biomarkers

Cancer presents an enormous problem, with 19.3 million new cancer cases and 10.0 million cancer-associated deaths worldwide in 2020, and the number of deaths is expected to increase by 47% by 2040 [207]. Thus, there is high demand for ultrasensitive and selective sensing platforms able to detect cancer biomarkers down to very low levels.

(Bio)sensors based on functionalized MXene surfaces, owing to their specific properties and complex layered structure, allow low LODs and high specificity of analysis in combination with electrochemical methods [208]. MXene-enabled electrochemical aptasensors have shown great promise for cancer biomarker detection with LODs down to the fM level [209].

2D MXene-based interfaces with a large surface area are also suitable for glycoprofiling of cancer biomarkers or glycans (complex carbohydrates). Efficient MXene-cartridge-based columns were utilized for specific and selective enrichment of cancer-associated sialylated and bisecting N-glycans present in complex serum samples [210].
Small molecules

Sarcosine (N-methylglycine) is an intermediate metabolite involved in glycine synthesis and degradation. The correlation between changed sarcosine levels and prostate cancer has been reported in a number of studies [211,212]. Since significantly elevated levels of sarcosine can be present in urine (from 20 nM to 5 μM), urine is the biofluid of choice for non-invasive detection of this cancer biomarker. An amperometric miniaturized portable enzymatic nanobiosensor for the ultrasensitive analysis of sarcosine was designed [213]. Disposable screen-printed carbon electrodes together with an MXene Ti3C2Tx@chitosan composite and sarcosine oxidase provided a reliable, sensitive, and quick detection nanoplatform. A satisfactory LOD of 10.4 nM was achieved by the biosensor during measurement in a 100 μL drop. The as-fabricated biosensor showed good stability, with only a 6.8% decrease in current response over a period of at least 5 weeks after its preparation [213]. Moreover, an enzymatic biosensor based on a Ti3C2Tx/Pt-Pd nanocomposite developed by Ran et al. was able to detect sarcosine with an LOD of 0.16 μM and a sensitivity of 84.1 μA mM−1 cm−2 in a linear range of 1-1000 μM [214].

DNA/RNA and microRNA

A 2D MXene nanosheet-anchored, AuNP-decorated biomimetic bilayer lipid membrane biosensor was introduced for the attachment of thiolated single-stranded DNA for DNA detection [215]. The biosensor gave hybridization signals to the complementary DNA sequence within a linear range from 10 zM to 1 μM with an LOD of 1 zM. The BRCA1 gene mutation related to breast cancer was successfully detected, and good specificity was proved using non-complementary (ncDNA) and double-base mismatch (dmmDNA) oligonucleotide sequences [215].

A label-free electrochemical biosensor combining an MXene-MoS2 heteronanostructure with a catalytic hairpin assembly amplification approach was applied for the detection of microRNA-21 [216]. Thionine together with AuNPs was applied for patterning the surface of the MXene-MoS2 heteronanostructure. The biosensor exhibited an LOD of 26 fM and could be applied for detection of microRNA-21 in a concentration range from 100 fM to 100 nM [216].

A novel electrochemical biosensor amplified with a hierarchical flower-like gold, poly(n-butyl acrylate), and MXene nanocomposite and activated by a highly specific antisense single-stranded DNA determined miRNA-122 with an unprecedented LOD of 0.0035 aM [217].

The performance of an electrochemiluminescent biosensor toward miRNA-141 detection was enhanced through a Ti3C2 MXene-based hybrid nanocomposite [218]. The nanocomposite, exhibiting UV absorption, was utilized as the resonance energy transfer acceptor (Fig. 15). miRNA-141 could be detected in the range from 0.6 pM to 4000 pM with an LOD of 0.26 pM [218].

Mohammadniaei and colleagues combined MXene-based electrochemical signal amplification and a duplex-specific nuclease-based amplification system for rapid, attomolar, and concurrent quantification of multiple microRNAs on a single platform in total plasma (Fig. 16) [219]. The presence of MXene provided biofouling resistance and enhanced the electrochemical signals almost fourfold, attributed to its surface area and remarkable charge mobility. This synergetic strategy reduced the assay time to 80 min and provided multiplexing, antifouling activity, substantial sensitivity, and specificity (single-mutation recognition). The LODs of the proposed biosensor for microRNA-21 and microRNA-141 were 204 aM and 138 aM, respectively, and analytes could be detected up to 50 nM [219].
Meng et al. patterned the surface of an indium tin oxide electrode with a ZnSe nanodisk:Ti3C2 MXene complex to detect the non-small-cell lung cancer biomarker ctDNA KRAS G12D with an LOD of 0.2 fM within the linear range of 0.5-100 fM [220].

Proteins

A GCE modified with an MXene Ti3C2Tx interface was further patterned with a mixed zwitterionic carboxy- and sulfobetaine layer deposited by an electrochemical trigger, with subsequent covalent immobilization of an anti-CA15-3 antibody as a bioreceptive probe for detection of a breast cancer biomarker [221]. CA 15-3, a candidate breast cancer biomarker with a molecular weight of 290-400 kDa, normally occurs at levels of 3-30 U mL−1 in serum [222]. The designed immunosensor was able to detect the glycoprotein-based CA 15-3 biomarker in a clinically relevant concentration window of up to 50 U mL−1 [221]. Moreover, it was confirmed that the Ru(NH3)6Cl3 redox probe can be applied for better understanding of the interfacial properties of protein-modified electrode surfaces [221].

Soomro with co-workers applied photo-active NiWO4 NPs to induce partial surface oxidation of Ti3C2Tx sheets, resulting in the formation of a hybrid composite (Fig. 18) [223]. The developed biosensor, exploiting the photo-electrochemical characteristics of the hybrid composite, was able to detect prostate-specific antigen with an LOD of 0.15 fg mL−1 in a wide concentration range from 1.2 fg mL−1 to 0.18 mg mL−1 [223].

A nanocomposite of MXene loaded with AuNPs and methylene blue (MB) exhibited excellent conductivity; the AuNPs were able to capture biomolecules containing a sulfhydryl terminus, and the MB molecules were used to generate an electrochemical signal [224]. In the presence of the model target prostate-specific antigen (a protease), the recognition sequence was cleaved, and the ratiometric signal of Fc and MB indicated the concentration of the analyte accurately and with high sensitivity within a detection range from 5 pg mL−1 to 10 ng mL−1 and an LOD down to 0.83 pg mL−1. The electrochemical biosensor possessed high selectivity, accuracy, and sensitivity even in real complex biological samples because of its excellent antifouling ability [224].

Song et al. developed a label-free, aptamer-based sensitive assay platform detecting carcinoembryonic antigen with an LOD of 0.32 fg mL−1 by applying a trimetallic nanoparticle-decorated MXene nanosheet-modified electrode as the catalytic interface and an exonuclease III-assisted dual-amplification strategy [225].

A polypyrrole-modified hybrid NP-based aptasensor (Fig. 19) could sensitively detect the phosphoprotein osteopontin associated with human cervical cancer, with an LOD of 0.98 fg mL−1 within a linear concentration range of 0.05 pg mL−1 to 10.0 ng mL−1 [226].

An affinity-based biosensor (BSA/anti-CEA/f-Ti3C2-MXene/GCE) was applied for detection of carcinoembryonic antigen, a cancer biomarker related to different types of cancer, with an LOD of 0.000018 ng mL−1 within a linear concentration range of 0.0001-2000 ng mL−1 [227].
Xu et al. amplified the amperometric signal and transistor performance for detecting survivin, related to osteosarcoma, an aggressive malignant cancer affecting the health of children, adolescents, and young adults, by applying an MXene/PEDOT:PSS-based organic electrochemical transistor biosensor offering an LOD down to 10 pg mL−1 [228].

Qu et al. described an electrochemical immunosensor evaluating carbohydrate antigen 125 (CA125) in serum via a dual metal-organic framework (MOF) sandwich strategy [80]. The composite combined electrically conductive uniform MXene with the mesoporous and catalytically active MIL-101(Fe)-NH2 material, containing rich amino groups to attach primary antibodies. MOF loaded with methylene blue (MB) as a signal tag increased the loading rate of the secondary antibody and generated a redox signal (Fig. 20). An LOD of 0.006 U mL−1 for CA125 was achieved with the proposed immunosensor [80].

Kalkal et al. employed an air-brush spray coating technique to deposit uniform thin films of an amine-functionalized graphene (f-graphene) and Ti3C2-MXene nanohybrid on an ITO-coated glass substrate for efficient carcinoembryonic antigen (CEA) detection [229]. Monoclonal anti-CEA antibodies were attached onto the deposited thin films through EDC-NHS chemistry, and the non-specific binding sites were further blocked with BSA (Fig. 21). The electrochemical BSA/anti-CEA/f-graphene@Ti3C2-MXene/ITO immunoelectrode was able to detect the CEA biomarker with an LOD of 0.30 pg mL−1 and a sensitivity of 28.88 μA [log (pg mL−1)]−1 cm−2 in a linear range from 0.01 pg mL−1 to 2000 ng mL−1 [229].

Analysis of cells/exosomes/viruses

Exosomes, as novel carriers of potential cancer biomarkers, were analyzed by Zhang et al. with an electrochemical hybrid nanoprobe prepared by in situ generation of Prussian blue on the surface of Ti3C2 MXene [230]. A CD63 aptamer-modified poly(amidoamine) (PAMAM)-AuNP electrode interface can specifically interact with the CD63 protein on exosomes derived from OVCAR cells (Fig. 22). The achieved LOD was 229 particles μL−1, and exosomes could be determined in a wide linear range from 5 × 102 particles μL−1 to 5 × 105 particles μL−1 [230]. MXene-based nanoplatforms capable of in vitro detection of tumor markers such as exosomes and CEA have been successfully verified [231].

Duan with co-workers demonstrated an AuNPs/MXene Ti3C2-based, clustered regularly interspaced short palindromic repeats-powered electrochemical sensor for detection of human papillomavirus 18 (HPV-18) DNA (Fig. 23), with an LOD of 1.95 pM in a linear concentration range from 10 pM to 500 nM [232].

Wang together with colleagues produced an electrochemiluminescence biosensor based on a Ti3C2Tx/ZIF-8 nanocomposite as an emitter to determine human immunodeficiency virus (HIV-1 protein), the cause of acquired immune deficiency syndrome (AIDS), with an LOD of 0.3 fM in the linear range from 1 fM to 1 nM. In this approach, K2S2O8 was employed as the co-reactant, and conductive carbon black combined with magnetic nanoparticles served as the quenching agent [233].

Bharti et al. utilized a disposable screen-printed carbon electrode (SPCE) modified with Ti3C2Tx MXene nanosheets, followed by an amino-functionalized probe DNA (NH2-pDNA), as a robust surface for the sensing of SARS-CoV-2 (Fig. 24) [83].
The NH2-pDNA/Ti3C2Tx/SPCE bioelectrode determined SARS-CoV-2 by electrochemical impedance spectroscopy within a target DNA concentration range of 0.1 pM-1 μM and with an LOD of 0.004 pM. Moreover, an LOD of 0.003 pM was obtained for the SARS-CoV-2 target in a spiked serum sample. A shelf life of up to 40 days at a storage temperature of 4 °C was observed [83].

In an effort to improve the antifouling and biocompatibility properties of the electrochemically active surface, Lian et al. developed a sandwich-type immunoassay utilizing platelet membrane/Au nanoparticle/delaminated V2C nanosheets as the sensing electrode interface and a methylene blue/aminated metal-organic framework as an electrochemical signal probe. The LOD for CD44-positive cancer cells in complex liquids reached 1.4 pg mL−1 in a linear range from 0.5 to 500 ng mL−1 [235].

Different wearable sensors

Advances in wearable sensors, with their ability to sense various body parameters precisely, have helped accelerate the personalized healthcare revolution. Sensing materials for wearable applications are generally expected to be flexible, biocompatible, electrically conducting, electrochemically active, and of low cost. The discovery of MXenes has opened up new prospects in wearable sensing, as most MXenes are predicted to have metallic conductivity, while a few compositions exhibit semiconductor behavior. Importantly, the surface functional groups are strongly coupled to the electronic properties of MXene. Moreover, the structural defects and mixed surface groups introduced during the synthesis of MXene influence its electrical conductivity. The etching process and intercalation method can also have an impact on the conductivity of MXene, as intercalation of the Li+ cation results in better conductivity than organic intercalation. The high electrical conductivity of MXene with controlled alignment of 2D sheets enables the piezoresistive sensing mechanism suitable for wearable sensing applications [236].

There is an increased demand for flexible, soft, highly efficient, and high-performance sensing devices [237,238]. Specifically, stretchable, wearable, and highly sensitive or responsive strain sensors have gained enormous research interest owing to their potential applications in soft robotics, human health monitoring, human activity monitoring, and human-machine interfacing. Generally, flexible wearable sensors encompass piezoelectric, piezoresistive, capacitive, and triboelectric sensors. Piezoresistive sensors transduce applied pressure into a resistance signal and are thus ideally suited for portable healthcare monitoring.
Ti3C2-MXene-based sensors were applied to monitor joint bending, swallowing, and coughing, to recognize various human activities (including the subtle movements caused by microexpressions, such as eye blinking, cheek bulging, and throat swallowing), and to follow the variation in the current for the bending-releasing activity of the elbow, fingers, and ankle. The corresponding sensor was attached in series to a microcircuit embedded with a Bluetooth system for transforming various current or resistance variations into wireless electromagnetic wave signals. MXene- and graphene-based wearable biochemical sensors were applied in a number of areas, including but not limited to electrolyte monitoring, glucose monitoring, micro/macromolecular organic metabolite monitoring, volatile gas monitoring, and humidity sensing [239].

A Ti3C2 MXene-cotton textile-based flexible piezoresistive pressure sensor has been demonstrated by a simple and low-cost dip-coating method [240]. The as-fabricated, highly flexible sensors were attached to the radial artery of the wrist using scotch tape. The sensor exhibited high sensitivity with a rapid response time (26 ms) and exceptional cyclic stability over 5600 cycles. It was utilized for real-time monitoring of human physiological signals, namely wrist pulse, voice detection, and finger motions [240].

In another instance, a percolative network consisting of Ti3C2Tx MXene/carbon nanotube (CNT) composites resulted in a versatile strain sensor (Fig. 25) [241]. A layer-by-layer spray coating technique was applied, delivering an ultrathin device (dimension < 2 mm) that exhibited an extremely low LOD of 0.1% strain, high sensitivity, and a tunable sensing range (30-130% strain). The exceptional sensing performance allowed successful detection of both small deformations, such as phonation, and large motions, such as walking, running, and jumping. The voice recognition ability of this sensor makes it a potential material for voice recuperation and human-machine interfacing [241].

Another example is a Ti3C2Tx-based wearable electrochemical impedimetric immunosensor with a 3D electrode network for non-invasive cortisol biomarker identification in human sweat [242]. Laser-induced graphene was the basic material used for construction of the electrode, since it is stable and has good electrical properties. The cortisol sensor had a very low LOD of 3.88 pM and excellent selectivity [242].

A sensitive dopamine sensor was created using a bionanocomposite with MXene nanoparticles serving as a conductive matrix for the attachment of Pd/Pt NPs [243]. A hydrophobic aromatic group adsorbed on the surface of MXenes induces the in situ growth of Pd and Pd/Pt NPs. The sensor showed excellent linearity for detection of dopamine in the concentration range of 0.2-1000 μM, as well as high selectivity against ascorbic acid, glucose, and uric acid [243].

Pressure/strain sensors

To detect transient changes in pressure, a flexible, highly sensitive, and degradable wearable sensor was developed based on Ti3C2Tx MXene nanosheet-impregnated tissue paper sandwiched between a polylactic acid sheet and an interdigitated conducting electrode-coated polylactic acid sheet (Fig. 26) [244]. The as-fabricated flexible pressure sensor demonstrated high sensitivity with a low LOD (10.2 Pa), a wide range up to 30 kPa, a fast response (11 ms), excellent reproducibility (over 10,000 cycles), low energy consumption (10−8 W), and good degradability [244].

A newly developed microchannel-restricted Ti3C2Tx MXene-derived flexible piezoresistive sensor allowed simultaneous sensing of pressure, sound, and acceleration [245].
It exhibited high sensitivity (99.5 kPa−1), a low LOD (9 Pa), fast response (4 ms), and exceptional durability (over 10,000 cycles). The flexible piezoresistive sensor was attached to the throat and to the wrist pulse for human activity monitoring. The sensor was able to record the current variations upon speaking different words and hence was capable of recognizing the signals of weak throat vibrations [245].

A flexible piezoresistive pressure sensor was derived from a polyurethane and chitosan sponge coated with Ti3C2Tx sheets, providing a versatile sensing platform for monitoring small as well as large pressure signals [246]. The sensor exhibited a highly compressible and stable piezoresistive response for compressive strains up to 85% and a stress of 245.7 kPa, with reproducibility over around 5,000 loading-unloading cycles and a response time of 19 ms. The sensor was used for monitoring human physiological signals and the movements of insects, as well as for detecting human voices and breaths in a non-contact mode [246].

In yet another example, a 3D hybrid Ti3C2Tx MXene-based sponge network with a porous structure was applied as a piezoresistive sensor [247]. The Ti3C2Tx-based sponge was prepared by a facile and efficient dip-coating technique in which semiconducting polyvinyl alcohol nanowires were used as a spacer (Fig. 27). It exhibited excellent sensitivity over a broad pressure range, a low LOD of 9 Pa, and exceptional durability over 10,000 cycles. This Ti3C2Tx MXene sponge/PVA NW-derived sensor exhibited higher sensitivity than the plain Ti3C2Tx MXene sponge sensor, with rapid response and recovery times of 138 ms and 127 ms, respectively. The sponge sensor was further utilized for real-time monitoring of small strains, human physiological behavior, and the change in the size of a balloon. Specifically, characteristic peaks corresponding to the three waveforms related to the percussion, tidal, and diastolic waves could be seen, indicating the excellent sensitivity of the sensor [247].

A highly sensitive piezoresistive sensor based on Ti3C2 MXene with a bioinspired micro-spine-like structure was demonstrated, formed by a facile abrasive paper stencil printing method [248]. It exhibited high sensitivity (151.4 kPa−1), short response time (< 130 ms), a very low LOD of 4.4 Pa, and exceptional cyclic stability (over 10,000 cycles). Besides, the fabricated piezoresistive sensor demonstrated excellent performance in detecting physiological signals and quantitatively monitoring pressure distributions, as well as in remote and real-time monitoring of the motion of an intelligent robot [248].
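The sensitivities quoted for these piezoresistive devices (in kPa−1) and the gauge factors quoted later for strain sensors are both slopes of the relative resistance change against the applied stimulus. A minimal sketch with invented loading data, assuming a single linear regime:

```python
import numpy as np

# Hypothetical piezoresistive loading data: applied pressure (kPa) and the
# relative resistance change delta_R / R0 (dimensionless).
pressure = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
rel_dR = np.array([0.0, 0.6, 1.2, 2.4, 4.7, 9.6])

# Pressure sensitivity S = d(delta_R/R0)/dP, reported in kPa^-1.
S = np.polyfit(pressure, rel_dR, 1)[0]
print(f"pressure sensitivity ~ {S:.2f} kPa^-1")

# For a strain sensor the same slope against strain gives the gauge factor:
# GF = (delta_R/R0) / epsilon.
strain = np.array([0.00, 0.05, 0.10, 0.20, 0.40])      # 0-40% strain
rel_dR_strain = np.array([0.00, 0.12, 0.23, 0.45, 0.93])
GF = np.polyfit(strain, rel_dR_strain, 1)[0]
print(f"gauge factor ~ {GF:.2f}")
```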
It has been shown that compressible and elastic carbon aerogels derived from Ti3C2 MXene and cellulose nanocrystals can be applied as wearable piezoresistive sensors [249]. Cellulose nanocrystals were employed as a dispersant and nano-support to assemble Ti3C2 nanosheets into a lamellar carbon aerogel with improved mechanical strength. The interaction between Ti3C2 MXene and cellulose nanocrystals resulted in a continuous wave-shaped lamellar structure that can withstand exceedingly high compression strain (95%) and long-lasting compression (10,000 cycles at 50% strain). The aerogel sensor exhibited ultrahigh linear sensitivity in both the low-pressure (114.6 kPa−1) and high-pressure (45.5 kPa−1) regions, with a very low LOD of pressure change and reproducibility over more than 2,000 cycles. All these superior characteristics make the carbon aerogel a promising material for wearable piezoresistive pressure or strain sensors [249].

Another strain sensor was derived from a unique hybrid network of Ti3C2Tx MXene NPs and nanosheets [250]. The synergistic movement of NPs and nanosheets confers on the hybrid network excellent electrical and mechanical properties. The fabricated strain sensor exhibited excellent sensitivity over a broad stretching range (0-53%), an extremely low LOD (0.025%), and excellent recycling durability (over 5,000 cycles). Such performance renders the strain sensor capable of detecting the full range of human movements [250].

Fan et al. came up with a biocompatible, breathable, and highly sensitive silk fibroin (SF)/propolis (EEP)/graphene (GR)/MXene nanocomposite-based flexible wearable sensor with antibacterial properties due to the inclusion of propolis [251]. Graphene and MXene dispersions were sprayed step by step onto the nanocomposite fiber membranes (Fig. 28). The developed sensor exhibited a wide sensing range of 1-50 kPa, repeatability over 100 cycles, and a high sensitivity of 3 kPa−1. The movements of finger, wrist, elbow, and knee joints could be monitored with this sensor [251].

Gong et al. fabricated a novel type of Ti3C2Tx MXene-based nanochannel hydrogel sensor taking advantage of the unique structure of electrospun fiber textiles and the properties of a double-network hydrogel [252]. The nanofibers were synthesized through electrostatic spinning, and the nanochannels within the device were then formed. In the cavities of the nanochannels, the Ti3C2Tx MXene nanosheets had more space to move in response to varying degrees of deformation, which enhanced the sensor's sensitivity. In an effort to improve the self-adhesion properties of wearable sensors, tannin (TA) was added to the hydrogel system (Fig. 29). The hydrogel sensor successfully detects different human motions and physiological signals (e.g., low pulse signals) with high stability and sensitivity [252].
Yang et al. prepared wearable Ti3C2Tx MXene sensor modules with in-sensor machine learning models, functioning either through wireless streaming or edge computing, for full-body motion classification and avatar reconstruction [253]. Thanks to the topographic design of the piezoresistive nanolayers, the wearable strain sensor modules achieved ultrahigh sensitivities within working windows that cover all joint deformation ranges. The edge sensor module was made by integrating the wearable sensors with a machine learning chip, enabling in-sensor reconstruction of high-precision avatar animations that mimic continuous full-body motions with an average avatar determination error of 3.5 cm, without additional computing devices (Fig. 30). The approach described in the article addresses the challenge of enabling wearable sensors to transmit the high-density data obtained from several sensors in an effective way.

(Fig. 28: sensor preparation flow chart and SEM images of the SF composite films at varying silk fibroin and propolis concentrations, spinning voltages, and injection speeds [251]. Fig. 29: preparation of the nanofibers by combining electrospinning and the template method, and the strain-sensing performance of the device, including sensitivity at 0-280% strain, 3,000 cyclic tests, and monitoring of knee, finger, and wrist bending [252].)

Other healthcare applications

A "hospital-on-a-chip" system has been demonstrated with multifunctional microneedle electrodes for biosensing and electrostimulation using highly stable MXene nanosheets [254]. Microneedles are composed of dozens of micron-sized needles that can be used as an effective and painless transdermal patch to puncture the skin for drug delivery or biosensing purposes, since they are directly in contact with the dermal layer of the human body. The wearable MXene nanosheet-based microneedles can sense the tiny electric potential differences generated by human eye movements or by muscle contraction in the human arm. Therefore, diseases associated with neuromuscular abnormalities, such as myasthenia gravis, can be monitored; consequently, transcutaneous electrical nerve stimulation treatment can be applied according to the feedback of the micro-sensors [254].
A self-powered, flexible, multimodal, MXene-based wearable device was developed for continuous, real-time physiological biosignal observation. The system included multipurpose electronics, very sensitive pressure sensors, and power-efficient triboelectric nanogenerators [255]. The main component was a 3D-printable MXene joined to a skin-like platform with considerable stretchability and positive triboelectric characteristics. This self-powered physiological sensor device allowed constant radial artery pulse waveform observation without an independent energy source, thanks to its sensitivity (6.03 kPa−1), power output (816.6 mW m−2), limit of detection (9 Pa), and quick reaction time (80 ms). Near-field communication was used to transmit wireless data and power and to supervise its continuous, on-demand, fully self-powered rapid assessment program [255].

Wound infection is a life-threatening healthcare issue that can cause severe pain, sepsis, and even amputation. Typical biomarkers, sortase A and pyocyanin, corresponding to two major types of bacterial infection, Gram-positive Staphylococcus aureus and Gram-negative Pseudomonas aeruginosa, were detected by electrochemical DPV with Ti3C2Tx MXene applied to the electrode to enhance the sensitivity [256]. Integration of a near-field communication module realized wireless energy harvesting and data transmission with a smartphone. The fully integrated system (Fig. 31) demonstrated good linearity and high sensitivity, with wide detection ranges from 1 pg mL−1 to 100 ng mL−1 for sortase A and from 1 μM to 100 μM for pyocyanin. This wearable system provides a non-invasive, convenient, and efficient platform for in situ detection of bacterial virulence factors, offering great potential for the management of infected wounds [256].

Conductive hydrogels have received widespread attention in applications such as biosensors, human-machine interfaces, and health-recording electrodes. The authors of [257] developed hydrogels with anti-freezing, anti-dehydration, and re-molding properties using MXene as the conductive material. The resulting sensor had high sensitivity (gauge factor of 2.30), good linearity (R2 = 0.999), a wide strain detection range (559%), and fast response (0.165 s). These excellent properties show that the as-prepared conductive hydrogels can promote the construction of multifunctional wearable sensors. The hydrogel-based strain sensor can be used to monitor large strains and also has excellent sensitivity to micro-strains (1-5%). The authors concluded that conductive PCMG hydrogels can accurately detect human motion in harsh environments, opening up a new development path for flexible wearable sensors and ionic skin (Fig. 32) [257].
Summary

Owing to their fascinating interfacial properties, MXenes are 2D nanomaterials of choice for many different healthcare applications. The first MXene-based healthcare application was described in 2015, and interest in using such 2D nanomaterials for a plethora of biomedical applications has been increasing ever since. Initially, MXenes were extensively applied as sensors for the detection of various low-molecular-weight analytes, including hybrid nanoparticles used as nanozymes (with peroxidase- and oxidase-like activities). There is, however, increasing interest in applying MXenes for the construction of biosensors integrating bioaffinity probes (DNA/RNA, DNA aptamers, and antibodies) for the detection of high-molecular-weight analytes, including cancer biomarkers. Unfortunately, there are only a few examples describing the development and application of biosensors for the analysis of such high-molecular-weight disease biomarkers. A separate application path is the use of MXene-based devices as wearable sensors for monitoring human activities. Interestingly, there is already a prototype integrating several wearable sensors that enables reconstruction of avatar animations mimicking full-body motions with high spatial precision/resolution. The authors believe that such an approach can be applied for monitoring movement in sports and also for underwater soft robots [253].

Outlook

The beauty of using MXenes is their low cytotoxicity; for example, degradation of Ti3C2Tx MXene yields nontoxic products (such as TiO2, CO2, or CH4), which can further accelerate their integration into many healthcare applications. The main challenges for MXene-based devices that need to be addressed are to prepare MXenes from MAX phases in a highly reproducible way with tailor-made interfacial properties and to enhance the stability of MXenes when exposed to air or humidity. Furthermore, electrochemical MXene-based devices face another challenge, i.e., anodic oxidation significantly influencing the electrochemical properties of such surfaces [170] (Table 1). This is why it is very important to choose a redox mediator operating in a cathodic potential window, such as Ru(NH3)6 3+ [221]. The other issue is making MXene or hybrid MXene interfaces biocompatible. MXene-based biosensors strongly rely on nanohybrid biocompatibility; thus, research should focus on the surface chemistry of MXenes to solve the problems of affinity and stability of biomolecules present on MXene surfaces. One way to design biocompatible MXene interfaces is to use free plasmons for spontaneous grafting of (bio)polymers via aryldiazonium-based chemistry [177]. In the case of wearable sensors, the MXene nanomaterial is oxidized when continuously in contact with air, which reduces the conductivity and affects the sensing ability. On the other hand, an external polymer coating applied to prevent oxidation of the MXene affects the breathability and comfort of the wearable biosensors. Thus, an in-depth understanding is needed to design sensors that maintain the conductivity of the MXene while still being convenient for the user [258]. One approach in the right direction is to prepare wrinkle-free MXene layers with control of crack propagation [253]. Furthermore, there is high potential in combining the MXene affinity toward glycans (complex carbohydrates)
[210] with electrochemical detection platforms for the detection of novel types of biomolecules, i.e., glycoproteins. Thus, we envisage that the future of MXene interfaces in combination with electrochemistry and other detection methods in the healthcare sector is very bright once the challenges described above are properly addressed.

(Figure captions: Fig. 1, crystallographic structures of MAX phases with n = 1, 2, and 3 octahedral layers between the A element layers, showing the FCC, HCP, and bridge sites considered for the surface T-groups; all calculations were performed on fully functionalized Mn+1XnT2 surfaces with T = -O, -OH, -F, or -Cl. Fig. 3, schematic of the Al etching mechanism for LiF/HCl and HF solutions, starting from a polycrystalline particle of the pristine Ti3AlC2 MAX phase. Fig. 4, SEM images of Ti3AlC2 (MAX) powder and of multilayer Ti3C2Tx powders synthesized by etching with 30, 10, and 5 wt% HF, with ammonium hydrogen fluoride, and with 10 M LiF in 9 M HCl [166]. Fig. 7, wearable biosensor patch composed of a sweat-uptake layer, a sensor layer, and a cover layer, with the sensor array, reference electrode, and counter electrode. Fig. 11, MXene synthesis, mechanism of electrocatalytic oxidation, and utilization of the MXene/SPE sensor for the detection of acetaminophen and isoniazid [195]. Fig. 12, synthesis and application of Ti3C2 nanosheets [196]. Fig. 13, preparation of the biosensor and the electrochemical detection strategy for kanamycin [197]. Fig. 15, construction process of the biosensor and electrochemiluminescent signal generation within the nanocomposite with H2O2 as co-reactant [218]. Fig. 18, surface adsorption of NiWO4 NPs on ultrathin Ti3C2Tx sheets leading to surface fracturing, partial surface oxidation, and in situ TiO2 formation in MX-NiWO4, with efficient charge-carrier transfer at the engineered heterojunction interface [223]. Fig. 19, aptasensor fabrication based on PPy@Ti3C2Tx/PMo12 for osteopontin detection [226]. Fig. 22, principle of the electrochemical biosensor for exosome detection using a signal amplification strategy [230]. Fig. 25, Ti3C2Tx MXene/CNT strain sensor attached to the throat and knee, with response curves for spoken words and leg motions [241]. Fig. 26, fabrication procedure of MXene nanosheet-based flexible wearable transient pressure sensors [244]. Fig. 27, fabrication of the Ti3C2Tx MXene sponge and construction of the Ti3C2Tx MXene sponge/PVA NW-derived sensor [247]. Fig. 30, wireless sensor module for full-body motion classification, with sensors attached to the waist, shoulders, elbows, and knees and their signal outputs during repeated movements [253]. Fig. 31, wireless, battery-free smart bandage system for in situ detection of bacterial virulence factors, interfaced with a smartphone via near-field communication [256]. Fig. 32, comparison of the brightness of LEDs with PVA and PCMG hydrogels as conductors, and hydrogel conductivity at different MXene and glycerol contents at room temperature and after freezing at -18 °C for 24 h [257].)

Table 1. A brief summary of electrochemical MXene-patterned platforms utilized for healthcare applications (recovered excerpt):
Analyte | Platform | Method | LOD | Linear range | Ref.
CA125 | GCE-MXene/MIL-101(Fe)-NH2/UiO66@MB (GCE-MXene-CSMIL101-Ab1-Ag-Ab2-UiO66@MB) | DPV | 0.006 U mL−1 | 0.2-1000 U mL−1 | [80]
cTnT | MXene@PAMAM/SPCE | DPV | 0.069 ng mL−1 | 0.1-1000 ng mL−1 | [201]
17,518.2
2024-01-11T00:00:00.000
[ "Medicine", "Materials Science", "Chemistry", "Engineering" ]
Interval-Valued Fermatean Hesitant Fuzzy Sets and Infectious Diseases Application

The Hesitant Fuzzy Set, a generalization of fuzzy sets, is an important tool for dealing with the difficulties that arise in determining the membership of an element to a set when there is doubt among several different values in decision-making problems. In this study, the Fermatean hesitant fuzzy set is given to handle conditions in which professionals evaluate an alternative through possible membership and non-membership values. Aggregation operators for the newly defined sets are defined and applied to multi-attribute group decision-making problems. The main properties of the new sets are examined. A new score function and accuracy function are given to compare two interval-valued numbers. Finally, a numeric example demonstrates the feasibility, practicality, and effectiveness of the offered technique.

Introduction

The reasoning and decision-making (DM) processes of people in the face of daily events are studied by many disciplines, including psychology, philosophy, cognitive science, and artificial intelligence. These processes are generally described using various mathematical and statistical models, and in this setting the problem of decision-making arises. DM is defined as the operation of selecting one or more of the alternative courses of action faced by a person or an institution in order to achieve a specific goal. Research shows that while intuition is sufficient for many daily decisions, it alone is not enough for complex and vital decisions.

Multi-Attribute Decision Making (MADM) refers to the decision-making process in discrete situations where the alternatives examined in the decision problem are finite and clearly defined. In MADM problems, the alternatives are predetermined in number. MADM approaches are frequently used in decision problems such as choosing among alternatives, ranking, and comparing alternatives. They are preferred because they allow quick decision-making without requiring heavy mathematical operations or the use of a software package. There is only one purpose in the MADM method: to determine the most ideal (most benefit, least cost) alternative for the decision problem. For the example problem above, the purpose of the decision problem can be expressed as "determination of the most suitable supplier alternative".

Group decision making (GDM) is about using the unified wisdom and experience of those involved in the group to make decisions that are likely to provide affirmative benefits. One of the key advantages of GDM is its potential to involve people from different backgrounds and thought processes, so that the issues facing the group can be explored from a wider range of perspectives. Individuals want to overcome the difficulties they face in order to reach their goals. Sometimes this task becomes so large and complex that the individual cannot solve it alone. In such cases, it is more rational to make decisions by using group power. Whether working around a desk or dispersed in digital environments, the synergy that emerges in a group is an important tool for improving decisions and solving problems. Thus, individuals achieve, through groups, some of the needs and goals that they cannot achieve alone.
Thus, although group members have their own thoughts and motivations, when they want to solve a problem, the problem is no longer the process of choosing the best option according to a single decision-maker. The resulting group decision-making process is expanded to take into account the conflicts of different interest groups, different goals and objectives, different criteria, political behaviour, and so on. At this point, the final solution is not left to the initiative of a single decision maker; that is, the responsibility is shared by all decision makers.

In general, uncertainty is the situation in which a given event may have different consequences and there is no information about the probabilities of those consequences. Therefore, uncertainty is a very important notion for the DM process. It is not easy to know the probabilities of events happening in real life, so the DM process occurs under uncertainty. Fuzzy logic theory [36] proposes a strong logical inference structure in the face of uncertain and imprecise knowledge. Fuzzy logic theory gives computers the ability to process people's linguistic data and to work using people's experiences. While gaining this ability, it uses symbolic expressions instead of numerical expressions. These symbolic expressions are called fuzzy sets (FSs). The elements of fuzzy sets are in fact decision variables containing possibility states. Instead of probability values of the possibilities, fuzzy sets arise by assigning membership degrees to each of them objectively.

Yager [35] introduced the q-rung orthopair fuzzy set. The basic rule in this set theory is that the sum of the qth powers of the MD and the ND should not be greater than 1. Based on this idea, Senapati and Yager [21] introduced the Fermatean fuzzy set (FFS) and examined its basic features. In [22], the Fermatean arithmetic mean, division, and subtraction, which are new operations for FFSs, are defined and some of their properties are examined. In [23], new weighted aggregation operators for FFSs are defined. In [13], the Fermatean fuzzy soft set (FFSS) and its entropy measures were defined. Shahzadi and Akram [24] offered a new decision support algorithm with respect to the FFSS and defined new aggregation operators. Garg et al. [6] defined new FFS-type aggregation operators by utilizing the t-norm and t-conorm.

The FS notion was generalized to the HFS notion by Torra [27]. This generalization of the FS can handle situations in which the difficulty in building the MD does not arise from a margin of error or a certain probability distribution of the possible values, but rather originates from hesitation among several values [37]. Hence the HFS can more precisely reflect people's hesitancy in stating their preferences over objects, compared to the FS and its other generalizations. Later, the HFS and the IFS were combined to obtain a new HFS called the IHFS [20]. The fundamental notion is to model the situation in which, instead of an individual MD and ND, human beings hesitate among a set of MDs and NDs and need to symbolize such hesitation. In [39], the notion of a dual HFS was developed and some of its properties were given. As an extension of the dual IVHFS, the HIVIFS approach was given [16]. In [17], the notion of IHFS was applied to GDM problems using fuzzy cross-entropy. The PHFS was initially given by Khan et al. [7]; it covers the case in which the sum of the squares of the MD and ND is at most 1. The Fermatean hesitant fuzzy set has been defined by Kirisci [15].
This work is dedicated to extending FHFSs to IVFHs and to improving MAGDM processes in IVFHF environments by means of aggregation operators. Score functions and accuracy functions are defined. The basic properties are studied together with the definition of the IVFH. An algorithm is given by introducing a scenario describing the idea of MAGDM in IVFHF environments. A medical application showing the feasibility and applicability of the offered technique is given.

Preliminaries

Throughout the paper, U will denote the initial universe set. For any FFS F and u ∈ U, the ID of u to F is described as θ_F(u) = (1 − (ζ_F³(u) + η_F³(u)))^(1/3). The basic operations on FFSs and the properties of the complement of an FFS are as given in [21].

The function SC(F) = ζ_F³ − η_F³ is said to be a score function. For two FFSs F_1 and F_2, the natural quasi-ordering with respect to FFSs is condition (A): F_1 ≥ F_2 iff ζ_{F_1} ≥ ζ_{F_2} and η_{F_1} ≤ η_{F_2} [21]. The function AF(F) = ζ_F³ + η_F³ is said to be an accuracy function. For two FFSs F and G, a binary relation ≤_(SF,AF) may be defined via condition (B): F ≤_(SF,AF) G iff SC(F) < SC(G), or SC(F) = SC(G) and AF(F) ≤ AF(G).

Definition 2.3 [28]. The set Γ = {⟨u, τ_Γ(u)⟩ : u ∈ U} is called an HFS, where τ_Γ(u) indicates a set of values in the unit interval, namely the possible MDs of u ∈ U to Γ. From now on, HFN will be used for τ = τ_Γ(u) throughout the paper.

Definition 2.4. For three HFNs τ, τ_1, τ_2 the standard operations hold: τ^c = ∪_{γ∈τ}{1 − γ}, τ_1 ∪ τ_2 = ∪{max(γ_1, γ_2)}, and τ_1 ∩ τ_2 = ∪{min(γ_1, γ_2)}.

Definition 2.5. The set P_Γ = {⟨u, τ_Γ(u), ς_Γ(u)⟩ : u ∈ U}, with τ_Γ(u) and ς_Γ(u) showing a possible MD and ND of u ∈ U in P_Γ, respectively.

New Hesitant Fuzzy Sets

In this section, the IVFH will be introduced and its properties will be examined in order to obtain better results in preventing information loss and to increase the flexibility and applicability of decision-making techniques when dealing with qualitative information. An IVFH on U is a set F = {⟨u, h_F(u)⟩ : u ∈ U}, where h_F(u) is a set of pairs of subintervals of the unit interval; ζ_F(u) is a possible Fermatean membership interval and η_F(u) a possible Fermatean non-membership interval of F. Throughout this article, ϒ will denote the set of all IVFHs. Apparently, if there is only one pair of intervals in h_F(u), the IVFH reduces to an IVFFS; if both ζ_F(u) and η_F(u) reduce to singletons, the IVFH may be viewed as an FHFS; if η_F(u) = [0, 0], the IVFH may be viewed as an IVHFS; and if ζ_F^+ + η_F^+ ≤ 1, the IVFH can be seen as an IVIHFS, for each u ∈ U.

For α > 0, the corresponding operational laws on IVFEs hold, and it is clear that F^C is again an IVFE. A GIFHG operator is a mapping GIFHG : ϒ^n → ϒ in which F_σ(i) is the ith largest of the weighted elements F_k = (F_k)^(nω_k) (k = 1, 2, ..., n), compared via the score function of Definition 3.9.

Let A = {A_i : i = 1, 2, ..., m}, K = {K_j : j = 1, 2, ..., n} and P = {P_k : k = 1, 2, ..., l} denote the sets of alternatives, attributes, and the l professionals, respectively. Each F^(k)_ij (i = 1, 2, ..., m; j = 1, 2, ..., n) is an IVFE supplied by the professional P_k, in which ζ^(k)_ij points out the possible membership intervals to which the alternative A_i satisfies the attribute K_j, and η^(k)_ij the corresponding possible non-membership intervals. For MAGDM, attributes for which larger values are better are benefit attributes, and those for which smaller values are better are cost attributes. Therefore, the cost attribute values are converted into benefit attribute values, and the IVFH matrix is normalized for the benefit attributes C_j. Based on these considerations, a new technique was constructed for MAGDM in IVFHF environments.
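The comparison machinery that Steps 4 and 5 of the algorithm below rely on can be sketched in code. The sketch assumes the standard Fermatean score and accuracy functions SC(F) = ζ³ − η³ and AF(F) = ζ³ + η³ of Senapati and Yager [21] for a plain Fermatean fuzzy number (ζ, η); the interval-valued hesitant versions defined in this paper extend these to sets of interval pairs.

```python
# Minimal sketch of Fermatean fuzzy comparison, assuming the standard
# definitions SC(F) = z**3 - e**3 and AF(F) = z**3 + e**3 from the FFS
# literature; the paper's interval-valued hesitant versions generalize
# these to sets of interval pairs.

def score(z: float, e: float) -> float:
    """Score of a Fermatean fuzzy number (membership z, non-membership e)."""
    return z**3 - e**3

def accuracy(z: float, e: float) -> float:
    """Accuracy of a Fermatean fuzzy number."""
    return z**3 + e**3

def less_or_equal(f, g) -> bool:
    """Condition (B): F <= G iff SC(F) < SC(G), or the scores tie and
    AF(F) <= AF(G)."""
    sf, sg = score(*f), score(*g)
    if sf != sg:
        return sf < sg
    return accuracy(*f) <= accuracy(*g)

# Validity check for an FFN: z**3 + e**3 <= 1.
f1, f2 = (0.9, 0.6), (0.8, 0.5)
assert all(z**3 + e**3 <= 1 for z, e in (f1, f2))
print(score(*f1), score(*f2), less_or_equal(f1, f2))
```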
The algorithm for this technique is as follows.

Step 1: Normalize the decision matrices, converting the cost attribute values into benefit attribute values.
Step 2: With ρ = (ρ_1, ρ_2, ..., ρ_l)^T the weight vector of the professionals P_k (k = 1, 2, ..., l), employ the GIFHA (or GIFHG) operator to aggregate the individual evaluations F^(k)_ij into collective values F_ij.
Step 3: With the associated weight vector κ = (κ_1, κ_2, ..., κ_n)^T, utilize the GIFHA (or GIFHG) operator to aggregate all the preference values F_ij into an overall value F_i for each alternative; here, ω is the weight vector of the attributes K_j.
Step 4: Calculate the score values SC(F_i) and the accuracy values AF(F_i).
Step 5: Obtain the priority of the alternatives A_i by ranking SC(F_i).

Example 5.1. Let A_i (i = 1, 2, 3) be the set of alternatives, made up of hospital management system software. Denote by P_k (k = 1, 2, 3) three physicians and by K a set of three criteria. The first criterion is "price", which is of cost type. The second and third criteria are "speed" and "efficiency", respectively, which are of benefit type. The weight vector of the physicians is ρ = (0.18, ...) (Tables 1-3), where F^(k)_ij is an IVFE offered by the professional P_k (Tables 4-7).

Step 1. Convert the matrix D^(k) into the matrix E^(k) = (F^(k)_ij)_{m×n} (Tables 4-6). ... Step 5: Obtain the priority of the alternatives by ranking the score functions. The ranking order is A_1 > A_3 > A_2, so the optimal scheme is A_1.
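A compact, end-to-end illustration of the Step 2-5 pipeline for plain Fermatean fuzzy numbers is sketched below. It uses a standard Fermatean fuzzy weighted averaging operator from the FFS literature (cf. [23]); the GIFHA/GIFHG operators of this paper generalize such operators to interval-valued hesitant elements, so the data and the resulting ranking here are purely illustrative.

```python
import math

# Illustrative aggregation-and-ranking pipeline for plain Fermatean fuzzy
# numbers (z = membership, e = non-membership). The FFWA-style operator used
# here is the standard one from the FFS literature (cf. [23]); the paper's
# GIFHA/GIFHG operators extend it to interval-valued hesitant elements.

def ffwa(ffns, weights):
    """Weighted average of FFNs: ((1 - prod(1-z^3)^w)^(1/3), prod(e^w))."""
    zm = 1.0 - math.prod((1.0 - z**3) ** w for (z, _), w in zip(ffns, weights))
    em = math.prod(e ** w for (_, e), w in zip(ffns, weights))
    return (zm ** (1.0 / 3.0), em)

def score(f):
    z, e = f
    return z**3 - e**3

# Hypothetical per-attribute evaluations of three alternatives (already
# aggregated over the experts) and hypothetical attribute weights.
ratings = {
    "A1": [(0.9, 0.3), (0.7, 0.4), (0.8, 0.5)],
    "A2": [(0.6, 0.5), (0.8, 0.6), (0.5, 0.4)],
    "A3": [(0.7, 0.4), (0.6, 0.3), (0.9, 0.6)],
}
weights = [0.4, 0.35, 0.25]

overall = {a: ffwa(fs, weights) for a, fs in ratings.items()}
ranking = sorted(overall, key=lambda a: score(overall[a]), reverse=True)
print("ranking:", " > ".join(ranking))
```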
2,933
2022-01-26T00:00:00.000
[ "Computer Science" ]
Genome-Wide Identification of CYP72A Gene Family and Expression Patterns Related to Jasmonic Acid Treatment and Steroidal Saponin Accumulation in Dioscorea zingiberensis

Dioscorea zingiberensis is a medicinal herb containing a large amount of steroidal saponins, which are the major bioactive compounds and the primary storage form of diosgenin. The CYP72A gene family, belonging to the cytochrome P450 superfamily, exerts indispensable effects on the biosynthesis of numerous bioactive compounds. In this work, a total of 25 CYP72A genes were identified in D. zingiberensis and categorized into two groups according to the homology of their protein sequences. Their phylogenetic relationships, intron-exon organization, conserved motifs and cis-regulatory elements were characterized by bioinformatics methods. Transcriptome data demonstrated that the expression patterns of DzCYP72As varied by tissue. Moreover, qRT-PCR results displayed diverse expression profiles of DzCYP72As under different concentrations of jasmonic acid (JA). Likewise, eight metabolites in the biosynthesis pathway of steroidal saponins (four phytosterols, diosgenin, parvifloside, protodeltonin and dioscin) exhibited different contents under different concentrations of JA, and the content of total steroidal saponins was largest at a dose of 100 μmol/L of JA. Redundancy analysis showed that 12 DzCYP72As had a strong correlation with specialized metabolites. Those genes were negatively correlated with stigmasterol and cholesterol but positively correlated with the six other specialized metabolites. Among all DzCYP72As evaluated, DzCYP72A6, DzCYP72A16 and DzCYP72A17 contributed the most to the variation of specialized metabolites in the biosynthesis pathway of steroidal saponins. This study provides valuable information for further research on the biological functions related to steroidal saponin biosynthesis.

Introduction

Cytochromes P450 (CYPs) are a superfamily of heme-containing enzymes that mainly function as monooxygenases in all kingdoms of life. The CYP gene families in the plant kingdom are usually larger than those in the kingdoms of animals and microorganisms, accounting for approximately 1% of the protein-coding genes [1]. In plants, CYPs are involved in the biosynthesis of diverse specialized metabolites, such as phytohormones, fatty acids and flavonoids, playing a crucial role in plant growth and development [2]. CYPs can be divided into many subfamilies, and CYPs belonging to the same subfamily have the same catalytic functions in organisms. The CYPs are mainly composed of single-family clans and multiple-family clans based on phylogenetic classification [3].

The CYP72A gene subfamily, an important component of a multiple-family clan in the CYPs, catalyzes numerous crucial reactions in the biosynthesis pathways of many specialized metabolites. Gibberellins (GAs), especially GAs with low bioactivity, exert positive influences on seed dormancy. AtCYP72A9 has been confirmed to encode a bioactive GA 13-hydroxylase in Arabidopsis thaliana, which contributes to the accumulation of such low-bioactivity GAs [4]. Moreover, a great number of CYP72A genes play indispensable roles in the biosynthesis of medicinal metabolites. In the Medicago genus, MtCYP72A67 encodes a key enzyme that catalyzes hydroxylation at the C-2 position in hemolytic sapogenin biosynthesis [5]. GsCYP72As have been shown to be involved in the biosynthesis of triterpenoid saponins in Gleditsia sinensis [6].
In addition, the expression of some CYP72A genes can be largely influenced by biotic or abiotic stresses. The methylation level of LpCYP72A161 is closely connected with the response to temperature stress in ryegrass [7].

Dioscorea zingiberensis, a perennial vine, is mainly utilized as a source for the production of diosgenin and has attracted increasing interest on account of its pharmaceutical value. Diosgenin is an indispensable precursor of steroidal hormones, and it also shows pharmacological activities against many cancers, such as liver cancer and pancreatic cancer [8,9]. However, free diosgenin is not abundant in D. zingiberensis, as it is immediately converted into steroidal saponins by conjugation with sugar moieties [10]. Steroidal saponins are not only the primary storage form of diosgenin but also the main bioactive compounds in D. zingiberensis [11]. Steroidal saponins exhibit beneficial activities toward decreasing the risk of hyperlipidemia and cardiovascular diseases [12,13]. Previous studies have shown that diosgenin is biosynthesized via the mevalonate (MVA) pathway. Acetyl coenzyme A, the initial substrate of the MVA pathway, is subsequently converted to phytosterols and diosgenin through reactions catalyzed by a variety of enzymes. In fenugreek, it was proposed that the CYP72A gene family plays a crucial role in diosgenin biosynthesis and might participate in the biosynthesis pathway of phytosterols [14]. Moreover, enzymes encoded by the CYP72A gene family in legumes were reported to have catalytic activity in the formation of triterpenoids, such as oleanolic acid, ursolic acid and betulinic acid [15,16]. Nevertheless, research on D. zingiberensis has mainly focused on its pharmacological activities [17]. Hence, it is urgent to explore the correlation between the CYP72A gene family and diosgenin biosynthesis.

Considering the importance of CYP72A proteins in many medicinal plants, the objective of this study was to analyze the characteristics of the CYP72A family based on the genome of D. zingiberensis using bioinformatics, including phylogenetic relationships, gene structure organization, conserved motif analysis, etc. In the meantime, specialized metabolites in the biosynthesis pathway of steroidal saponins (phytosterols, diosgenin and steroidal saponins) were determined, and qRT-PCR was applied to examine the expression differences of the CYP72A gene family under jasmonic acid (JA) treatment. Moreover, important CYP72A genes were screened by investigating the correlation between the gene family and steroidal saponin biosynthesis. The results obtained will provide valuable insights into the function of DzCYP72As and the identification of gene resources for breeding of D. zingiberensis.

Identification of the CYP72A Proteins in D. zingiberensis

The published CYP72A protein data of A. thaliana were used as reference sequences against the genomic information of D. zingiberensis to identify DzCYP72A proteins, and 25 sequences were identified as CYP72A proteins (Table S1). The identified CYP72As were mapped on three chromosomes (Chr1, Chr4 and Chr9) and assigned as DzCYP72A1-DzCYP72A25 based on their genomic location (Table 1). The DzCYP72As were unevenly distributed across the three chromosomes, and three genes of the family were located on unknown chromosomes. The majority of DzCYP72A genes were localized on Chr1 (n = 14, 56%). By contrast, Chr9 contained seven DzCYP72A genes (28%), and Chr4 contained only one (4%).

The detailed information and physicochemical properties of each DzCYP72A protein were predicted with the ExPASy online tool. The number of amino acids, molecular weight, isoelectric point, instability index, aliphatic index and grand average of hydropathicity (GRAVY) are exhibited in Table 1. The lengths of the DzCYP72A proteins varied from 173 (DzCYP72A19) to 525 (DzCYP72A14) amino acids, with theoretical molecular weights ranging from 19.63 (DzCYP72A19) to 60.09 (DzCYP72A25) kDa. The predicted isoelectric point (pI) values of the DzCYP72A proteins ranged from 6.29 (DzCYP72A19) to 9.44 (DzCYP72A14). DzCYP72A15 displayed the best thermostability, while DzCYP72A7 showed relatively poor thermostability. According to the predicted results, all CYP72As were hydrophilic proteins (GRAVY < 0), and most of them were unstable proteins; DzCYP72A15 was the only stable protein, as its instability index was only 37.99.
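Properties of this kind can also be reproduced programmatically. Below is a minimal sketch using Biopython's ProtParam module as a stand-in for the ExPASy web tool; the sequence shown is an invented fragment, not an actual DzCYP72A protein, and the aliphatic index, which ProtParam does not report directly, is computed from Ikai's formula.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Made-up fragment standing in for a DzCYP72A protein sequence.
seq = "MKLVVLALAVCSAAGQEIRKTGWLIPSDFQNVAKEHLRMT"
pa = ProteinAnalysis(seq)

mw_kda = pa.molecular_weight() / 1000.0  # theoretical molecular weight (kDa)
pi = pa.isoelectric_point()              # predicted pI
instab = pa.instability_index()          # > 40 suggests an unstable protein
gravy = pa.gravy()                       # < 0 suggests a hydrophilic protein

# Aliphatic index (Ikai, 1980): X_Ala + 2.9*X_Val + 3.9*(X_Ile + X_Leu),
# where X are mole percentages of the residues.
pct = pa.get_amino_acids_percent()       # fractions, so scale by 100
aliphatic = 100 * (pct["A"] + 2.9 * pct["V"] + 3.9 * (pct["I"] + pct["L"]))

print(f"{len(seq)} aa, {mw_kda:.2f} kDa, pI {pi:.2f}, "
      f"instability {instab:.2f}, aliphatic {aliphatic:.1f}, GRAVY {gravy:.3f}")
```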
The detailed information and physicochemical properties of each DzCYP72A protein were predicted with the ExPASy online tool. The number of amino acids, molecular weight, isoelectric point, instability index, aliphatic index and grand average of hydropathicity (GRAVY) are listed in Table 1. The lengths of the DzCYP72A proteins varied from 173 (DzCYP72A19) to 525 (DzCYP72A14) amino acids, with theoretical molecular weights ranging from 19.63 (DzCYP72A19) to 60.09 (DzCYP72A25) kDa. The predicted isoelectric point (pI) values of the DzCYP72A proteins ranged from 6.29 (DzCYP72A19) to 9.44 (DzCYP72A14). DzCYP72A15 displayed the best predicted thermostability, while DzCYP72A7 showed relatively poor thermostability. According to the predicted results, all DzCYP72As were hydrophilic proteins (GRAVY < 0), and most of them were predicted to be unstable; DzCYP72A15 was the only stable protein, with an instability index of 37.99. The subcellular localization of the DzCYP72A proteins was predicted with four different online tools, and the results are displayed in Table 2. Most proteins were predicted to be located in membranous organelles, such as the chloroplast, mitochondrion and endoplasmic reticulum.
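The paper obtained these predictions from the ExPASy online tool; a comparable calculation can be sketched in R with the Peptides package (a stand-in, not the tool actually used), with placeholder sequences:

```r
# ProtParam-style predictions with the 'Peptides' package (assumed stand-in
# for the ExPASy online tool used in the study); sequences are placeholders.
library(Peptides)
seqs <- c(DzCYP72A19 = "MTSL...", DzCYP72A14 = "MAEL...")  # truncated examples
data.frame(
  length      = nchar(seqs),
  mw_kDa      = mw(seqs) / 1000,                           # molecular weight
  pI          = pI(seqs),                                  # isoelectric point
  instability = instaIndex(seqs),                          # < 40 => predicted stable
  aliphatic   = aIndex(seqs),                              # aliphatic index
  gravy       = hydrophobicity(seqs, scale = "KyteDoolittle")  # < 0 => hydrophilic
)
```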
Phylogenetic Analysis and Multiple Sequence Alignment of DzCYP72A Genes The CYP72A proteins in Asparagus officinalis, Solanum lycopersicum, Dioscorea rotundata and Oryza sativa were identified in the same way as the DzCYP72A proteins. The phylogenetic relationships among 25 DzCYP72As, 6 AoCYP72As, 13 SlCYP72As, 6 DrCYP72As, 9 OsCYP72As and 9 AtCYP72As were analyzed with MEGA7 based on the aligned protein sequences (Figure 1). Overall, the CYP72A proteins could be divided into two categories, monocotyledons and dicotyledons, and the DzCYP72A family itself fell into two clades. A total of eight DzCYP72As, together with two DrCYP72As and five AoCYP72As, were categorized as clade I. Clade II contained the remaining 17 DzCYP72As, 6 OsCYP72As and 1 AoCYP72A. Clade III comprised four DrCYP72As and three OsCYP72As. All AtCYP72As (n = 9) and the majority of SlCYP72As (n = 12) were categorized as clade IV. Multiple alignment of the DzCYP72A proteins was performed using DNAMAN8 software (Figure 2). The DzCYP72A protein sequences in clade II showed higher homology than those in clade I. All DzCYP72A proteins could be divided into two clusters, which was consistent with the phylogenetic results. Gene Structure and Motif Analysis of DzCYP72As The exon/intron composition of the DzCYP72As displayed the structural diversity and complexity of this gene family (Figure 3). The exon number of DzCYP72As in clade I ranged from one to six. DzCYP72A18 had the largest gene length and the maximum number of exons, whereas DzCYP72A19 had the shortest gene length, with only two exons. All DzCYP72As in clade II contained five exons, whereas DzCYP72A5 was intron free. The coding sequences of the DzCYP72As are listed in Table S2. In the meanwhile, the conserved motifs of the DzCYP72A proteins were further analyzed with the MEME online search tool. A total of 10 motifs were found, and the detailed information is displayed in Figure 4. Based on the protein sequences of all DzCYP72As, we found that every DzCYP72A protein contained motif 1, implying that motif 1 may be a conserved motif among all CYP72A proteins in D. zingiberensis. The motif number of the DzCYP72A proteins was in the range of 4-10; motifs 1 and 2 were common components in clade I, while motifs 1, 2, 8 and 10 were fundamental components of the DzCYP72A proteins in clade II. The DzCYP72As in clade II contained more motifs than those in clade I, suggesting that such diverse motifs may be strongly correlated with corresponding functions. Analysis of Cis-Regulatory Elements in the Promoters of DzCYP72As Cis-regulatory elements (CREs) in the promoter region exert a crucial influence on gene functions. In order to further investigate the genetic functions and regulatory mechanisms of the DzCYP72A gene family, the 2 kb upstream sequences of the 25 DzCYP72As were uploaded to the PlantCARE online tool. A total of 10 representative CRE classes (light responsive, auxin responsive, abscisic acid responsive, MYB binding site, MeJA responsive, low-temperature responsive, anaerobic induction, gibberellin responsive, salicylic acid responsive, and defense and stress responsive) were visualized on each gene and are shown in Figure 5. Among all CREs, the proportion of light-responsive elements was the largest, accounting for 47%. The MeJA-responsive CREs were the second most abundant, at 15%. The proportions of anaerobic induction, abscisic-acid-responsive and MYB-binding-site elements were similar, at 9%, 8% and 7%, respectively.
The proportions of auxin-responsive, gibberellin-responsive and salicylic-acid-responsive CREs were at a similar level, each making up approximately 3%. Additionally, the CRE related to defense and stress responsiveness accounted for the smallest proportion, only 2%. The detailed information on the CREs is given in Table S3. Expression Profiles of CYP72As in D. zingiberensis In order to further explore the functions of the DzCYP72A genes, we analyzed the transcriptome data and investigated the tissue-specific expression patterns of each DzCYP72A gene. The expression patterns of the DzCYP72A genes varied by tissue (Figure 6A). DzCYP72A23 and DzCYP72A15 displayed the highest transcript levels in both the leaf and the stem, whereas DzCYP72A15 exhibited the highest transcript level in the rhizome. DzCYP72A4 and DzCYP72A5 showed the lowest expression levels in the leaf, while DzCYP72A14 had the lowest expression level in the stem. In addition, both DzCYP72A25 and DzCYP72A19 exhibited the lowest expression levels in the rhizome. These results implied that DzCYP72As may have indispensable influences on the growth and development of D. zingiberensis. Numerous studies have illustrated that JA and its derivatives exert positive effects on the biosynthesis of saponins [18-20]. Moreover, the DzCYP72A gene family plays a key role in the biosynthesis of steroidal saponins, and the analysis of cis-regulatory elements suggested that the DzCYP72As have a strong correlation with JA. Thus, primers for the DzCYP72As were designed (Table S4), and qRT-PCR was applied to analyze the transcriptional expression of the DzCYP72As under treatment with different concentrations of JA (Figure 6B). Almost all DzCYP72As displayed diverse expression patterns across the treatments, but it was difficult to obtain acceptable qRT-PCR results for DzCYP72A18, DzCYP72A19 and DzCYP72A25 in rhizomes on account of their low expression abundance and amplification efficiency, which cohered with the transcriptome data. Some genes, such as DzCYP72A24, DzCYP72A13 and DzCYP72A7, responded rapidly and displayed higher expression levels at a relatively low concentration of JA (25 µmol/L). The transcriptional expression of DzCYP72A13, DzCYP72A14, DzCYP72A16, DzCYP72A20 and DzCYP72A24 showed significant upregulation. The majority of the DzCYP72A genes exhibited their highest expression levels at 100 µmol/L JA.
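The relative-expression values behind Figure 6B follow from the 2^(−ΔΔCt) method described in the Methods; a minimal R sketch is shown below, with illustrative Ct values that are not taken from the study.

```r
# Minimal 2^(-ΔΔCt) calculation (reference genes: DzActin/DzGAPDH);
# the numeric Ct values below are illustrative only.
rel_expr <- function(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl) {
  d_ct      <- ct_target - ct_ref               # ΔCt of a JA-treated sample
  d_ct_ctrl <- ct_target_ctrl - ct_ref_ctrl     # ΔCt of the 0 µmol/L control
  2^(-(d_ct - d_ct_ctrl))                       # fold change vs. control
}
rel_expr(ct_target = 24.1, ct_ref = 18.3,
         ct_target_ctrl = 26.0, ct_ref_ctrl = 18.2)
```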
Effects of JA Concentration on the Specialized Metabolites in D. zingiberensis Phytosterols, such as cholesterol, campesterol, stigmasterol and β-sitosterol, have been reported as intermediates in the biosynthesis pathway of steroidal saponins [21]. Therefore, the contents of specialized metabolites, including phytosterols, diosgenin and steroidal saponins, were determined to investigate the impact of JA on steroidal saponin biosynthesis. JA treatment exerted significant effects on the contents of bioactive compounds compared with those in untreated rhizomes (Table 3). Both dioscin and protodeltonin responded rapidly to low concentrations of JA, exhibiting their highest contents, 38.25 μg/g and 15.69 mg/g, respectively, at 50 μmol/L JA. The accumulation of parvifloside was positively related to the higher concentrations of JA, showing an approximately 1.6-fold increase over the control group at a dose of 100 μmol/L. Accordingly, the total saponins reached their maximum content (69.26 mg/g) at 100 μmol/L JA, 1.8-fold higher than in the untreated group.
Likewise, the content of diosgenin also responded positively to JA and reached its highest yield (647.18 µg/g) at a dose of 100 µmol/L. However, the content of cholesterol showed no significant difference under the different concentrations of JA. In contrast, the yields of campesterol and β-sitosterol varied with the JA concentration, and the maximum yields of these two metabolites both occurred at a dose of 100 µmol/L. Stigmasterol reached its maximum content at a dose of 50 µmol/L JA, and its accumulation decreased at relatively higher concentrations of JA. Correlations among DzCYP72As, Phytosterols, Diosgenin and Steroidal Saponins Detrended correspondence analysis (DCA) was performed on the phytosterol, diosgenin and steroidal saponin data and used to calculate the maximum length of the gradient axis (LGA). Table S5 shows that the LGA value of the DCA was below one. According to previous research, redundancy analysis (RDA) is suitable for subsequent analyses when the LGA values are <3 [22]. Therefore, RDA was carried out to analyze the correlation between the specialized metabolites and the DzCYP72As. In the RDA, a total of 22 DzCYP72As responding to JA were used as the explanatory variables, and the 8 metabolites were used as the response variables. In the meantime, forward selection based on the Akaike information criterion (AIC) was applied to further screen the explanatory variables. The first two RDA axes constrained 91.46% of the variance in the eight metabolites, suggesting that these two axes could represent the total constrained proportion (Table S6). As shown in Table 4, a total of 12 DzCYP72As were correlated with the specialized metabolites of steroidal saponin biosynthesis, of which nine were significantly correlated. Among these, DzCYP72A6, DzCYP72A16 and DzCYP72A17 displayed highly significant correlations with these bioactive compounds, while DzCYP72A1, DzCYP72A3, DzCYP72A9, DzCYP72A11, DzCYP72A14 and DzCYP72A20 showed significant correlations. As shown in Figure 7, the screened DzCYP72As were positively associated with parvifloside, protodeltonin, dioscin, diosgenin, campesterol and β-sitosterol, whereas cholesterol and stigmasterol were negatively correlated with these DzCYP72As.
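A minimal R sketch of this DCA-then-RDA workflow with the vegan package is given below; the object names are assumptions, and vegan's ordistep() forward selection (permutation based, with AIC reported) stands in for the exact AIC procedure used by the authors.

```r
# DCA -> RDA workflow with vegan; 'metab' = samples x 8 metabolites,
# 'expr' = samples x 22 JA-responsive DzCYP72As (assumed objects).
library(vegan)
dca <- decorana(metab)
dca                                   # printed 'Axis lengths': first < 3 => linear method (RDA)
m0  <- rda(metab ~ 1, data = expr)    # null model
m1  <- rda(metab ~ ., data = expr)    # full model
sel <- ordistep(m0, scope = formula(m1), direction = "forward")
summary(sel)                          # variance constrained by the first two RDA axes
```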
Discussion The CYP72A gene family plays crucial roles in catalyzing many important reactions in plants, and many CYP72A genes have been cloned from various species [4,5]. Nevertheless, a genome-wide analysis of the CYP72A gene family had not been performed in D. zingiberensis, an important source of diosgenin. In this study, a total of 25 DzCYP72A genes were identified in D. zingiberensis and assigned as DzCYP72A1-25 on the basis of their chromosomal location. Their phylogenetic relationships, gene structures, conserved motifs, cis-regulatory elements and expression patterns under JA treatment were analyzed. Meanwhile, eight specialized metabolites related to the biosynthesis of steroidal saponins were determined, and the correlation between the DzCYP72As and these metabolites was also investigated. This work provides valuable information for subsequent functional analysis of the DzCYP72As. The phylogenetic tree indicated that the DzCYP72A proteins had stronger homology with those of O. sativa, A. officinalis and D. rotundata than with those of A. thaliana (Figure 1). D. zingiberensis is a monocotyledon and A. thaliana a dicotyledon; accordingly, the monocotyledons O. sativa, A. officinalis and D. rotundata are expected to show strong homology with D. zingiberensis. Moreover, CYP gene subfamilies catalyze important reactions in the biosynthesis pathways of numerous specialized metabolites in plants, and previous research illustrated that the CYP72A gene family has a profound correlation with the biosynthesis of saponins [23,24]. Hence, the reason D. zingiberensis contains more DzCYP72A genes may be the large accumulation of steroidal saponins in its rhizome. Likewise, the large amount of steroidal saponins in A. officinalis may be one of the reasons why all AoCYP72A proteins had strong homology with the DzCYP72A proteins. Although D. rotundata also belongs to the Dioscorea genus, it contains abundant starch and only a trace amount of steroidal saponins [25]. Therefore, the large amount of starch may be closely related to the high homology of the amino acid sequences between the DrCYP72A and OsCYP72A proteins. In addition, S. lycopersicum is also a dicotyledon, but one SlCYP72A protein showed remarkable homology with the DzCYP72A proteins. According to previous research, S. lycopersicum contains abundant steroidal glycoalkaloids, which are also biosynthesized from phytosterols [26]. This similar biosynthesis pathway may underlie the high homology between the SlCYP72A and DzCYP72A proteins. Similar to other gene families in plants, the DzCYP72As displayed distinct expression profiles in different tissues, which may be closely related to their functions in D. zingiberensis [27,28]. Many DzCYP72A genes were highly expressed in the leaf and stem, yet the steroidal saponins mainly accumulate in the rhizomes. According to previous research, the CYP72A gene family mainly participates in the biosynthesis of phytosterols, which are not only intermediates in the biosynthesis pathway of steroidal saponins but also indispensable components of cell membranes [29,30].
Generally, leaves have a higher metabolic rate than rhizomes, and rhizomes cultivated for at least 3 years can satisfy industrial requirements. Therefore, the high transcript levels of the DzCYP72A genes in leaves might reflect a greater demand for phytosterols to maintain cell membrane function under a high metabolic rate. Abiotic stresses and phytohormones, such as extreme temperature, abscisic acid, salicylic acid, and JA and its derivatives, exert considerable effects on plant development and the accumulation of specialized metabolites [31,32]. In this study, the MeJA-responsive CREs were the most abundant among all phytohormone-responsive CREs, indicating that JA and its derivatives are profoundly correlated with the functions of the DzCYP72As. Moreover, previous research demonstrated that JA and its derivatives are positively correlated with the biosynthesis of steroidal saponins [18-20]. Consistently, most DzCYP72As were upregulated under the different concentrations of JA. In most cases, MeJA has been used as an elicitor in medicinal crops to enhance the yields of bioactive compounds [33]. MeJA itself is essentially non-bioactive, but it is volatile, which helps it enter plants readily via the stomata [34]. At the same time, JA is more effective than its methyl ester in stimulating the biosynthesis of saponins [19]. Therefore, JA was applied to the rhizomes of D. zingiberensis in this work. According to previous research, Ankang in Shaanxi and Shiyan in Hubei are regions of origin of D. zingiberensis, and rhizomes from these two regions contain more steroidal saponins than those from other locations in China [21,35]. Therefore, seedlings from Ankang, Shaanxi, were used as materials to investigate the effects of JA on the DzCYP72As and the specialized metabolites. The content of dioscin in this study was lower than that in mature rhizomes, but the contents of cholesterol, stigmasterol, β-sitosterol and diosgenin were higher than those in mature rhizomes [21]. Solar energy is the energy source for plants and is stored via photosynthesis [36]. By the time a plant reaches natural maturation, the aerial part of D. zingiberensis has already withered, inhibiting photosynthesis and energy storage. Meanwhile, the low metabolic rate and insufficient energy supply may negatively influence the biosynthesis of steroidal saponins, which might be one of the reasons why mature rhizomes contained only trace amounts of cholesterol, stigmasterol, β-sitosterol and diosgenin. Cholesterol has been shown to be one of the important compounds that provide carbon skeletons for the biosynthesis of steroidal saponins [37]. Nevertheless, the content of cholesterol showed no correlation with JA in this study. In contrast, β-sitosterol had a profound correlation with JA. It has also been found that β-sitosterol can serve as a precursor of diosgenin and steroidal saponins [38]. Thus, the increase in steroidal saponins might arise because a large amount of β-sitosterol participates in steroidal saponin biosynthesis under JA treatment. Moreover, β-sitosterol is also reported to be a precursor of stigmasterol, but the content of stigmasterol decreased as β-sitosterol increased at the high concentrations of JA, implying that the accumulated β-sitosterol may be channelled toward the biosynthesis of steroidal saponins [39].
In most cases, different bioactive compounds have different pharmacological activities, and parvifloside, protodeltonin and dioscin showed diverse responses to different concentrations of JA, which provides a targeted scientific basis for enhancing these metabolites. In addition, other gene families may also have strong correlations with the biosynthesis of steroidal saponins. Meanwhile, other environmental variables, such as soil salinization and moisture, can also affect the expression patterns of the DzCYP72A genes and other gene families. Therefore, further research should pay more attention to the correlation between specialized metabolites and other gene families under different environmental conditions. Plant Materials The seeds of D. zingiberensis were obtained from a plantation in Ankang, Shaanxi, in October 2020. The seeds were cultivated in an environmentally controlled greenhouse at 26 ± 2 °C with 16 h of light per day. Following the manufacturer's instructions and previous research, jasmonic acid (JA, Macklin, 98%) was dissolved in 80% ethanol and diluted in 1/2 Hoagland solution to obtain 25, 50, 100 and 200 µmol/L solutions [40,41]. Ethanol diluted in 1/2 Hoagland solution was used as the control solution. Three-month-old seedlings with similar growth status were watered thoroughly with the control or JA solutions for six days, with applications every three days; all solutions were freshly prepared just before use. The rhizomes were then collected, and every treatment was replicated three times. We thus obtained the JA-treated samples: 0 µmol/L (S1), 25 µmol/L (S2), 50 µmol/L (S3), 100 µmol/L (S4) and 200 µmol/L (S5). Some samples were promptly frozen in liquid nitrogen pending RNA extraction, while the others were freeze-dried to constant weight at −80 °C and ground to powder with a tissue lyser. Standards and Chemical Reagents Methanol, acetonitrile, chloroform, ethanol and n-hexane were purchased from Thermo Scientific. RNA Extraction and Gene Expression Analysis The total RNA of the rhizomes was extracted using RNAiso Plus (Takara, Beijing, China) according to the manufacturer's instructions. RNA quality and concentration were determined with a NanoDrop 2000 (Thermo Scientific, Waltham, MA, USA). Subsequently, 1 µg of total RNA was reverse transcribed with the HiScript III 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China). All primers were designed with Primer 5.0 and are listed in Table S4. Quantitative real-time PCR (qRT-PCR) was conducted according to the instructions of the 2× TransStart Green qPCR SuperMix (TransGen, Beijing, China). The expression patterns of the DzCYP72A gene family under JA treatment were analyzed using the CFX96 Real-Time PCR Detection System (Bio-Rad, Los Angeles, CA, USA). DzActin and DzGAPDH were used as internal controls, and relative gene expression was calculated using the 2^(−ΔΔCt) method. Identification and Screening of CYP72A Family Genes The nine published CYP72A protein sequences of A. thaliana were downloaded from the database (http://www.p450.kvl.dk/At_cyps/family.shtml#72A accessed on 10 July 2021) [42]. The basic local alignment search tool (BLAST) was used to identify and screen the DzCYP72A proteins. In brief, the nine published AtCYP72A protein sequences were used as queries to screen the CYP72A proteins in D. zingiberensis, and 25 sequences fulfilling the requirements were identified. The genome information of D. zingiberensis was uploaded to the NCBI database under project PRJNA716093 (unpublished).
The genome information of A. officinalis, S. lycopersicum (TAG 3.2) and O. sativa (V7.0) was downloaded from the Joint Genome Institute (https://phytozome.jgi.doe.gov accessed on 10 July 2021). In the meantime, the genome information of D. rotundata was downloaded from the NCBI (project number: PRJNA695139). The CYP72A protein sequences of these species were identified in the same way as the DzCYP72A proteins. Analysis of Specialized Metabolites About 50 mg of freeze-dried rhizome powder was saponified with 2 mL of 2 mol/L KOH–ethanol in a water bath for one hour at 80 °C. The unsaponifiable fraction was extracted with n-hexane and filtered through a membrane filter (0.22 µm). The supernatant was evaporated under vacuum at room temperature. Dried samples were derivatized with 50 µL of N-methyl-N-(trimethylsilyl)trifluoroacetamide at room temperature for 30 min. Subsequently, 150 µL of n-hexane was added prior to the determination of phytosterols. A 2:1 chloroform/methanol mixture was used to extract diosgenin. Approximately 50 mg of freeze-dried rhizome powder was dissolved in 1 mL of the mixture and extracted by ultrasonication for 30 min, and diosgenin was obtained by centrifugation at 12,000× g for 10 min. The extraction of each sample was repeated three times, and the combined supernatant was filtered through a membrane filter (0.22 µm) prior to determination. The contents of phytosterols and diosgenin were detected using a Thermo Trace gas chromatography–mass spectrometry instrument equipped with a TG-5MS column (30 m × 0.25 mm × 0.25 µm), operated as described previously [21]. Subsequently, 80% ethanol was used for the extraction of steroidal saponins. Approximately 50 mg of freeze-dried rhizome powder was dissolved in 1 mL of extraction solution and centrifuged at 12,000× g for 10 min; each 50 mg of powder was extracted three times. The combined supernatant was filtered through a membrane filter (0.22 µm) for subsequent analysis. The steroidal saponins were determined with an ultra-performance liquid chromatography instrument coupled to a Q Exactive hybrid quadrupole-Orbitrap mass spectrometer (Thermo Scientific, Waltham, MA, USA) with a reverse-phase C18 column (Thermo, 100 mm × 2.1 mm, 3 µm). The mobile phases, flow rate and column temperature were as described previously [21]. The gradient program was: 0-2 min, isocratic 12% B; 2-8 min, linear gradient of 12-35% B; 8-20 min, isocratic 45% B; 20-20.1 min, linear gradient of 45-20% B; 20.1-22 min, isocratic 20% B. Data collection was performed in full-scan mode (m/z 300-200) and selected-ion-monitoring mode with the diagnostic ion monitored at m/z 869.48. Statistical Analyses The qualitative information on phytosterols, diosgenin and steroidal saponins was collected from the mass spectra, and the quantitative results were calculated from the peak areas of external standards (Table S7). The mean and standard deviation of each specialized metabolite were determined using SPSS 19.0. DCA was performed with the vegan package in R 4.0.2, and RDA was carried out with the online tool Gene Denovo (https://www.omicshare.com/tools/ accessed on 15 August 2021). The correlation between the DzCYP72As and steroidal saponin biosynthesis was analyzed using RStudio.
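A hypothetical sketch of the external-standard quantification mentioned above: fit a calibration line of peak area against known standard concentrations (the values in Table S7 are not reproduced here; those below are illustrative), then invert it for the samples.

```r
# External-standard quantification (illustrative values, not from Table S7).
standards <- data.frame(conc = c(5, 10, 25, 50, 100),          # assumed µg/mL levels
                        area = c(1.1e5, 2.2e5, 5.4e5, 1.1e6, 2.2e6))
cal <- lm(area ~ conc, data = standards)                        # calibration line
quantify <- function(peak_area)
  (peak_area - coef(cal)[["(Intercept)"]]) / coef(cal)[["conc"]]
quantify(8.6e5)                                                 # sample concentration
```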
Conclusions In this study, a total of 25 CYP72A genes were screened and isolated from the genome of D. zingiberensis. Their physicochemical characteristics, subcellular localization, phylogenetic relationships, exon-intron organization, motifs, cis-regulatory elements and tissue-specific expression were investigated with diverse bioinformatics methods. The qRT-PCR results and the profiles of eight metabolites revealed that the DzCYP72As and the specialized metabolites responded significantly to JA treatment. Moreover, the total steroidal saponins, parvifloside, natural diosgenin, campesterol and β-sitosterol were most abundant at a dose of 100 µmol/L JA, while protodeltonin, dioscin and stigmasterol were most abundant at a concentration of 50 µmol/L JA. The Spearman correlation analysis revealed that the DzCYP72As have a strong correlation with the eight metabolites in the biosynthesis pathway of steroidal saponins. The results obtained provide useful information on the DzCYP72A gene family and steroidal saponins, which will benefit further investigations into the evolution and functions of these genes.
Relating amplitude and PDF factorisation through Wilson-line geometries We study long-distance singularities governing different physical quantities involving massless partons in perturbative QCD by using factorisation in terms of Wilson-line correlators. By isolating the process-independent hard-collinear singularities from quark and gluon form factors, and identifying these with the ones governing the elastic limit of the perturbative Parton Distribution Functions (PDFs), namely the $\delta(1-x)$ term in the large-$x$ limit of DGLAP splitting functions, we extract the anomalous dimension controlling soft singularities of the PDFs, verifying that it admits Casimir scaling. We then perform an independent diagrammatic computation of the latter using its definition in terms of Wilson lines, confirming explicitly the above result through two loops. By comparing our eikonal PDF calculation to that of the eikonal form factor by Erdogan and Sterman and the classical computation of the closed parallelogram by Korchemsky and Korchemskaya, a consistent picture emerges whereby all singularities emerge in diagrammatic configurations localised at the cusps or along lightlike lines, but where distinct contributions to the anomalous dimensions are associated with finite (closed) lightlike segments as compared to infinite (open) ones. Both are relevant for resumming large logarithms in physical quantities, notably the anomalous dimension controlling Drell-Yan or Higgs production near threshold on the one hand, and the gluon Regge trajectory controlling the high-energy limit of partonic scattering on the other. Introduction It is well known that perturbative QCD at fixed order in $\alpha_s$, which is highly successful in describing hard processes at colliders, loses its predictive power in kinematic regions where there is a large hierarchy of scales. Familiar examples are Drell-Yan or Higgs production near threshold, see e.g. [1-6], or at small transverse momentum, which are dominated by soft-gluon radiation. Another example is the high-energy limit of QCD scattering, where the centre-of-mass energy is much larger than the momentum transfer [7-14]. In each of these cases, and many others, factorisation techniques allow us to derive all-order resummation formulae, which extend the predictive power of QCD, leading to highly successful phenomenology in many cases. The theory underlying factorisation relies on identifying the origin of any parametrically-enhanced corrections through operators, which capture the relevant divergences. Independently of whether one uses QCD fields [15,16] or Soft-Collinear Effective Theory [17] ones, the relevant operators involve Wilson lines, which follow the trajectory of fast-moving partons and capture their interactions with soft gluons. These operators obey evolution equations, governed by corresponding anomalous dimensions, which are computable order by order in QCD perturbation theory. The most familiar amongst these is the (lightlike) cusp anomalous dimension [18], $\gamma_{\rm cusp}$, which in particular describes double poles in the Sudakov form factor, originating in overlapping soft and collinear singularities.
While the cusp anomalous dimension occurs universally, governing the leading singularities in any kinematic limit, single-logarithmic contributions, characterising separately large-angle soft, hard-collinear or rapidity divergences, are somewhat less universal, and yet, as we shall see, recur in a variety of physical quantities that are not a priori related. Resummation formulae are obtained upon solving the aforementioned evolution equations, leading to exponentiation. The anomalous dimensions therefore have a central role in the predictive power of QCD, and in certain cases their computation has recently been pushed to three-loop order, e.g. [19-24], with very recent progress towards four loops [25-33] (even more is known in maximally supersymmetric Yang-Mills theory, see e.g. [34-39]). Despite this impressive progress, there remain several unresolved questions regarding the anomalous dimensions governing single-logarithmic corrections and their universality, some of which we address below. In the present paper we study two fundamental physical quantities, which are recurrent ingredients in the factorisation of amplitudes and cross sections [16,40]. The first is the massless on-shell form factor, associated e.g. with an electromagnetic vector current in the case of quarks, or an effective Higgs production vertex, $gg \to H$, in the case of gluons. The second is parton distribution functions (PDFs), or more precisely, the large-$x$ limit of the diagonal $qq$ and $gg$ Altarelli-Parisi splitting functions, governing the scale dependence of PDFs according to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equation [41-43]. Each of these physical quantities is important in its own right, and their infrared factorisation will be discussed in some detail in sections 2 and 3, respectively. The main motivation for our study comes from the relation between the two, namely a particular combination of single-pole anomalous dimensions, which respectively capture collinear singularities in these two quantities. The relation holds separately for quarks and for gluons, eq. (1.1), where $\gamma^q_G$ ($\gamma^g_G$) is defined by the function $G$ (see eq. (2.6)), which, along with the cusp anomalous dimension, governs the infrared structure of the quark (gluon) form factor in eq. (2.1) below; and $B^q_\delta$ ($B^g_\delta$) is the coefficient of the $\delta(1-x)$ term in the large-$x$ limit of the quark-quark (gluon-gluon) splitting function, see eq. (3.6) below. It was observed long ago [44,45] that while the separate perturbative results for $\gamma_G$ and $B_\delta$ are very different between quarks and gluons (this is expected: collinear singularities are known to depend on the parton's spin), the combination (1.1) vanishes at one loop in both cases, and admits a Casimir-scaling relation at two loops, eq. (1.2). The same Casimir-scaling property persists at three loops [45]. This is a clear indication that $f_{\rm eik}$ has an interpretation purely in terms of Wilson lines, hence the name, an eikonal function. A Wilson-line-based definition would explain why the result does not depend on the parton's spin, while it depends on its colour representation in proportion to the relevant quadratic Casimir through three loops. The question we would like to address is what is the Wilson-loop correlator corresponding to $f_{\rm eik}$.
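To fix notation for what follows, the combination referenced as eq. (1.1) can be restated schematically from the definitions just given (a reconstruction; normalisation follows those definitions):
\[
f^{\,i}_{\rm eik}(\alpha_s) \;\equiv\; \gamma^{\,i}_G(\alpha_s) \;-\; 2\,B^{\,i}_\delta(\alpha_s)\,, \qquad i = q, g\,,
\]
and the two-loop Casimir-scaling statement of eq. (1.2) amounts to $f^{\,q\,(2)}_{\rm eik}/C_F = f^{\,g\,(2)}_{\rm eik}/C_A$.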
Before describing our approach to answering this question, let us note that the combination in (1.1) has a direct physical interpretation as the soft anomalous dimension associated with Drell-Yan production near partonic threshold [1-6], namely $\gamma^q_G - 2B^q_\delta = \frac{1}{2}\Gamma_{\rm DY}$. Similarly, $\gamma^g_G - 2B^g_\delta$ is associated with Higgs production through gluon-gluon fusion near threshold. The corresponding soft function is defined at cross-section level, by replacing the energetic partons, which move in opposite lightlike directions (before annihilating at the hard interaction vertex), by Wilson lines that follow the same trajectory, in both the amplitude and its complex conjugate. The cusp where the complex-conjugate amplitude Wilson lines meet is displaced by a timelike distance with respect to the amplitude: this distance is the Fourier-conjugate variable to the energy fraction carried by soft partons. Final-state radiation, namely the set of soft particles connecting the amplitude side to the complex-conjugate amplitude side, is described by cut propagators. This soft function admits an evolution equation governed by $\gamma_{\rm cusp}$ and $\Gamma_{\rm DY}$ (see e.g. eq. (9) in ref. [3], or eqs. (43)-(44) in ref. [6]). The latter was computed through three loops directly based on the aforementioned Wilson-line definition [20,52], and the results agree with the combination of anomalous dimensions in (1.1), which were extracted from independent QCD computations of the form factor [44,45,53] and DGLAP splitting functions [54-61]. Thus, from this perspective, this physical quantity is well understood, and its Casimir-scaling property simply follows from the above-mentioned Wilson-line definition. Our own investigation starts with the simple observation that the two-loop result for $\gamma_G - 2B_\delta$ in (1.2) also agrees, up to an overall factor of 4, with the result for the parallelogram Wilson loop made of four lightlike segments (see figure 1c), which was computed in 1992 by Korchemsky and Korchemskaya [62]. It is a highly appealing proposition that eq. (1.3) holds to all orders. The parallelogram Wilson loop is a very simple object: being compact, it has no infrared divergences, so the singularities arise here from short distances, and the calculation can be done directly in dimensional regularisation. Importantly, in contrast to the Drell-Yan soft function described above, real corrections and cut propagators do not arise here. The natural questions to ask then are, first, does the relation in (1.3) indeed hold to all orders, and second, can we see how a parallelogram Wilson loop arises from the definitions of the objects on the left-hand side of eq. (1.3), the form factor and the PDF. Establishing this relation is one of the main goals of the present paper. The infrared factorisation of the form factor is well understood [16,40,63], and has been used as the starting point for the factorisation of massless amplitudes with any number of legs in general kinematics [64-71]. The form-factor factorisation gives rise to a different Wilson-line configuration, namely a pair of semi-infinite lightlike Wilson lines (with different 4-velocities) meeting at the hard-interaction vertex, see figure 1a. We shall refer to this contour as the ∧ geometry. We emphasise that in contrast with the Drell-Yan soft function described above, where the cross section was considered [20,52], here the Wilson-line configuration is defined at amplitude level.
In contrast with the parallelogram of [62], the ∧ geometry is non-compact, and thus gives rise to infrared divergences in addition to ultraviolet ones. We shall return to the ∧ geometry and its properties below. At this point it suffices to say that, considering the infrared factorisation of the form factor, the origin of the relation between $\gamma_G - 2B_\delta$ and the parallelogram geometry remains obscure: the ∧ geometry has no finite segments while the parallelogram consists exclusively of such. An important step in explaining the eikonal nature of $f_{\rm eik}$ in (1.1), based on the infrared factorisation properties of the form factor and the PDF, was taken in 2008 in a paper by Dixon, Magnea and Sterman [63]. The fundamental explanation is that spin-dependent hard-collinear contributions are common to both $\gamma_G$ and $2B_\delta$ and drop out in the difference, leaving behind a purely eikonal component. This is the premise we shall follow here as well. However, ref. [63] relied on the assumption that $B_\delta$, as the coefficient of $\delta(1-x)$, is a purely virtual quantity, and hence that the factorisation of the PDF could be done at "amplitude level". According to the factorisation outlined in [63], the eikonal component of $B_\delta$ should correspond to Wilson lines with a ∧ geometry, much like the form factor. Taking this at face value, if the eikonal components of $\gamma_G$ and $B_\delta$ on the right-hand side of (1.3) indeed both correspond to the ∧ geometry, one concludes that the ∧ geometry and the anomalous dimensions must be proportional to each other, at least through two loops; put differently, one may deduce the anomalous dimension of the ∧ geometry from (1.2). The first direct two-loop computation of the ∧-geometry Wilson loop was performed only in 2015, by Erdogan and Sterman [72]. This calculation is an important step forward also in the sense that it presents a new method for dealing directly with (semi-)infinite lightlike Wilson lines in configuration space (which a priori lead to scaleless integrals) without resorting to an extra regulator. This is done by cleverly using the exponentiation properties and isolating a well-defined integrand, before renormalising ultraviolet divergences by means of a suitable cutoff. We shall adopt and generalise this method in section 4 below. The result of ref. [72] is that the anomalous dimension corresponding to the ∧-geometry Wilson loop is given by eq. (1.4), where $C_i = C_F$ for Wilson lines in the fundamental representation and $C_A$ for the adjoint. As with $f_{\rm eik}$ above, we omit the superscript $q/g$ on $\Gamma_\wedge$ wherever it is not necessary. While the result in (1.4) bears a striking resemblance to $f_{\rm eik}$ in (1.2), it is evidently not identical; the coefficient of the $\zeta_3$ term is entirely different. The authors of ref. [72] further provided a detailed diagrammatic analysis, comparing their calculation to that of the parallelogram in ref. [62], and explaining the origin of the difference in the coefficient of $\zeta_3$ as emanating from endpoint contributions that are present in finite lightlike segments, but are absent in infinite ones. This conclusion can be confirmed by a momentum-space computation. It is useful to bear in mind that infinite and semi-infinite Wilson-line configurations (but not finite ones!) are of direct relevance to partonic scattering amplitudes in the high-energy limit (the Regge limit) [11-14]. Also, the explicit two-loop combination in (1.4) appeared in the literature in that context long before the computation of the ∧ configuration in ref. [72].
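For orientation, the Reggeisation replacement invoked next (eq. (1.5)) has the familiar schematic form, with colour structure suppressed:
\[
\frac{1}{t} \;\longrightarrow\; \frac{1}{t}\left(\frac{s}{-t}\right)^{\alpha(t,\epsilon)},
\]
so that expanding the power of $s/(-t)$ reproduces the leading and next-to-leading high-energy logarithms in terms of the Regge trajectory $\alpha(t,\epsilon)$.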
Specifically, considering $gg \to gg$, $qq \to qq$ or $qg \to qg$ scattering in the limit where the centre-of-mass energy is much larger than the momentum transfer, $s \gg -t$, the leading and next-to-leading logarithms of $s/(-t)$ in the (real part of the) amplitude exponentiate according to a simple replacement of the $t$-channel gluon propagator (dubbed gluon Reggeisation), eq. (1.5), where $\alpha(t,\epsilon)$ is the gluon Regge trajectory [74-78] (see also the more recent observation in ref. [73] that the two-loop coefficient $\Gamma^{g\,(2)}_\wedge$ occurs also in the QCD impact factor), given by eq. (1.6), where $\alpha_s = \alpha_s(-t,\epsilon)$, with $\epsilon = (4-d)/2$ the dimensional-regularisation parameter; $b_0$ is the one-loop QCD beta-function coefficient of eq. (2.3a); $\gamma^{g\,(n)}_{\rm cusp}$ are the coefficients of the cusp anomalous dimension of eq. (2.4) for the gluon; and $\Gamma^{g\,(2)}_\wedge$ is the two-loop coefficient in eq. (1.4), again with $C_i = C_A$. We further recall that the overall similarity between the parallelogram Wilson loop in [62] and the gluon Regge trajectory in (1.6), as well as the peculiar difference between them in the coefficient of $\zeta_3$, were already observed early on, in ref. [79], where an evolution equation for the Regge trajectory was derived by considering the forward limit of crossed Wilson lines. However, this raises no difficulty: as stressed above, it is the infinite Wilson-line geometry which is expected to be relevant for the factorisation of partonic scattering amplitudes, not the parallelogram. A real puzzle arises, however, upon considering the explicit result for the ∧-geometry anomalous dimension in eq. (1.4) in view of eq. (1.2), if the conclusion of ref. [63] is taken at face value. Given that the factorisation of the form factor is well understood, and the eikonal component of $\gamma_G$ is determined by the ∧ geometry, we are compelled to revisit the assumption of ref. [63] that $B_\delta$ is a purely virtual quantity, systematically establish the infrared factorisation of the PDFs at large $x$, and identify the eikonal component of $B_\delta$, which clearly must not be proportional to $\Gamma_\wedge$. We proceed as follows. In section 2 we review the factorisation of long-distance singularities of the QCD form factor and identify the process-independent, spin-dependent hard-collinear component of $\gamma_G$. In turn, in section 3 we discuss the factorisation of PDFs in the limit $x \to 1$. We show explicitly that the calculation of $B_\delta$ requires both real and virtual corrections. To this end we perform an explicit two-loop calculation of the splitting functions at large $x$ (the details are presented in appendix A). Next we identify the eikonal component of $B_\delta$ as the anomalous dimension associated with a ⊓-shaped Wilson-line geometry, see figure 1b. By using the known value of $B_\delta$ along with the hard-collinear anomalous dimension extracted from the form factor, we then predict the $\Gamma_\sqcap$ anomalous dimension at two loops. Then, in section 4 we compute $\Gamma_\sqcap$ directly to this order, finding agreement with the extracted result of section 3. In section 4 we also derive an evolution equation for the ⊓-shaped Wilson line and show that while in the ultraviolet it is characterised by double poles, as any other cusped Wilson loop, its infrared properties are different, displaying strictly single poles, in agreement with the single-pole nature of the PDFs themselves. In section 5 we put together our results for the factorisation of the form factor and the PDF, and establish the relation of (1.3) with the parallelogram to all orders. We further summarise the
state-of-the-art knowledge of higher-order corrections to $\Gamma$ in view of its relations with other physical quantities. We briefly summarise our conclusions in section 6. Infrared factorisation of the on-shell form factor Let us review the well-known factorisation of a colour-singlet on-shell form factor of coloured massless particles (quarks or gluons) in QCD [15,16,40,45,63,80]. We label the external momenta by $p_1$ (incoming) and $p_2$ (outgoing), with the momentum transfer $Q^2 \equiv -(p_1 - p_2)^2$, and, as usual, we renormalise all ultraviolet singularities in the $\overline{\rm MS}$ scheme, denoting the renormalisation scale by $\mu^2$. The quark form factor is defined in terms of the electromagnetic vector current, proportional to $\bar\psi\gamma^\mu\psi$, which does not renormalise. The gluon form factor in turn is defined using an effective local interaction vertex with the Higgs field, $H\,G^a_{\mu\nu}G^{\mu\nu\,a}$, and it does renormalise, proportionally to the QCD beta function [44]. The distinct ultraviolet properties of the quark and gluon form factors will be of little relevance for us: we focus instead on the infrared singularities of the form factor, which have a rather similar structure for massless quarks and gluons. For large $Q^2$ the form factor $F(Q^2/\mu^2, \alpha_s(\mu^2), \epsilon)$ features large logarithms of the ratio $Q^2/\mu^2$, and fixed-order perturbation theory breaks down. These large logarithms can be resummed using a renormalisation-group equation (see e.g. [80]), giving the all-order formula of eq. (2.1) for the form factor, where we set the renormalisation scale $\mu^2 = Q^2$ for simplicity. Note that we have absorbed into the function $G$ any operator renormalisation terms; see [44,45] for more details. Infrared singularities are generated in eq. (2.1) through an integration, from $\lambda^2 = 0$, over the $d = 4 - 2\epsilon$ dimensional running coupling $\alpha_s(\mu^2, \epsilon)$, which obeys eq. (2.2). We report the coefficients $b_0$, $b_1$ and $b_2$ of the QCD beta function at one [81-84], two [85-88] and three loops [89,90], respectively, because we will use them in the rest of this paper. The quadratic Casimir is defined by $T^a T^a = C_R\,\mathbf{1}$, with $T$ the SU($N_c$) generator in the representation $R$, and $C_A$ corresponds to the Casimir in the adjoint representation; $n_f$ is the number of light quarks, and the normalisation of the generators $t^a$ in the fundamental representation, ${\rm Tr}(t^a t^b) = T_f\,\delta^{ab}$, is conventionally set to $T_f = 1/2$. Equation (2.1) applies for both quarks and gluons, but with distinct functions $\gamma_{\rm cusp}(\alpha_s)$ and $G(Q^2/\mu^2, \alpha_s, \epsilon)$, which do depend on the type of particle (although this is suppressed in our notation). The former, which captures all double poles, depends solely on the colour representation of the particles (fundamental and adjoint for quarks and gluons, respectively), while the latter, which controls single poles, depends also on their spin. This distinction will be crucial in what follows, and it is a direct consequence of the fact that $\gamma_{\rm cusp}$ is an eikonal quantity, namely one that can be defined exclusively in terms of Wilson lines, while $G(Q^2/\mu^2, \alpha_s, \epsilon)$ instead contains hard-collinear effects, which cannot fully be described by Wilson lines. Specifically, $\gamma_{\rm cusp}$ is the lightlike cusp anomalous dimension [18], defined as the coefficient of the leading ultraviolet divergence occurring in a cusped Wilson loop; it evaluates to eq. (2.4), where $C_i$, defined above, is the quadratic Casimir in the fundamental or the adjoint representation for quarks and gluons, respectively.
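The all-order formula referred to as eq. (2.1) has the schematic exponentiated structure below (a sketch only: the overall normalisation and the split between the two terms are convention dependent):
\[
F\big(1,\alpha_s(Q^2),\epsilon\big) \;=\; \exp\left\{\frac{1}{2}\int_0^{Q^2}\frac{{\rm d}\lambda^2}{\lambda^2}\left[G\big(1,\alpha_s(\lambda^2,\epsilon),\epsilon\big)\;-\;\gamma_{\rm cusp}\big(\alpha_s(\lambda^2,\epsilon)\big)\,\ln\frac{Q^2}{\lambda^2}\right]\right\},
\]
with all infrared poles generated by the $\lambda^2 \to 0$ region of the $d$-dimensional running coupling, as described in the text.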
The three-loop value of $\gamma_{\rm cusp}$ was computed in [91], and recently there has been significant progress towards a four-loop determination [29-33]. Through three loops, the cusp anomalous dimension, much like other quantities that are defined exclusively in terms of Wilson lines, depends on the colour representation proportionally to the quadratic Casimir $C_i$, as in (2.4), adhering to the so-called Casimir-scaling property. Starting at four loops, quartic Casimirs, $d_{ij} \equiv d^{abcd}_i d^{abcd}_j$, appear as well, making the dependence on the colour representation more involved. In contrast with $\gamma_{\rm cusp}$, the function $G(1, \alpha_s(\lambda^2,\epsilon), \epsilon)$ has an expansion both in $\alpha_s$ and in $\epsilon$, as in eq. (2.5); it therefore generates both infrared poles and non-negative powers of $\epsilon$ upon integrating over the scale $\lambda^2$ of the running coupling as in eq. (2.1). We isolate the divergent contribution order by order in $\alpha_s$ by defining the anomalous dimension $\gamma_G$ through eq. (2.6), where $\gamma_G$ depends on $\epsilon$ only through the coupling. The coefficients of $\gamma_G$ for the quark and for the gluon are well known in the literature; they are sometimes referred to as "collinear anomalous dimensions", and were denoted by $G$ in [92], by $G_0$ in [39] and by $\gamma^q$ and $\gamma^g$ in appendix I of [17]; the latter include a conventional factor of $-2$. In practice, we derive here $\gamma_G$ to four loops by substituting the $d$-dimensional running coupling of eq. (2.2) into eq. (2.6) and then identifying the singularities arising on the two sides of eq. (2.6), getting eq. (2.7), where the $G(l,n)$ are defined in eq. (2.5) and their values can be extracted from refs. [44,45,53], where the form factors have been computed to three loops. For the purpose of this paper we only need explicit results for the collinear anomalous dimensions through two loops, given in eq. (2.8), where we added superscripts $i = q, g$ to distinguish between quarks and gluons. Infrared factorisation At high energy ($Q^2 \to \infty$) the infrared behaviour decouples from the hard scattering, as expressed by eq. (2.9), where the jet function $J_i$, one for each external leg, captures the collinear singularities; the soft function $S$ contains the contribution of any long-wavelength gluons exchanged between the external particles; and the eikonal jet function $\mathcal{J}_i$ captures all the singularities that are present both in $S$ and in the jet function $J_i$, which are associated with exchanges that are both soft and collinear to the massless external particles. Therefore, the ratio $S/(\mathcal{J}_1 \mathcal{J}_2)$ in eq. (2.9) includes only the divergences associated with soft wide-angle emissions. $H$ is the hard function, found by matching to the full form factor. Each of the other factors in eq. (2.9) has an operator definition which dictates its functional dependence in eq. (2.9), involving the momenta $p_i$ of the external particles and their lightlike velocities $\beta_i$, defined in eq. (2.10), where $Q_0$ is an arbitrary normalisation, typically of the order of the hard scale of the process, $Q$. The operator definitions of $S$, $J_i$ and $\mathcal{J}_i$ are written in terms of expectation values of Wilson lines, defined in eq. (2.11), where $v$ is the direction of the line and $x$ and $y$ are its endpoints. In general, the vector $v$ can be either lightlike ($v^2 = 0$) or non-lightlike ($v^2 \neq 0$). In the context of the on-shell massless form factor, lightlike kinematics for the external legs, $\beta_i^2 = 0$, is dictated by eq. (2.10), and we define the functions entering the factorisation formula (2.9) by eqs. (2.12)-(2.14), where $n_i$ is an auxiliary non-lightlike vector and the dependence on its choice must cancel in eq. (2.9).
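A schematic version of the Wilson-line and soft-function definitions referenced as eqs. (2.11) and (2.13) is the following sketch (path ordering $\mathcal{P}$ shown; colour indices and $i\varepsilon$ prescriptions suppressed):
\[
W_v(x,y) \;=\; \mathcal{P}\exp\left\{\,i g_s \int_y^x {\rm d}z\cdot A(z)\right\},
\qquad
S\big(\beta_1\!\cdot\!\beta_2,\alpha_s,\epsilon\big) \;=\; \big\langle 0\big|\,W_{\beta_2}(\infty,0)\,W_{\beta_1}(0,-\infty)\,\big|0\big\rangle\,,
\]
where the two semi-infinite lightlike lines meeting at the origin trace the ∧ contour of figure 1a.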
The contour defining $S$ is shown in figure 1a. In eq. (2.14) we presented the jet function $J_i$ for fermion fields; for a definition of the gluon jet function see refs. [93-95]. The representation of the Wilson lines in eq. (2.13) is the representation of the corresponding external particle. Any function built solely from Wilson lines, such as $S$ and $\mathcal{J}_i$, is called eikonal. As mentioned in the context of the cusp anomalous dimension, one of the properties of eikonal quantities is that they admit Casimir scaling up to three loops; this is a consequence of non-Abelian exponentiation. Beyond three loops there are quartic (and eventually higher-order) Casimir contributions, but given that the same Wilson-line diagrams contribute for quarks and gluons, differing just by the representations of the Wilson lines, one expects a relation between these quantities. Indeed, a conjectural relation was proposed in [29] based on partial four-loop computations; we shall return to this in section 5.2 below. The individual functions in eqs. (2.12)-(2.14) are heavily constrained by kinematic considerations, such as the dependence on the auxiliary vectors $n_i$, and by renormalisation-group evolution. These constraints can be solved to give explicit formulae [70,71], in which $\Gamma_{\mathcal{J}}$ and $\Gamma_\wedge$ are constants to be determined by direct calculation. Note that $\Gamma_\wedge$ was denoted in the literature [63,72] as $-G_{\rm eik}$. As in eq. (2.1), the infrared singularities of $\mathcal{J}_i$ and $S$ are generated by integrating over the $d$-dimensional running coupling $\alpha_s(\lambda^2,\epsilon)$ from $\lambda^2 = 0$. We notice that the soft function and the product of the eikonal jets share the same dependence on $\gamma_{\rm cusp}\ln(\mu^2/\lambda^2)$, which is associated with the overlapping soft-collinear singularities of these two quantities. This fact ensures that the ratio $S/(\mathcal{J}_1 \mathcal{J}_2)$ is free of overlapping divergences and depends only on the logarithm of a kinematic variable which is insensitive to the normalisation of the vectors $\beta_i$ in the definition of eq. (2.10). Using the factorisation formula of eq. (2.9), we determine the partonic jet function by dividing the form factor in eq. (2.1) by the ratio $S/(\mathcal{J}_1 \mathcal{J}_2)$, yielding eq. (2.18), which involves a matching coefficient capturing the finite parts of the jet function, and $\gamma_i$, with $i = q$ for the quark and $i = g$ for the gluon, the anomalous dimension of the field $i$ in axial gauge. The latter is only concerned with the ultraviolet behaviour of the jet function, and indeed it is not associated with any IR pole, because the contribution from the IR region $\lambda^2 \simeq 0$ is absent in the second term of eq. (2.18). All the IR poles of the form factor are generated by the second integral in that equation, involving the anomalous dimensions $\gamma_{\rm cusp}$, $\Gamma_\wedge$, $\Gamma_{\mathcal{J}}$ and the resummation function $G(1, \alpha_s, \epsilon)$. The dependence on $\gamma_{\rm cusp}$ is such that the combination with $S/(\mathcal{J}_1 \mathcal{J}_2)$ reconstructs the kinematic dependence of the form factor, eq. (2.1), through the definition in eq. (2.6), and we get a ratio in which, on the last line, we define the anomalous dimension $\gamma_{J/\mathcal{J}}$. As mentioned above, the collinear anomalous dimension $\gamma_G$ is known to three loops [44,45,53] for both quarks and gluons, and we quoted the corresponding expressions through two loops in eq. (2.8). The anomalous dimension $\Gamma_\wedge$, in turn, is derived from the renormalisation of the soft function $S$, and can be read off eq. (2.16). That equation clarifies the meaning of the subscript ∧, which symbolises the contour of the lightlike Wilson loop in the definition of the soft function in eq. (2.13) that defines $\Gamma_\wedge$.
This notation will be used throughout the paper and will be generalised to different contours. Γ_∧ is known to two loops [72] by direct computation of the equation above; the two-loop expression, eq. (2.24), is proportional to the quadratic Casimir C_i of the representation of the Wilson lines in eq. (2.13). Using the results in eqs. (2.8) and (2.24) we determine γ_{J/𝒥} to two loops, first for quarks, eq. (2.25), and then for gluons, eq. (2.26). We have thus isolated the hard-collinear singularities of the form factor and found the quantity γ_{J/𝒥} that governs this behaviour for quarks and for gluons according to eq. (2.21). We emphasise that, in contrast to the conventional collinear anomalous dimension γ_G given in eq. (2.8), which is specific to the form factor (recall eqs. (2.6) and (2.1)), the hard-collinear anomalous dimension γ_{J/𝒥} defined here is process independent. This universality will now be put to use. In the next section we consider the factorisation of parton distribution functions (PDFs) at large x, where we will use the above two-loop results for γ^q_{J/𝒥} and γ^g_{J/𝒥}, given in eqs. (2.25) and (2.26) respectively, and ultimately identify the eikonal anomalous dimension relevant to PDF evolution.

Parton distribution functions at large x

Parton distribution functions, f_AB(x), describe the probability of finding parton A with momentum fraction x inside hadron (or parton) B. We are interested here in PDF evolution, which is the same for the partonic and for the hadronic quantities, and we therefore consider partonic PDFs. PDFs are inherently defined at cross-section level: real and virtual radiation must be combined so that soft singularities cancel and only the pure collinear singularities associated with the massless initial-state parton are kept. We will see that in the elastic limit, x → 1, the contributions from different regions factorise, and we claim that the hard-collinear behaviour of the initial-state partons is described by γ_{J/𝒥}, the same anomalous dimension we identified in the factorisation of the form factor.

Definition

The light-cone PDF f^bare_{jj′}(x, ǫ) for a quark (gluon) in a parton P of momentum p with longitudinal momentum fraction x is given in eqs. (3.1) and (3.2) [96]. The Wilson-line operator W_u is defined in eq. (2.11), and |P⟩ is either an on-shell quark or gluon state, P = q, g. We take the lightlike momentum p to be in the (+) direction; the velocity four-vector u is then in the (−) direction. It is worth noting that the bare PDFs f^bare_{j′j}(x, ǫ) are scaleless; this will be used later in the context of factorisation. They are renormalised through a convolution, eq. (3.3), where Z_{jj′} is a renormalisation factor, removing the UV divergences from the bare PDF in the MS scheme, and f_{jk} is the renormalised PDF. From Z_{jj′}(x, α_s, ǫ) we obtain the splitting functions, eq. (3.5). The RG evolution of the PDFs is governed by the DGLAP equations, eq. (3.4) [41–43]. The DGLAP splitting kernels P_{jk} are known to three loops [43, 54–61, 91, 97–100], with some recent results at four loops [25, 27, 29].

Perturbative calculation at large x

In the limit x → 1 the diagonal terms in the splitting functions, P_qq and P_gg, feature divergent contributions [46, 101–103] of the form of eq. (3.6), where the label i = q, g indicates quarks and gluons, respectively, and the plus distribution is defined as usual, see e.g. [43]. The splitting functions are determined from the UV singularities of the PDFs defined in eqs. (3.1) and (3.2), which can be computed perturbatively.
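For concreteness, the divergent structure of eq. (3.6) and the plus distribution entering it take the standard form (our transcription, with γ_cusp normalised as in eq. (2.4)):

\[
P_{ii}(x) \;\underset{x\to 1}{=}\; \frac{\gamma^i_{\rm cusp}(\alpha_s)}{(1-x)_+} \;+\; B^i_\delta(\alpha_s)\,\delta(1-x) \;+\;\ldots\,,
\qquad
\int_0^1\! dx\,\big[f(x)\big]_+\, g(x) \;\equiv\; \int_0^1\! dx\, f(x)\,\big[g(x)-g(1)\big]\,.
\]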
We can relate these definitions to time-ordered products through the discontinuity in x, eq. (3.7). This relation, illustrated diagrammatically in figure 2, can be derived as follows. One first splits the Wilson line in eq. (3.1) into two Wilson lines that extend to infinity, W_u(y, ∞)W_u(∞, 0); one then inserts a complete set of states between them and finally identifies the result as the discontinuity of the time-ordered product. This relies on the fact that the condition x ≤ 1 selects the cuts with positive energy [16, 104]. One might think that the coefficient B_δ in eq. (3.6) is entirely determined by the contribution of the virtual diagrams, such as the second term on the left-hand side in figure 2; however, the explicit calculation will lead to a different conclusion. At one loop, the relevant diagram is shown on the right-hand side of figure 2; in Feynman gauge it reads as in eq. (3.8), where we use p and k respectively to denote the incoming and outgoing quark momenta, and q the gluon momentum. For brevity, we also drop the superscript "bare". It is straightforward to compute the integral over q⁻ by complex analysis; this places a bound on q⁺, namely p⁺ > q⁺ > 0. The q_T integral is scaleless, but as we are interested only in the UV divergence it is simply a matter of replacing the scaleless transverse integral by its UV-pole part. We then scale out p⁺ by defining q⁺ = p⁺w to produce an elegant integral representation, eq. (3.10), where we have absorbed the (4πe^{γ_E})^ǫ factors into the MS coupling. The representation in eq. (3.10) has the advantage of compactly displaying the sum over cuts: individual cuts can be isolated by computing the residues corresponding to each of the propagator poles. Using the partial fractioning of eq. (3.11), the full discontinuity of the integrand splits into two terms: the first is a real-emission cut, while the second is a virtual correction. As usual, the endpoint divergence in the first term combines with the divergence at w → 0 in the second, giving eq. (3.14). We emphasise that it is ambiguous to ask which cuts have contributed to the δ(1−x) term, as its coefficient is only finite after the cancellation of the soft divergences between the real and the virtual cuts. We combine eq. (3.14) with the mirror diagram representing the correction to the right vertex, which yields an identical result, and with the box-type diagram, which does not contribute divergent terms at large x. We complete the calculation of the (bare) PDF by including the two diagrams featuring radiative corrections on the external legs, where we use the wavefunction renormalisation Z₂ at one loop. This yields the UV singularities of the bare PDF at one loop. Following eq. (3.3), we derive the renormalisation factor Z_qq that cancels this ultraviolet divergence. Finally, we obtain the splitting function by computing the derivative with respect to the renormalisation scale, eq. (3.5), which yields the well-known one-loop result for the qq splitting function.

The one-loop calculation with on-shell states is straightforward, but at two loops and beyond it becomes complicated to disentangle the UV from the IR in the transverse integrals. To regularise the IR we take the initial states to be off shell, p² ≠ 0. The intermediate expressions become more verbose but introduce no major conceptual issues. As the states are now unphysical, the correlators become gauge dependent. This means that the running of the gauge parameter, ξ → Z_A ξ, has to be taken into account in the O(ǫ⁰) finite terms. A similar observation was made in [32].
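As a reference point for the well-known one-loop result: in the standard expansion P = (α_s/2π) P^{(0)} + …, the textbook kernel and its large-x limit read

\[
P^{(0)}_{qq}(x) \;=\; C_F\left[\frac{1+x^2}{(1-x)_+} + \frac{3}{2}\,\delta(1-x)\right]
\;\underset{x\to1}{\longrightarrow}\;
\frac{2\,C_F}{(1-x)_+} \;+\; \frac{3}{2}\,C_F\,\delta(1-x)\,,
\]

consistent with γ^q_cusp = C_F α_s/π + O(α_s²) and with the one-loop coefficient of B^q_δ being (3/2)C_F in this normalisation.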
Using this method we are able to arrive at an integral representation similar to eq. (3.10) for each two-loop diagram; at two loops it is a two-parameter integral over the plus components of the two loop momenta. As an example, the diagram in figure 3 can be represented as in eq. (3.19). The three denominators correspond to the three Wilson-line propagators after integration over the (−) and transverse components of the two loop momenta. We distinguish the contributions of the real emissions from those of the virtual corrections by applying partial fractioning as in eq. (3.11). The discontinuity of the first propagator in eq. (3.19) is proportional to δ(1−x) and determines the virtual contribution; the other two propagators in eq. (3.19) correspond to real emissions. Each term features infrared divergences, which cancel in the sum of all cuts. Furthermore, we notice that the real-emission cuts yield UV poles that are proportional to δ(1−x) and therefore contribute to the P_qq splitting function. This particular calculation is detailed in appendix A, where we also present the full two-loop results for quarks and gluons, diagram by diagram. Our final results for the splitting functions, eqs. (A.19) and (A.22), reproduce the known results [43, 54–61, 97–99]. These previous splitting-function calculations were performed using different methods, including extracting them from corresponding deep-inelastic structure-function calculations [61], by means of the operator product expansion [54–57, 60, 97–99], by means of the light-cone axial gauge [105, 106], or by relating them to splitting amplitudes [107]. To our knowledge, our direct calculation is the first of its kind. This method has the advantage of showing that not all diagrams contribute to the singular behaviour of the splitting functions in eq. (3.6), and that the coefficient B_δ includes both virtual and real corrections.

Factorisation

As x → 1 the momentum of the final-state parton tends to the initial-state one, meaning that the contribution from soft gluon radiation dominates. This implies a factorisation of the renormalised PDFs at large x, allowing us to separate the hard-collinear divergences from the soft divergences [46, 108]. In the following we consider only diagonal splitting functions; since the formulae apply to both quarks and gluons, we drop the subscript jj on the partonic PDF and related quantities and only specialise when needed. To factorise the PDFs we transform to Mellin space, where convolutions become products; in this space the divergent terms become those of eq. (3.21), and the large-x limit corresponds to the large-N limit. The factorisation works in much the same way as for the form factor, by defining two jet functions and two corresponding eikonal jet functions along with a soft function, eq. (3.22) [46, 108], where the four-velocity β is in the p direction and the labels L and R indicate on which side of the cut the jet functions sit (see figure 2). The renormalised parton distribution functions are defined as pure counterterms in minimal subtraction schemes, because they can depend only on the factorisation scale. Since the hard function H and the jet functions J_i are the only functions with finite terms, their non-divergent terms must cancel, such that eq. (3.22) contains only poles, as expressed in eq. (3.23), where J|_pole has the same meaning as in eq. (2.20), namely only the poles of the jet function.
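The x ↔ N dictionary underlying eqs. (3.20)–(3.21) is standard; for reference,

\[
\tilde f(N) \;=\; \int_0^1 dx\; x^{N-1} f(x)\,, \qquad
\int_0^1 dx\; \frac{x^{N-1}}{(1-x)_+} \;=\; -\ln N - \gamma_E + \mathcal{O}(1/N)\,,
\]

so that the 1/(1−x)_+ term of eq. (3.6) maps to −γ_cusp ln N̄ with N̄ = N e^{γ_E}, while δ(1−x) maps to 1.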
As in the case of the form factor, the soft function S̃_⊓ resums the emission of gluons with vanishing momenta in the eikonal approximation. We shall shortly see, however, that while its ultraviolet behaviour is qualitatively the same as that of the form-factor soft function in eq. (2.13), its infrared behaviour is qualitatively different, as it presents only single poles. The function S̃_⊓ is defined in eq. (3.24) by the Mellin transform of the x-space soft function, where W_⊓ is the Wilson loop with ⊓-shaped contour of eq. (3.25), see figure 1b (in ref. [46] it is defined in axial gauge). Note that the time-ordering operation here acts on the product of the three Wilson lines together. The soft function can be written in this way, despite coming from a cross-section definition, because of the particular relation between path-ordering and time-ordering [101]. The definition (3.24) determines two important properties concerning the analytic structure of W_⊓, as argued in [101]. First, the soft function has support in the physical region x ≤ 1 only if the singularities of W_⊓ are located on the positive imaginary axis in the complex y-plane: if this is the case, for x > 1 we can close the y integration contour in eq. (3.24) through the lower half-plane, getting a vanishing result. Furthermore, the reality of the soft function implies that W_⊓ is unchanged by the transformation y → −y followed by complex conjugation. Both conditions are satisfied if W_⊓ is a holomorphic function of the variable ρ defined in eq. (3.26). In section 4 we show that the renormalised W_⊓ can be written as in eq. (3.27), where the factor √2 is introduced in order to identify µ as the MS renormalisation scale. The quantity Γ_⊓ admits Casimir scaling to three loops, with the scaling determined by the representation of the Wilson lines in eq. (3.25). Following ref. [101], the soft function S̃_⊓ in the limit of large N, which is conjugate to the behaviour of W_⊓ at large y through the Fourier transform in eq. (3.24), is obtained to leading power in N by replacing y → −iN in eq. (3.27), which leads to eq. (3.28), so that S̃_⊓ admits the evolution equation (3.29). Note that the UV behaviour of S̃_⊓ is double logarithmic: the right-hand side of eq. (3.29) is dominated by γ_cusp(α_s(µ²)) log µ², and therefore it has the same UV behaviour as that of the form-factor soft function S in eq. (2.16).

As before with the form factor, we seek to isolate the hard-collinear and the purely soft contributions, now from the Mellin transform (3.20) of the splitting functions in eq. (3.4), P̃(N, α_s). The following argument is in the spirit of [63]. As mentioned earlier, the bare PDFs f̃^bare(N, ǫ) formally vanish because they are scaleless in dimensional regularisation [109]. They feature UV divergences, which are renormalised by the splitting functions P̃(N, α_s) through Z̃(N, α_s, ǫ), see eq. (3.3). They are also infrared divergent, because there are massless on-shell incoming partons; the IR divergences are the same as in the renormalised PDFs described by eq. (3.23). In perturbation theory this means that in f̃^bare(N, ǫ) the IR poles match the UV poles. In a minimal subtraction scheme the factor Z̃ in eq. (3.3) consists of only poles. We are then able to construct f̃^bare(N, ǫ) in a way that separates the UV from the IR, eq. (3.32). The kinematic dependence in the argument of the logarithm cancels upon identifying p·n/(β·n) = p·u/(β·u) = p⁺/β⁺. We now require the Mellin transform of eq. (3.6) at large N, eq. (3.33) [46, 101–103]. Substituting this into eq. (3.32), the dependence on γ_cusp drops out.
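The mechanism by which such λ² integrals generate poles can be made explicit at leading order in the d-dimensional coupling (a schematic illustration; higher orders involve the full d-dimensional beta function of eq. (2.2)):

\[
\alpha_s(\lambda^2,\epsilon) \;=\; \alpha_s(\mu^2)\left(\frac{\lambda^2}{\mu^2}\right)^{\!-\epsilon} + \mathcal{O}(\alpha_s^2)
\quad\Longrightarrow\quad
\int_0^{\mu^2}\frac{d\lambda^2}{\lambda^2}\;\alpha_s(\lambda^2,\epsilon)
\;=\; -\,\frac{\alpha_s(\mu^2)}{\epsilon} + \mathcal{O}(\alpha_s^2)\,,
\]

where the integral converges for ǫ < 0 and is analytically continued, producing the infrared pole; an additional factor of ln(µ²/λ²) in the integrand produces a double pole in the same way.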
This shows that the factor √2 present in eq. (3.27) is indeed necessary for µ to be identified as the MS scale. Comparing the non-logarithmic terms in eqs. (3.32) and (3.33) we finally arrive at the relation (3.34). This equation mirrors the form-factor equation for γ_G, eq. (2.22): in both equations the same hard-collinear anomalous dimension γ_{J/𝒥} is present. We now use its universality to extract Γ_⊓ at two loops from the above equation. As in the form-factor case, to specialise to quarks or gluons we simply add a superscript i = q, g. Up to two loops the expressions for B_δ may be read off the results in eqs. (A.19) and (A.22) of the calculation in the appendix, in agreement with refs. [54–61]; they are quoted in eq. (3.35). Substituting these results into eq. (3.34), along with the values of γ^i_{J/𝒥} calculated in eqs. (2.25) and (2.26), we arrive at the same quantity Γ_⊓ for quarks and gluons up to an overall Casimir factor, eq. (3.36). The fact that Casimir scaling is recovered is of course expected, as this quantity is defined by Wilson lines; nevertheless, recovering it by subtracting non-eikonal quantities is a non-trivial consistency check. It is worth noting that only the ζ₃ term differs between Γ_⊓/2 and Γ_∧ in eq. (2.24). The factor of two is present because there are two cusp contributions for the ⊓ contour as opposed to one for the ∧ contour; the different coefficient in front of ζ₃ will be discussed further in section 5. We have thus found the anomalous dimension that controls the non-collinear soft divergences of the diagonal DGLAP kernels, by separating it from the hard-collinear behaviour that is identical to that of the form factor. We shall now verify the above result, eq. (3.36), by a direct calculation of the Wilson loop W_⊓.

The derivation of eq. (3.27) consists of two parts: first we compute the bare diagrams and the UV counterterms related to the renormalisation of the QCD coupling constant; then we subtract the short-distance singularities associated with the Wilson-line operators, thus completing the renormalisation of log W_⊓. The non-Abelian exponentiation theorem [110–113] allows us to determine log W_⊓ directly by computing only the webs that capture the maximally non-Abelian colour factors of each Feynman diagram, as defined in [113]. Moreover, log W_⊓ has a simpler singularity structure compared to W_⊓, which allows us to set up the renormalisation procedure directly at the level of the webs. We parameterise the contour of the Wilson loop as in eq. (4.1), and use the configuration-space Feynman rules of eq. (4.2) for the gluon propagator in Feynman gauge and for the gluon emission from the eikonal lines, where T^a is the SU(N) generator in the appropriate representation and 𝒩 = −Γ(1−ǫ)/(4π^{2−ǫ}). In section 4.1 we consider the one-loop calculation of log W_⊓ and then establish its general form before and after renormalisation. In section 4.3 we perform the calculation at two loops, verifying the general structure and obtaining an explicit result for Γ_⊓ consistent with eq. (3.36).

One-loop calculation

As a direct consequence of the Feynman rules given above, all diagrams that feature a gluon exchange between two lines with the same lightlike velocity v are proportional to v² = 0 and therefore vanish automatically.
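For orientation, the configuration-space gluon propagator entering eq. (4.2) has the standard Feynman-gauge form in d = 4 − 2ǫ dimensions; in our transcription (overall signs depend on the metric convention, and the normalisation is the 𝒩 quoted above),

\[
\langle A^a_\mu(x)\,A^b_\nu(0)\rangle \;=\; g_{\mu\nu}\,\delta^{ab}\,\mathcal{N}\,\big(-x^2+i0\big)^{\epsilon-1}\,,
\qquad \mathcal{N} \;=\; -\,\frac{\Gamma(1-\epsilon)}{4\pi^{2-\epsilon}}\,,
\]

so that contracting two eikonal vertices produces web integrands proportional to v₁·v₂, which explains why exchanges between lines of equal lightlike velocity vanish.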
At one loop there are only two non-vanishing webs contributing to log W_⊓; they differ only by a translation and therefore yield the same result, eq. (4.4), where C_i, with i = A, F, is the quadratic Casimir in the adjoint or in the fundamental representation. We notice that the integral over the parameter t diverges both in the UV limit t → 0 and in the IR regime t → ∞. This is a consequence of the absence of any scale associated with the integration over an infinite Wilson line, and it implies that the bare diagram in eq. (4.4) yields a vanishing contribution. Nevertheless, the diagram is non-trivial after the renormalisation procedure, which subtracts the divergence at t → 0 and allows us to define the integrand in eq. (4.4) uniquely. In order to expose the analytic structure of eq. (4.4) in terms of the variable ρ defined in eq. (3.26), we rotate the path along the negative imaginary axis in the complex t-plane and then change variables, t = −i√2 λ, obtaining eq. (4.5). The complete result for log W_⊓ at one loop is given by twice the contribution of eq. (4.5). It is convenient to write it with the factor (4πe^{γ_E})^ǫ absorbed into the MS running coupling, as in eq. (4.6). The label "bare" reminds us that eq. (4.6) still contains the UV divergences associated with the cusps of the Wilson loop of eq. (4.1), which must be subtracted before IR singularities can be identified. Indeed, it is convenient to show explicitly that eq. (4.6) is independent of the renormalisation scale, by writing the running coupling as in eq. (4.7), which leads to the expression in eq. (4.8).

Exponentiation and renormalisation

The integrand in eq. (4.8) is finite in the limit ǫ → 0, and the singularities of log W_⊓ arise only upon integration over λ and σ. In particular, following the coordinate-space analysis of refs. [72, 114, 115], we distinguish three possible types of singular behaviour: cusp singularities, associated with the limit λ ≃ σ → 0 in which all the vertices approach a cusp of the Wilson loop; collinear singularities, which arise if either λ or σ approaches the cusp while the other parameter stays finite; and, finally, the large-distance region λ → ∞, which determines the IR pole. At higher perturbative orders, individual diagrams feature soft and collinear subdivergences when a subset of the vertices approaches one of these limits, giving rise to poles of higher order compared to those in eq. (4.8). However, owing to the exponentiation property, upon considering the logarithm of the Wilson-line correlator all the subdivergences cancel in the sum of webs at each perturbative order [72, 108, 115–117]. It is always possible to organise the calculation of log W_⊓ such that the integral over the position of the vertex located at the largest distance along the infinite Wilson line is performed last. The single infrared pole is then generated only in the final integration, while all the subdivergences of individual diagrams cancel in the sum of webs. This procedure, which follows the prescriptions of ref. [72], allows us to generalise the representation of eq. (4.8) to all orders, eq. (4.9), where the integrand w has an expansion in ǫ that involves only non-negative powers, eq. (4.10). The representation of eq. (4.9) is analogous to the one derived in [72] for the soft function of the form factor, defined in eq. (2.13), with the difference that in the latter case the integrals over both parameters are unbounded.
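In schematic form (our rendering; the precise bounds and arguments are those fixed in eqs. (4.9)–(4.10)), the representation reads

\[
\log W_\sqcap^{\rm bare} \;=\; \int \frac{d\lambda}{\lambda}\,\frac{d\sigma}{\sigma}\;
w\Big(\alpha_s\big(\tfrac{1}{\lambda\sigma},\epsilon\big),\,\epsilon\Big)\,,
\qquad
w(\alpha_s,\epsilon) \;=\; \sum_{n\geq 0} w_n(\alpha_s)\,\epsilon^n\,,
\]

so that the double UV pole arises from the region λ ≃ σ → 0 through w₀, while the single IR pole is produced by the final, unbounded λ integration.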
This is consistent with the presence of a double pole of long-distance origin in the form factor, as compared to the single pole of this type arising in eq. (3.27). We now proceed with the renormalisation of the singularities of short-distance origin that are present in the bare expression of eq. (4.9). Following [72], we notice that the integral of w₀ in eq. (4.9) generates double UV poles, which are subtracted by cutting the integration domain to λ < 1/µ, σ < 1/µ in eq. (4.9), where µ defines the subtraction point. The contributions of w_i with i ≥ 1 generate at most one UV singularity, which we subtract in the last integration. In conclusion, we derive the representation (4.11) for the sum of renormalised webs in configuration space, where we performed the integral over σ by expanding the coupling constant α_s(1/(λσ)) at the scale 1/λ², as in eq. (4.7). Eq. (4.11) leads directly to the result of eq. (3.27) from the web integrals in coordinate space, and it allows us to extract the coefficients γ_cusp and Γ_⊓. At one-loop order we expand the web in eq. (4.8); applying the renormalisation procedure described above, we find eq. (4.14), where we have used the fact that W_⊓ consists of pure poles. The pole is infrared, and it is exactly the one that replicates the soft divergence of the PDF. Comparing eq. (4.14) with the poles of eq. (3.27) gives eq. (4.15).

Two-loop calculation

We now apply the renormalisation procedure to the two-loop webs. Only a few diagrams contribute at this order; they are represented in eq. (4.16), where we omit the configurations that are obtained simply by mirror symmetry. The diagrams in the first row of eq. (4.16) are computed following the same steps as in the one-loop case: we write the bare webs using the representation in eq. (4.9) and define the integrands w^(2)_i as in eq. (4.17). From now on we drop the arguments of d_i and w_i, which are understood to have the above arguments unless otherwise stated. The first diagram, d_SE, is obtained from eq. (4.4) by replacing the gluon propagator of eq. (4.2) with its one-loop expression, eq. (4.18), where we discard the longitudinal components of the propagator, proportional to ∂_µ∂_ν, because they decouple from the amplitude via Ward identities [72]. The result, eq. (4.19), is in agreement with the results of [62, 72]. We notice immediately that, at the two-loop level, the representation (4.17) of the individual webs has subdivergences, which are manifest as explicit poles in the integrand w_i. In this case the subdivergence is cancelled by the coupling renormalisation in the QCD Lagrangian, which will be taken into account later in this section. The double-gluon-exchange diagrams give the integrands w^(2) quoted in eq. (4.20); both results are in agreement with the maximally non-Abelian contributions of the diagrams W_c and W_d reported in [62]. The integrand of the diagram d^(2)_{X₃}, eq. (4.21), also features explicit subdivergences; these are not related to QCD renormalisation and will cancel in the sum of all webs. Eq. (4.21) is finite as ǫ → 0, as we discuss in more detail in appendix B.2. The diagrams in the second row of eq. (4.16) involve the three-gluon vertex, whose Feynman rule in configuration space is given in eq. (4.22). We notice that the diagrams d_Ys and d_YL are not related by symmetry transformations, because the former has two gluon attachments on the segment of finite length y, while the latter has two emissions from the semi-infinite line. We begin with the calculation of d^(2)_{Ys}, eq. (4.23), where we introduce a normalisation factor K_Y (proportional to ig⁴). We write the differential operators in eq.
(4.23) in terms of total derivatives, as in eq. (4.24), which allows us to perform immediately the integrals over s₂ and s₁, respectively in the first and in the second term in curly brackets, by evaluating the appropriate propagator at the endpoints of the integration interval. Eq. (4.23) then becomes eq. (4.25), expressed in terms of functions for which the prescription +i0 is understood in every factor appearing in the integrals. Each function has a clear diagrammatic interpretation, because the integrands are products of scalar propagators in coordinate space. Thus d_Ys is decomposed into a sum of diagrams, as discussed in [72], giving eq. (4.28), in one-to-one correspondence with the three terms in eq. (4.25). Let us discuss the singularity structure of the separate integrals. The only one that is separately finite is eq. (4.29a), which corresponds to the integrand of the first diagram in eq. (4.28). Eq. (4.29b) has single and double poles that will cancel the corresponding singularities in eq. (4.20): indeed, the second diagram in eq. (4.28), associated with the integrand in eq. (4.29b), has subdivergences of short-distance origin when the three-gluon vertex approaches the cusp, similarly to the behaviour shown by diagram d^(2)_{X₂}. The single pole in eq. (4.29c) is entirely due to the presence of a one-particle-irreducible UV-divergent subgraph in the last diagram in eq. (4.28); this singularity is therefore removed by the counterterms of the QCD Lagrangian. Using these results, the total contribution of the diagram d^(2)_{Ys} in eq. (4.25) agrees with the corresponding expression for diagram W_e in [62]; in the notation of eq. (4.17) it reads as in eq. (4.30).

The next diagram, d_YL, differs from eq. (4.23) only by the two gluon attachments being on the semi-infinite Wilson line rather than on the finite one. Once again we write the three-gluon vertex in terms of total derivatives and decompose the diagram, as in eq. (4.31). For d^(2)_{YL} the Wilson line is infinite and the endpoint term at infinity is absent. This result was shown in [72] by introducing a cutoff on the infinite line and carefully taking the limit to infinity, which does not commute with the integration over z; the same conclusion is reached by computing d^(2)_{YL} in momentum space, as shown in appendix B.1. Using eqs. (4.29a), (4.29b) and (4.29c) we get eq. (4.32). By construction, this expression has the same singularities as w^(2)_{Ys}, because the integrand of the diagram d^(2)_{YL} differs from d^(2)_{Ys} only by the function in eq. (4.29a), which is finite. We compute the diagram d^(2)_{3s} using the same procedure, eq. (4.33); it is finite because it involves only the function in eq. (4.29a). We renormalise the UV divergences associated with the QCD vertices and propagators by means of the one-loop counterterm, in which d^(1) denotes the result of the one-loop diagram, eq. (4.5). Finally, we sum all the diagrams depicted in eq. (4.16), including the symmetric configurations which are not shown there, getting eq. (4.36), where to arrive at the second line we used the identity 2d^(2)_{Ys} = 2d^(2)_{YL} + d^(2)_{3s}, obtained by comparing eqs. (4.28), (4.31) and (4.33). The terms in curly brackets in the final expression are the same that appear in the calculation of the cusped Wilson loop with two semi-infinite lightlike lines, discussed in [72]. The last two contributions in eq. (4.36) are special to the configuration of W_⊓, where the semi-infinite lines are connected by a finite lightlike segment. The final expression in eq. (4.36) follows the decomposition of polygon-shaped Wilson loops presented in [72].
The distinction between the terms inside and outside the curly brackets in eq. (4.36) stems from the structure of their singularities. The former give rise to cusp configurations characterised by double UV poles, and therefore they can be written in terms of the representation of eq. (4.9) with a finite integrand. The latter contributions generate at most a single pole, associated with the configurations where all the vertices simultaneously approach the finite lightlike segment, and therefore their combination gives rise to an integrand of order ǫ in eq. (4.9), as we verify by expanding eqs. (4.21) and (4.34), leading to eq. (4.38). Expanding the latter in ǫ, we obtain an expression featuring Γ^(2)_⊓, the two-loop contribution to Γ_⊓ of eq. (3.36). Now we renormalise using the procedure outlined in the one-loop case, see eq. (4.11): for the terms of O(ǫ⁰), the cusp terms, both integrals run from 1/µ; for all subsequent terms the σ integral is performed first, integrating from 0 to ρ/√2, and the parameter λ is then integrated from 1/µ. By doing this we arrive at eq. (4.42), where again we have used the fact that W_⊓ consists of pure poles. The result is reproduced by eq. (3.27). By this point we have determined the anomalous dimension Γ_⊓ in two different ways: first by extracting it from the evolution of PDFs using the universality of the hard-collinear poles γ_{J/𝒥}, and now by a direct computation of the renormalisation of the corresponding Wilson-line correlator.

Relating Wilson-line geometries to physical quantities

In this section we establish a set of relations between different physical quantities, based on the properties of the Wilson loops discussed in section 4. In section 5.1 we show that the single infrared poles in the quark and in the gluon form factors are related to the corresponding diagonal terms in the DGLAP kernels by a precise eikonal quantity that is associated with the geometry of Wilson loops with lightlike lines. The latter emerges as the difference between the anomalous dimensions associated with a wedge-shaped Wilson loop with two semi-infinite lines and a ⊓-shaped Wilson loop. This difference, in turn, can be expressed as the anomalous dimension associated with a parallelogram (or, more generally, a polygon) with lightlike segments. In section 5.2 we use this relation to extract the anomalous dimension associated with a polygonal Wilson loop to three loops, which is related to the soft anomalous dimension appearing in the resummation of threshold logarithms in the Drell-Yan process. Finally, we extract the fermionic components of the four-loop result in the planar limit.

Relating the form factor with the DGLAP kernels

The direct calculation of the anomalous dimension Γ_⊓ in section 4 confirms the identity in eq. (3.34), which follows from the factorisation of the parton distribution functions at large x. This identity, rewritten as eq. (5.1), is interpreted as a decomposition of B_δ, defined in eq. (3.33) as the coefficient of the delta distribution in the splitting functions in the limit x → 1, into the contribution of hard-collinear radiation, γ_{J/𝒥}, and the purely soft one, encoded by Γ_⊓. In eq. (5.1) we suppress the dependence on the external parton: the relation holds for both quarks and gluons. The hard-collinear contribution γ_{J/𝒥} is process independent, as discussed in section 2.2 in the context of the infrared factorisation of the form factor. Indeed, eq. (2.22) provides the analogue of eq.
(5.1); rewritten as eq. (5.2), it decomposes γ_G, the anomalous dimension that determines the single poles of the form factor, into the same hard-collinear contribution γ_{J/𝒥} and a purely soft part governed by Γ_∧. By comparing eq. (5.1) and eq. (5.2) we derive the relation, eq. (5.3), which connects the single poles of the form factor with the diagonal DGLAP kernels. The two quantities appearing on the left-hand side of eq. (5.3) depend on both the spin and the colour representation of the external particles in a non-trivial way. In contrast, the right-hand side involves the anomalous dimensions of two eikonal quantities, which depend only on the colour representation of the particles and obey Casimir scaling up to three loops. Therefore, eq. (5.3) allows us to interpret the function f_eik of eq. (1.1), which was introduced in [44, 45] as the difference f_eik ≡ γ_G − 2B_δ, in terms of the anomalous dimensions of Wilson-line correlators. By substituting the two-loop expressions of Γ_⊓ and Γ_∧ from the direct calculations, respectively in eqs. (3.36) and (2.24), into the right-hand side of eq. (5.3), we reproduce the two-loop result obtained from the difference of γ_G and B_δ in ref. [45], quoted in eq. (5.4), thus verifying eq. (5.3) through two loops.

The difference of anomalous dimensions appearing on the right-hand side of eq. (5.3) also has a geometric interpretation, which suggests defining it as a universal quantity. Following the analysis of the singularities of Wilson loops with lightlike lines detailed in ref. [72], and the calculation in section 4 above, the anomalous dimensions Γ_⊓ and Γ_∧ receive contributions only from the singular configurations in which all the vertices approach one lightlike line. In this sense, these anomalous dimensions depend only on the features of each lightlike line separately, and they are insensitive to the global shape of the Wilson loop. Both Γ_⊓ and Γ_∧ encode the collinear singularities associated with the two semi-infinite lightlike lines, but the former receives an additional contribution from the configurations that are collinear to the finite segment. Such singularities differ from those originating from infinite lines by the presence of endpoint contributions, as we showed by computing the diagrams d^(2)_{Ys} and d^(2)_{YL} in eqs. (4.28) and (4.31). It is therefore useful to define the difference of Γ_⊓ and Γ_∧, eq. (5.5), as the anomalous dimension Γ^fin_co ≡ Γ_⊓ − Γ_∧ that captures the collinear singularities of a finite lightlike segment. Similarly, in eq. (5.6) we define the collinear anomalous dimension associated with infinite lines in terms of Γ_∧ only, Γ^inf_co ≡ Γ_∧/2. The two-loop expression of Γ^fin_co coincides with the right-hand side of eq. (5.4), while Γ^inf_co to the same order is obtained from eq. (2.24). Comparing the two expressions we get eq. (5.7). The factor of two multiplying Γ^inf_co is consistent with the fact that the finite Wilson line is obtained from a contour involving two semi-infinite lines; the remaining discrepancy, proportional to ζ₃, is related to the endpoint contributions in eq. (4.37).

The geometric interpretation of Γ^fin_co and Γ^inf_co allows one to derive the anomalous dimensions of Wilson loops whose contours consist of arbitrary, possibly open, polygons with lightlike lines. The first example is the parallelogram-shaped Wilson loop W_□, which features four lightlike segments of finite length (see figure 1c) and whose renormalisation was given in [62]; its evolution equation, eq. (5.8), involves the four-vectors x and y that define the sides of the parallelogram. Γ_□ receives contributions from the collinear divergences of four finite segments in lightlike directions and is therefore given by Γ_□ = 4Γ^fin_co, eq. (5.9). By replacing in the latter the two-loop value of Γ^fin_co from eq.
(5.4), we reproduce the results for Γ_□ in ref. [62]. In the case of a generic polygonal Wilson loop W_i with lightlike lines, the evolution equation in eq. (5.8) generalises accordingly [62, 72], with the sum extending over all the cusps in the contour, where x_a and x_{a−1} define the sides adjacent to the cusp a. The anomalous dimension Γ_i collects all the collinear contributions, and it can be derived by summing the appropriate multiples of Γ^fin_co and Γ^inf_co, given by the number of finite and infinite sides, respectively. Finally, having identified the difference Γ_⊓ − Γ_∧ = Γ_□/4 in eq. (5.9), we may notice that eq. (5.3) provides yet another identity relating the form factor, the DGLAP kernel and the Wilson loop W_□ computed in ref. [62], namely eq. (5.11), thus explaining the numerical agreement of these two quantities computed respectively in ref. [45] and in ref. [62].

The Drell-Yan soft function and Γ_□ beyond two loops

We now relate the abstract W_□ to a physical quantity relevant for soft-gluon resummation. It is known that the Drell-Yan cross-section factorises near threshold [1–5] (see also the more recent literature in Soft-Collinear Effective Theory [6, 17]). The hard-collinear region is described by the PDFs, the hard function by a squared timelike form factor, and the soft region by Wilson lines in the DY configuration [52]. This leads to the all-order relation γ_G − 2B_δ = Γ_DY/2, where Γ_DY is the anomalous dimension associated with the DY configuration of Wilson lines (see e.g. [5, 6]). Using eq. (5.11) we then obtain eq. (5.12). The ideas in section 5.1 allow us to test this identification. The three-loop value of γ_G − 2B_δ was first extracted in [45], using the three-loop results for γ_G and B_δ. Expanding Γ_□ in powers of the coupling, eq. (5.13), and using the values in [45], we can then state that Γ_□^(1) = 0 (5.14), with the two-loop and three-loop coefficients given in eqs. (5.15) and (5.16). As mentioned in section 5.1, the two-loop Γ_□^(2) was calculated explicitly using Wilson lines in [62] and agrees with the extracted value in eq. (5.15). The three-loop Γ_□^(3) displayed in eq. (5.16) should be regarded as a prediction, to be verified by direct calculation. For the Drell-Yan configuration of Wilson lines, Γ_DY was computed at two loops in [52] and at three loops in [20]. The three-loop Γ_DY coincides with eq. (5.16); this is a non-trivial three-loop test of the identification in eq. (5.12), as we arrive at the same value for Γ_□.

At four loops the complete picture in QCD for Γ_□ is unknown, but in planar N = 4 super Yang-Mills the four-loop result for the difference γ_G − 2B_δ was found in [39]. We identify this as Γ_□ in planar N = 4 and quote the corresponding result, eq. (5.17). It is well known that to reach this result one can simply take the QCD result, take the limit N_c → ∞, and keep the maximal-transcendental-weight term at each order in α_s; we can do this at two and three loops by looking at eqs. (5.4) and (5.16) respectively. Above three loops, strict Casimir scaling has been proven to fail [27, 28, 35]. As such, we need to distinguish between quarks and gluons, or rather between particles in the fundamental and in the adjoint representation. We focus on the quark case. To compute Γ_□^(4),q we need B^q_δ and γ^q_G at four loops. The state of the art is that some colour structures are known for both B^q_δ [25, 27] and γ^q_G [26] in the planar limit, N_c → ∞. Using the values in [26, 27] we extract the terms of Γ_□^(4),q quoted in eq. (5.18) in that limit, where we have used T_f = 1/2. We are unable to deduce the N_c⁴n_f⁰ term, as it is unknown for γ_G, though it is known for B_δ [27].
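The generalised Casimir-scaling rule invoked below (our schematic transcription of the conjecture of [29]) replaces the colour factors of the quark quantity by those of the gluon one via

\[
C_F \;\to\; C_A\,,\qquad
\frac{d_F^{abcd}\, d_j^{abcd}}{N_F} \;\to\; \frac{d_A^{abcd}\, d_j^{abcd}}{N_A}\,,\qquad j = F, A\,,
\]

with N_F = N_c and N_A = N_c² − 1 the dimensions of the fundamental and adjoint representations.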
In the planar limit N_c → ∞ the quartic Casimirs d^(4)_FF ≡ d_F^{abcd}d_F^{abcd} contribute to the colour factor in eq. (5.18), since d^(4)_FF grows like N_c⁴ at large N_c, so that d^(4)_FF/N_F overlaps with the N_c³n_f structure. This means that we are unable to fully reconstruct the Casimir-scaling form of the N_c³n_f term. The full (planar and non-planar) contribution of the quartic Casimir colour factor d^(4)_FF to γ_G is known [30], but not that to B_δ; only low-N values of the splitting functions, and hence of γ_cusp, are known [29, 32]. In [29] it was also found that, within numerical errors, the quartic Casimir contribution to the cusp anomalous dimension does not depend on the representation, i.e. it is the same for gluons and quarks. It was conjectured that, although Casimir scaling is violated, there is a generalised version in which the quartic factors are simply exchanged depending on whether the external particles are gluons or quarks, with N_{F/A} denoting the dimensions of the corresponding representations, namely N_F = N_c (with C_A = N_c) and N_A = N_c² − 1 = 2N_cC_F. The relation in eq. (5.3) may be used as an interesting test of a generalised Casimir-scaling extension of the anomalous dimension Γ_□. However, the quartic Casimirs do not appear in the n_f² or n_f³ terms of eqs. (5.19) and (5.20); we are therefore able to use these terms for the leading-N_c, Casimir-scaling part of Γ_□^(4). We put these terms together with the conjectured generalised scaling to create an ansatz for Γ_□^(4), where the ellipsis represents all terms subleading in N_c, including the n_f¹ and n_f⁰ terms, which are not obtained from the quartic Casimirs when these are expanded in N_c.

Conclusions

We have presented a detailed study of the infrared factorisation of form factors and PDFs at large x using a common formalism. By identifying the universal contributions from the hard-collinear region in both quantities, namely those controlled by the anomalous dimension γ_{J/𝒥}, we were able to derive the relation in eq. (5.3), reproduced as eq. (6.1). That is, the difference between the anomalous dimension describing the single poles of the on-shell quark (gluon) form factor and that associated with the δ(1−x) term in the large-x limit of the quark (gluon) diagonal DGLAP splitting function reduces to a difference of corresponding eikonal quantities, Γ_∧ and Γ_⊓, defined directly in terms of Wilson loops. Furthermore, based on the configuration-space origin of the contributions to these two eikonal quantities, we concluded that their difference simply corresponds to the anomalous dimension associated with a closed polygonal Wilson loop, such as the parallelogram analysed first in ref. [62]: the contributions of the semi-infinite Wilson lines in W_⊓ and W_∧ cancel in the difference. We emphasise that while each of the quantities on the left-hand side of eq. (6.1) depends in a non-trivial way on the spin of the partons, in addition to their colour representations, yielding very different results for quarks and for gluons, the eikonal quantities, by definition, depend only on the colour representation of these partons, and in particular admit Casimir scaling through three loops. We stress that the relation in eq. (6.1) is expected to hold to all orders in perturbation theory. An obvious next step is to compute Γ_□ to three loops in order to check it explicitly at this order. In establishing the relation between Γ_□ and Γ_⊓ − Γ_∧ we used the fact that singularities arise only from configurations where all the vertices approach a cusp, or from ones where they all approach a particular lightlike segment [72]. This underlies the cancellation of the two infinite segments, isolating the remaining finite segment.
The very same logic may be applied to other, more complicated Wilson-line geometries involving both finite and semi-infinite lightlike segments. Specifically, the double pole is always governed by γ_cusp, while the single-pole anomalous dimension is written as a sum of building blocks, each corresponding to either a finite or a semi-infinite segment, contributing Γ^fin_co and Γ^inf_co respectively. An example of such a construction with only finite segments can be found in refs. [118–120], where polygons of up to six sides were computed to two loops. Following our discussion in section 5.1, it may be interesting to explicitly compute other Wilson-line configurations involving both finite and infinite segments. A simple example of direct relevance to physics is the non-forward amplitude, generalising the ⊓ configuration.

One interesting aspect that we have encountered is that W_⊓ behaves very differently in the ultraviolet as compared to the infrared, as can be seen explicitly in eq. (4.42). In the ultraviolet one encounters a double-logarithmic dependence on the scale µ², originating from the cusp singularity, while in the infrared there is just a single pole. This stands in sharp contrast to W_∧, corresponding to the soft function of the form factor, eq. (2.16) (or, more generally, to the soft functions of multi-leg amplitudes), where the infrared behaviour entails a double pole, mirroring the ultraviolet. The absence of any distance scale in the relevant Wilson-line contour implies such mirroring; indeed, the symmetry between the ultraviolet and the infrared is broken in W_⊓ due to the presence of the scale β·y. The single-pole character of W_⊓ can be seen as intermediate between W_□, which, lacking infinite rays, is infrared finite, and W_∧, which is double logarithmic.

The relation in eq. (5.12) between the soft anomalous dimension in Drell-Yan production and the parallelogram W_□ is interesting in its own right. The Drell-Yan soft function involves real gluon emission diagrams, where the propagators connecting the amplitude side to the complex-conjugate one are cut, while in W_□ there are no cut propagators. A possible way to explain this is to recall that a parallelogram made of four lightlike segments features two cusps where the exchanged gluons span timelike distances and two others where the gluons span spacelike distances. The latter correspond to diagrams that feature in virtual corrections to the Drell-Yan process (these propagators are not cut). In turn, the former are naturally time-ordered, because there path-ordering coincides with time-ordering (just as in the case of W_⊓, discussed below eq. (3.25)), and they could be computed using either cut propagators or ordinary ones, giving the same answer. In this way the calculation of the parallelogram can be mapped into that of the Drell-Yan soft function. It would be interesting to turn this argument into a proof relating the two Wilson-line configurations directly. It would also be interesting to explore in this context the conformal mapping techniques of refs. [50, 51]. Another interesting direction to explore is the connection between partonic amplitudes in the Regge limit and anomalous dimensions of Wilson lines. In particular, one would like to derive the relation between the Regge trajectory and W_∧ in eq. (1.6) and understand its generalisation to higher orders.
A Direct calculation of the splitting functions at large x

In this appendix we present a calculation of the parton-distribution splitting functions directly from the definitions (3.1) and (3.2). As explained in the main text, we take the incoming partons to be off shell, p² ≠ 0, but with zero transverse momentum, p = (p⁺, p²/(2p⁺), 0_{d−2}). This regulates the infrared, such that we are exposed only to UV poles. To calculate a single diagram there is a general strategy, outlined in the main text, with a slight change for the off-shell case:

• Write down the integral using the Feynman rules.
• Integrate over the transverse components of all loop momenta. As we are only interested in the UV-divergent terms, this can be simplified to calculating iterated bubbles at two loops, although we do need the finite terms of the one-loop graphs to perform the renormalisation.
• Rescale the plus components to arrive at a general form in which the denominators correspond to the Wilson-line propagators.
• Take the discontinuity in x and perform the final integrations. At two loops these integrals often evaluate to ₂F₁ hypergeometric functions.
• Finally, expand in ǫ using the standard distributional identity recalled below.

For brevity we introduce the shorthands L, for the logarithm associated with the off-shell regulator, and P, for the plus distribution 1/(1−x)_+, used below; all the following expressions are valid up to, but not including, terms that diverge as slowly as log(1−x). For the PDFs we expand in powers of α_s/(4π).

Example. Let us illustrate the above steps, choosing the two-loop diagram in figure 4e. The Feynman rules for the diagram, in Feynman gauge, give an expression in which k·u = xp⁺ and the +iε prescription is implied. Recall that the Wilson-line direction u is in the (−) direction, u = (0, 1, 0_{d−2}). We define f^(2),(e)_qq to be the contribution to f^(2)_qq of diagram 4e. When integrating over the q₁⁻ and q₂⁻ components using the residue theorem, constraints are placed on the plus momenta. The integrations over the transverse components are just iterated bubbles. Rescaling the plus components and taking the discontinuity, there are three separate terms to calculate: the first is the virtual cut, while the second and third are real cuts. In their sum we see a salient feature of two-loop diagrams: individual cuts are ǫ⁻⁴ and ǫ⁻³ divergent. These are poles arising when the emitted gluon goes soft, and they cancel in the sum of real and virtual cuts. The remaining divergences are UV, whose renormalisation gives the splitting functions. The L and L² terms are present in individual diagrams but cancel in the combination, such that the splitting functions diverge as in eq. (3.33). Another feature is that the real cuts, eqs. (A.10) and (A.11), contribute to B_δ. Rather than inferring the coefficient of δ(1−x) from sum rules or momentum conservation, we are thus able to state that, in the off-shell extraction of the splitting functions, real cuts contribute to δ(1−x). We now calculate the two-loop diagonal splitting functions at large x for quarks and gluons.

A.1 Calculating P_qq

The one-loop contributions are the diagrams of figures 4a and 4b and the self-energy on each external leg; their sum depends on ξ, the gauge parameter in a general covariant gauge. The two-loop contributions are shown in figures 4c–4m; they exclude self-energies on external legs. Their calculation was performed in Feynman gauge, ξ = 1, and the results are listed diagram by diagram.

Figure 4: Large-x divergent contributions to the quark-quark parton distribution up to two loops. The grey blob represents a self-energy insertion. Each diagram is displayed with its multiplicity factor; insertions on external legs are excluded.
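The ǫ-expansion step in the itemised strategy above uses the standard distributional identity (shown here for a representative power; other powers follow the same pattern):

\[
(1-x)^{-1-2\epsilon} \;=\; -\,\frac{1}{2\epsilon}\,\delta(1-x) \;+\; \left[\frac{1}{1-x}\right]_+ \;-\; 2\epsilon\left[\frac{\ln(1-x)}{1-x}\right]_+ \;+\; \mathcal{O}(\epsilon^2)\,.
\]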
Summing the two-loop contributions with the factors shown in figure 4, we obtain the total two-loop result. At two loops we also need to take into account the running from the one-loop contribution: this is found by replacing ξ → [1 + (α_s/(4πǫ))((10/6)C_A − (4/3)T_f n_f)] ξ and α_s → (1 + α_s b₀/(πǫ)) α_s, after which we specialise to Feynman gauge, ξ = 1. We then find the Z_qq that minimally subtracts the divergences of δ(1−x) + (α_s/4π) f^(1)_qq + (α_s/4π)² f^(2)_qq + …. As the renormalisation is multiplicative, convolutions need to be taken into account for the one-loop-squared terms. Equivalently, the renormalisation can be transformed to Mellin space, eq. (3.20), where the convolutions become products, ensuring that the renormalised combination is finite as ǫ → 0. We can then extract the splitting functions to two loops, where Z_q is the wavefunction renormalisation of the quark in MS. Converting back to x space we find eq. (A.19); notice that all Lⁿ terms cancel. This reproduces B^q_δ, the coefficient of δ(1−x) in eq. (3.35), and shows that the coefficient of P is γ_cusp, as in eq. (2.4).

A.2 Calculating P_gg

The one-loop contributions for the gluon-gluon distribution function are shown in figures 5a and 5b, and the two-loop contributions, shown in figures 5c–5p, are listed diagram by diagram. The extraction of the splitting function from these results is the same as in the quark case, except that instead of Z_q we use the gluon field renormalisation in MS. Performing those steps we find eq. (A.22), which again aligns with B^g_δ in eq. (3.35) and with γ_cusp in eq. (2.4).

We have thus replicated previous splitting-function calculations at large x directly from the definitions (3.1) and (3.2) in a covariant gauge. By taking the incoming partons off shell, p² ≠ 0, we regulate the infrared divergences, allowing the extraction of the UV poles of the PDFs. Although the divergent terms remain gauge independent, the finite terms become gauge dependent. This means that we need to take into account the running of the gauge parameter, ξ → Z_A ξ, in the finite terms, even when working in Feynman gauge.

B Particular two-loop diagrams contributing to W_⊓

In this appendix we elaborate on aspects of the calculation of W_⊓ presented in section 4, considering two specific diagrams for which subtle points arise. In section B.1 we discuss the endpoint contributions in diagram d^(2)_{YL}, and in section B.2 the diagram d^(2)_{X₃} connecting three Wilson lines.

In section 4 we revisited the analysis of non-Abelian contributions to the correlators of finite and semi-infinite Wilson lines [72]. Specifically, we derived the representations of the two-loop diagrams that contain a three-gluon vertex and made a clear distinction between the case where two gluons are emitted from a finite Wilson-line segment and the case where the two emissions emerge from a semi-infinite line, corresponding respectively to diagrams d^(2)_{Ys} and d^(2)_{YL} in (4.16). The difference is that in the former case both endpoint contributions appear, as in (4.28), while in the latter case there is no endpoint contribution from infinity, so the representation of d^(2)_{YL} simplifies to (4.31). Let us now present this calculation in detail in momentum space and show explicitly that this endpoint contribution is indeed absent. Using the Feynman rules given in section 4, diagram d^(2)_{YL} takes a form analogous to eq. (4.23).
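The momentum-space representation used in the next step is the standard Fourier pair for the massless propagator in d = 4 − 2ǫ (our transcription; overall signs depend on the metric convention):

\[
\int \frac{d^dk}{(2\pi)^d}\;\frac{i\, e^{-ik\cdot z}}{k^2+i0}
\;=\; \frac{\Gamma(1-\epsilon)}{4\pi^{2-\epsilon}}\,\big(-z^2+i0\big)^{\epsilon-1}\,,
\]

which matches the configuration-space propagator normalisation 𝒩 quoted in section 4.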
In the equation above, we integrate over z using the momentum-space representation of the propagators, eq. (B.2). After taking the derivatives with respect to s₁β and s₂β and integrating over the infinite line, we obtain an expression in which the prescription +i0 in the denominators ensures the convergence of the integrals for s₁ → −∞. This expression may be conveniently rewritten in a form that leads directly to the representation of eq. (4.31), as we now show. Upon introducing an auxiliary integration constrained by momentum conservation, the representation of the delta function, (2π)^d δ^d(k₁+k₂+k₃) = ∫ d^dz e^{i(k₁+k₂+k₃)·z}, is interpreted as an integral over the position of the scalar "three-gluon" vertex in eq. (4.31). Using eq. (B.2) we recover the expression of the three gluon propagators in coordinate space, carrying momenta k₁, k₂ and k₃. Substituting the definitions in eqs. (4.29a), (4.29b) and (4.29c), we verify the result in eq. (4.31).

B.2 The diagram d^(2)_{X₃} connecting three Wilson lines

In this section we derive the representation, eq. (4.21), of the diagram d^(2)_{X₃} that connects two cusps with a lightlike segment of finite length. Following the discussion of ref. [72], the singularities of webs of this kind are associated with the configuration where all the vertices approach the lightlike segment of finite length. These webs do not contribute to the cusp singularities, because there is no region of configuration space where all the vertices are in proximity of a cusp. Using the Feynman rules in eq. (4.2), the diagram d^(2)_{X₃} carries the factor −C_AC_F/2, which corresponds to the maximally non-Abelian part of the colour factor of the diagram and is exponentiated [110–113]. We expose the overall infrared singularity in the last integration by rewriting the integration domain using θ(t₁ − t₃) + θ(t₃ − t₁) = 1 and changing the order of integrations. The resulting expression still has infrared singularities from the limit t → ∞ in the upper bound of the t′ integral; we therefore decouple the infrared contributions by a change of variables, after which the parameters a₁ and a₂ are integrated immediately. Finally, we apply the change of variables introduced before eq. (4.5) and we obtain d^(2)_{X₃} in the form quoted in eq. (4.21).
On integrable deformations of superstring sigma models related to AdS_n x S^n supercosets

We consider two integrable deformations of 2d sigma models on supercosets associated with AdS_n × S^n. The first, the "η-deformation" (based on the Yang-Baxter sigma model), is a one-parameter generalization of the standard superstring action on AdS_n × S^n, while the second, the "λ-deformation" (based on the deformed gauged WZW model), is a generalization of the non-abelian T-dual of the AdS_n × S^n superstring. We show that the η-deformed model may be obtained from the λ-deformed one by a special scaling limit and analytic continuation in coordinates, combined with a particular identification of the parameters of the two models. The relation between the couplings and deformation parameters is consistent with the interpretation of the first model as a real quantum deformation and the second as a root-of-unity quantum deformation. For the AdS_2 × S^2 case we then explore the effect of this limit on the supergravity background associated with the λ-deformed model. We also suggest that the two models may form a dual Poisson-Lie pair and provide direct evidence for this in the case of the integrable deformations of the coset associated with S^2.

Introduction

Recently there has been significant interest in two special integrable models that are closely associated with the superstring sigma model on AdS_n × S^n. First, in [1] a particular integrable deformation of the AdS_5 × S^5 supercoset model was considered, generalizing the bosonic Yang-Baxter sigma model of [2–4]. Second, in [5,6] (generalizing the bosonic model of [7]) an integrable model based on the F/F gauged WZW model was constructed, which is also closely associated with the AdS_5 × S^5 supercoset. The latter model may be interpreted as an integrable deformation of the non-abelian T-dual of the AdS_5 × S^5 supercoset action. We shall simply refer to the first model as the "η-model" and to the second as the "λ-model". As they contain, as special points, the original F/G coset model and its non-abelian T-dual model respectively, one may suspect that they are related by some sort of duality, provided one properly identifies their parameters. Indeed, we shall provide evidence (in the simplest 2d target-space case) that they are such a pair of Poisson-Lie dual models [8–12], hence representing the two "faces" of a single interpolating or "double" theory. At the same time, it turns out there is also another, more surprising, relation: the η-model can be obtained directly from the λ-model as a special limit (combined with an analytic continuation), which in some sense cuts off the asymptotically flat region.

The special point κ = i of the η-model is a pp-wave background [13] that for low-dimensional examples is equivalent, in the light-cone gauge, to the Pohlmeyer-reduced (PR) model for the coset theory. This provides a direct link between the special limit of the λ-model and the PR model (conjectured in [5,6] and recently made explicit in [14]). This special limit is of particular interest for understanding the relation of the λ-model to the q-deformation of the light-cone gauge S-matrix [15] for q being a phase. For q real the S-matrix is unitary and has been shown to be in perturbative agreement [16,17] with the η-model of [1]. For q equal to a phase, unitarity can be restored [18], and the resulting S-matrix has been conjectured to be related to the λ-model [5,6].
However, as the λ-model has no isometries one cannot fix the associated light-cone gauge and hence there is no apparent connection to the S-matrix of [15]. An important feature of the special limit is that it generates isometries. It is therefore natural to conjecture that taking an appropriate limit in the λ-model associated to the AdS 5 × S 5 supercoset will give the deformed model whose light-cone gauge S-matrix is that of [18]. We shall start in section 2 with a review of the actions of the η-deformed and λ-deformed models, considering in detail the relation between the parameters and also the truncations to the bosonic models. Then in section 3 we shall describe the scaling limit and analytic continuation that allows one to obtain the metric of the η-model from that of the λ-model. We shall discuss the action of this limit on the corresponding supergravity solution of [19,20] in section 4 for the models related to AdS 2 × S 2 supercoset. Finally, in section 5 we will conjecture that the two models form a dual Poisson-Lie pair [8,9] and directly verify this in the case of the integrable deformations of the coset associated to S 2 . In the appendix we shall give different simple forms of the conformally-flat metrics of the deformed models associated with S 2 . .1 Supercoset based actions We shall consider two integrable 2d models based on the supercosets where F is a supergroup (e.g. P SU (2, 2|4) in AdS 5 × S 5 case) and F i and G i are bosonic subgroups. The superalgebraf of F admits the usual Z 4 grading, with the zero-graded part corresponding to the algebra of G 1 × G 2 , and the bilinear form STr = Tr F1 − Tr F2 . The first "η-model" is defined by the deformed supercoset action of [1] (generalizing the bosonic model where g ∈ F and Here Ad g (M ) = gM g −1 , P r are projectors onto the Z 4 -graded spaces off and the constant matrix R is an antisymmetric solution of the non-split modified classical Yang-Baxter equation forf. The overall coupling h is the analog of string tension and η is the deformation parameter. 3 This action possesses the following Z 2 symmetry: parity , h → h , η → −η . (2.4) In the undeformed limit, the action (2.2) reduces to the standard supercoset action [21,22] I h,0 (g) = h 2 d 2 x STr g −1 ∂ + g P g −1 ∂ − g , P = P η η=0 = P 2 + 1 2 (P 1 − P 3 ) . (2.5) The global F symmetry of this undeformed action is broken by the η-deformation to its abelian Cartan subgroup. The second "λ-model" [6] (generalizing the bosonic model of [7,23]) is defined by the action where f ∈ F , A ± ∈f and The first two lines of (2.6) correspond to the F / F gauged WZW model with coupling (level) k and λ is a deformation parameter. This action possesses the following Z 2 symmetry In contrast to (2.2) this action has no global symmetry (there is a G 1 × G 2 gauge symmetry, which in the end we will always fix). The interpretation of this action can be understood by considering the special limit k → ∞, λ → 1 combined with scaling f → 1 as [7] f = exp(− 4π where thef valued field v and the constant h are kept fixed in the limit. This leads to the following where P = P λ λ=1 is given in (2.5). This may be interpreted as a first-order action interpolating between the supercoset action (2.5) (if one first integrates out v giving A ± = g −1 ∂ ± g) and its non-abelian T-dual model (if one first integrates out A ± ). Thus the meaning of (2.6) is a deformation of the first-order interpolating action (2.10). 
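The logic of the interpolating action (2.10) is easiest to see in its abelian analogue. The following is an illustration only, not an equation of this paper: for a single boson with tension h, consider

S(v, A) = ∫ d²x ( h A_+ A_- + v F_{+-} ) ,  F_{+-} = ∂_+ A_- − ∂_- A_+ .

Integrating out v forces F_{+-} = 0, i.e. A_± = ∂_± x, and returns the free boson ∫ d²x h ∂_+ x ∂_- x; integrating out A_± instead gives ∫ d²x h^{-1} ∂_+ v ∂_- v, the T-dual boson with inverted tension. The non-abelian version underlying (2.10) replaces ∂_± x by the currents g^{-1}∂_± g and promotes v to an algebra-valued Lagrange multiplier.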
If one first integrates out A ± in (2.6) and gauge-fixes the supergroup element f the resulting sigma model may be 3 Here the bilinear form Tr (STr) is related to the usual matrix trace tr (supertrace str) by Tr = ν −1 tr for some representation-dependent normalization ν. We fix this normalization ν such that in the undeformed limit h plays the role of the usual string tension in AdSn × S n backgrounds. In particular, this means that in the AdS 2 × S 2 case with η = 0 the bosonic part of the action is given by , where κ 2 is the string tension parameter used in [7,5,6] (the definition of ∂ ± used therein had an extra factor of 1/2 compared to that used here). viewed as a deformation of the non-abelian T-dual of the original supercoset model (2.5). At the same time, explicitly integrating out f in (2.6) is not possible in general, so (2.6) does not apparently have a direct relation to a deformation of the supercoset model (2.5). While there is a close on-shell connection between the models (2.2) and (2.6) at the level of classical Hamiltonian (Poisson-bracket) structures [1,5,6], establishing their correspondence at the level of the actions (and thus eventually at the quantum level) remains an open problem that we will attempt to address below. 5 Relations between parameters Let us now comment on relations between the deformation parameters of the two models (2.2) and (2.6). The deformation parameters in the two actions of [1] and [6] may be defined in terms of the parameter 2 ∈ R that appears in the deformed classical Poisson algebra relations. 6 The relation to the parameter η of [1] (or κ introduced in [16]) is given by is a natural deformation parameter appearing in the bosonic part of the model (2.2). Here the ranges describe the deformation considered in [1,16]. Note that we could also take to cover the ranges 2 ∈ [0, 1] and κ 2 ∈ [0, ∞]. This is a consequence of the fact that the complex η 2 plane covers the complex 2 and κ 2 planes twice. This can be seen explicitly from the relation (2.14) The deformation parameter λ in the action (2.6) of [6] is related to 2 by where we have introduced which is again a natural deformation parameter in the bosonic part of (2.6). Here the ranges describe the deformation considered in [6], but we could also take 5 Note that integrability, together with expected quantum UV finiteness, suggest that classical relations may in some way extend to the quantum level. 6 For both deformed models, there was a paper focussing on the bosonic case, [4] and [5], written before the papers discussing the deformation of the superstring, [1] and [6] respectively. The parameter η b of [4] is related to the parameter η of [1] by [5] is related to the parameter λ of [6] by To avoid confusion, we will always use the definitions of parameters as given in the papers discussing the superstring [1,6]. to cover the range 2 ∈ [−∞, 0]. This is again a consequence of the fact that the complex λ 2 or b 2 planes cover the complex 2 plane twice, which can be seen explicitly from the relations For a particular value of 2 there are four equivalent values of η, b and λ and two equivalent values of κ as described in the table: The first and second columns and the third and fourth columns give rise to equivalent theories in both the two deformations as they are related by the Z 2 symmetries (2.4) and (2.8). Furthermore, restricting to the bosonic models, the first and third columns and the second and fourth columns give rise to identical deformed theories. 
This is a consequence of the fact that the bosonic truncation of (2.2) depends only on κ, while the bosonic truncation of (2.6) depends only on λ 2 . Comparing (2.11) and (2.15) suggests that the parameters of the two deformed models may be related by an analytic continuation (choosing signs so that λ = 0, 1 corresponds to η = i, 0) 19) or, equivalently, (2.20) Below we will see that (2.20) is indeed the relation that allows one to obtain the η-model (2.2) as a special limit (combined with an analytic continuation) of the λ-model (2.6). In addition, this will require us to relate the overall couplings of the two models by the following analytic continuation (assuming the plus sign in (2.19)) . (2.21) Indeed, (2.21) is implied by (2.20) and the expression for λ in (2.9), which was required to obtain the interpolating model (2.10) for large k: with λ → 1 − πh k we find from (2.16) that b 2 → k 2πh and thus, from (2.20), that κ → iπh k , in agreement with (2.21). The relation (2.21) is also consistent with the Pohlmeyer reduction limit, which in the context of the η-deformation [1] corresponds to taking κ → ±i, as discussed in [13], with h being proportional to the level of the underlying G/H gauged WZW model. This then ties in with the Pohlmeyer reduction limit of the deformation of [6] for which k plays the role of the level [14]. Remarkably, (2.21) corresponds to the expected relation between the quantum deformation parameters q for the two models (cf. [1,16,5,6]): with the real q corresponding to the η-model (2.2) and the root of unity q to the λ-model (2.6). Indeed, q = exp(− iπ k ) is the standard expectation for the q-deformation parameter of a WZW type model. Bosonic actions It is useful to consider explicitly the bosonic parts of the two models (2.2) and (2.6). We shall concentrate on the part corresponding to one (compact) F/G factor in (2.1). The bosonic counterpart of the η-model where g ∈ F , P = P 2 is the projector onto the F/G coset part of the algebra f of F and R is a solution of the modified classical YBE for f. For κ = 0 this becomes the standard F/G coset sigma model. To make the structure of this action more transparent let us rewrite it in a first-order form. Since (κR g P ) n and P 2 = P , introducing an auxiliary field B a in the coset part of f (i.e. P B a = B a ) we get Replacing B a by the field A a in f, adding a term A a C a where C a ∈ g is in the algebra of G and then redefining Ad g (A a ) = gA a g −1 → A a we find the following first-order form of (2.23), which has a rightaction G-gauge symmetry This model has parameters (h, κ) and for κ = 0 the global F symmetry is broken to its Cartan torus directions. 7 In the first-order action (2.26) the deformation corresponds simply to adding the quadratic κA + RA − term. Indeed, we can rewrite (2.26) as For κ = 0 one can integrate out A a giving the standard coset sigma model action. 8 The bosonic part of the λ-model action (2.6) has parameters (k, λ) and a local G symmetry where v ∈ f we find from (2.29) the bosonic truncation of (2.10) [7] The canonical choice of R annihilates Cartan generators and preserves (up to factors) the positive and negative root The simplicity of the first-order action (2.26) is related to the simplicity of the corresponding classical Hamiltonian description [4]. At the same time, its superstring generalization is not straightforward as Pη in (2.2) is not a projector and where F ab is the field strength of A a . 
This is the interpolating action for the F/G coset sigma model and its non-abelian T-dual: if we first integrate over v we get A a = g −1 ∂ a g, g ∈ F , and thus the original F/G coset model with tension h; if we first integrate over A a and C a we get a sigma model for v which is the non-abelian dual of the F/G coset model. This suggests that (2.29) may be viewed as an interpolating model between the λ-deformation of the non-abelian T-dual model (a model for the field f found by first integrating out A a and C a ) and a deformation of the F/G coset sigma model found by parameterizing A a in terms of the fields g and g (e.g., as A a = g −1 ∂ a g + abg −1 ∂ bg ) and integrating out all fields (f,g, C) other than g. The latter procedure need not, however, give a local action for g away from the k → ∞, b −2 → 2π k h point. 9 While the actions (2.26) and (2.29) look very different, having, in particular, different symmetries, one possibility is that they may be viewed as two dual faces of a "doubled" model related by Poisson-Lie type duality [8][9][10]. The η-model may then be the analog of the "solvable" member of the dual pair. We shall provide explicit evidence for this in section 5 below. Another possibility to relate the λ-model to the η-model is by a limit that will break the F/G symmetric structure of (2.29) to reflect the presence of the R-matrix in (2.23),(2.26). This limit will involve a certain scaling (and analytic continuation) of the group element f plus the map between the parameters (2.20),(2.21). We shall demonstrate the existence of such limit on various relevant F/G coset examples in the next section. We shall then study the effect of this limit on the corresponding supergravity backgrounds in section 4. Relating the λ-model to the η-model by a limit The target space backgrounds that correspond to the η-model (2.2),(2.23) have abelian isometries associated to the Cartan directions of the algebra of F that are preserved by R-matrix. At the same time, the backgrounds that correspond to the λ-model (2.6),(2.29) (found by integrating out A a and fixing a G-gauge on f ) do not have isometries at all. 10 To be able to relate the corresponding metrics we thus need to take a certain scaling limit of the λ-model in the coordinates corresponding to the Cartan directions of F . 11 Below we shall first explicitly demonstrate the existence of such limits on particular low-dimensional cases, AdS 2 × S 2 and AdS 3 × S 3 , and then explain the general construction for S n and similar spaces related by analytic continuation. We shall also explain the relation to the Pohlmeyer reduced model. AdS 2 × S 2 In the case of AdS 2 × S 2 the relevant bosonic coset space is At the same time, since the deformed η-model action (2.26) depends not only on the current but also explicitly on g it does not allow a dualization in an obvious way, i.e. an analog of a dual model should be non-local. 10 This is also a common feature of backgrounds corresponding to F/G gauged WZW models with a non-abelian G, but for a non-trivial λ deformation it applies also to the abelian G case [7]. 11 The special role of these coordinates may be anticipated from the fact that the λ-model (2.29) can be viewed as a deformation of the F/F gauged WZW model, which is a topological theory [24]. In the F/F gauged WZW model the gauge symmetry (f = w −1 f w, w ∈ F ) allows one to gauge away all but the Cartan directions, i.e. to choose the only solutions. 
One may then use these moduli parameters a i to define certain limits of the deformed background. Starting with the λ-model action (2.29), integrating out the gauge field and gauge-fixing the SO(1, 2) × we find the following metric 13 Note that here, for the AdS 2 part, we are considering a different patch of the deformed space than used in [19] which corresponds tof i.e. related to (3.3) via the analytic continuatioñ The reason we consider the patch (3.3) is that it admits a special (singular) field redefinition with which we can recover the metric corresponding to the η-deformed AdS 2 × S 2 model [4,1]. Let us now consider the following (complex) coordinate redefinition (t, ξ; ϕ, ζ) → (t, ρ; ϕ, r) combined with infinite imaginary shifts of the (t, ϕ) directions (turning them into isometries): Here we have introduced the parameter κ, which is assumed to be related to b by (2.20). We shall also assume that k is related to h by (2.21), i.e. . i.e. becomes exactly the η-deformed AdS 2 × S 2 metric [1,16,13,25] with h as a tension. Indeed, this metric corresponds to (2.23) with g parameterized as 10) 12 Here σ i are Pauli matrices and {(σ 3 ⊕0), (0⊕iσ 3 )} generates the gauge group. We also take Tr = 2tr, where tr is the usual matrix trace, i.e. ν = 1 2 in footnote 3. 13 We shall use the following notation to relate the bosonic part of the action to the metric: I = d 2 x Gmn(X)∂ + X m ∂ − X n with ds 2 = Gmn(X)dX m dX n , i.e. we will absorb all overall constants in the action into the metric. All the bosonic backgrounds we will consider below will not have a non-trivial B field [27,20]. and the R-matrix chosen to annihilate the Cartan directions {iσ 1 ⊕0, 0⊕iσ 1 }. This relation between (3.3) and (3.9) involving complex coordinate redefinitions (3.7) and a complex map between parameters (3.8) suggests that the λ-model and η-model may correspond to different real "slices" of some larger complexified model. To shed more light on the meaning of the infinite imaginary shift of t and ϕ in (3.7) that plays a central role in the above relation between (3.3) and (3.9) it is useful to repeat the discussion using a simpler (algebraic) choice of coordinates in which the metric becomes conformally flat. Starting with (3.3) and doing the coordinate redefinition (t, ξ; ϕ, ζ) → (x, y; p, q) Formally continuing to the region for which x 2 − y 2 > 1 represents (3.5), i.e. the original metric of [19]. Furthermore, one can check that x 2 − y 2 = 1 is a curvature singularity and hence the two patches covered by (3.3) and (3.5) are separated by this singularity. Using again the relation between (k, b) and (h, κ) in (3.8) and making an infinite rescaling of the coordinates (3.14) This may be interpreted as the metric of η-deformed H 2 ×dS 2 (euclidean AdS 2 times 2d de Sitter space) 14 background which is related to AdS 2 × S 2 by an analytic continuation. 15 We will elaborate on this limit (giving its alternative form) focussing on the S 2 part of (3.3) in Appendix A. The infinite scaling limit (3.13) relating the λ-model to the η-model amounts to dropping the constants 1 in the denominators in (3.12). It thus corresponds to decoupling the asymptotically flat region of the λ-model metric (3.12) so that the η-model metric may be interpreted as emerging in a "near-horizon" limit (combined with an analytic continuation of the parameters according to (2.20),(2.21)). A couple of comments are in order. 
First, it is worth noting that for n odd the last exponential factor in (3.24) is in the sequence and hence the prescription tells us that we should take the limit in the corresponding field. In the S 3 and S 5 examples below this final limit is not necessary: the previous limits already lead to this direction being an isometry and hence the limit (3.25) would be trivial (the same should also be true for all odd n). A related observation is that it always appears to be possible to truncate easily from n = 2N + 1 to n = 2N by just setting this final angle to zero. It transpires that to go from n = 2N to n = 2N − 1 is not so trivial. This is not so much to do with taking the limit, rather with the field redefinitions and analytic continuations that we need to perform to recover the metrics of [16,13,29]. In the following we will consider the two non-trivial cases n = 3 (already discussed in section 3.2 above) and n = 5, with the n = 2 and n = 4 examples following as simple truncations. It will be useful to define the following functions (3.27) n = 3 and n = 2: Starting with (2.29) and taking the limits as described above we end up with a metric with two isometric directions ϕ and φ 1 . There are then two analytic continuations/coordinate redefinitions that are of particular interest. The first is given by 28) and the resulting metric is as in (3.19) This metric is precisely the deformation of S 3 arising from the corresponding η-model [16,13,28,26]: it follows from the η-model action (2.2),(2.23) with g ∈ F parameterized as with the resulting metric being 2h −1 ds 2 = g (f −1 dϕ 2 + f dr 2 ) + r −2 dφ 2 1 . (3.32) This metric is related to (3.29) by two T-dualities -in each of the isometric directions ϕ and φ 1 . Furthermore, there is a formal map between the two metrics (3.29) and (3.32) given by To recover the corresponding expressions for n = 2 one can consistently truncate by setting φ 1 = 0. n = 5 and n = 4: Taking the limits as described above, from (2.29) we find a metric with three isometric directions ϕ, φ 1 and φ 2 . There are again two analytic continuations/coordinate redefinitions that are of particular interest. The first is given by 34) and the resulting metric is (with f , g , v defined in (3.27)) As shown in [29], this metric is T-dual to the metric constructed in [16], which follows from the η-model Here the T-duality should be done in just the φ 1 isometry, making the metric diagonal but generating a non-zero B-field, in agreement with the background found in [16]. 19 The second change of variables is given by leading to 2h −1 ds 2 = g (f −1 dϕ 2 + f dr 2 ) + (dφ 1 + κr 4 v sin θ cos θdθ) 2 r 2 v cos 2 θ + r 2 v dθ 2 + r −2 csc 2 θ dφ 2 2 . (3.38) This metric (related to (3.35) by two T-dualities) is also T-dual to the metric found in [16]: here one needs three T-dualities -in each of the isometric directions ϕ, φ 1 and φ 2 . There is again a formal map between the two metrics (3.35) and (3.38) given by For AdS n one choice of analytic continuation is given by for which the subalgebra commuting with T 12 , spanned by Tâb, remains so(n − 1). This corresponds to analytically continuing the fields as follows Here we also need to flip the overall sign of the metrics. Other possible analytic continuations involve T 12 → iT 12 , so that the subalgebra commuting with this generator is then so(1, n − 2). 
It is an analytic continuation of this form that is required to obtain the first line of (3.5) from the second line and was considered in the supergravity constructions of [19,20]. For dS n one choice of the analytic continuation is given by for which the subalgebra commuting with T 12 , spanned by Tâb, remains so(n − 1). This corresponds to analytically continuing the fields as follows The remaining analytic continuations, which we will not explore in detail here, involve leaving T 12 as is, so that the subalgebra commuting with this generator is again so(1, n − 2). To recover the coset and deformed models associated to H n we analytically continue T 1ā → iT 1ā , ϕ → iϕ , r → ir ,ā = 2, . . . , n + 1 , (3.45) and, as for AdS n , flip the overall sign of the metrics. It will also be useful to give the direct analytic continuation of the fields from AdS n to H n , i.e. combining the inverse of (3.42) and (3.45) the relation between the overall couplings (2.21) becomes h = k π . As discussed beneath (2.29) the b → 0 limit of the λ-model gives the F/G gauged WZW model. On the other hand, it was shown in [13] that for the η-models arising as deformations of AdS 2 × S 2 and AdS 3 × S 3 models the κ → i limit of (3.29) can be taken in such a way (combining it with a coordinate redefinition) that it gives a string action in a pp-wave type background, whose light-cone gauge-fixing is the Pohlmeyer reduction (PR) [27,30,31] of these AdS n × S n models. 20 20 If one takes the κ → i limit of the η-model without rescaling the coordinates the resulting action gives the same model without the potential term, i.e. one time and one space dimension decouple. The metric in the "transverse" directions is In section 3.3 we considered a sequence of special coordinate redefinitions that led from the λ-model to (T-duals) of the η-model. In the cases of S 2 and S 3 there was only one limit in this sequence (3.28). One can thus see the emergence of the PR model from the λ-model in a special limit (cf. also [5,14]). In the AdS 5 × S 5 case the κ → i limit of the η-model did not lead directly to the PR model, but rather to a closely related theory with an imaginary B field [13]. It is now clear that there is a natural "intermediate" candidate model for recovering the PR model found by making only the first coordinate redefinition in the sequence (3.25), (3.34) along with the corresponding one for AdS 5 and using the relation of the parameters in (3.8). It is interesting to note that considering the analytic continuation to H 5 × dS 5 given in (3.44),(3.46) this becomes which for κ 2 ∈ (0, −1] is a real field redefinition and real limit. Furthermore, for κ in this range the map between the parameters (3.8) also becomes real. Therefore, this limit of the AdS 5 × S 5 λ-model can be thought of as first an analytic continuation to H 5 × dS 5 , then a real limit and field redefinition and finally analytically continuing back. Following this procedure we find a somewhat involved metric, which has isometric directions t and ϕ and importantly is real for κ 2 ∈ (0, −1]. 21 Therefore, it is natural to conjecture that the light-cone gauge-fixing of this model is related to the kink S-matrix of [18]. 22 The limit of [13] t for the AdS 3 × S 3 η-model gives a pp-wave type model whose light-cone gauge fixing is the Pohlmeyer reduction of strings on AdS 3 × S 3 [31] with axial gauging of the associated gauged WZW model. 
In higher dimensions the gauge group of the PR theory is no longer abelian and hence axial gauging is not possible. Therefore, the limit (3.49) needs a mild modification to extract the vector gauged model Taking this limit in the model obtained by the special limit (3.47) of the λ-model associated to AdS 5 × S 5 we find a pp-wave type metric (recall that in this limit we get from (2.21) that h = k π ) 21 Recall that if we take the second special limit for φ 1 in (3.34) the off-diagonal terms in the resulting metric (3.35) are imaginary for this range of κ. 22 This discussion is also true if we only consider the first coordinate redefinition in the sequence (3.25), (3.37), however, the resulting metrics are diffeomorphic as they are related by the map which is real for κ 2 ∈ (0, −1]. where the "transverse" metrics ds 2 A⊥ and ds 2 S⊥ are those of the gauged WZW model for SO (5) SO (4) and SO(1,4) SO (4) respectively. 23 The light-cone gauge-fixing of this model (x + = µτ ) corresponds therefore to the Pohlmeyer-reduced theory for strings on AdS 5 × S 5 [27]. Note that as for the AdS 2 × S 2 and AdS 3 × S 3 cases, the roles of the AdS n and S n are effectively interchanged, i.e. the κ → i limit of the deformed AdS 5 metric leads to the PR model for the string on R × S 5 and vice versa. 4 Supergravity backgrounds for deformed models: AdS 2 × S 2 Having discussed the form of the metrics corresponding to the η-model and λ-model let us now consider their extension to the full type IIB supergravity backgrounds expected to be associated with the superstring actions (2.2) and (2.6). The direct construction of such backgrounds supporting the metrics of η-model turns out to be quite non-trivial [16,29]. At the same time, the RR backgrounds supporting the λ-model metrics appear to be much simpler and they were found explicitly in the AdS n × S n cases in [19] (n = 2, 3) and [20] (n = 5). Given that the metrics of η-model can be obtained, as explained above, from the metrics of the λmodel by a special scaling limit and analytic continuation, one may reconstruct the full supergravity backgrounds that emerge when this limit is applied to the solutions of [19,20]. This will be explored below on the simplest AdS 2 ×S 2 example. Surprisingly, the resulting limiting background will be different from the one constructed in [29], even though the two share the same metric (3.9). Understanding the proper meaning of this solution (that takes a very simple form in the algebraic coordinates introduced in (3.11),(3.12)) will be left for the future. To discuss the deformed backgrounds associated with the AdS 2 ×S 2 supercoset it is useful to follow [29] and consider the compactification of 10d type IIB supergravity to four dimensions on T 6 retaining only the metric, dilaton and a single RR 1-form potential A = A m dx m . 24 The resulting bosonic 4d action is then given by The corresponding equations of motion are Angular coordinates Our starting point will be the supergravity solution of [19] supporting the λ-model metric (3.5) and integrate out the gauge field. 24 The corresponding 10d 5-form strength will be expressed in terms of the product of the 2-form F and holomorphic 3-form on T 6 as in (A.19) of [29]. Here the free constants c 1 and c 2 satisfy and encode the usual freedom of U (1) electromagnetic duality rotations in 4d. The choice c 1 = c 2 = 1 √ 2 ensures symmetry between the two coset factors. 
Analytically continuing the AdS 2 coset part to the patch of interest (3.2) gives the following solution of the equations of motion (4.2) supporting the metric (3.3) This raises an interesting question. If this background does correspond to the λ-deformation (2.6) [6] of the superstring sigma model, then for some (perfectly legitimate) choices of the SO(1, 2) gauge-fixed group field (3.2) we should end up with an action that is not manifestly real. However, the reality of the action (2.6) seems to follow in the usual way from considering the real form of the superalgebra. The non-reality should only manifest itself in the fermionic sector (as i appears in the RR flux) and could arise from an obstruction in the procedure of gauge-fixing the supergroup field of (2.6) and integrating out the superalgebra-valued gauge field, but it is not immediately clear why this should happen. At the same time, the imaginary RR flux may be expected, given that (2.6) can be interpreted as a deformation of the non-abelian T-dual of the AdS n × S n string model with the duality applied to all space-time dimensions including time (cf. [13,20,32]). Note, however, that the gauge field in the action (2.6) of the λ-model belongs to the superalgebra, and thus the non-abelian T-duality in (2.10) is performed also in the fermionic directions (cf. [35]), which may also have an effect on the issue of the reality of the corresponding RR flux. As here we are interested in the special limit (and analytic continuation) (3.7) of the above background combined with the analytic continuation of the parameters (i.e. with b and k taken complex as in (2.20),(2.21)) we may formally consider the solutions of the complexified theory, discussing the reality issue only at the end. It is worth recalling however, as discussed in section 3.47, that if we analytically continue to H 2 × dS 2 using (3.44),(3.46), while the background (3.3) still has an imaginary 1-form, the special limits we consider below become real for real b (as in (3.48) compared to (3.47)). The first limit we will take is as in (3.7) combined with infinite shift of the dilaton Starting from (4.6) we then get the following solution of the 4d supergravity equations (4.2) supporting the metric (3.9) of the η-model where we have defined the frame fields This background looks strange: the κ → 0 limit of (4.8) gives the undeformed AdS 2 ×S 2 metric supported by a non-trivial complex dilaton and RR flux that explicitly depend on t and ϕ. While t and ϕ are still isometries of the metric and e Φ F , which enter the classical GS superstring action, the dilaton and RR This is different from the expected Bertotti-Robinson type flux supporting AdS 2 × S 2 . If we instead consider the κ → ∞ limit of (4.8), as taken in [33], i.e. first rescaling we find the following real supergravity solution This is precisely the solution of the "mirror" model constructed in [33] and is related to a dS 2 × H 2 background by T-dualities in t and ϕ, giving an imaginary RR flux as might be expected (cf. [32]). The second limit we will consider is The resulting solution of (4.2) is given by where the frame fields are given by There is a formal map between the two solutions (4.8) and (4.13) given by (4.14) The metric of (4.13) is the double T-dual (in t and ϕ) of the metric of (4.8). However, this T-duality relation does not obviously extend to the full backgrounds as shifts in t and ϕ are not isometries of the dilaton and the RR 1-form. 
26 Again they are only invariant under the combined transformation (4.9). The κ → 0 limit of (4.13) is much simpler than that of (4.8) 27 Performing T-dualities in both t and ϕ we recover the standard Bertotti-Robinson solution with constant dilaton and homogeneous RR flux: 26 It may still be possible to define a generalization of the T-duality rules that will apply in the present situation. The dilaton coupling in the string action is given by Therefore, if Φ has a term linear in a target-space direction (which is otherwise isometric, i.e. enters the string action only through its derivatives), we can integrate by parts and then perform the T-duality transformation in the usual manner. The resulting action will have a term proportional to (∂ω) 2 whose role is to cancel the conformal anomaly. As the dilaton coupling term is subleading in α the T-dual classical superstring action can be found by the usual rules. One can then formally read off the corresponding metric, B field and e Φ times the RR fluxes from the resulting action. They need not by themselves satisfy the Type IIB supergravity equations of motion as these follow from the vanishing of the one-loop Weyl anomaly beta-functions and thus are sensitive to the full dilaton coupling and, in particular, the central charge shift mentioned above. The resulting dilaton of the T-dual background may then be determined by solving these equations. 27 The apparent divergence of the RR potential turns out to be a total derivative and can therefore be removed by an appropriate gauge transformation This suggests that if the metric and e Φ F of the solution (4.13) can be formally T-dualized for κ = 0 (e.g. by applying the standard T-duality rules to just these combinations of the background fields, see footnote 26) it will give a real "background" for the metric (3.9) (the T-duality in t will remove the factor of i in F ). It would be interesting to see if this bears any relation to the η-deformation (2.2) of the AdS 2 × S 2 supercoset model. Having a factorized (but not isometric) dilaton, this background will be obviously different from the solution constructed in [29] and its meaning remains to be understood. Finally, given that the standard Bertotti-Robinson solution appears (after T-dualities) in the κ → 0 limit of (4.13), while the "mirror" model (4.11) of [33] appears in the κ → ∞ limit of (4.8), it would be interesting to see if the map (4.14) between the two backgrounds (4.8),(4.13) is related to the "mirror duality" of [33,34]. Algebraic coordinates The λ-model solutions (4.3) and (4.6) take remarkably simple forms in the algebraic coordinates introduced in (3.11), (3.12). The solution (4.6) becomes 29 Note that a formal analytic continuation of this background by setting x = iy , y = ix gives a real solution 2πk −1 ds 2 = 1 28 One can also use (3.50) leading to the same pp-wave type background. This is a consequence of the formal map (4.14) between (4.8) and (4.13). 29 This form of the solution manifestly realizes the observation of [20] that the λ-deformation amounts to rescaling the tangent space directions of the gauged WZW model for F/G (here If instead we formally continue (4.19) to the region for which x 2 − y 2 > 1, we find (after setting e Φ0 = ie Φ0 ) a different real background, which represents the solution (4.3), i.e. the original solution of [19] corresponding to the metric in the coordinate patch in (3.5). 
A similar background representing the real deformation of AdS 2 × S 2 may be found using a different real slice of the diagonal coordinates as in (A.8). The metric and e Φ F of (4.21) are invariant under separate rescalings of (x, y) and (p, q), however, as discussed above the dilaton and RR-form are only invariant when these rescalings are correlated as (x, y) → ec(x, y), (p, q) → e −c (p, q), which corresponds to the symmetry (4.9) of the backgrounds (4.8),(4.13). Poisson-Lie duality interpretation Apart from the relation between the λ-model and η-model through a scaling limit and analytic continuation described in section 3, which is somewhat unexpected (though partly prompted by the natural map between the parameters (2.20),(2.21),(2.22)), one may anticipate that the two models may be in some sense dual to each other. Indeed, the undeformed limit of the η-model is the standard supercoset model, while the undeformed limit of the λ-model is the non-abelian T-dual of the latter (cf. (2.10),(2.30)). A natural suggestion is then that the two models may be related by the Poisson-Lie (PL) duality of [8,9]. Below we will directly verify this conjecture on the simplest example of the bosonic S 2 coset. The corresponding metric of the λ-model is in the second line of (3.3) (or, in diagonal form, the second term of (3.12)), and its η-model counterpart is in the second line of (3.9). We are going to compare them with the PL dual pair of models associated to the SL(2, C) double [9,11]: the first corresponds to the SU (2) subgroup and the second to the Borel subgroup B 2 (upper triangular matrices with reals on diagonal). The corresponding metrics are given, e.g., in equations 3.18 and 3.19 of [11] with two free parameters a, b and with an overall coefficient T. 31 The first metric is T a a 2 + (b − cos θ) 2 (dθ 2 + sin 2 θdϕ 2 ) . (5.1) Setting b = 0 (which is required to get the integrable model we are interested in here) and we find that (5.1) becomes precisely the corresponding η-model metric in (3.9) (where r = cos θ). 30 To recall, c 1 and c 2 are arbitrary constants satisfying c 2 1 + c 2 2 = 1, so a symmetric choice is c 1 = c 2 = 1 √ 2 . 31 We denote the parameters a, b of [9,11] by roman letters. (A.5) Here the m = 0 limit corresponds to the SO(3)/SO(2) gauged WZW metric. 32 One option to take a limit of this metric is to do an infinite rescaling of P and Q (combined with the replacement of m by κ as in (A.1)), i.e. to drop the constant 1 in (A.5) (and reverse overall sign of the metric). This leads to a scale-invariant (i.e. it has an isometry) metric as in (3.13),(3.14) that is a deformation of H 2 ds 2 = 1 P 2 − κ 2 Q 2 (dP 2 + dQ 2 ) . This is, indeed, the metric of the η-deformed S 2 space, 33 i.e. it is equivalent to the second term of (3.9) (ϕ = U, r = tanh V ) [25,4,13]. A similar discussion can be repeated for the AdS 2 coset part of (3.3), obtaining the first term of (3.9) in the limit.
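For reference, the check behind this Poisson-Lie identification can be spelled out. A sketch, assuming the form of the η-deformed S² metric commonly quoted in the literature (its overall normalization relative to (3.9) is an assumption):

h ds²(S²_η) = 1/(1 + κ²r²) [ dr²/(1 − r²) + (1 − r²) dϕ² ] .

Setting b = 0 in (5.1) and substituting r = cos θ, so that dθ² + sin²θ dϕ² = dr²/(1 − r²) + (1 − r²) dϕ², turns (5.1) into

T a/(a² + r²) [ dr²/(1 − r²) + (1 − r²) dϕ² ] = (T/a)/(1 + r²/a²) [ dr²/(1 − r²) + (1 − r²) dϕ² ] ,

which matches the η-model metric for a = 1/κ, with the overall coefficient fixed as T/a = h^{-1}, i.e. T = 1/(hκ), up to the normalization convention of (3.9).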
10,572.8
2015-04-27T00:00:00.000
[ "Physics" ]
Experimental and Techno-Economic Study on the Use of Microalgae for Paper Industry Effluents Remediation
Humanity is facing some major global threats, namely lack of environmental sustainability, the energy crisis associated with the unsustainable reliance on fossil fuels, and water scarcity, which will be exacerbated with the rapid growth of urban areas. Researchers have drawn their attention to microalgae, photosynthetic microorganisms known for their environmental applications, such as wastewater remediation and lipids accumulation, to produce third-generation biofuels to solve some of these major issues. Considering this dual role, this study evaluated the potential of the microalga Chlorella vulgaris for nutrient removal from a paper industry effluent and bioenergy production. Firstly, experiments were performed to assess the potential of this microalga to: (i) successfully grow in different concentrations of a paper industry effluent (20% to 100%); and (ii) treat the industrial effluent, reducing phosphorus concentrations to values below the accepted legal limits. Then, a techno-economic assessment was performed to study the viability of a C. vulgaris biorefinery targeting the remediation of a paper industry effluent and bioenergy production. The results have shown that C. vulgaris was able to successfully grow in and treat the paper industry effluent. Under these conditions, average biomass productivities determined for this microalga ranged between 15.5 ± 0.5 and 26 ± 1 mg dry weight (DW) L−1 d−1, with maximum biomass concentrations reaching values between 337 ± 9 and 495 ± 25 mg DW L−1. Moreover, final phosphorus concentrations ranged between 0.12 ± 0.01 and 0.5 ± 0.3 mg P L−1, values below the legal limits imposed by the Portuguese Environment Agency on the paper industry. Regarding the proposal of a microalgal biorefinery for the bioremediation of paper industry effluents with bioenergy production, the techno-economic study demonstrated that six of the seven studied scenarios resulted in an economically-viable infrastructure. The highest net present value (15.4 million euros) and lowest discounted payback period (13 years) were determined for Scenario 3, which assumed a photosynthetic efficiency of 3%, a lipids extraction efficiency of 75%, and an anaerobic digestion efficiency of 45%. Therefore, it was possible to conclude that besides being economically viable, the proposed biorefinery presents several environmental benefits: (i) the remediation of an industrial effluent; (ii) CO2 uptake for microalgal growth, which contributes to a reduction in greenhouse gases emissions; (iii) production of clean and renewable energy; (iv) soil regeneration; and (v) the promotion of a circular economy.

Introduction
The booming world population, economic growth, and improved living standards are leading to an increase in the global demand for energy and natural and non-natural resources, putting huge pressure on the environment [1,2]. Most of the anthropogenic activities required to satisfy these increasing demands rely on the burning of fossil fuels,

Experimental Setup
Experiments regarding microalgal growth in the paper industry effluent were performed in batch mode, using 1000-mL borosilicate bottles as a cultivation system. To offset possible toxic effects of the paper industry effluent on microalgal growth, as well as light limitation due to the effluent color, different effluent concentrations were evaluated: 20%, 40%, 60%, 80%, and 100%.
These concentrations were prepared by diluting the nitrogen-supplemented effluent with distilled water. In addition to these conditions, microalgal growth was promoted using the modified OECD test medium as the culture medium (positive control), and the raw effluent (100%) supplemented with nitrogen without microalgae was submitted to the same culturing conditions (negative control). The different effluent compositions and the positive control were inoculated with 40 mL of a previously centrifuged C. vulgaris inoculum, giving an initial biomass concentration of approximately 120 mg DW L−1. Except for the positive and negative controls, all the experiments were performed in duplicate. After inoculation, the cultures were allowed to grow for 14 d under continuous light supply provided by light-emitting diode (LED) lamps placed in parallel with the bottles, with a photosynthetically-active radiation (PAR) of 202.9 µmol m−2 s−1. The CO2 necessary for microalgal photosynthesis was supplied to the cultures through the continuous injection of atmospheric air, previously filtered by 0.22-µm cellulose acetate membrane filters, at a flow rate of 1.5 L min−1, using AP-180 air pumps (Trixie, Flensburg, Germany). Air injection was also performed to promote the cultures' mixing and avoid microalgal sedimentation. To keep an adequate temperature for microalgal growth on the coldest days, a 265-W heating tape (J.P. Selecta, Barcelona, Spain) was placed in the experimental facility. Figure 1 presents a picture of the experimental setup.

Microalgal Growth Monitoring and Determination of Growth Parameters
Operational parameters, such as pH and temperature, were monitored daily using a single-channel multi-parameter analyzer (C6010, Consort, Turnhout, Belgium). Microalgal growth was also analyzed daily, by measuring the optical density at 680 nm, OD680, using a GENESYS 10 UV spectrophotometer (Thermo Scientific, Waltham, MA, USA). Biomass concentration was obtained through a previously determined calibration curve that establishes the relation between OD680, y, and the cell dry weight, x, presented in Equation (1): y = (0.0023 ± 0.0002)x − (0.123 ± 0.008); R² = 0.9758. With the biomass concentration values, the specific growth rate (µ, d−1), maximum biomass concentration (Xmax, mg DW L−1), and biomass productivities (PX, mg DW L−1 d−1) were determined. Specific growth rates were obtained for each culture using a pseudo-first-order kinetic model (Equation (2)): with the graphical representation of Ln(X) versus time (t, d), it was possible to define the exponential growth phase and determine the specific growth rate for each tested condition, where X1 and X0 are the biomass concentrations (mg DW L−1) at the final time (t1, d) and initial time (t0, d) of the exponential phase of the microalgal growth curves, respectively. Biomass productivities were calculated for each pair of consecutive experimental points (Xz and Xz+1), according to Equation (3). With the obtained results, the maximum biomass productivity (PX,max) was determined, which corresponds to the highest value from the PX set of values. On the other hand, the average biomass productivity (PX,avg) was obtained from the ratio between the biomass produced during the assay and the elapsed experimental time, as defined in Equation (4), where Xf and Xi are the biomass concentrations (mg DW L−1) at the final (tf) and initial (ti) instants of the cultivation period, respectively.

Phosphorus Concentration Evaluation and Determination of Nutrients Removal Kinetics
For phosphorus concentration analyses, 20-mL samples of each culture were collected on days 0, 1, 2, 3, 4, 7, 9, 11, and 14. Then, the samples were centrifuged at a rotational speed of 4000 rpm, for 10 min, in an Eppendorf 5819 R centrifuge (Eppendorf, Hamburg, Germany). As previously mentioned, phosphorus was analyzed in terms of phosphate-phosphorus using the Spectroquant phosphate kit test. To evaluate phosphorus removal under the studied conditions, removal efficiencies (RE, %), mass removal per unit of volume (MR, mg P L−1), and average removal rates (RR, mg P L−1 d−1) were calculated according to Equations (5)-(7), respectively, where Si and Sf represent the phosphorus concentrations (mg P L−1) at the initial (ti) and final (tf) instants of the cultivation period, respectively, and SP<0.5 corresponds to the phosphorus concentration (mg P L−1) at the instant at which the phosphorus concentration reached the most demanding limit for phosphorus discharge defined by the Portuguese Environment Agency (APA) in the environmental permit of the paper industry company (0.5 mg P L−1).
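Equations (2)-(7) are simple algebraic definitions, so a short script makes them concrete. The sketch below is illustrative only: the exact forms of Equations (5)-(7) are reconstructed from the definitions given above, and the variable names are hypothetical.

    import numpy as np

    # Equation (1), inverted: biomass (mg DW L-1) from optical density at 680 nm
    def biomass_from_od(od680):
        return (od680 + 0.123) / 0.0023

    # Equation (2): pseudo-first-order model, mu = ln(X1/X0) / (t1 - t0),
    # with X0, X1 taken at the limits of the exponential growth phase
    def specific_growth_rate(x0, x1, t0, t1):
        return np.log(x1 / x0) / (t1 - t0)            # d-1

    # Equation (3): productivity for each pair of consecutive points;
    # Equation (4): average productivity over the whole assay
    def productivities(x, t):
        px = np.diff(x) / np.diff(t)                  # mg DW L-1 d-1
        return px.max(), (x[-1] - x[0]) / (t[-1] - t[0])

    # Equations (5)-(7): removal efficiency, mass removal, average removal rate
    def removal_metrics(s_i, s_f, t_i, t_f):
        re = 100.0 * (s_i - s_f) / s_i                # %
        mr = s_i - s_f                                # mg P L-1
        rr = mr / (t_f - t_i)                         # mg P L-1 d-1
        return re, mr, rr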
Moreover, with the values of average biomass productivity and average removal rate, both determined for the time interval at which the phosphorus concentration reached 0.5 mg P L−1, specific biomass yields (YX/P, g DW g−1 P) were calculated according to Equation (8).

Microalgal-Based Biorefinery for Paper Industry Effluent Remediation: Techno-Economic and Sustainability Assessment
The major aims of the proposed C. vulgaris biorefinery are (i) to promote the treatment of an effluent resulting from the paper industry using the microalga C. vulgaris and (ii) to produce lipids and biogas from the resulting biomass. To design the production plant and study the viability and sustainability of this project, a techno-economic assessment (TEA) was performed. This TEA framework is structured according to the following steps: (i) infrastructure location; (ii) process flowsheet description; (iii) scenarios description; (iv) mass and energy balances; (v) economic assessment and sensitivity analysis; and (vi) sustainability assessment.

Microalgal-Based Biorefinery Location
When growing autotrophically, microalgal growth depends on (i) nutritional factors, such as an inorganic carbon source (e.g., CO2), inorganic salts, and nutrients, like nitrogen and phosphorus; and (ii) environmental factors, such as light intensity, temperature, and environment pH. This study proposes the cultivation of microalgae in high rate ponds (HRPs). Biomass productivities in open reactors are much more dependent on environmental factors [24]. Therefore, the plant location must be strategically chosen to optimize microalgal production and target products' accumulation. The selection should consider: (i) the weather conditions; (ii) the availability of a suitable culture medium (with all the nutrients and inorganic salts required for microalgal growth); and (iii) the existence of enough land to build the microalgal production unit [25]. Considering the above-referred aspects, the proposed location for constructing this infrastructure was Setúbal, a Portuguese municipality that lies within the Lisbon metropolitan area. According to data collected from the Photovoltaic Geographical Information System (PVGIS, European Commission), the average horizontal solar irradiation in this area is approximately 4.98 kWh m−2 d−1 and the average annual temperature is 16.8 °C. Regarding the risk of evaporation losses, Rodrigues [26] estimated evaporation rates in different reservoirs in southern Portugal. With the values obtained for the three reservoirs nearest to the proposed construction area, a weighted average was computed, leading to an annual average evaporation rate of 0.075 m month−1. In addition to the appropriate weather and light availability conditions, the chosen plant site has other advantages: (i) the presence of an agro-industrial company with a biodiesel production plant from oilseeds, to whom biofertilizers can be sold (located at less than 50 km from the biorefinery); (ii) the presence of a paper company that will supply the culture medium and CO2 for microalgal growth and the sludge for anaerobic treatment (located at less than 5 km from the facility); (iii) proximity to water for the discharge of the treated effluent and, if necessary, to be used as an alternative culture medium (brackish water); and (iv) flat topography, which avoids the necessity for land preparation for the biorefinery construction.
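As a quick sanity check on these figures, the 0.075 m month−1 evaporation rate applied to the 100 ha of ponds sized in the next section reproduces the 2500 m3 d−1 evaporation flow used later in the mass balance. A minimal sketch (the 30-day month is an assumption):

    area_m2 = 100 * 10_000                  # 100 ha of high rate ponds
    evap_m_month = 0.075                    # weighted-average evaporation rate
    evap_m3_day = area_m2 * evap_m_month / 30
    print(round(evap_m3_day))               # -> 2500 m3 d-1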
Process Flow Diagram
Regarding suspended cultivation systems, they can be open or closed reactors. Despite all the drawbacks of growing microalgae in open reactors, these systems are less expensive than closed ones, are more suitable for large-scale production of microalgal biomass, and can achieve promising biomass productivities when the selected strain is robust and the environmental conditions are adequate for its growth [27,28]. Considering these advantages, open systems were proposed in this project. Figure 2 shows the process flow diagram of the proposed biorefinery plant.

To ensure adequate microalgal growth, it is necessary to supply the cultures with the essential nutrients for their growth. Nitrogen and phosphorus are available in the culture medium that enters the HRP through stream S01. This medium is a mixture of the effluent resulting from the paper company (SPE) with the plant recycle effluent (SWR). Carbon is supplied to the culture through the injection of flue gases resulting from: (i) the biomass plant of the paper company; and (ii) the combined heat and power (CHP) units from the proposed biorefinery and the paper company, where generators burn biogas to produce, simultaneously, electricity and heat. The carbon is assumed to be supplied as 100% CO2 and to be free from any impurity that might negatively impact microalgal growth (e.g., sulfur or nitrogen oxides) [28]. Biomass harvesting is done following a two-stage approach [29,30]: (i) first, a thickening step, flocculation, is applied to make biomass settle faster in the clarifier and to separate it from the culture medium (S03) more easily; and (ii) a dewatering step is then promoted, where biomass is centrifuged to end up with a concentration of approximately 20% (w/w) (S04). From all the cell disruption and products' extraction techniques, pulsed electric field (PEF) was the selected one. This technique presents several advantages: (i) it avoids the use of chemicals; (ii) it can be easily scaled up; (iii) it does not require a dewatering process; (iv) it has a very short treatment time; and (v) it has low energetic requirements and operational costs [31][32][33][34]. The lipids extracted (S05) can then be sold to the above-mentioned agro-industrial company, which will transform microalgal lipids into biodiesel through transesterification. The biomass resulting from the lipids' extraction step is then sent to an anaerobic digester (AD) to be stabilized (S06). Part of the sludge from the paper industry wastewater treatment plant (WWTP), S09, is also sent to the AD. As a result of this process, biogas (S07) and biofertilizers (S08) are produced. The biogas is burned to produce electricity and heat that can be further used to fulfill the biorefinery energy and heating needs.

Scenarios Description
Seven scenarios were considered in this study, being characterized according to three important parameters for the biorefinery performance (Table 1). Scenario 1 (defined as the base scenario) is characterized by a photosynthetic efficiency (PE) of 2%, which corresponds to a biomass productivity of 15.7 g m−2 d−1, a lipid extraction efficiency of 75%, and an anaerobic digestion efficiency of 45%. Scenarios 2 and 3 differ from Scenario 1 concerning PE, in order to evaluate the influence of this parameter on the biorefinery viability: in Scenarios 2 and 3, PE was considered 1% and 3%, respectively. According to Carvalho et al. [35], in outdoor reactors, PE values rarely exceed 6%. Other authors compared different outdoor reactors and determined that the highest PE obtained for HRPs was 1.5% [36]. These values are typically low, due to different losses caused by reflection, photoinhibition, photon absorption, light saturation, among others [37]. Scenarios 4 and 5 differ from the base scenario regarding the efficiency of lipids extraction by the PEF unit: lipids extraction efficiencies of 60% and 90% were defined in Scenarios 4 and 5, respectively. Numerous studies have already presented this range of values (60-90%) for PEF extraction efficiency [38,39]. To evaluate the impact of the anaerobic digestion efficiency on the plant performance, this parameter was defined as 30% and 60% in Scenarios 6 and 7, respectively. This range of values for anaerobic digestion efficiency has already been reported in the literature [40]. All the streams involved in the production of microalgal biomass and by-products are presented in Figure 2. The overall process can be divided into four major steps: (i) microalgal growth; (ii) biomass harvesting; (iii) cell disruption and lipids extraction; and (iv) anaerobic digestion and cogeneration. The first step of the process, microalgal cultivation, was done in 25 equal HRPs, 0.3 m deep, 60 m wide, and 690 m long, similar to those described by Lundquist et al. [41]. Therefore, the plant presents a total area of 100 ha and a total volume of 300,000 m3 for microalgal growth. Average biomass productivities estimated for these systems in each scenario are presented in Table 1.
Average biomass productivity was calculated considering the average horizontal solar irradiation in the Setúbal area, the microalgal energetic value, and the PE of each scenario (1% to 3%). The microalgal energetic value was calculated assuming a lipid content of 25% (w/w), according to Chen et al. [42] and Dong et al. [43], with the remaining 75% (w/w) assumed to correspond to carbohydrate and protein contents. The lipids energetic value was considered to be 37.5 MJ kg−1 and the carbohydrates and proteins energetic value 18 MJ kg−1 [44]. The output stream (S02) was determined assuming the HRP behaves as a continuous stirred-tank reactor and considering 0.5 g L−1 as a typical value for biomass concentration in open reactors [45]. Concerning the input stream (S01), it was first estimated as the output stream (S02) plus the evaporation flow rate (2500 m3 d−1). Then, taking into account the annual average biomass productivities and the assumed molecular formula for microalgal biomass (CH1.70N0.10P0.0004S0.0009), the nutrient removal rates were determined [46,47]. In a second step, an optimization was performed to determine the optimum volumes of wastewater from the paper industry (SPE) and recycling water (SWR), that is, the volumes that allow inlet nutrient concentrations as close as possible to the minimum concentrations required for microalgal growth in the input stream (S01). Regarding biomass harvesting, thickening and dewatering steps were considered. The thickening step consisted of adding sodium hydroxide (NaOH) to increase the pH and induce autoflocculation, which leads to the formation of large flocs that can be easily separated from the medium by gravity sedimentation [28]. To induce the flocculation of 1 g of microalgal biomass at a pH of 10.8, it was assumed that 9 mg of NaOH were required; this step accounts for a concentration factor of 4 [48]. In the dewatering step, centrifugation was proposed, leading to a concentration factor of 100. Centrifugation is, indeed, the most expensive harvesting method, but it is also one of the most suitable for large-scale processing. The overall harvesting efficiency was considered to be 95%, as reported by several authors in the literature [49,50]. The lipid extraction flow rate (S05) and the anaerobic digestion input stream (S06) were determined taking into account the flow rate entering the PEF unit (S04), the technique efficiency (60–90%, depending on the scenario), and the microalgal lipid content (25% (w/w)). During the anaerobic digestion stage of the biorefinery plant, microorganisms break down organic matter, converting it into methane and carbon dioxide in the absence of oxygen. Nutrient conversion rates were estimated considering the input flow, as well as the microalgal elemental composition after lipid extraction (CH1.70N0.125P0.005S0.001) and the digestion efficiency (30–60%, depending on the scenario). With those values and the biogas composition, assumed to be 60% CH4 and 40% CO2 (as reported in several studies), the resulting biogas stream (S07) and biofertilizer stream (S08) were estimated [51,52]. The CO2 formed in the CHP unit was determined taking into account its fraction in the biogas stream.
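The mass-balance step ties nutrient removal directly to biomass productivity: each element removed from the medium is assumed to end up in the biomass. A minimal sketch follows; the elemental mass fractions used are back-calculated from the base-scenario removal rates quoted later in the Results (0.82, 0.075, and 6.8 g m−2 d−1 at 15.7 g m−2 d−1 of biomass), so treat them as illustrative rather than as the study's exact composition.

```python
# Minimal sketch: areal nutrient removal rates from biomass productivity.
# Assumption: removal rate = productivity * element mass fraction in biomass.
# Mass fractions are back-calculated from the base-scenario figures in the
# text, not taken from the molecular formula (whose oxygen term appears to
# have been lost in extraction).

MASS_FRACTIONS = {"C": 0.43, "N": 0.052, "P": 0.0048}

def removal_rates(productivity_g_m2_d: float) -> dict:
    """Areal nutrient removal rates (g m-2 d-1) for a given biomass productivity."""
    return {el: productivity_g_m2_d * f for el, f in MASS_FRACTIONS.items()}

if __name__ == "__main__":
    for el, rr in removal_rates(15.7).items():
        print(f"{el}: {rr:.2f} g m-2 d-1")
    # C: 6.75, N: 0.82, P: 0.08 -- consistent with the base-scenario values.
```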
Energy Balance

The stages of the process considered in this energy balance include microalgal cultivation, biomass harvesting, and cell disruption and lipid extraction, for energy consumption, and the CHP unit, for energy production. Regarding microalgal cultivation, energy consumption was considered at the following levels: open pond mixing, water pumping, and blowers for flue gas injection. The energy required to mix the HRP was determined assuming an average mixing velocity of 0.23 m s−1, according to Lundquist et al. [41] and Milledge et al. [53]. Major and minor head losses were also calculated. In an HRP, the major head loss accrues from friction at the bottom of the pond and can be calculated according to Manning's equation. The Manning roughness coefficient for clay channels was assumed to be 0.018, according to the literature [54]. For this reactor, two minor head losses were taken into account: (i) the head loss from flow around the two 180° bends (hb); and (ii) the head loss caused by the two carbonation sumps (hs) in each pond. These were estimated according to the Darcy–Weisbach equation, assuming a kinetic loss coefficient of 1.5 for the 180° bends and 4 for the carbonation sumps. The power required for mixing was then calculated, assuming a paddle wheel efficiency of 40% and average diurnal and night periods of 12 h [55]. The energy required for water pumping was determined considering the manometric head, the input flow (S01), the specific effluent weight, and the pump and motor efficiencies (88% and 83%, respectively). The flow leaves the ponds by gravity. Based on the carbon requirements of the microalgae in each scenario, and assuming a 7% (v/v) concentration of CO2 in the flue gas, the energy required for the distribution of CO2 was estimated [56], considering an air blower efficiency of 75% [57]. For biomass harvesting, the only energy requirement considered was that of centrifugation. This energy consumption was determined considering the input flow (S03), as well as the centrifuge specific energy consumption, which, according to Milledge and Heaven [58], accounts for 1.4 kWh m−3. Energy consumption in the cell disruption stage corresponds to the energy required by the PEF unit. The specific energy consumption of this equipment was considered to be 4 kW m−3, as reported by Flisar et al. [59]. Knowing this value and the input stream resulting from the harvesting process (S04), the energy requirement was calculated. Concerning the cogeneration unit, thermal and electrical energy are produced from the biogas formed in the AD. Energy production estimates were based on the gas flow rate entering the CHP unit (S07) and the biogas calorific value. According to the literature, a normal cubic meter of CH4 has a calorific value of 10 kWh [60]. The energy output was considered to be 40% for electrical energy and 45% for thermal energy [61].
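The pond-mixing estimate above combines Manning friction losses with Darcy–Weisbach-style minor losses. The sketch below implements that calculation; the channel geometry (a 0.3 m deep, 30 m wide raceway lane, i.e., a 60 m wide pond split into two lanes) is an assumption for illustration, not a figure taken from the study.

```python
# Minimal sketch of the HRP paddle-wheel power estimate: Manning friction head
# loss over the channel plus minor losses for two 180-degree bends and two
# carbonation sumps. Channel geometry values are illustrative assumptions.

G = 9.81          # m s-2
RHO = 1000.0      # kg m-3, effluent density assumed close to water

def manning_head_loss(v, n, length, hydraulic_radius):
    """Friction head loss (m) over a channel length, from Manning's equation."""
    return (v * n) ** 2 * length / hydraulic_radius ** (4.0 / 3.0)

def minor_head_loss(v, k):
    """Kinetic (minor) head loss (m) for a fitting with loss coefficient k."""
    return k * v ** 2 / (2 * G)

def mixing_power(v=0.23, n=0.018, length=2 * 690.0, depth=0.3, width=30.0,
                 eta_paddle=0.40):
    """Paddle-wheel power (W) for one raceway pond."""
    area = depth * width                      # flow cross-section, m2
    r_h = area / (width + 2 * depth)          # hydraulic radius, m
    h = manning_head_loss(v, n, length, r_h)
    h += 2 * minor_head_loss(v, 1.5)          # two 180-degree bends
    h += 2 * minor_head_loss(v, 4.0)          # two carbonation sumps
    q = v * area                              # volumetric flow, m3 s-1
    return RHO * G * q * h / eta_paddle

if __name__ == "__main__":
    print(f"Mixing power per pond: {mixing_power():.0f} W")
```

Under these assumed dimensions the result is on the order of a few kilowatts per pond, which, over 25 ponds and 12 h of daily operation, is consistent in magnitude with the cultivation energy figures reported later.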
Economic Assessment

To study the feasibility of implementing this biorefinery plant, for each scenario (i) the net present value (NPV), (ii) the discounted payback period (DPP), and (iii) the internal rate of return (IRR) were determined. The NPV was determined considering the estimated annual capital investment, annual production costs, and the expected annual revenues from the products explored and services provided by the algal facility. Capital investment, or fixed capital, corresponds to the total investment required to create the biorefinery plant, including major equipment purchases, as well as all the direct and indirect costs associated with these. All main equipment was considered in this study: the HRPs, air blowers, clarifier, centrifuge, decanter, PEF unit, AD, and CHP unit. Equipment costs were determined based on values reported in the literature and updated to 2019 euro values using the Chemical Engineering Plant Cost Index (CEPCI), according to Equation (9):

Cost2 = Cost1 × (Index2/Index1) (9)

where 1 represents a year for which both the cost and the index value are known, and 2 represents a year for which the index value is known but the cost is not. The total equipment acquisition cost was assumed to account for 85% of the total investment costs. The direct costs included in this assessment are the piping system, yard improvements, equipment installation, instrumentation and control, support buildings, switchboards, and service facilities. The indirect costs include construction expenses, the contractor's fee, contingency, and engineering and supervision [62]. For each of these investment components, a fraction of the total equipment purchase cost was assumed, according to information retrieved from the literature (see Table S1 in the Supplementary Material). The production/operational costs can be divided into variable, fixed, and other costs. Variable costs include costs that may vary throughout the year, depending on seasonal productivity, e.g., raw materials, miscellaneous materials, the energy required for plant operation, the NaOH needed for the biomass thickening step, the nitrogen (NH4NO3) needed to guarantee an adequate N:P molar ratio for microalgal growth, and shipping and packaging. Fixed costs do not depend on productivity fluctuations during the year; these expenses include equipment maintenance, operating labor, laboratory costs, supervision, plant overheads, insurance, local taxes, and royalties. For each of these components, a percentage of the fixed capital or of other costs was assumed. For the proposed biorefinery, the following revenues were assumed: (i) lipids extracted from the microalgal biomass and sold to biodiesel production industries at 1 € kg−1; (ii) treatment of the paper industry effluent, with a credit of 2.40 € per kg of phosphorus removed [63,64]; (iii) steam and electricity production in the CHP unit, sold at 0.14 € kWh−1 [63,65]; (iv) treatment of the sludge from the paper industry WWTP at 25 € ton−1; (v) biofertilizer production in the anaerobic digestion stage, sold at 0.40 € kg−1 [65]; and (vi) CO2 uptake for microalgal growth, with a credit of 30 € t−1 [64,66]. Considering the expenses and incomes, the project investment analysis was performed for a project lifetime of 30 years. Inflation was assumed to be 1.5%, according to the latest values reported in PORDATA [67]. Corporate income tax (CIT) was considered 21%, value-added tax (VAT) 10%, working capital needs 5%, and the cost of capital 6% [68–70]. The NPV was calculated by adding the present values of the annual cash flows, according to Equation (10):

NPV = Σi CFi/(1 + r)^i (10)

where CFi is the cash flow in year i, and r is the interest rate. If the NPV is positive, the project is viable, because costs are lower than the net income. The DPP corresponds to the number of years it takes to break even, that is, when the discounted net cash flows generated cover the initial investment of the proposed project. In this case, the DPP must be less than 30 years for the project to be viable. The IRR is the annual rate of growth that an investment is expected to generate. This parameter is determined using the same concept as the NPV, but setting the NPV to zero. The project is viable if the IRR is higher than the interest rate, so that incomes exceed outlays. The higher the IRR, the more attractive the project is for investors.
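A minimal sketch of the three viability indicators just defined is given below. Only the formulas follow the text (Equation (10), with the NPV set to zero for the IRR); the demo cash-flow numbers are made up for illustration.

```python
# Minimal sketch of NPV, DPP and IRR for a stream of annual cash flows.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** i for i, cf in enumerate(cash_flows))

def dpp(rate: float, cash_flows: list[float]) -> int | None:
    """Discounted payback period: first year the cumulative discounted CF >= 0."""
    cumulative = 0.0
    for i, cf in enumerate(cash_flows):
        cumulative += cf / (1 + rate) ** i
        if cumulative >= 0:
            return i
    return None  # never paid back within the project lifetime

def irr(cash_flows: list[float], lo=-0.99, hi=1.0, tol=1e-6) -> float:
    """IRR by bisection: the rate at which the NPV is zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    # Illustrative only: 20 M EUR upfront, 1.8 M EUR/yr over a 30-year lifetime.
    flows = [-20_000_000] + [1_800_000] * 30
    print(f"NPV @6%: {npv(0.06, flows):,.0f} EUR")
    print(f"DPP: {dpp(0.06, flows)} years")
    print(f"IRR: {irr(flows):.1%}")
```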
Sustainability Assessment

Two sustainability parameters were determined: (i) the energy returned on energy invested (EROEI) and (ii) the net CO2 balance. The EROEI was determined considering the energy consumed and produced in each scenario, according to Equation (11):

EROEI = (Energy produced in the microalgal facility)/(Total energy required) (11)

When the EROEI index is higher than one, the biorefinery is energy self-sustained, which means that it does not need to buy energy from the network. For the net CO2 balance, the sources of consumption and production of CO2 were determined. On the one hand, microalgae take up CO2 for their growth. This requirement was calculated taking into account the carbon removal rate, the total cultivation area, and the CO2 capture efficiency, which was assumed to be 80% [71,72]. On the other hand, CO2 is released in the CHP unit.
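The two indicators above reduce to simple ratios and balances; the sketch below implements them, with the carbon-to-CO2 conversion (factor 44/12) added as a standard assumption and demo inputs chosen to resemble, but not reproduce, the base scenario.

```python
# Minimal sketch of the two sustainability indicators defined above.
# Demo values are placeholders, not results from the study.

def eroei(energy_produced_kwh: float, energy_required_kwh: float) -> float:
    """Equation (11): EROEI > 1 means the plant is energy self-sustained."""
    return energy_produced_kwh / energy_required_kwh

def net_co2_balance(carbon_removal_g_m2_d: float, area_m2: float,
                    capture_efficiency: float, co2_released_t_d: float) -> float:
    """Net CO2 balance (t d-1): negative means more CO2 is fixed than released."""
    co2_uptake_t_d = (carbon_removal_g_m2_d * area_m2 * capture_efficiency
                      * (44.0 / 12.0) / 1e6)   # C -> CO2 mass, grams -> tonnes
    return co2_released_t_d - co2_uptake_t_d

if __name__ == "__main__":
    # Base-scenario-like inputs: 6.8 g C m-2 d-1 over 100 ha, 80% capture.
    print(f"EROEI: {eroei(13_000, 24_000):.2f}")
    print(f"Net CO2: {net_co2_balance(6.8, 1_000_000, 0.80, 10.0):+.0f} t d-1")
```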
Results and Discussion

3.1. Paper Industry Effluent Remediation Using Microalgae: Experimental Work

3.1.1. Biomass Growth

Figure 3 shows the biomass concentration over time for C. vulgaris grown under different paper industry effluent concentrations. This species grew successfully in all culture medium compositions, which indicates that the evaluated paper industry effluent did not have an inhibitory effect on biomass growth. For all cultures, the microalgal lag phase was either nonexistent or very short (less than one day). However, the duration of this phase increased with the effluent percentage, being more noticeable in the cultures grown with 80% and 100% of effluent. These results seem to indicate that, although the paper industry effluent was not inhibitory for microalgal growth, a longer adaptation period was required for higher effluent loads, which may be due to [72–74]: (i) the higher color intensity of the concentrated effluent (which can limit microalgal access to light); and (ii) the presence of lignin, humic acids, furans, dioxins, aluminum, and manganese, which can slow down microalgal growth and increase the lag period. Regarding the exponential growth phase, all cultures reached the end of this phase before day 5 of the experiments. At the end of the cultivation period, almost all cultures were in the deceleration or stationary growth phase. In the assays with higher effluent percentages (60%, 80%, and 100%), higher biomass concentrations were achieved. Moreover, C. vulgaris growth behavior in these assays was similar to that observed in the positive control assay (C+). The higher biomass concentrations achieved in these three assays may be explained by the fact that these culture medium compositions have higher nitrogen and phosphorus concentrations: with more nutrients available, microalgae grow and reproduce more and faster. For the same reason, the assays with a reduced concentration of effluent from the paper company (20% and 40%) resulted in lower biomass concentrations at the end of the experiments. Table 2 shows the main growth parameters determined for the C. vulgaris cultures (µ: specific growth rate; Xmax: maximum biomass concentration; PX,max: maximum biomass productivity; PX,avg: average biomass productivity; DW: dry weight). According to these data, specific growth rate values ranged from 0.155 ± 0.005 to 0.33 ± 0.07 d−1, the lowest growth rate being observed for the 20% assay and the highest for the 80% effluent assay. The specific growth rate determined for the positive control (modified OECD medium) was 0.299 d−1. Regarding the maximum biomass concentration results, the assays with 60% and 100% of effluent presented the highest values of Xmax (495 ± 2 and 495 ± 25 mg DW L−1, respectively). On the other hand, the lowest value of this parameter was registered for the 20% effluent assay (337 ± 9 mg DW L−1). These results demonstrate once more that low concentrations of nutrients (mainly nitrogen and phosphorus) limit microalgal growth. For the positive control, the highest biomass concentration achieved was 617 ± 5 mg DW L−1, which shows that, although C. vulgaris grew well in the paper industry effluent, better results can be achieved when the biomass is grown in a synthetic growth medium, such as the modified OECD test medium. Concerning maximum biomass productivity values, the highest value was 83 ± 8 mg DW L−1 d−1, for the 80% effluent assay, as would be expected, given that the highest specific growth rate was also achieved in this assay. Still, this PX,max value was slightly lower than the one obtained for the positive control, 104 ± 3 mg DW L−1 d−1, due to the color and the presence of potentially inhibitory substances in the experiments dealing with real effluent compositions. The experiments with 60% and 100% effluent also showed high maximum biomass productivities (73 ± 7 and 74 ± 2 mg DW L−1 d−1, respectively). The 20% effluent test presented the lowest value for this parameter (31 ± 2 mg DW L−1 d−1), which is in line with the results presented so far. Considering the average biomass productivity, the obtained values ranged from 15.5 ± 0.5 to 26 ± 1 mg DW L−1 d−1. Contrary to what was observed for the specific growth rates and maximum biomass productivities, the assays with 60% and 100% of effluent registered the highest values of PX,avg: 26 ± 1 and 25 ± 2 mg DW L−1 d−1, respectively. Since the 80% assay registered the highest specific growth rate, it reached the stationary and death phases faster, leading to negative productivities at the end of the experiment and, consequently, to lower average biomass productivities compared to the 40%, 60%, and 100% assays. In the positive control, there was still a significant increase in biomass concentration at the end of the 14-day experiments, meaning that C. vulgaris did not reach the stationary growth phase during this period. Therefore, the average biomass productivity in this control (35.4 ± 0.2 mg DW L−1 d−1) was significantly higher than those obtained in the 60% and 100% assays.
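The parameters in Table 2 follow standard batch-culture definitions; a minimal sketch computing them from a biomass time series is shown below. The sample data are invented for illustration, and the exponential window used for µ is an assumption.

```python
import math

# Minimal sketch of the growth parameters in Table 2 from a biomass
# concentration time series (standard batch-culture definitions assumed).

def specific_growth_rate(x0: float, x1: float, t0: float, t1: float) -> float:
    """mu (d-1) over the exponential phase: ln(X1/X0) / (t1 - t0)."""
    return math.log(x1 / x0) / (t1 - t0)

def productivities(times: list[float], biomass: list[float]):
    """Maximum and average biomass productivity (mg DW L-1 d-1)."""
    stepwise = [(biomass[i + 1] - biomass[i]) / (times[i + 1] - times[i])
                for i in range(len(biomass) - 1)]
    p_max = max(stepwise)
    p_avg = (biomass[-1] - biomass[0]) / (times[-1] - times[0])
    return p_max, p_avg

if __name__ == "__main__":
    t = [0, 1, 2, 3, 5, 7, 10, 14]                        # d, illustrative
    x = [50, 60, 95, 150, 280, 380, 440, 460]             # mg DW L-1, illustrative
    mu = specific_growth_rate(x[1], x[4], t[1], t[4])     # exponential window
    p_max, p_avg = productivities(t, x)
    print(f"mu = {mu:.3f} d-1, Pmax = {p_max:.0f}, Pavg = {p_avg:.1f} mg DW L-1 d-1")
```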
Although some authors have studied the potential of microalgae for contaminant removal from paper industry effluents, only a few studies have evaluated microalgal growth behavior, some examples of which are presented in Table 3. When growing the microalga Nannochloropsis oculata in effluents from the pulp and paper industry for eicosapentaenoic acid production, Polishchuk et al. [74] determined a specific growth rate of 0.405 d−1. This value is higher than the values reported in the present study, which may be related to the higher nutrient concentrations (especially nitrogen and phosphorus) supplied in the reference study. Moreover, the microalga N. oculata may exhibit a higher tolerance to this effluent type, achieving higher growth rates. In the study performed by Tao et al. [72], the maximum biomass concentrations determined for C. vulgaris and Scenedesmus acuminatus grown in paper industry effluents were 291 and 822 mg DW L−1, respectively. The maximum biomass concentrations determined for C. vulgaris in the present study were higher than those reported by these authors, which confirms that the studied effluent did not have an acute inhibitory effect on C. vulgaris growth. In the case of S. acuminatus, the higher biomass concentrations may be associated with the higher ability of this microalga to grow in the paper industry effluent. More recently, Porto et al. [47] evaluated C. vulgaris growth in different concentrations of an effluent from a Portuguese paper company, determining the following growth parameters: (i) specific growth rates between 0.093 and 0.16 d−1; (ii) maximum biomass concentrations between 136 and 249 mg DW L−1; and (iii) average biomass productivities between 6.22 and 16 mg DW L−1 d−1. All parameters determined in that study were considerably lower than those obtained in the present study, which may be associated with a higher inhibitory effect of the effluent used in the reference study and with the cultivation conditions. For example, in the study performed by Porto et al. [47], the cultures were supplied with a PAR of 30–40 µmol m−2 s−1, whereas in this study a PAR of 202.9 µmol m−2 s−1 was used, indicating a possible light limitation in the reference study.

Table 3. Microalgal growth and nutrient uptake parameters determined in this study and in other studies reporting microalgal growth in paper industry effluents. (PO4-P: phosphate-phosphorus; µ: specific growth rate; Xmax: maximum biomass concentration; PX,avg: average biomass productivity; RE: removal efficiency; RR: removal rate; DW: dry weight; P: phosphorus.)

Nutrient Removal

The time-course evolution of the phosphorus concentration in the different tested conditions is shown in Figure 4. Analysis of this figure demonstrates that C. vulgaris successfully removed phosphorus from all tested culture medium compositions. At the end of the 14th day, the final phosphorus concentrations varied between 0.12 ± 0.01 and 0.5 ± 0.3 mg P L−1. Since these values are lower than the phosphorus discharge limit established by APA (0.8 mg P L−1, or 0.5 mg P L−1 in exceptional periods of the year), it is possible to confirm the potential of this species for the treatment of secondary-treated effluents resulting from the paper industry. The lowest phosphorus value registered at the end of the experiments was obtained for the 100% effluent assay, whereas the highest was obtained at 20%. For a more accurate quantification of the potential of C. vulgaris for phosphorus removal from a paper industry effluent, several removal parameters were determined: RE (%), RR (mg P L−1 d−1), MR (mg P L−1), and YX/P (g DW g−1 P). These results are presented in Table 4.
Regarding the removal efficiencies, the obtained values ranged between 71.6 ± 0.2 and 96.9 ± 0.1%, indicating substantial phosphorus removal, particularly in the experiments performed with the highest effluent loads: the highest removal efficiency was obtained for the 100% effluent assay, this value being very close to the one achieved in the positive control (96.5%). These results are in agreement with the results obtained for the growth parameters and biomass growth curves. Phosphorus removal rate values ranged between 0.6 ± 0.3 and 0.80 ± 0.04 mg P L−1 d−1. Again, the highest value was obtained for the 100% effluent assay, but the lowest was obtained for the 80% assay. The lowest value would be expected for the 20% assay; however, the range of values obtained for this parameter is relatively small, meaning that all values determined in the different culture conditions are very similar. Concerning the mass removal values per unit of volume, a higher removal of phosphate-phosphorus was observed in the experiments carried out with 100% effluent (3.60 ± 0.06 mg P L−1), this value being the same as the one registered for the positive control. These results are not surprising, given the higher phosphorus concentrations supplied in these experiments. The similar behavior between both conditions was also expected, because both nitrogen and phosphorus were supplied in similar amounts. As was also predictable, the lowest values of mass removal per unit of volume were determined in the 20% and 40% effluent assays: 1.4 ± 0.6 and 1.3 ± 0.1 mg P L−1, respectively. Regarding the specific biomass yields, the range of calculated values, 34 ± 7 to 70 ± 4 g DW g−1 P, is lower than those reported by Pereira et al. [77] (20–150 g DW g−1 P) and Silva et al. [78] (37.0–150.2 g DW g−1 P). The value obtained for the test with 100% paper industry effluent (64 ± 3 g DW g−1 P) was lower than the one from the 80% assay and practically the same as the one obtained in the 60% assay: 70 ± 4 and 65 ± 6 g DW g−1 P, respectively. These results indicate that, for the same amount of phosphorus, the cultures grown with 80% of effluent produce more biomass, and the cultures grown with 60% effluent produce practically the same biomass as those grown with 100% effluent.
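The four removal parameters above follow standard definitions based on the initial and final phosphorus concentrations and the biomass produced; a minimal sketch is given below with invented demo numbers.

```python
# Minimal sketch of the phosphorus removal parameters in Table 4, assuming
# standard definitions: RE, RR and MR from the initial/final P concentrations,
# and Y_X/P as biomass produced per unit of P removed. Demo values are
# illustrative, not data from the study.

def removal_parameters(p_initial: float, p_final: float, days: float,
                       biomass_produced_mg_l: float) -> dict:
    """RE (%), RR (mg P L-1 d-1), MR (mg P L-1) and Y_X/P (g DW g-1 P)."""
    mr = p_initial - p_final                  # mass removed per unit volume
    return {
        "RE (%)": 100.0 * mr / p_initial,
        "RR (mg P L-1 d-1)": mr / days,       # averaged over the full period
        "MR (mg P L-1)": mr,
        "Y_X/P (g DW g-1 P)": biomass_produced_mg_l / mr,
    }

if __name__ == "__main__":
    # e.g., 3.7 -> 0.12 mg P L-1 over 14 d, with ~230 mg DW L-1 of new biomass
    for name, value in removal_parameters(3.7, 0.12, 14, 230).items():
        print(f"{name}: {value:.2f}")
```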
Phosphorus removal from paper industry effluents has already been reported in the literature; Table 3 summarizes some of the reported values. According to these data, the phosphorus removal efficiencies and removal rates obtained in the present study were of the same order of magnitude as those reported in the studies performed by Gentili [76] and Tao et al. [72]. In the study performed by Gentili [76], cultivation of the microalgae Scenedesmus sp., Scenedesmus dimorphus, and Selenastrum minutum in mixtures of pulp and paper industry effluents with municipal and dairy ones resulted in phosphate-phosphorus removal efficiencies ranging from 90 to 98% and in average removal rates of 0.26–1.7 mg P L−1 d−1. More recently, in the study carried out by Tao et al. [72], phosphorus removal efficiencies higher than 97% were determined for the microalgae C. vulgaris and S. acuminatus. These results indicate the potential of C. vulgaris for phosphorus removal from paper industry effluents, provided that adequate culturing conditions (e.g., non-limiting and non-inhibitory light conditions, adequate N:P molar ratios, among others) are supplied. This was confirmed by the higher removal efficiencies and removal rates determined in this study when compared to those obtained in a previous study performed by Porto et al. [47], where the authors concluded that microalgal uptake efficiencies could have been improved by modulating the culturing conditions, such as light intensity, temperature, and pH.

3.2. Microalgal-Based Biorefinery for Paper Industry Effluent Remediation: Techno-Economic and Sustainability Assessment

3.2.1. Mass Balance

Table 5 shows all the input and output streams determined in the mass balance step. HRP output streams vary from 1.6 to 4.7 × 104 m3 d−1, the lowest value being registered in Scenario 2 and the highest in Scenario 3. As biomass productivity is higher in Scenario 3 and lower in Scenario 2, a higher input flow is required in Scenario 3 to satisfy the high nutrient requirements, whereas a lower input flow is required in Scenario 2. In all the other scenarios, where a 2% PE was assumed, a S02 flow rate of 3. Table 6. The theoretical nitrogen, phosphorus, and carbon removal rates in the base scenario are 0.82, 0.075, and 6.8 g m−2 d−1, respectively. Carbon removal rates are higher because carbon is the major constituent of microalgae, representing 43% of biomass dry weight, according to the C. vulgaris elemental composition. Considering the streams resulting from the harvesting steps, S03 and S04, the values determined for the base scenario were 7.8 × 103 and 75 m3 d−1, respectively. The decrease in flow rate at the end of the dewatering step results from an increase in the biomass concentration from 2 to 200 g L−1. Knowing the volume of water collected in these two harvesting stages and the volume required to satisfy the nutrient input concentrations in each scenario, it was possible to calculate the value of the discharged stream (SED): 2.2 × 104 m3 d−1 in Scenario 1. The lipid extraction stream (S05) in Scenarios 1, 4, and 5 ranged between 2.6 and 3.9 m3 d−1.
The lowest value in this range was registered in Scenario 4, which assumed a PEF efficiency of 60%, and the highest in Scenario 5, where a 90% extraction efficiency was assumed. However, considering all the studied scenarios, the highest value was obtained in the third scenario (4.8 m3 d−1), due to the higher PE defined in this scenario and, hence, the higher biomass productivity and greater accumulation of the target product observed under these conditions. For the same reason, the lowest value (1.6 m3 d−1) was determined in the second scenario, the one presenting the lowest PE. It is also possible to observe that lower PEF efficiencies, which result in lower lipid extraction, lead to higher biomass flows after the lipid extraction step (S06). For the anaerobic digestion step, three different process efficiencies were evaluated: 45, 30, and 60%, in Scenarios 1, 6, and 7, respectively. The mass balance results showed that the biogas flow rate was higher in Scenario 7 (6.5 t d−1) and lower in Scenario 6 (3.3 t d−1). Regarding the biofertilizer stream (S08), the values obtained in Scenarios 1, 6, and 7 were, respectively, 66, 68, and 64 t d−1. Scenarios with a more efficient anaerobic digestion have a higher conversion of organic matter into gases and, therefore, fewer residues resulting from this process.

Energy Balance

This step of the TEA allowed the evaluation of the electrical requirements and profits of each of the studied scenarios, the results of which are shown in Table 7. The energy requirements for the cultivation, harvesting, and lipid extraction steps were the same for Scenarios 1, 4, 5, 6, and 7, being 5.8 × 103, 11 × 103, and 7.1 × 103 kWh d−1, respectively. Although the PEF unit efficiencies differed between Scenarios 1, 4, and 5, no variation was observed in the energy consumption of the extraction step, because this value depends only on the unit input flow (S04) and on the specific energy consumption of the equipment, which was considered the same in all studied scenarios. Regarding the energy obtained from the lipids extracted in these three scenarios, positive and negative variations of 20% were observed between the results of Scenarios 1 and 5 and of Scenarios 1 and 4, respectively. Concerning the electrical and thermal energy produced in the CHP unit in Scenarios 1, 6, and 7, there was greater energy production (electrical and thermal) in the scenario with the higher anaerobic digestion efficiency (1.3 × 104 and 1.5 × 104 kWh d−1) and less energy produced in the scenario with the lower efficiency (0.65 × 104 and 0.73 × 104 kWh d−1). Among the different microalgal production steps, harvesting, which includes the energy consumed during centrifugation, has the greatest weight in the energy consumed by the biorefinery, representing 46, 39, and 49% of all the energy consumed in Scenarios 1, 2, and 3, respectively. On the other hand, cultivation has the lowest impact on the overall energy consumption of the proposed biorefinery, except in Scenario 2. Table 8 shows all the costs associated with equipment purchase and physical plant construction (capital investment). The total equipment cost ranged between 7.2 and 8.4 million euros, and the capital investment between 18.0 and 21.2 million euros, the lowest values being determined in Scenario 2 and the highest in Scenario 3, due to the higher PE assumed in this scenario.
When compared with the base scenario (Scenario 1), there was a relative increase of 20% in the total capital investment of Scenario 3, and a relative decrease of 7% in the value obtained for Scenario 2. Table 9 presents the results obtained for the expenses associated with microalgal production in the proposed biorefinery, showing both variable and fixed costs. Of all the costs associated with the production process, utilities have the highest impact on the total expenses. This is in line with what was expected, since the utilities in this study account for the electrical energy required for the proper functioning of the biorefinery. On the other hand, laboratory and supervision costs represent the smallest fraction of the production costs in all seven scenarios. The obtained annual production costs vary from 2.6 to 4.1 million euros, being 3.4 million euros in the base scenario.

Table 9. Annual production costs (in k€), variable and fixed, determined for each scenario.

Scenario            1     2     3     4     5     6     7
Maintenance       971   900  1060   971   971   971   971
Operating labor    32    32    32    32    32    32    32
Laboratory costs    6     6     6     6     6     6     6
Supervision         6     6     6     6     6     6     6
Plant overheads    16    16    16    16    16    16    16
Insurance         194   180   212   194   194   194   194
Local taxes       389   360   424   389   389   389   389
Royalties         194   180   212   194   194   194   194
Total (k€)       3372  2615  4163  3372  3372  3372  3372

Regarding the biorefinery revenues, the main aim of this facility is to treat the industrial effluent and produce lipids and energy. However, other profits were considered, as mentioned in Section 2.2.6. Figure 5 shows the results obtained for each scenario. The industrial effluent treatment, the sale of the accumulated lipids, and the sale of biofertilizers represent the products/services that generate the most profit for the biorefinery in almost all scenarios. Scenario 2 is an exception, with sludge treatment representing the largest revenue. This is because the same digester volume was assumed for all scenarios: in Scenario 2, the anaerobic reactor has more free space to receive sludge from other sources, which results in a higher profit from sludge treatment. CO2 capture represents the profit with the least impact on the annual revenues of the facility in all scenarios. Although Scenario 3 has the highest total annual revenues, and Scenario 2 the lowest, only small variations in the revenues were observed between Scenarios 1, 4, 5, 6, and 7, resulting from the different AD or PEF unit efficiencies assumed. Regarding the viability analysis, the values assumed for the different financial variables are summarized in Table 10, and the results of this analysis are presented in Table 11. Of the seven studied scenarios, six are economically viable. Only Scenario 2 presents a negative NPV (−7.4 million euros). On the other hand, Scenario 3 shows the highest NPV, with a relative increase of 203% compared to the base scenario. Regarding the DPP results, Scenario 3 is the one in which the project is paid back in the shortest period (13 years), whereas in Scenario 2 the biorefinery is not even paid back over the project lifetime (>30 years). Of the other five scenarios, Scenario 4 presented the worst estimate for the time needed to pay back the initial investment (27 years). These results were expected, as it was also in this scenario that the lowest annual revenues were recorded. On the other hand, Scenario 5 showed the best result, being paid back three years earlier than the project defined in the base scenario.
Nevertheless, although the NPV was positive in six scenarios, the time necessary to pay back the project can be considered too long. Even in the best scenario, it would take 13 years for the project to start making a profit, which, from an investor's point of view, can make the project less appealing and riskier. With regard to the IRR, the project is feasible if the IRR is higher than the defined cost of capital, meaning that the annual revenues are enough to pay the investment capital and the required return. Taking this into consideration, only Scenario 2 showed an IRR lower than the cost of capital (3%). Scenario 3 presented the highest IRR, twice the cost of capital (12%).

Economic Assessment

To study the influence that each of the three variables evaluated in the different scenarios has on the NPV, a sensitivity analysis was performed, and the results are shown in Figure 6. PE is the variable with the greatest impact on the project NPV, which ranges between −7.4 and 19.5 million euros for PEs of 1% and 3%, respectively.
Concerning the other two variables, the lipid extraction efficiency has a slightly greater influence on the economic viability of the project than the anaerobic digestion efficiency. Indeed, a greater oscillation in the NPV was observed in the scenarios assuming a PEF unit efficiency of 60, 75, or 90%: 4.7–8.2 million euros. On the other hand, for anaerobic digestion efficiencies between 30 and 60%, the NPV varies from 5.3 to 7.5 million euros. Improved economic indicators were obtained in a similar study performed by Gonçalves et al. [25]. In that study, aimed at evaluating the economic viability of bioenergy production from microalgae using municipal wastewaters as culture medium, the authors obtained the following results: (i) a NPV ranging between −12.1 and 22.6 million euros; (ii) a PP of 4 to 8 years; and (iii) an IRR ranging between 13 and 26%. The authors also concluded that the scenario assuming the lowest PE was not economically viable. The higher performance of that microalgal facility compared to the one proposed in the present study may result from different factors: (i) the effluent to be treated in the present study required an external nitrogen source, which increases the operational costs; (ii) although the same equipment was assumed in the acquisition costs, it was defined here that the dimensioned equipment corresponded to only 85% of the total acquisition costs instead of the 90% assumed in the reference study; and (iii) the values assumed for equipment acquisition costs and financial parameters were adjusted to the current reality of the country. Other studies from the literature have also demonstrated the economic viability of microalgal production for different applications. Thomassen et al.
[79] performed an environmental TEA for several value chains using microalgal biorefineries as a model. In this study, the authors concluded that the optimal value chains included as main processes open pond cultivation and medium recycling, as in the microalgal biorefinery proposed in the present study, and spray drying (a step that was avoided in this study because the PEF technology used for lipid extraction can be applied to wet biomass). Comparing different value chains, the authors concluded that the most economic was the one using Nannochloropsis sp. for the production of fish larvae feed, presenting a NPV of 180 million euros. The authors also evaluated two value chains intended for bioenergy production through anaerobic digestion, obtaining NPVs of 17.1 and 154 million euros for Dunaliella salina and Haematococcus pluvialis, respectively. Similarly, Ahmad Ansari et al. [80] evaluated the economic feasibility of using the microalga Scenedesmus obliquus in aquaculture. The authors evaluated two scenarios: in scenario 1, the microalgal biomass was directly used for fish production, and in scenario 2, the microalgal biomass was first used for lipid extraction and biodiesel production, and the residual biomass was applied in fishmeal diets. With this study, the authors concluded that the use of microalgal biomass for the dual purpose of biodiesel production and fishmeal formulation was more profitable than the single use of microalgal biomass for fish production: (i) the net profit determined in scenarios 1 and 2 was 426 and 531 thousand euros per year, respectively; and (ii) the PP determined for scenario 1 was 7.5 years, whereas the PP determined for scenario 2 was 6.8 years. These results confirm the advantages of exploring several products/applications of microalgal biomass within the biorefinery approach, as proposed in the present study.

Sustainability Assessment

The results for the two analyzed sustainability parameters are shown in Table 12. The analysis of the net CO2 balance demonstrated that this parameter was negative in all studied scenarios, meaning that more CO2 is consumed by the microalgae than is released in the biorefinery processes. The scenario with the highest PE, Scenario 3, has the highest biomass productivity and, therefore, requires more CO2, resulting in a higher net CO2 balance in absolute terms. Concerning the EROEI results, they can be analyzed in two different ways: (i) considering only the energy produced in the biorefinery; and (ii) considering the energy produced in the industrial plant as well as the energy that can be obtained from the main exploited product, the lipids. In the first case, the value of this parameter is lower than 1 in all scenarios except Scenario 7, due to the higher anaerobic digestion efficiency assumed (60%). Therefore, the proposed project would be energetically efficient, promoting a self-sustained biorefinery, in only one of the studied scenarios. In the other scenarios, it would be necessary to buy energy from the network to fulfill the energy needs of the plant. In the worst scenario, Scenario 6, where a low AD efficiency was assumed (30%), approximately 10,000 kWh per day would be required from the energy network. Regarding the second case, the EROEI was higher than 1 in all studied scenarios.
These results demonstrate that this biorefinery can produce clean energy in greater quantities than it consumes, even in Scenario 2, which presented the worst results in all the other assessments. Scenario 7, with an EROEI of 2.5, showed the best capacity for energy production. The reason why the results of this second case are less accurate is that the energy obtainable from the extracted lipids is accounted for, but the energy consumed outside the biorefinery boundaries, in the transesterification of the lipids into biodiesel, is not. In addition to these parameters, from a sustainability point of view it is also important to highlight other benefits of this project that may be intangible and/or unmeasurable but add value to it: (i) the reduction of GHG emissions and mitigation of the effects of climate change; (ii) the contribution to a circular economy, with the production of biofertilizers from sludge (which returns nutrients to the soil) and the use of wastewater as a culture medium; (iii) soil regeneration; (iv) the contribution to mitigating environmental issues, such as eutrophication; (v) the creation of new jobs with reasonable salary conditions; and (vi) the promotion of seven of the United Nations Sustainable Development Goals (SDGs). This project can also benefit from the current and future policies of the paper industry company: there is a commitment that all units will be carbon neutral by 2035, for which an investment of more than 100 M€ has been estimated. Moreover, the implementation of this biological process will reduce the freshwater requirements of the processes by enabling wastewater reuse after treatment. The production of fertilizers with a lower environmental impact from wastewater can also benefit from national and European nutrient-recycling policies.

Conclusions

This work was divided into two parts: (i) a laboratory step to evaluate the growth behavior of the microalga C. vulgaris in a paper industry effluent and its ability to remove phosphorus from this culture medium; and (ii) a techno-economic analysis to design a microalgal-based remediation and bioenergy production plant. The study demonstrated that the paper industry effluent did not have an inhibitory effect on C. vulgaris growth and showed the feasibility of using this microalga for phosphorus removal from paper industry effluents. The final phosphorus concentrations determined in each assay ranged between 0.12 ± 0.01 and 0.5 ± 0.3 mg P L−1, below the legal limits imposed by APA for this industry. Concerning the TEA results, this paper presents an economically viable microalgal-based biorefinery for industrial effluent treatment and bioenergy production, with a NPV of 15.4 million euros and a 12% IRR in the best studied scenario. In this scenario, a 3% PE, a PEF extraction efficiency of 75%, and an anaerobic digestion efficiency of 45% were considered. When analyzing the DPP values, the best scenario presents a DPP of 13 years, which can make the project less appealing for possible stakeholders. Nevertheless, this project presents several benefits, especially at the sustainability level: (i) a reduction in GHG emissions; (ii) the treatment of an effluent that is commonly associated with eutrophication (due to its phosphorus content); (iii) the possibility of nutrient recycling and soil regeneration; (iv) the production of carbon-neutral biofuels; and (v) the development of a circular economy.
Supplementary Materials: The following are available online at https://www.mdpi.com/2071-1050/13/3/1314/s1, Table S1: Fractions of the total equipment purchase cost assumed to determine the direct and indirect costs of the investment capital, and values assumed for the determination of variable and fixed production costs.

Conflicts of Interest: The authors declare no conflict of interest.

Nomenclature
Average carbon uptake rate (mg CO2 L−1 d−1)
RRN — average nitrogen uptake rate (mg N L−1 d−1)
RRP — average phosphorus uptake rate (mg P L−1 d−1)
Si — phosphorus concentration at the initial instant of the cultivation time (mg P L−1)
Sf — phosphorus concentration at the final instant of the cultivation time (mg P L−1)
SP<0.5 — phosphorus concentration lower than the limit for phosphorus discharge defined by APA for the paper industry company (mg P L−1)
t — time (d)
t0 — initial instant of the exponential growth phase (d)
t1 — final instant of the exponential growth phase (d)
ti — initial instant of the cultivation period (d)
tf — final instant of the cultivation period (d)
On Demand Light-Degradable Polymers Based on 9,10-Dialkoxyanthracenes

Dr. F. Becker, M. Klaiber, Prof. M. Franzreb, Prof. J. Lahann (Institute of Functional Interfaces, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, Eggenstein-Leopoldshafen 76344, Germany); Prof. S. Bräse (Institute of Organic Chemistry, Karlsruhe Institute of Technology, Fritz-Haber-Weg 6, Karlsruhe 76131, Germany; Institute of Biological and Chemical Systems – IBCS-FMS, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, Eggenstein-Leopoldshafen 76344, Germany); Prof. J. Lahann (Biointerfaces Institute and Departments of Biomedical Engineering and Chemical Engineering, University of Michigan, 2800 Plymouth Road, Ann Arbor, MI 48109, USA). DOI: 10.1002/marc.202000314

Light-induced degradation of polymers has drawn increasing interest due to the need for externally controllable modulation of material properties. However, the portfolio of polymers that undergo precisely controllable degradation is limited and typically requires UV light. Here, a novel class of backbone-degradable polymers is reported that undergoes aerobic degradation in the presence of visible light, yet remains stable against broad-spectrum light under anaerobic conditions. In this design, the polymer backbone comprises 9,10-dialkoxyanthracene units that are selectively cleaved by singlet oxygen in the presence of green light, as confirmed by NMR and UV/vis spectroscopy. The resulting polymers have been processed by electrohydrodynamic (EHD) co-jetting into bicompartmental microfibers, in which one hemisphere is selectively degraded on demand.

Backbone-degradable polymeric materials have attracted significant attention from polymer researchers due to their prospects in drug delivery, [1] biomaterials, [2] nanocontainers, [3] and microreactors. [4] Backbone degradation can be achieved by different routes, for example, pH-controlled hydrolysis, [4,5] enzymatic degradation, [6] oxidation, [7] or light-induced reactions. [8] In many applications, light is a particularly compelling stimulus for degradation, because it acts as an external trigger that can be controlled with exceptional temporal and spatial resolution. Typically explored functional groups for light-induced degradation include ortho-nitrobenzyl esters, [9,10] truxillic acids (TRA), [11] and coumarins (CO), [12] among others. Unfortunately, polymers with these functional groups have their λmax in the UV range (oNB ≈350 nm; TRA < 260 nm; CO ≈250 nm). [8] Extended exposure to UV light has been associated with phototoxicity, [13] and, among other factors, photocytotoxicity is one reason why the broader applicability of UV-degradable polymers has been limited. [14] In contrast, functional groups that can be cleaved by visible light are far less common [14,15] and, so far, have not been broadly explored in polymer chemistry, [14–16] not least because light is omnipresent in everyday life. Fundamentally, light-mediated degradation can be separated into light absorption and bond dissociation. Here, we report a new class of visible light-degradable polymers based on 9,10-dialkoxyanthracenes (DA, compound 1 in Scheme 1) that undergo aerobic degradation but remain stable under anaerobic conditions, even in the presence of light. In the past, DAs have been used as light-cleavable linkers for small molecules, [17] macromolecules, [16] and block copolymers, [14] in side-chain modifications of polymers, [18,19] and as sensors for singlet oxygen (1O2). [20,21] Decoupling the light absorption process from bond scission fundamentally enhances the level of control exhibited during polymer degradation. For the cleavage of DAs, a photosensitizer, green light, and oxygen are required. [17] Eosin Y (5), used as the photosensitizer, is excited by green light (λmax = 519 nm [22]) and transfers this energy to oxygen, generating singlet oxygen (Scheme 1). The singlet oxygen then undergoes a [2 + 4] cycloaddition with the DA, forming an endo-peroxide (EPO), which is cleaved by catalytic amounts of protons. [23–25] Using this approach, the bond cleavage events are still orchestrated by the incoming light pulse, which is both temporally and spatially controllable. Backbone cleavage occurred only when a photosensitizer and oxygen were concomitantly present. Since DAs as such do not bear functional groups that would allow polymerization, we decided to use AA-type and BB-type monomers for copper(I)-catalyzed azide–alkyne cycloaddition (CuAAC) polymerization (Scheme 2).
This polymerization strategy allows tuning of the physico-chemical properties and solubilities of both the final polymer and the degradation products by modification of the diazido monomer and the dialkyne-DA. For example, instead of diazidotriethylene glycol (10), an aromatic or aliphatic diazido monomer can easily be employed. There is a wide variety of natural products with an anthraquinone core, including drugs (with anti-tumor, anti-inflammatory, anti-arthritic, anti-fungal, antibacterial, anti-malarial, antioxidant, and diuretic activities) and dyes; [26] hence, by employing a substituted DA, the degradation product could be a substituted AQ with designed secondary functions. Starting from triethylene glycol (9), the diazido monomer 10 was synthesized in two steps with an overall yield of 86%. The dialkyne-DA 8 was synthesized starting from 9,10-anthraquinone (4). Compound 4 was reduced to the corresponding dihydroxyanthracene, followed by in situ alkylation with tert-butyl bromoacetate, yielding the DA 6 in 83% yield. Subsequent ester reduction to the diol 7 with LiAlH4 and alkylation with propargyl bromide yielded 8. In this sequence, only 8 was purified by column chromatography, whereas 6 and 7 only required washing with pentanes, making 8 accessible in three easy steps with a good overall yield of 40%. After monomer synthesis, 8 and 10 were subjected to CuAAC polyaddition (Scheme 3). Employing copper sulfate and ascorbic acid as the catalyst system in DMF, poly[(1,2-bis(2-azidoethoxy)ethane)-alt-(9,10-bis(2-(prop-2-yn-1-yloxy)ethoxy)anthracene)] (PAPA) was obtained after 2.5 d at room temperature (Mn,NMR = 14.8 kg mol−1, Mn,GPC = 12.7 kg mol−1, ĐM,GPC = 1.74; [27] see the ESI for full characterization). Prolonged reaction times led to solidification of the reaction mixture; the resulting polymer was only partly soluble, hindering GPC measurements for molecular weight determination. [28] With PAPA in hand, we next examined its on-demand degradation. A solution of PAPA and Eosin Y in DMSO-d6 was illuminated with a 1 W green-light LED in the presence of air. [29] Samples were withdrawn at different time points to monitor the reaction via 1H-NMR. [30]
To quantify the degradation under aerobic conditions, the integrals of the signals of the anthracene core of PAPA (δ = 7.47 and 8.30 ppm), of the endo-peroxide (δ = 7.30 and 7.56 ppm), and of 4 (δ = 7.95 and 8.23 ppm) were determined. As expected, the signals of PAPA disappeared rapidly when the solution was illuminated in the presence of air (Figure 1). After 70 min, the signals of the EPO were still growing before they ultimately disappeared. Concomitantly, the signals of 4 began to rise sharply after 35 min, indicating successful backbone cleavage. Full backbone cleavage was achieved within 3.5 h. In addition to the 1H-NMR study, we also examined the reaction by UV/Vis analysis. UV/Vis traces were recorded at different time points during the cleavage experiment (Figure 2). The vanishing of the anthracene signals around 400 nm was used as an indicator for the backbone degradation of PAPA. [31] In order to exclude other cleavage mechanisms, control experiments were carried out. Under anaerobic conditions or in the absence of Eosin Y, no appreciable backbone cleavage was observed (Figure 3); without oxygen, only 1% cleavage was found after 24 h, most likely due to diffusion of oxygen through the septum into the reaction mixture. It is noteworthy that with the use of Eosin Y disodium salt instead of Eosin Y, a delayed cleavage was observed. In the presence of 12 wt% of Eosin Y disodium salt, polymer cleavage started 1.5 h after illumination and was completed within 2 h (see ESI Figure 1 for details).
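Because each species is tracked through a pair of aromatic signals on the same anthracene-derived core, converting raw integrals into species fractions is straightforward. A minimal sketch follows; the chemical shifts are the ones quoted above, while the integral values and the equal-proton assumption are illustrative placeholders, not data from the paper:

```python
def species_fractions(integrals):
    """Convert 1H-NMR integrals into mole fractions of PAPA, the
    endo-peroxide (EPO), and anthraquinone (4), assuming each species
    is quantified by signals of the same number of aromatic protons."""
    total = sum(integrals.values())
    return {name: round(val / total, 3) for name, val in integrals.items()}

# Diagnostic shifts quoted above: PAPA 7.47/8.30, EPO 7.30/7.56, AQ 7.95/8.23 ppm.
# The integral values below are illustrative placeholders only.
print(species_fractions({"PAPA": 0.15, "EPO": 0.55, "AQ": 0.30}))
# {'PAPA': 0.15, 'EPO': 0.55, 'AQ': 0.3}
```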
Next, we intended to further elucidate the degradation process in the context of a more realistic biomaterial. To directly compare the degradation of PAPA relative to a hydrolytically degradable polyester, such as poly(lactide-co-glycolide) (PLGA), we created bicompartmental microfibers, [32] where one hemisphere contained PAPA as a major component and the second hemisphere was made entirely of PLGA. We prepared these anisotropic, bicompartmental microfibers by electrohydrodynamic (EHD) co-jetting. As this technique has been used to prepare multicompartmental microparticles, [33][34][35][36][37][38] fibers, [32,35,39] and complex 3D scaffolds, [40] successful processability would provide insights into the potential technological utility of PAPA. For EHD co-jetting, polymer solutions were pumped through side-by-side [41] configured needles under laminar flow. Applying a high voltage between the needle and the collector generated a charge in the polymer solution, which accelerated the solution from the Taylor cone formed at the tip of the nozzle towards the collector. Hereby, the polymer solutions are stretched into a fine thread, leading to an increased surface area and therefore instantaneous drying. Generally, the polymer thread can be collected as a continuous fiber, or it can break up into particles, depending on the jetting conditions, which include, for example, the flow rate, voltage, and concentration of the polymers. [34] In order to obtain bicompartmental fibers, a ratio of 1:1 of degradable to non-degradable jetting solution was chosen. As the polymer for the non-degradable compartment, PLGA (50−75 kDa) was chosen, as it is known for its good jetting properties. To realize the desired fiber geometry, a side-by-side setup of two needles and a rotating counter electrode was used for EHD co-jetting (Figure 4A,B). By adding the dye poly[tris(2,5-bis(hexyloxy)-1,4-phenylenevinylene)-alt-(1,3-phenylenevinylene)] (PTDPV) to the PLGA compartment and using the inherent fluorescence of the DAs in PAPA, the fibers were imaged by confocal microscopy (Figure 4D,E). The bicompartmental microfibers, with a diameter of 30 µm, show two separate compartments: the PLGA compartment fluorescing green from PTDPV and the PAPA compartment fluorescing blue, indicating the presence of anthracene groups. Electrohydrodynamic co-jetting of PAPA and PLGA solutions using equal flow rates in a side-by-side setup resulted in microfibers in which PAPA was restricted to one hemispherical compartment (Figure 4D). Next, the PAPA-PLGA microfibers were immersed in an aqueous Eosin Y solution (0.06 M) under illumination with green light and continuous bubbling with air to ensure oxygen saturation (Figure 4C). Not surprisingly, this heterogeneous degradation takes longer than the degradation in solution, which leads to a deformation of the PLGA compartment. To confirm the selective degradation of one hemispherical compartment, the microfibers were again imaged by CLSM. Figure 4E unambiguously confirms selective degradation of the PAPA polymer under aerobic conditions, whereas the PLGA compartment remained on the glass slide. This communication establishes a new type of aerobically degradable, photoresponsive polymer whose backbone is rapidly degraded by visible light. The polymer is conveniently synthesized by CuAAC polyaddition from relatively inexpensive starting materials. Furthermore, bicompartmental microfibers, in which one hemisphere can be selectively degraded on demand, have been demonstrated. This work thus opens new perspectives for the use of light-degradable polymers as advanced materials, addressing some of the important drawbacks of current polymer systems, [8] such as the need for UV light or the lack of precise temporal control. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
2,576
2020-07-01T00:00:00.000
[ "Materials Science" ]
Internal versus external assessment in vocational qualifications: A commentary on the government’s reforms in England The distinction between external assessment and internal assessment underpins a major reform to vocational qualifications underway in England. To be approved by the Department for Education, vocational qualifications must now include a minimum proportion of external assessment, regardless of subject. This paper discusses the nature and implications of this constraint on qualification design. First, it clarifies the meaning of external assessment and the key arguments underpinning the reform. Second, it evaluates the use and implementation of this blanket rule. The final section discusses the nature of internal assessment in more detail, highlighting its heterogeneity and potential advantages over external assessment. Background The education system in England is currently undergoing major reform by the Department for Education (DfE), affecting both vocational and academic qualifications. This paper provides a commentary on the DfE's decision regarding external assessment in vocational qualifications taken by 14-to 19-year-olds, with a focus on school-based Key Stage 4 provision (typical age: 14 to 16). Before vocational qualifications can be approved for inclusion in school performance tables, they must now include a minimum amount of external assessment, regardless of subject area (meaning that this is a blanket rule). We discuss this reform from two angles. First, we discuss the use and implications of the blanket rule of external assessment for vocational qualifications and the use of blanket rules more generally in qualification design. Second, we evaluate the affordances of internal assessment as an assessment method, since the implication of the reform is that it can never be sufficient unless coupled with external assessment. External versus internal assessment The distinction between external and internal assessment underpins a key reform to vocational qualifications. The DfE (2015a) has defined external assessment as: a form of assessment in which question papers, assignments and tasks are specified by the awarding organisation, then taken under specified conditions … and marking or assessment judgements are made by the awarding organisation. (DfE, 2015a: 18) It is clear that this definition concerns the processes of setting, taking and marking assessments but does not specify the type of task that a student should take. This distinction is important to remember because external assessments are often associated with written, time-bound examinations but do not have to be. Internal assessment is, hence, any form of assessment in which any of the three above assessment processes are controlled by the institution where the student is studying (for example, school or workplace). One well-known example is teacher-based assessment. As part of the reforms, the DfE has categorized vocational qualifications taken by 14-to 19-year-olds into four types (DfE, 2015a;DfE, 2015b), and specified that they must now contain a minimum amount of external assessment in order to be approved for use in school performance tables. For 14-to 16-year-olds, there is only one type: Technical Awards. These are broad, applied qualifications that do not focus on a specific occupation. For 16-to 19-year-olds, there are three types: Applied General, Tech Levels and Technical Certificates. Applied General qualifications enable students to continue their general education through applied learning. 
Tech Levels and Technical Certificates are technical qualifications that equip students with specialist knowledge of a specific industry or occupation. As Table 1 shows, for all Technical Awards, regardless of subject, external assessment must contribute to at least 25 per cent of the overall grade for 2017 performance tables, and this amount needs to rise to 40 per cent for 2018. In contrast, Tech Levels only need 30 per cent external assessment. Maintained schools in England may still offer other vocational qualifications (that is, not meeting the criteria for performance tables), especially for students with particular education needs. However, the critical importance of performance tables for school accountability in England means that schools are likely to choose qualifications approved for performance table recognition in all but exceptional cases, as the reform intends. One of the prominent reasons for the government's decision to use more external assessment in vocational qualifications is concern with assessment quality. The reform followed the Wolf (2011) review, which argued that external assessment is able to 'safeguard against downward pressure on standards' (112). The DfE (2015a) subsequently emphasized a connection between this type of assessment, rigour and the esteem of academic qualifications. More recently, external assessment was again highlighted as an important feature of the post-16 skills plan that followed the Sainsbury report on technical education; this time it was considered essential for 'comparability and reliability ' (BIS and DfE, 2016: 52). These reports do not give an explicit definition of assessment 'quality' but draw attention to multiple quality-related constructs. We similarly take a multifaceted view of assessment quality, but try to be precise in terms of the facet of quality to which we refer in each instance. For example, 'quality' can mean that qualification results are valid, fit for purpose, reliable, lead to progression within the labour market or further education, and/or increase earnings. There are various reasons why external assessment may increase facets of quality. For example, if the awarding body has control over the assessment processes, it could reduce threats to validity such as malpractice, and may reduce variability in standards between institutions (Wolf, 2011). While external assessment could reduce some threats to quality, in certain situations the overall effect of using external assessment may not necessarily be positive. For example, the level of control that is needed may limit the types of knowledge and skills that can be assessed (this is discussed further below). More generally, because many factors affect the appropriateness of an assessment, it does not seem straightforward to justify having blanket rules that specify a minimum amount of external assessment that does not take into account the subject area, unless, of course, the advantages override the disadvantages overall. Unfortunately, it is not clear whether this is the case for vocational qualifications, since much of the debate about external/internal assessment has focused on academic qualifications (for example, S. Johnson, 2013;QCA, 2006) or on the distinction between summative and formative assessments (for example, SQA, 2007). 
It is complicated, and may even be inappropriate, to generalize those research findings because vocational qualifications differ from other qualifications in many ways, including the purpose of the qualification and the type of candidates. A blanket rule of external assessment From a policy perspective, this blanket rule is surprising because it contrasts with the government's and regulator's positions on other assessment practices. For example, in 2006, the QCA (Ofqual's predecessor) advised against the use of a blanket rule of 66 per cent internal assessment for GCSEs in vocational subjects (QCA, 2006). In the recent reforms to GCSEs, Ofqual (2017) has allowed some degree of flexibility between the assessment practices of different subjects. Ofqual required exams to be the default method of assessment for all GCSEs, but still considered the use of non-exam assessment on a subject-by-subject basis. The importance of that flexibility is highlighted by the variability in exam assessment among the reformed GCSEs, as shown in Figure 1. The proportion of exam assessment in reformed GCSEs ranges from 0 per cent to 100 per cent. Although the majority of GCSEs now have 100 per cent exam assessment (13 out of 22 subjects, including all English, maths and science GCSEs), there are multiple GCSEs that include non-exam assessment. Even more noteworthy is that one GCSE (art and design) is not required to have any exam assessment at all, and, in fact, is not even required to include external assessment of any kind. This seems to conflict with the requirement of a minimum proportion of external assessment for vocational qualifications. Contrasting GCSEs with vocational qualifications in terms of assessment practices draws attention to differences in the corresponding governmental reforms, especially the use of a blanket (subject-general) rule for the vocational qualifications compared to a subject-driven consideration process for GCSEs. However, it is debatable whether we can, or should, generalize the assessment practices used for academic qualifications to vocational qualifications (Acquah and Malpass, 2017). Another way to evaluate the proportion of external assessment appropriate for vocational qualifications is to ask stakeholders. In 2013, the DfE organized a consultation to assess stakeholders' views of the reforms it was then proposing (DfE, 2013). Respondents (organizations, teachers and employers) were asked their views on the minimum proportion of external assessment in Applied General qualifications. Figure 2 shows a lack of consensus among the respondents. The proportions ranged from 0 per cent to 100 per cent, with 10 out of 64 respondents stating that the level should vary. Almost two-thirds of respondents stated that the proportion of external assessment should be 33 per cent or less. The DfE decided on 40 per cent for Applied General qualifications, but it is unclear why this percentage was chosen from the distribution of responses. Furthermore, the consultation did not ask stakeholders about other types of vocational qualifications, and it is unknown how decisions were made for those qualifications. A third way to evaluate the use of external assessment is to consider practice in existing qualifications, which may provide some insight into the level of demand for external assessment among stakeholders. We investigated Level 1 and 2 vocational qualifications because they target the same age group as GCSEs and would fall into the DfE's Technical Award category. 
We primarily focused on Cambridge Nationals, offered by OCR (Oxford, Cambridge and RSA). We also investigated comparable qualifications offered by two other awarding bodies in similar subjects: Pearson BTEC Firsts and NCFE V Certs. Figure 3 shows that there is limited use of external assessment in these vocational qualifications. Although the proportion of external assessment ranges from 0 per cent to 50 per cent, the majority of qualifications have 25 per cent external assessment, which is much less than the 40 per cent requirement for 2018 performance tables. Many V Certs (in similar subjects to the Cambridge Nationals) have less than the 25 per cent requirement. This overview of current qualifications suggests the possibility that certain subjects may not be suited to external assessment. Of course, it is important to regularly re-evaluate assessment in light of current context and research evidence, as this may change the consensus on the most appropriate assessment methods for a given domain of skills/knowledge. Although many researchers argue against a 'one size fits all' approach to qualification design, the suitability of any blanket rule is affected by both the homogeneity of the objects of the rule (that is, what is affected by the rule) and the scope of the rule itself (that is, the variation in the outcome of the rule). Vocational qualifications, the object of the external assessment rule, remain by nature a heterogeneous set of qualifications (for example, in terms of subject area and specific occupational content), despite recent reforms reducing their number. The lack of homogeneity among vocational qualifications may not necessarily be problematic for implementing the blanket rule, if the rule is broad enough in scope. The rule needs to ensure that the core skills/knowledge can be assessed appropriately for all subjects and that their assessment is not limited by having to include a certain proportion of external assessment. In principle, external assessment has wider scope than, for example, the requirement of exam assessment that has affected GCSE reform, because it does not specify the type of task that a student needs to do. Besides an exam, an external assessment could be a speaking test that is recorded by the teacher and marked by an external examiner (as in GCSE modern foreign languages), or a performance assessment that is set by the awarding body and marked by a visiting examiner (as in GCSE drama). Both of those examples fall outside Ofqual's definition of exam assessment ('taken by all students at once, under formal supervision, and are set and marked by exam boards' (Ofqual, 2014: 10)) because students are not all assessed at the same time. Computer-based assessments could also be implemented as external assessments. For example, shipping licence examinations in some countries involve PC-based simulation tests that are externally assessed (Gekara et al., 2011). Although examples exist, the current use of externally assessed non-exams is limited. For example, our research into OCR Cambridge Nationals and Pearson's BTEC Firsts (in similar subjects) showed that all their external assessments were exams, either written or on-screen. One reason is that non-exams may not be feasible operationally. Non-exams may increase demands on centres' resources in terms of the physical environment and equipment needed for the assessment, which may affect the manageability and costs of the process. 
They may also increase demands on the awarding body, including examiner recruitment, setting and marking processes. Therefore, even if the blanket rule is theoretically broad in scope, which allows assessments to be, to some extent, tailored to the specifics of the qualifications, only a limited range of variants may be viable. This is problematic if those variants are not appropriate for the qualification. For example, written exams may not validly assess certain constructs or may induce negative 'washback' effects on teaching and learning (Alderson and Wall, 1993), both of which can threaten the validity of qualifications. The ultimate unfortunate consequence may be to reduce the provision and diversity of qualifications on offer to students. These issues surrounding how to implement external assessment raise the more general concern with a blanket rule (irrespective of scope), which is that it affects, and to some extent conflicts with, approaches to qualification development. The Cambridge Assessment Group, for example, employs an integrated model of assessment (the Cambridge Approach) in which numerous factors are considered during assessment design, including a clear statement of purpose, identification of candidate populations and cataloguing of the constructs that are the focus of the assessment. It acknowledges the importance of complying with national and international criteria but emphasizes that the process should be evidence-based (Cambridge Assessment, 2009). Other approaches to assessment development similarly stress that it should be grounded on a model of cognition and learning (Pellegrino et al., 2001). This clearly contrasts with the use of a blanket rule, which puts the type of assessment at the forefront of qualification design processes. The useful heterogeneity of internal assessments The introduction of a blanket rule of external assessment for vocational qualifications assumes that internal assessment is not fully adequate to assess the constructs of the qualifications. To some extent this seems tenuous, considering the fact that internal assessment is not a uniform construct. Based on the DfE's definition of external assessment, there can be seven variants of internal assessment, which differ in terms of the level of control that the awarding body has over three stages of the assessment process: task setting, task taking and task marking (see Table 2). It is important to note that our discussion of 'control' is concerned with core assessment processes (those that determine an assessment to be internal or external) and not moderation or verification procedures. Some of these alternatives may offer advantages over the awarding body having full control, for certain subjects and qualifications. The variety of internal assessment and its potential usefulness has been acknowledged by national regulators for general academic qualifications (Ofqual, 2013;QCA, 2006) but has not been given attention in discussions of vocational qualifications. Instead, even in the most recent governmental document on vocational (technical) education (the Post-16 Skills Plan), external assessment is, again, stressed as necessary for ensuring comparability and reliability of qualifications (BIS and DfE, 2016). The following sections consider each stage of the assessment process in turn and evaluate the impact of internal control (that is, not by the awarding body). 
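Before turning to those stages, note that the "seven variants" follow from a simple count: each of the three stages can be controlled either by the awarding body or internally, giving 2^3 = 8 combinations, of which the fully external one is excluded. A short sketch enumerating them, reconstructing the logic behind Table 2 rather than the table itself:

```python
from itertools import product

STAGES = ("task setting", "task taking", "task marking")

# Each stage is controlled by the awarding body (external) or the
# centre (internal). 2**3 = 8 combinations; excluding the all-external
# case (which is external assessment) leaves 7 internal variants.
variants = [combo for combo in product(("external", "internal"), repeat=3)
            if "internal" in combo]

for i, combo in enumerate(variants, start=1):
    print(f"Variant {i}: " + ", ".join(f"{s}: {c}" for s, c in zip(STAGES, combo)))

assert len(variants) == 7
```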
The aim is to examine whether internal control at any of the three stages necessarily results in poor-quality vocational qualifications, or whether in some cases the positive benefits offered by forms of internal assessment might achieve an overall better model. Task setting The aim of task setting is to ensure that students take tasks that are valid and are marked reliably. A student's performance should reflect his/her level of understanding of the topic being assessed (validity) and he/she should perform comparably on assessments that test comparable content (reliability). Arguments against internal control One major concern of tasks that are internally set is that they may have lower validity and reliability than externally set tasks (Wolf, 2011), which would occur if teachers or training providers set tasks that are different in standards. Assessments could also differ in terms of the content or learning outcomes that are chosen to be assessed. Various studies on performance assessments and portfolio use have found that a large proportion of the variability in student scores can be attributed to the sample of tasks with which a student is tested (task-sampling variability), combined with the fact that tasks are typically only performed on one occasion (occasion-sampling variability) (for example, Shavelson et al., 1999). However, negative effects of task variability can be minimized. Shavelson (2013) argues that score variability caused by sampling of tasks can be minimized either by making items more homogeneous or by having a larger number of tasks. He argues that complex domains, such as performance assessment, require the latter option. In line with that argument, because vocational qualifications may be conceived as complex domains, internal assessments may be effective if teachers or trainers are skilled at setting a variety of tasks. The concern is that they might not set appropriate tasks; this contributed to the removal of internal assessment for most GCSEs and A levels (S. Johnson, 2013;QCA, 2005;QCA, 2006). There are many reasons why tasks might not be set appropriately. For example, internal setters might lack experience in designing summative assessments. Teachers might set tasks that they believe suit their students' style of learning, which would threaten validity if the student's performance is not reflective of their level of understanding. The high-stakes accountability system in which vocational qualifications are provided may put pressure on teachers to choose assessments/content that they believe are the easiest for students, such as highly artificial tasks that control the successful production of work that meets grading criteria (Ofqual, 2013). Nevertheless, various procedures can be put in place to help internal setters; for example, awarding bodies could provide advisers, although this has not always been successful (QCA, 2005: 12). Ultimately, the above arguments against internal control rest on the assumption that it is difficult to ensure comparability of standards through external verification or training by the awarding body. Even if validity and reliability could be assured, there may be other disadvantages of asking teachers or training providers to set assessment tasks for their students. A widely used assessment method in vocational qualifications in the UK is evidence accumulation, also known as portfolios. 
Evidence accumulation has been criticized for being time-consuming, focusing students' attention on assessment and distracting them from their learning (de Bruler, 2001;Wolf, 2011). Similarly, some GCSE teachers have reported that task setting for certain summative assessments is burdensome (Ofqual, 2013). These kinds of negative perceptions might change to positive if assessments are viewed as tools that can facilitate the learning process, rather than merely for measurement (Earl, 2013). Arguments for internal control One of the main arguments for internal control at task setting is that it could produce assessments that are more valid than ones that are centrally (that is, externally) set. In the context of National Vocational Qualifications (NVQs), Jessup and colleagues have argued that specific assessment criteria (that is, an external reference point) and authentic tasks should be at the forefront of assessment design, and that reliability would naturally result from valid assessments (Jessup, 1991). This weighting of validity versus reliability is controversial; ideally, assessments should be both valid and reliable. The idea that validity may increase with internally set tasks is one worthy of further consideration, especially if reliability can also be ensured. There are several ways by which validity may be enhanced. First, it may enable teachers or employers to devise assessments that reflect workplace contexts (that is, assessments are more authentic); this may be especially valuable for technical qualifications, given their vocational nature. Centres may have different facilities or infrastructure in place to assess students, and awarding bodies may not be able to exhaustively specify each possible arrangement. Validity may also increase by allowing the assessment to take into account the characteristics of the students who take the qualification. Despite governmental desires for parity of esteem, vocational qualifications are still disproportionately taken by low-attaining or disaffected students compared to academic qualifications (for example, Smith et al., 2015). Student-centred assessments may enhance performance, motivation and learning (Ecclestone, 2000) and hence constitute a better representation of the student's understanding. This argument is not intended to suggest that each individual student should have a different task but, instead, to highlight the effects of task formats on performance, which could be reduced by internal setters. There are many examples of interactions between the type of assessment that students take and their performance in non-vocational contexts. For example, research has found several types of gender differences, including that boys and girls perform differently on examinations and coursework (Elwood, 1999). The question is whether internal setters have enough expertise to provide tasks that increase validity by giving candidates the opportunity to demonstrate their understanding, rather than merely setting tasks that unfairly facilitate student performance. A compromise may be for the awarding body to set the task (that is, an external assessment approach) but on the condition that they provide a range of tasks. A certain amount of constrained task choice is typical in academic and vocational qualifications. The regulator, Ofqual, appears to recognize that internally set tasks have value, at least in certain cases. 
The subject-level requirements for GCSE art and design specify that 'an awarding organisation must ensure that of the total marks available for a GCSE Qualification in Art and Design, 60 per cent of those marks shall be made available through tasks set by a Centre ("Internally Set Assessments")' (Ofqual, 2015c: 12). More specifically, the awarding body 'must ensure that each Internally Set Assessment is designed to (a) require a portfolio of work to be completed by the learner'. In this case, despite Ofqual's overall preference for external exam assessment, specific subject needs have led not only to allowing but to requiring 60 per cent of marks to be allocated to internally set, portfolio assessment. Task taking Arguments against internal control The major concern about low control at task taking is malpractice, such as cheating and plagiarism by students. Without external control, there may be more opportunity for students to receive inappropriate help and advice from parents and/or teachers during the task. This would reduce the value of the assessment grade as an indicator of students' understanding of the topic. The QCA's (2005) study of GCSE coursework found variability in the extent to which teachers encouraged revision and redrafting of work, and supplied writing frames, templates and checklists to students. Some parents even reported having drafted their own children's coursework. Arguments for internal control The way in which the task is taken may have effects on students. If the conditions of the tasks are restrictive, then this may lower students' motivation and increase their anxiety, which may ultimately affect their level of performance or increase the risk that they drop out part way through the course (Stasz et al., 2004). For example, in a review of assessment practices in upper secondary education, Dufaux (2012) argued that low-performing students may be affected by the type of task, feeling more discouraged and stressed under the pressure of exams. The conditions under which an assessment is taken may pose different challenges for different subject domains and different types of learning outcomes. This may be particularly problematic for vocational qualifications, where the learning outcomes relate to practical and technical skills rather than only to knowledge. For example, it may be difficult to determine the length of time that students should take to complete the task, where students should complete the task (for example, in a classroom, work environment or at home), what information they should have access to when completing the task (for example, book chapters or internet-based resources) and the amount of feedback that teachers should give them. If these conditions are restrictive, then this could affect the skills that students are able to develop or demonstrate through the task, affecting the validity of the assessment. For example, in a review of GCSE controlled (internal) assessment, Ofqual (2013) highlighted that timing and feedback restrictions placed on English literature coursework tasks limited students' opportunity to draft and redraft work, and therefore prevented the assessment from testing those skills. Another advantage of internal control is that it may minimize disruption to teaching, depending on where and when students can do the task. This has also been used as an argument for reducing the quantity of exams in general (Johnson, 2013).
Task marking Arguments against internal control The major concerns about internal control at task marking are that it increases the risk of malpractice (deliberate or unintentional) and may lower the reliability of the marking. The risk of malpractice during marking is high for qualifications for which teachers are under pressure to give high marks, whether from their students or from the education system (grade inflation). Vocational qualifications offered to 14- to 19-year-olds are of that kind. They are high-stakes for both students and teachers; in particular, students' grades affect their progression to further study and form part of school accountability and funding regimes. However, although malpractice threatens the validity of assessments, awarding bodies put in place procedures to minimize its incidence by, for example, moderating teachers' marks or statistically screening for malpractice (Ofqual, 2011). Several reviews have been conducted into the reliability of teacher marking for summative or high-stakes purposes, but few in the context of vocational qualifications (M. Johnson, 2006). Harlen (2005) discusses research that has shown high reliability of marking by teachers in school-based assessments but concludes that the evidence is not strongly favourable. Similarly, S. Johnson (2013) concludes that the evidence is limited and often ambiguous. Both authors suggest ways in which it is possible to achieve higher reliability, such as by consensus moderation, training to make markers aware of potential biases in their decision making, or more detailed marking criteria and assessment guidance. Even if reliability could be assured, there may be other disadvantages of teachers marking the assessments. In particular, teachers have complained that marking can be time-consuming (Ofqual, 2013). Arguments for internal control S. Johnson (2013) argues that permitting teachers to mark could broaden the scope of the assessment by exploiting 'the rich base of evidence that teachers have available to them … by virtue of the time spent interacting with … their students, that could in principle lead to greater validity and reliability' (2013: 92). The same argument could be made for making judgements about students' knowledge and technical skills in a vocational context. An internal marker may have positive effects on students if the marking needs to occur in their presence (such as assessing a live drama performance), for example by reducing a student's level of anxiety compared to an external marker. Little research has examined the effects of external markers on students' performance. There is some evidence that the perceived attitude of the examiner is noticed by students during an exam (Siddiqui, 2013), and therefore could, in theory, affect performance if it is negative. In other research, students have been found to be more anxious in assessments that are assessed by examiners in situ, such as oral examinations (Huxham et al., 2012; Pearce and Lee, 2009), although this research did not assess whether this is moderated by the type of examiner. In contrast, another study, in this case with primary school children, has found some evidence that an external examiner may reduce, not increase, levels of anxiety (Bertoni et al., 2013). There may be operational advantages to using internal markers in certain circumstances. For example, an internal marker may be more manageable logistically and less costly when the marking needs to occur during the assessment.
It may be time-consuming, if not impossible, for a sufficient number of external examiners to attend a large number of test centres and/or a large cohort of students. Visiting examiners may not be required if the assessments can be delivered to external examiners. This would be straightforward for written assessments, which is the practice for written examinations, coursework and portfolios. It can also be achieved by recording oral and performance assessments. However, those types of recordings might be difficult to achieve if test centres do not have access to the required equipment or lack confidence or expertise in using it. Although such potential difficulties may be overcome through adequate training procedures, internal markers, who can mark in situ, may be more efficient. Once again, it is interesting to consider the example of GCSE art and design. The reformed GCSE does not require any external marking (Ofqual, 2015a: 12), after respondents to the consultation on reformed GCSE art and design 'raised concerns about the practicality and validity of external marking in art and design' (Ofqual, 2015b: 2). Although GCSE and vocational qualifications differ, it is difficult to see how concerns about the 'practicality and validity of external marking' deemed valid for the GCSE in art and design would not also apply to a Level 2 vocational qualification in art and design. Conclusion The DfE has specified that all vocational qualifications in England must include a minimum amount of external assessment in order to gain government recognition in performance tables. This requirement is a blanket rule with no apparent flexibility for the proportion to be modified on a subject-by-subject basis, although it does vary by type of qualification. This blanket rule contrasts with the government's (slightly) more flexible position on assessment regulations for other qualifications (for example, the exam requirement for GCSEs). The DfE has provided little evidence of the rationale or consultation responses that underpin the proportions of external assessment that have been chosen, which is especially important because they diverge from the current use of external assessment in vocational qualifications. Despite these concerns, the blanket rule may not necessarily be problematic to implement, even though vocational qualifications are heterogeneous in nature, because external assessment is theoretically wide in scope. However, in practice, practical and economic factors may mean that external assessment is operationalized as an examination, which may not be appropriate for all subjects. The requirement for a minimum proportion of external assessment implies that internal assessment is unsatisfactory. The main argument against internal assessment at all three main stages of the assessment process (task setting, task taking and task marking) is quality assurance. It is argued that quality is more at risk of being compromised if the awarding body does not have control of the process. Although there is some evidence supporting this possibility, there are also mechanisms that can be put in place to minimize this risk. For each stage of the assessment process, it has also been argued that internal assessment may enhance the quality of the qualification, in particular when we consider the characteristics of the cohort that typically take vocational qualifications and the heterogeneous nature of work environments that students might be exposed to. 
Assessment decisions must take into account the specific context in which the qualifications are provided. Solutions posited to address concerns about internal assessment are inevitably constrained by practical factors (for example, finances) and their success is likely to be moderated by the high-stakes accountability and funding system in England, which may put pressure on lowering standards (for example, by grade inflation). Since internal assessment has a variety of potential merits, it is critical that it is not dismissed as an assessment method and, instead, that efforts continue to be made to devise feasible ways by which to ensure its validity and reliability (for example, AlphaPlus, 2014). The external versus internal debate exists alongside other key debates on vocational assessments more generally. For example, there is ongoing controversy surrounding the authenticity (or lack of authenticity) of school-based tasks for vocational understanding and whether school teachers have the professional competence to provide vocational courses. It is likely that these debates interact such that advances in resolving one might help enhance the others. It is plausible that more authentic tasks could lead to more valid internal assessment, but also that better internal assessment guidance could free up centres to use more authentic tasks. This paper calls into question the idea that external assessment will inevitably be of higher quality than internal assessment. It highlights the need for any evaluation of internal assessment to include a more comprehensive list of advantages and disadvantages that takes into account the nature of vocational qualifications (for example, type of cohort) and, equally as importantly, evaluates potential disadvantages against potential solutions. This type of evaluation is likely to lead to different conclusions for different qualifications and subjects.
7,651
2017-11-15T00:00:00.000
[ "Economics" ]
Nuclear Quantum Effects from the Analysis of Smoothed Trajectories: Pilot Study for Water Nuclear quantum effects make significant contributions to thermodynamic quantities and structural properties, yet very expensive methods are necessary for their accurate computation. In most calculations, these effects, for instance, zero-point energies, are simply neglected or only taken into account within the quantum harmonic oscillator approximation. Herein, we present a new method, Generalized Smoothed Trajectory Analysis, to determine nuclear quantum effects from molecular dynamics simulations. The broad applicability is demonstrated with the examples of a harmonic oscillator and different states of water. Ab initio molecular dynamics simulations have been performed for the ideal gas up to a temperature of 5000 K. Classical molecular dynamics simulations have been carried out for hexagonal ice, liquid water, and vapor at atmospheric pressure. With respect to the experimental heat capacity, our method outperforms previous calculations in the literature in a wide temperature range, at a lower computational cost than the alternatives. Dynamic and structural nuclear quantum effects of water are also discussed. INTRODUCTION Calculations of reaction free energy profiles and activation barriers are routinely performed within the rigid-rotor and harmonic-oscillator approximation; 1 meanwhile, the more accurate computation of thermodynamic quantities or vibrational spectra is still a great challenge. 2−10 The inclusion of nuclear quantum effects (NQEs), such as zero-point energy (ZPE) or tunneling, is even more difficult. 11−13 Path-integral molecular dynamics (PIMD) and path-integral Monte Carlo (PIMC) simulations are accurate, yet highly expensive, methods to incorporate NQEs. 14,15 The computational cost of PIMD simulations can be significantly reduced by advanced techniques. 16−18 Recently developed algorithms, such as colored-noise thermostats and the quantum thermal bath, are more efficient ways to add quantum effects to classical simulations, 19−21 but their settings need to be chosen carefully to prevent zero-point energy leakage. 22,23 When empirical water models were used in PIMD simulations, several properties deviated more from the experiments than in the classical simulations. 24−27 In these quantum simulations, the liquid water becomes less structured and less viscous. This has been explained by a double counting of quantum effects: once in the parameter optimization against experimental data, and a second time in the quantum simulations. This is why several water models were reparametrized for accurate PIMD simulations, resulting in the q-SPC/Fw, 28 q-TIP4P/f, 24 and TIP4PQ/2005 models. 29 Another solution to avoid double counting is the application of PIMD with ab initio methods 9 or force fields trained on ab initio data. 25,27,30 Numerous methodological developments have also been made to calculate quantum free energy values from PIMD simulations. 31−35 In routine DFT calculations, with the optimized geometry in hand, the free energy contributions of all the different motions, such as translation, rotation, and vibration, are summed using the partition functions of the particle-in-a-box, rigid-rotor, and harmonic-oscillator (RRHO) models. 36 This approach works satisfactorily for small molecules at ambient temperatures, where the normal modes of vibration can be treated as decoupled harmonic oscillators.
For systems where anharmonicity is significant and/or the conformational space is extended, the RRHO fails to reproduce the exact thermodynamic quantities. Recognizing the need to address this issue, more sophisticated approaches use slightly modified partition functions on optimized geometries. 37,38 There are a few methods which can estimate quantum corrections from classical MD trajectories, for example, the one- and two-phase thermodynamics methods (1PT, 2PT). 39−41 In those cases, the vibrational density of states (VDOS) is determined from molecular dynamics by the Fourier transformation of the velocity autocorrelation function. Quantum corrections are computed by multiplication of the VDOS with weight functions derived from the partition functions of the motions. In the 1PT model, only vibrational modes are considered as harmonic oscillators; in the 2PT model, gas-like motions such as rotational and translational modes are also taken into account. The 2PT model is an improved method based on the original work of Berens et al., which corresponds to the 1PT method with an anharmonic correction (1PT+AC). 39 The 2PT and 1PT+AC methods were successfully applied for the calculation of thermodynamic properties of several systems such as Lennard-Jones fluids, 40 water, 39,41−47 organic liquids, 48,49 carbon dioxide, 50 urea adsorbed on cellulose, 51 ionic liquids, 52,53 carbohydrates, 54 mixtures, 55 and interfaces. 56 Heat capacity is generally used as a reference property for the benchmarking of force fields. 42,48,49,53 The 1PT/2PT methods are still in continuous development with respect to accuracy and applicability. 57−63 Here, we propose the Generalized Smoothed Trajectory Analysis (GSTA) method, which is numerically advantageous relative to the 1PT/2PT methods and, moreover, addresses their limitations arising from the approximations used. Our theory is demonstrated on the exact reproduction of the heat capacity and internal energy of a harmonic oscillator. We have chosen different states of water as real-world examples, as the heat capacity varies widely between its phases and water is still one of the most investigated materials in computations. 64 Beyond thermodynamic properties, structural and dynamic NQEs are also investigated.
THEORY
2.1.1. One-Phase Thermodynamics (1PT). In a molecular dynamics simulation, the velocity autocorrelation function (VACF) can be defined as follows:
$C(t) = \sum_{j} m_{j}\,\langle \mathbf{v}_{j}(t_{0}) \cdot \mathbf{v}_{j}(t_{0}+t) \rangle_{t_{0}}$ (1)
where v is the velocity as a function of the time t. Here, we refer to the mass (m)-weighted VACF, but we assume identical masses in the derivations for simplicity. The vibrational density of states (VDOS) is the representation of the autocorrelation function (VACF) in the Fourier domain:
$S(\nu) = 2\beta \int_{-\infty}^{\infty} C(t)\,\cos(2\pi\nu t)\,\mathrm{d}t$ (2)
where ν is the frequency, β = (k_B T)^{−1}, k_B is the Boltzmann constant, and T is the temperature. Since in our calculations real numbers are used, the Fourier cosine transform is applied. Consequently, the VACF function is the inverse Fourier transform of the VDOS function:
$C(t) = \frac{1}{2\beta} \int_{-\infty}^{\infty} S(\nu)\,\cos(2\pi\nu t)\,\mathrm{d}\nu$
Using t = 0, we get the norm of the VDOS function:
$\int_{0}^{\infty} S(\nu)\,\mathrm{d}\nu = \beta\,C(0) = 3N$
Applying ν = 0 in eq 2, we get the norm of the VACF function:
$S(0) = 2\beta \int_{-\infty}^{\infty} C(t)\,\mathrm{d}t$
The 1PT internal energy is then obtained by weighting the VDOS,
$U^{\mathrm{1PT}} = U_{0} + k_{B}T \int_{0}^{\infty} S(\nu)\,w_{U}(\nu)\,\mathrm{d}\nu$ (6)
where U_0 is the reference energy. The vibrational weight function w_U for the energy originates from the quantum harmonic oscillator model: 36
$w_{U}(\nu) = \frac{\beta h \nu}{2}\,\coth\!\left(\frac{\beta h \nu}{2}\right)$
where h is the Planck constant and coth denotes the hyperbolic cotangent function. The heat capacity is the temperature derivative of the internal energy:
$c_{V} = \left(\frac{\partial U^{\mathrm{1PT}}}{\partial T}\right)_{V} = k_{B} \int_{0}^{\infty} S(\nu) \left(\frac{\beta h \nu}{2}\right)^{2} \operatorname{csch}^{2}\!\left(\frac{\beta h \nu}{2}\right) \mathrm{d}\nu$
The last term in the integral is the weight function for the heat capacity: 36
$w_{c_V}(\nu) = \left(\frac{\beta h \nu}{2}\right)^{2} \operatorname{csch}^{2}\!\left(\frac{\beta h \nu}{2}\right)$ (10)
where csch denotes the hyperbolic cosecant function. The weight functions are shown in Figure 1.
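Although the authors do not provide code, the 1PT recipe above maps onto a few lines of numerics. The sketch below is our own scaffolding under the stated definitions (the function names, array layout, and FFT-based normalization are assumptions, not the authors' implementation): the VDOS is estimated from a velocity array via the Wiener-Khinchin route, and the heat capacity follows from the quantum-weighted integral with w_cV.

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def vdos(vel, masses, dt, T):
    """Estimate the 1PT vibrational density of states S(nu).

    vel: (n_steps, n_atoms, 3) velocities in m/s; masses in kg; dt in s.
    Uses the Wiener-Khinchin route: the power spectrum of v equals the
    Fourier transform of the VACF. Normalized so that the integral of
    S over nu approximates 3N, the number of degrees of freedom.
    """
    n = vel.shape[0]
    vhat = np.fft.rfft(vel, axis=0)              # (n_freq, n_atoms, 3)
    power = (np.abs(vhat) ** 2).sum(axis=2)      # sum over x, y, z
    S = 2.0 * dt * (power * masses).sum(axis=1) / (n * KB * T)
    return np.fft.rfftfreq(n, d=dt), S

def heat_capacity_1pt(nu, S, T):
    """c_V = k_B * integral of S(nu)*w_cV(nu) dnu, with the harmonic
    weight w_cV = x^2 csch^2(x), x = beta*h*nu/2 (w_cV -> 1 as nu -> 0)."""
    x = H * nu / (2.0 * KB * T)
    w = np.ones_like(x)
    np.divide(x, np.sinh(x), out=w, where=x > 0)  # x*csch(x), safe at x = 0
    y = S * w ** 2
    # Trapezoidal integration over the frequency grid.
    return KB * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(nu)))
```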
Using this weight function w_cV(ν), the heat capacity can be obtained directly from the classical VDOS by integrating over the frequency domain:
$c_{V}^{\mathrm{1PT}} = k_{B} \int_{0}^{\infty} S(\nu)\,w_{c_V}(\nu)\,\mathrm{d}\nu$
In the classical limit (h → 0), the 1PT method always gives k_B per degree of freedom for the heat capacity, which corresponds to the classical harmonic value, so it cannot model anharmonicity.
2.1.2. Two-Phase Thermodynamics (2PT). Gas-like motions such as translations and rotations are separated from vibrations in the 2PT method. The total VDOS is decomposed into vibrations, translations, and rotations:
$S(\nu) = S_{\mathrm{vib}}(\nu) + S_{\mathrm{trn}}(\nu) + S_{\mathrm{rot}}(\nu)$
Translation and rotation are determined for the center of mass and the principal axes of the molecules, respectively. Different weight functions are used for the different motions in the calculation of the thermodynamic properties; the weight function of translation and rotation is 1/2 for the heat capacity within the 2PT model. 41 In the classical limit (h → 0), the 2PT heat capacity can vary between k_B/2 and k_B per degree of freedom, not incorporating any anharmonicity, so the 2PT method cannot describe cases where the classical heat capacity is above k_B.
2.1.3. 1PT with Anharmonic Correction (1PT+AC). A further limitation is addressed by adding an anharmonic correction to the 1PT result. The quantum correction can be determined from the quantum harmonic weight function:
$\Delta c_{V} = k_{B} \int_{0}^{\infty} S(\nu)\,\left[\,w_{c_V}(\nu) - 1\,\right]\,\mathrm{d}\nu$
If the integral terms are partitioned differently, then we get
$c_{V}^{\mathrm{1PT+AC}} = k_{B} \int_{0}^{\infty} S(\nu)\,w_{c_V}(\nu)\,\mathrm{d}\nu + \left( c_{V}^{\mathrm{cl}} - k_{B} \int_{0}^{\infty} S(\nu)\,\mathrm{d}\nu \right)$
where the second term is the anharmonic correction. The 1PT+AC internal energy can be obtained analogously to the heat capacity:
$U^{\mathrm{1PT+AC}} = U_{0} + k_{B}T \int_{0}^{\infty} S(\nu)\,w_{U}(\nu)\,\mathrm{d}\nu + \Delta U_{\mathrm{anh}}$ (17)
where the anharmonic correction is the deviation of the classical internal energy (U_cl) from the ideal harmonic case:
$\Delta U_{\mathrm{anh}} = U^{\mathrm{cl}} - \left( U_{0} + 3N k_{B} T \right)$ (18)
The 1PT+AC model always satisfies the correspondence principle, in contrast with the 1PT or 2PT methods:
$\lim_{h \to 0} U^{\mathrm{1PT+AC}} = U^{\mathrm{cl}}$
This also implies that the technique is able to describe anharmonic motions to the extent of the method used to obtain the original classical trajectory.
2.2. Generalized Smoothed Trajectory Analysis (GSTA). In the previous sections, we briefly introduced the relevant methods from the literature for the correction of the VDOS function to get quantized thermodynamic properties. In this section, we present a derivation to show that a similar correction can be performed in the time domain, on the VACF function and on the coordinates.
2.2.1. Correction of the Velocity Autocorrelation Function. The convolution of two functions f and g is defined as
$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,\mathrm{d}\tau$ (20)
and is frequently used in digital signal processing for the smoothing of signals or the filtering of high-frequency noise. 65 On the basis of the convolution theorem, the multiplication of the VDOS function with a weight function w is equivalent to a convolution in the time domain. The quantum-corrected VACF function is the inverse Fourier transform of the quantum-corrected VDOS function:
$\tilde{C}(t) = (\gamma * C)(t)$
where the Fourier transform of the weight function is used for the quantum correction of the VACF function:
$\gamma(t) = \int_{-\infty}^{\infty} w(\nu)\,\cos(2\pi\nu t)\,\mathrm{d}\nu$ (22)
From the corrected VACF function, the thermodynamic properties can also be calculated; for instance, the 1PT+AC heat capacity:
$c_{V}^{\mathrm{1PT+AC}} = k_{B}\,\beta\,(\gamma_{c_V} * C)(0) + \left( c_{V}^{\mathrm{cl}} - k_{B}\,\beta\,C(0) \right)$
where γ_cV is the Fourier transform of the weight function in eq 10 according to eq 22; its closed form (eq 24) involves the hyperbolic cosecant (csch) function.
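The equivalence invoked here, multiplying the spectrum by w versus convolving the correlation function with the Fourier transform of w, is easy to verify numerically. A toy demonstration, with a made-up correlation function and a placeholder weight, neither taken from the paper:

```python
import numpy as np

n, dt = 1024, 0.5e-15                    # 0.5 fs grid
t = np.arange(n) * dt
# Toy stand-in for a VACF: two damped cosines (not real simulation data).
C = np.exp(-t / 2e-13) * (np.cos(2e13 * np.pi * t) + 0.5 * np.cos(1.8e14 * np.pi * t))

nu = np.fft.rfftfreq(n, d=dt)
w = 1.0 / (1.0 + (nu / 5.0e13) ** 2)     # placeholder weight function w(nu)

# Route 1: multiply the spectrum by w(nu), transform back.
C1 = np.fft.irfft(np.fft.rfft(C) * w, n=n)

# Route 2: circular convolution of C with gamma = inverse FT of w(nu).
gamma = np.fft.irfft(w, n=n)
C2 = np.array([np.dot(C, np.roll(gamma[::-1], k + 1)) for k in range(n)])

print(np.allclose(C1, C2))               # True: both routes agree
```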
The 1PT+AC internal energy takes the form
$U^{\mathrm{1PT+AC}} = U_{0} + (\gamma_{U} * C)(0) + \Delta U_{\mathrm{anh}}$ (25)
and the weight function for the VACF associated with the internal energy is given by
$\gamma_{U}(t) = \int_{-\infty}^{\infty} w_{U}(\nu)\,\cos(2\pi\nu t)\,\mathrm{d}\nu$ (26)
The integral in eq 26 fails to converge, as the w_U weight function increases monotonically. Fortunately, in real systems the maximum frequency in the VDOS is always finite, so the weight function can be cut off at a high finite frequency by applying an exponentially decreasing function,
$w_{U}^{(b)}(\nu) = w_{U}(\nu)\,e^{-\nu/b}$
where the variable b controls the exponential decay; the decay becomes faster as b decreases. This exponential factor becomes unity as b approaches infinity:
$\lim_{b \to \infty} e^{-\nu/b} = 1$
The γ_U function can then be determined as the limit of an integral (eq 29). The integral in eq 29 can be evaluated, and finally the γ_U function can be expressed as the limit of a piecewise function (eq 30), where δ denotes the Dirac delta function. The norm of the function in eq 30 is 1, and the derivative of the coth function is −csch². Since in the simulations the data are represented at discrete time intervals Δt, it is useful to write the γ_U function at discrete time steps in such a way that it is directly applicable for integration (eq 31), where n is the index of the time step; when n = 0, the γ_U function takes a simple closed form (eq 32). The weight functions of the VACF are shown in Figure 2.
2.2.2. Correction of the Trajectory. The velocity autocorrelation function is actually a convolution of the velocity with itself, i.e., f = g = v in eq 20. According to eqs 1 and 2, the VDOS can therefore be written in the form
$S(\nu) = 2\beta\,m\,\lvert \hat{v}(\nu) \rvert^{2}$ (34)
and, similarly, the quantum-corrected counterpart can be written as
$\tilde{S}(\nu) = w(\nu)\,S(\nu) = 2\beta\,m\,\lvert \hat{\tilde{v}}(\nu) \rvert^{2}$ (35)
where ṽ is a modified velocity which satisfies eq 35. In the following steps, we determine ṽ. Substituting eqs 34 and 35 into eq 6 and using the convolution theorem, and assuming that w(ν) is a non-negative real-valued function, we can take its square root; in the time domain, the multiplication by √w(ν) is then replaced by a convolution with the kernel
$g(t) = \int_{-\infty}^{\infty} \sqrt{w(\nu)}\,\cos(2\pi\nu t)\,\mathrm{d}\nu$ (40)
Thus, we arrive at a g(t) function by which, convoluting the velocities, one can directly obtain ṽ,
$\tilde{v}(t) = (g * v)(t)$
and the quantum-corrected vibrational density of states in eq 35. If one wants to use a general kernel, which can be applied to any atomic velocity function, then the heat-capacity weight function (eq 10) can be chosen:
$g_{c_V}(t) = \frac{\pi^{2}}{\beta h}\,\operatorname{sech}^{2}\!\left(\frac{2\pi^{2} t}{\beta h}\right)$ (41)
where sech means the hyperbolic secant function. In the determination of a kernel function, the weight function of the internal energy can also be used:
$g_{U}(t) = \int_{-\infty}^{\infty} \sqrt{w_{U}(\nu)}\,\cos(2\pi\nu t)\,\mathrm{d}\nu$ (42)
Similarly to the determination of the γ_U function in eq 26, the integral does not converge here either. We could not derive an analytic form for the Fourier transform in eq 42. In order to perform this Fourier transform numerically in a practical way, the weight function of the internal energy can be split into two parts (eq 43). The Fourier transform of the second term in eq 43 can be readily evaluated numerically, while the Fourier transform of the first part can be determined analytically, in a similar way as we did for the γ_U function (eq 44); the result can be given as the limit of a piecewise function (eq 45), from which the g_U function can be calculated (eq 46). When the convolution is performed at a discrete time step Δt, the value of the kernel function in the nth step is given by eq 47, and at zero time by eq 48. The kernels of g_U and g_cV are shown in Figure 3; g_U is represented at discrete points with Δt = 0.5 fs.
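For the heat-capacity kernel, eq 41 gives a closed form, so the smoothing step reduces to tabulating the kernel and convolving. A minimal sketch, assuming a uniform time grid and a simple unit-sum renormalization of the truncated kernel (both our choices, not prescribed by the paper):

```python
import numpy as np

KB = 1.380649e-23   # J/K
H = 6.62607015e-34  # J*s

def gcv_kernel(dt, T, n_half=4000):
    """Discretized GSTA smoothing kernel of eq 41,
    g_cV(t) = (pi^2/(beta*h)) * sech^2(2*pi^2*t/(beta*h)),
    tabulated on a symmetric grid and renormalized to unit sum."""
    beta_h = H / (KB * T)                 # beta*h, roughly 160 fs at 300 K
    t = np.arange(-n_half, n_half + 1) * dt
    g = np.pi ** 2 / beta_h / np.cosh(2.0 * np.pi ** 2 * t / beta_h) ** 2
    return g / g.sum()                    # discrete norm -> 1

def smooth(series, kernel):
    """Filter a coordinate or velocity time series (eq 40 in action)."""
    return np.convolve(series, kernel, mode="same")

# Example: smooth a toy coordinate trace sampled every 0.5 fs at 300 K.
g = gcv_kernel(dt=0.5e-15, T=300.0)
x = np.cos(2.0 * np.pi * 1.0e14 * np.arange(20000) * 0.5e-15)
x_tilde = smooth(x, g)   # amplitude at frequency nu scaled by ~sqrt(w_cV(nu))
```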
Utilizing a kernel function defined in eq 40, one can obtain filtered time-dependent variables such as coordinates, velocities, and forces (x, v, F):
$\tilde{x}(t) = (g * x)(t), \qquad \tilde{v}(t) = (g * v)(t), \qquad \tilde{F}(t) = (g * F)(t)$
Having these variables smoothed, we are able to derive smoothed energy components. The smoothed kinetic energy (Ẽ_kin) can be obtained straightforwardly from the smoothed velocities.
2.2.3. Correction of the Potential Energy. The smoothed potential energy (Ẽ_pot) can be expected to fulfill a consistency condition (eq 53). The smoothed total energy is simply the sum of the smoothed kinetic and potential energies:
$\tilde{E}_{\mathrm{tot}} = \tilde{E}_{\mathrm{kin}} + \tilde{E}_{\mathrm{pot}}$
The kernel function can depend on several parameters, such as the temperature or the Planck constant h. In order to connect the classical and the quantum systems continuously, a fictitious variable η is introduced: η = 0 corresponds to the classical picture, and η = h to the quantum one. With η, we can perform an integration from the classical state to the quantum state. It can be shown that the total energy remains conservative upon smoothing (eq 55). The negative of the first term in the total differential of the smoothed total energy (eq 56) can be called the work of smoothing (eq 57). The smoothed potential energy is defined at a specific time t_0 as a correction on the original potential energy (E_pot) with the work of smoothing (eq 58); integrating this equation with respect to time yields the expectation value of the smoothed potential energy (eq 59). The average change of the kinetic energy is equal to the average change of the potential energy (eq 60; more details in Appendix A), so the mean smoothed total energy can be given in closed form (eq 62). Note that eq 62 corresponds to U^1PT+AC in eq 17. Thus, we arrive at the same result as the 1PT+AC method, and the same holds for the heat capacity as well (eq 63). Quantum-corrected state functions can therefore be determined from the presented smoothed quantities, as shown by the example of the heat capacity below.
COMPUTATIONAL DETAILS
3.1. Calculation of Heat Capacity. According to the Theory section, several estimators can be designed to determine the heat capacity, depending on what is corrected: the VDOS, the VACF, or the trajectory. The quantum correction can be introduced with different functions which correspond to the heat capacity or the internal energy, but the resulting thermodynamic functions can be transformed into each other by integration/differentiation. We used the kernel function of the heat capacity (eq 41) for the filtration since it is more convenient, i.e., its analytical form is known. We performed several (50−120) independent NVE simulations around the target temperature T. The classical temperature (T_cl) was determined simply from the average classical kinetic energy of a particular trajectory:
$T_{\mathrm{cl}} = \frac{2\,\langle E_{\mathrm{kin}} \rangle}{3N k_{B}}$
The isochoric heat capacity can be determined from the slope of the mean total energy vs T_cl function:
$c_{V} = \frac{\mathrm{d}\langle \tilde{E}_{\mathrm{tot}} \rangle}{\mathrm{d}T_{\mathrm{cl}}}$
The classical temperature originates from the classical normalization factor in eq 35. The isobaric heat capacity can be determined from the slope of the H vs T_cl (or H̃ vs T_cl) function. For condensed phases, p = 1 atm was applied instead of the calculated average pressure because the fluctuation of the latter was larger than several hundred atm. We used linear regression to determine the heat capacities and their errors. The uncertainties of our calculations are given at the 95% confidence level. Representative fittings are shown in the Supporting Information.
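The fitting protocol of Section 3.1 amounts to ordinary linear regression over many independent runs. A minimal sketch with scipy; the numbers are synthetic and only for illustration, and the 95% interval follows from the standard error of the slope:

```python
import numpy as np
from scipy import stats

def heat_capacity_fit(T_cl, E_tot):
    """Heat capacity as the slope of <E_tot> vs classical temperature,
    one data point per independent NVE run; returns the slope and the
    half-width of its 95% confidence interval."""
    fit = stats.linregress(T_cl, E_tot)
    t95 = stats.t.ppf(0.975, len(T_cl) - 2)
    return fit.slope, t95 * fit.stderr

# Synthetic illustration: 120 runs scattered around 300 K, c_V ~ 75 J/mol/K.
rng = np.random.default_rng(7)
T = 300.0 + 10.0 * rng.standard_normal(120)
E = 75.0 * T + 150.0 * rng.standard_normal(120)   # fake energies, J/mol
cv, ci95 = heat_capacity_fit(T, E)
print(f"c_V = {cv:.1f} +/- {ci95:.1f} J/mol/K")
```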
3.2. Born-Oppenheimer Molecular Dynamics Simulations. We carried out classical microcanonical normal mode sampling 66 with Gaussian 09 Rev. E for the vibrations of one water molecule, using the B3PW91/6-311G(d,p) level of theory. 67,68 This functional was chosen because it reproduces the exact frequencies of a water molecule most accurately, with an average error of 6.1 cm⁻¹. 69 Initial energies were distributed equally between the modes according to the equipartition theorem. The equations of motion for the nuclei were integrated with the velocity Verlet algorithm using a 0.1 fs time step. 70 Then, 50 independent 1 ps long trajectories were generated around the desired temperature, and the total energies followed a χ² distribution.

3.3. Classical Molecular Dynamics Simulations. Classical molecular dynamics simulations were performed with the Tinker molecular modeling software. 71 The simulation box always contained 432 water molecules. For the liquid and vapor phases, a cubic box was applied. For Ih ice, the water molecules were arranged according to the Bernal-Fowler ice rules, 72 with a vanishing net dipole moment. First, we performed 10 ps long NpT simulations starting from previously equilibrated structures, followed by 11 ps long NVE simulations; the last 1 ps of data was used for the trajectory analysis. Here, 120 independent trajectories were generated to determine the heat capacity at a given temperature. In order to make the linear fitting more effective, the temperature of the thermostat was varied around the target temperature according to a χ² distribution. The equations of motion were integrated with the Bussi-Parrinello algorithm 73 for the NpT ensemble and a modified Beeman algorithm 74 for the NVE ensemble, with a 0.5 fs time step. In the condensed-phase simulations, the particle mesh Ewald (PME) method was applied for the long-range electrostatic interactions, with a 9.0 Å cutoff distance. 75 For the vapor phase, a larger cutoff of 113.0 Å was used, with a 30 × 30 × 30 grid for the Ewald summation. We performed NVT simulations to determine the structural properties of the SPC/Fw water model at 298.15 K, with a simulation box size of 23.439 Å. Then, 500 configurations were collected and, after 11 ps long NVE simulations, the last 1 ps of each trajectory was filtered with the g_U function with Δt = 0.1 fs to obtain one filtered structure from each trajectory. These 500 independent structures were used to calculate the distributions of the intramolecular distances as well as the radial distribution functions. For the calculation of the IR absorption spectrum of the AMOEBA14 water model, 120 independent NVE trajectories were used; all simulations were 20 ps long and had been equilibrated at 298.15 K beforehand. We used a four-term Blackman-Harris window before the Fourier transform of the dipole autocorrelation function. 76

3.4. Path Integral Molecular Dynamics Calculations. PIMD simulations were carried out with AMBER12 77 in a canonical ensemble for 216 water molecules using the SPC/Fw model. The settings were taken from ref 28, but 32 beads were used instead of 24. The length of the cubic simulation box was 18.68 Å, corresponding to the equilibrium density. After a 1 ns long equilibration, 1000 structures were collected for each bead in an additional 1 ns long simulation. For the calculation of the isochoric heat capacity, we also performed canonical simulations at 288.15 and 308.15 K. In order to determine the isobaric heat capacity of the liquid phase, NpT simulations were also performed at atmospheric pressure in the temperature range from 260.65 to 385.65 K.

RESULTS AND DISCUSSION

4.1. Harmonic Oscillator Model. Here, we show the effect of the two filters on the sum of two noncoupled oscillators at 298.15 K. One oscillator has a high frequency (3000 cm⁻¹), while the other has a low frequency (100 cm⁻¹). The analytic curves are shown in Figure 4: g_cV smooths the high-frequency vibration, while g_U enhances the high-frequency motion. The function filtered with g_U corresponds to the quantum fluctuation.
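The two-filter behavior can be checked numerically; a small sketch, assuming the standard harmonic-oscillator weights w_U(ν) = (βhν/2)coth(βhν/2) and w_cV(ν) = (βhν)²e^(βhν)/(e^(βhν) − 1)² (amplitudes scale with the square roots of the weights):

```python
import numpy as np

KB, H, C = 1.380649e-23, 6.62607015e-34, 2.99792458e10  # C in cm/s

def w_U(nu, T):
    x = H * nu / (2.0 * KB * T)
    return x / np.tanh(x)

def w_cV(nu, T):
    x = H * nu / (KB * T)
    return x**2 * np.exp(x) / np.expm1(x)**2

T = 298.15
for wavenumber in (100.0, 3000.0):        # cm^-1
    nu = wavenumber * C                   # convert to Hz
    print(wavenumber, np.sqrt(w_U(nu, T)), np.sqrt(w_cV(nu, T)))
# The 3000 cm^-1 amplitude is enhanced ~2.7x by the g_U filter and damped
# ~100x by the g_cV filter, while the 100 cm^-1 mode is nearly unchanged,
# mirroring the behavior described for Figure 4.
```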
In the following, we determine the fluctuation of the coordinates for a single harmonic oscillator. The time evolution of the position is

x(t) = X cos(2πν₀t + φ),

where X is the amplitude in a particular trajectory and φ an arbitrary phase. The probability distribution of the position for the harmonic oscillator in the canonical ensemble is

P(x) = (κ/2πk_BT)^(1/2) exp[−κx²/(2k_BT)],

where κ is the force constant of the harmonic potential. The filtered position is the classical one with its amplitude scaled by w_U(ν₀)^(1/2), so the probability distribution of the filtered position is

P(x̃) = (κ/2πk_BT·w_U(ν₀))^(1/2) exp[−κx̃²/(2k_BT·w_U(ν₀))], with k_BT·w_U(ν₀) = (hν₀/2)·coth(hν₀/2k_BT).

This corresponds to the exact quantum fluctuation of the position for the harmonic oscillator. The GSTA method is therefore exact for the harmonic model not only for the thermodynamic properties but for the coordinate distribution as well. This is a clear advantage of GSTA compared to the 2PT method, since structural quantum effects cannot be investigated with 2PT.
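This exactness claim is easy to verify numerically; a minimal sketch, again assuming w_U(ν) = (βhν/2)coth(βhν/2). The two expressions agree identically, because (k_BT/κ)·w_U(ν₀) = (hν₀/2κ)coth(hν₀/2k_BT) = ħ/(2mω₀)·coth(βħω₀/2), which is the textbook quantum variance.

```python
import numpy as np

KB = 1.380649e-23    # J/K
H = 6.62607015e-34   # J*s
HBAR = H / (2.0 * np.pi)

def x2_quantum(kappa, m, T):
    """Exact quantum <x^2> of a harmonic oscillator."""
    w0 = np.sqrt(kappa / m)
    return HBAR / (2.0 * m * w0) / np.tanh(HBAR * w0 / (2.0 * KB * T))

def x2_filtered(kappa, m, T):
    """Classical variance k_B*T/kappa scaled by the assumed weight w_U(nu0)."""
    nu0 = np.sqrt(kappa / m) / (2.0 * np.pi)
    x = H * nu0 / (2.0 * KB * T)
    return (KB * T / kappa) * x / np.tanh(x)

# An OH-stretch-like mode (kappa ~ 500 N/m, m ~ 1.6e-27 kg): identical results.
print(x2_quantum(500.0, 1.6e-27, 298.15), x2_filtered(500.0, 1.6e-27, 298.15))
```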
4.2. A Single Water Molecule. Going beyond the harmonic oscillator, we chose to test the proposed method on a single water molecule as a more realistic system. Born-Oppenheimer molecular dynamics (BOMD) simulations were carried out without rotation in the temperature range from 25 to 5000 K using the B3PW91 functional. 67 The vibrational part of the isochoric heat capacity (c_V^vib) was obtained as the slope of the internal energy vs temperature function. For the sake of comparison, we estimated the isobaric heat capacity (including rotation and translation) based on these data as c_p = c_V^vib + 1.5R + 1.5R + R = c_V^vib + 4R. We compare the GSTA results with heat capacities determined using several different methods in Figure 5. The graph also includes the ideal classical value, 7R = 58.2 J/K/mol (blue line), as well as the values determined by applying the quantum harmonic oscillator model to the optimized geometry (green circles). At low temperature, all methods give very similar results, while at higher temperatures only the trajectory smoothing (red crosses) stays close to the experimental values (black triangles). 78 This illustrates that anharmonicity is described properly by GSTA, in contrast with the quantum harmonic oscillator model or the 1PT method (purple squares).

4.3. Properties of Bulk Water. In order to demonstrate that our approach works in very different circumstances, bulk water was considered as a test case over a wide temperature range. Quantum effects decrease while anharmonicity increases with temperature, and the heat capacity of water changes significantly, from zero to 76.0 J/mol/K, between the different phases at atmospheric pressure. 79,80 The intramolecular vibrations remain in the ground state at ambient temperature; therefore, many rigid water models are successful in reproducing the experimental values. 64,81 For the sake of generality, a flexible model is desired, with which similar approaches can be examined even at high temperatures, where excited vibrations can be populated as well. Three-site models are advantageous because an analytical Hessian can be calculated for the optimized structures, which is not possible for polarizable water models. Considering these requirements, we chose the recently developed SPC/Fw 82 water model, which performs well for several physical properties. 81 The parameters of the SPC/Fw water model were developed for classical molecular dynamics simulations to reproduce experimental data such as the self-diffusion coefficient, the dielectric constant, the vibrational frequencies, the oxygen-oxygen radial pair distribution function, and the heat of evaporation. 82

4.3.1. Static Properties. Classical simulations cannot distinguish between the static properties of light and heavy water: in principle, exactly the same thermodynamic and structural properties are obtained for H₂O and D₂O, since classical statistical thermodynamics predicts that equilibrium properties (heat capacity, molar density, surface tension, etc.) are independent of the masses of the atoms. To recover the NQEs from classical trajectories, we need post-processing methods like 1PT, 2PT, 1PT+AC, or GSTA. An important issue is to decide on the reference to which the computed quantities are compared. Converged PIMD simulations provide the exact static properties within the limits of the particular water model. However, empirical potentials are developed to reproduce various experimental data with classical simulations. Thus, our strategy is to compare the different approximate values both to PIMD results and to experiments.

Heat Capacity. At a given temperature, we determined the isobaric heat capacity as the derivative of the enthalpy vs temperature function from 120 independent 1 ps microcanonical trajectories. Three phases, Ih ice, liquid water, and vapor, were simulated at temperatures from 25 to 1000 K at atmospheric pressure (Figure 6). The classical ideal values are also shown with blue lines for the condensed and vapor phases in the figure (9R = 74.8 J/K/mol and 7R = 58.2 J/K/mol, respectively). The nuclear motions can be accurately described by independent harmonic oscillators at low temperature; therefore, the heat capacity can be estimated quite well with all methods in the case of hexagonal ice (black triangles in Figure 6). Our method (denoted with red crosses) also gives values close to the experimental ones. 79 The maximum deviation is 6.0 J/mol/K at T = 200 K, probably due to the fact that the SPC/Fw water model was developed for liquid water and not for ice. A typical indication of this is that the melting point of the SPC/Fw water model is similar to that of the SPC/E water model (215 K), well below the experimental value (273.15 K). 83 For liquid water, the harmonic oscillator model is qualitatively wrong, as it fails to describe the anharmonicity that is already captured by classical simulations (yellow diamonds in Figure 6). The smoothing correction successfully takes the coupled motions of the molecules into account and thus performs outstandingly when compared to the experimental values. The isobaric heat capacity calculated by PIMD is somewhat higher (green circles in Figure 6). Most importantly, GSTA performs much better than the classical model or the 1PT method, no matter which reference we use, i.e., the experimental values or the PIMD results. The intramolecular vibrations are filtered out in the g_cV-smoothed trajectories; in this respect, the water molecules behave rigidly in these liquid simulations (see the animation in the Supporting Information). This is in line with the experience that the overall performance of rigid water models is comparable to that of flexible ones at room temperature. 64,81 For the vapor phase, we used the same water model and trajectory analysis as in the condensed phases (Figure 6). GSTA gives reasonable results but overestimates the experimental heat capacities by about 3.2 J/mol/K.
This is probably because the dipole moment of the SPC/Fw water model is adjusted to liquid properties and is therefore high compared to the dipole moment of a single water molecule in the gas phase (2.39 vs 1.85 D, respectively). The 1PT method overestimates the experimental values by 16 J/mol/K, as a consequence of considering only harmonic vibrations. Overall, the smoothed SPC/Fw simulations are able to reproduce the heat capacity with an average error of 3.1 J/mol/K over an extended temperature range at atmospheric pressure. We have found two computational studies in the literature that investigated the heat capacities of the three most important phases of water. Yeh et al. used the 2PT method with different rigid water models, and all of their calculated heat capacities deviated by more than 6.7 J/mol/K from the experiments. 45 Shinoda and Shiga performed PIMD simulations for the three phases using the flexible SPC/F water model; 84 they reproduced the experimental values excellently for the ice and gas phases, but the heat capacity of liquid water was significantly underestimated, by 16.6 J/mol/K. We have also investigated other water models to examine their performance for the calculated heat capacity of liquid water (at T = 298.15 K, p = 1 atm). Besides the SPC/Fw model discussed above, other potentials were tested, including rigid, polarizable, and four-site models; Table 1 includes results from previous works as well, for comparison. Regarding the classical heat capacities (c_p^cl), we reproduced the literature data with small differences, which validates our simulations. The GSTA-corrected heat capacities vary between 72.8 and 80.9 J/mol/K with the different water models, around the experimental value. The 1PT and 2PT methods give values from 52.2 to 86.6 J/mol/K, and generally the rigid models perform better. The PIMD heat capacities range from 58.2 to 92.8 J/mol/K; the PIMD values can be considered the exact heat capacities for the particular water models. Previously, the PIMD heat capacity of the SPC/Fw water model was found to be 88.7 J/mol/K, significantly higher than the experimental (75.4 J/mol/K) or the GSTA value (72.8 J/mol/K). In our PIMD simulations in an NpT ensemble, the isobaric heat capacity is 79.6 J/mol/K. In order to resolve this contradiction, we performed PIMD simulations in a canonical ensemble with the same settings as in the literature. 28 Using the same number of beads, 24, we obtain 79.8 J/mol/K for the heat capacity, but with 32 beads it decreases to 76.6 J/mol/K. This indicates that there is a discrepancy between the values obtained here and those from earlier simulations. Still, we note that our PIMD heat capacity is in reasonable agreement with the experimental and the GSTA values. Our results indicate that the SPC/Fw water model reproduces the experimental heat capacity sufficiently well. On the basis of the comparisons discussed here, we conclude that GSTA is an accurate and robust method, in the sense that the calculated heat capacities are less sensitive to the chosen potential.

Structure of Liquid Water. A quantum-corrected structure can be obtained after filtration of the coordinates with the kernel function of the energy (g_U). The largest NQE in the structural properties of water is the quantum fluctuation of the hydrogen atoms. This effect is illustrated in the distributions of the intramolecular distances in Figure 7 and also in the animation (Supporting Information).
The distribution determined from PIMD simulations can be considered the exact reference. The distributions of the intramolecular distances become broader than the classical ones, in accordance with the Heisenberg uncertainty principle. The classical distributions are too narrow, but the average distances are almost identical (Table 2). The GSTA method reproduces the exact distributions almost perfectly, with a slight shift of 0.01 Å in the maxima. The classical and the filtered radial distribution functions are shown in Figure 8 together with the experimental data. 91 The positions of the classical peaks are not altered significantly by the filtration, but the peaks become broader and move closer to the experimental curves. Note that the 1PT and 2PT methods do not correct the structural properties.

Dielectric Constant. The experimental isotope effect on the static dielectric constant of liquid water is small; 92 in ice, the isotope effect is the opposite and more enhanced: the static dielectric constants of ordinary and heavy ice are 110 and 124 parallel to the c axis of the crystal at −25 °C, and the difference is even larger at lower temperatures. We have calculated the dielectric constant for both the classical and the filtered structures at 298.15 K using the fluctuation relationship

ε = 1 + (⟨M²⟩ − ⟨M⟩²)/(3ε₀Vk_BT),

where ε₀ is the vacuum permittivity, V is the volume of the simulation box, and M is the total dipole moment of the system. We computed the dielectric constant for the SPC/Fw and AMOEBA14 water models. In Table 3, we collected our GSTA results along with previous classical and PIMD values from the literature. 92 For the SPC/Fw water model, PIMD gives a dielectric constant smaller by 16 than the classical value. Thus, GSTA predicts a significantly smaller quantum effect for the dielectric constant than PIMD (+0.6 vs −16); the effect even has the opposite sign. The quantum effect determined by PIMD strongly depends on the water model: in most cases the dielectric constant decreases drastically, but in some cases it increases. This implies that the overall effect arises from competing quantum effects, as Habershon et al. proposed for the self-diffusion coefficient of water. 24 They also mentioned that this may affect equilibrium properties such as the melting point of ice. In order to investigate the possible source of these competing effects, we calculated the molecular dipole moment μ and the Kirkwood g-factor G_K for the SPC/Fw water model. (For the AMOEBA14 model it is not straightforward to calculate the molecular dipole moment, because it is a polarizable water model.) The molecular dipole moment is calculated as the root-mean-square of the individual dipole vectors (eq 74),

μ = [(1/N_mol) Σ_i |μ_i|²]^(1/2),

where N_mol is the number of molecules. The Kirkwood g-factor is defined as

G_K = ⟨M²⟩/(N_mol·μ²),

and the dielectric constant can be expressed from the molecular dipole moment and the G_K factor. The molecular dipole moments and the Kirkwood g-factors are collected in Table 4, together with the static dielectric constants. The overall effect is a product of effects on both the molecular dipole moment and the Kirkwood g-factor. The sign of these effects can be positive or negative depending on the water model, but in experiments the overall effect is small. Most of the PIMD simulations indicate large quantum effects on the static dielectric constant, which implies a large effect on the molecular dipole moment and/or on the Kirkwood g-factor as well. This does not mean that PIMD is inaccurate; it merely indicates that these force fields were not designed to reproduce the experimental NQE on the dielectric constant in PIMD simulations.
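The two estimators above are straightforward to implement; a minimal sketch, with array shapes and function names chosen for illustration only:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # J/K

def dielectric_constant(M, V, T):
    """eps = 1 + (<M^2> - <M>^2) / (3 eps0 V kB T).
    M: (n_frames, 3) total dipole in C*m; V in m^3; T in K."""
    fluct = np.mean(np.sum(M**2, axis=1)) - np.sum(np.mean(M, axis=0)**2)
    return 1.0 + fluct / (3.0 * EPS0 * V * KB * T)

def kirkwood_g(M, mu_mols):
    """G_K = <M^2> / (N_mol * mu^2), mu being the rms molecular dipole.
    mu_mols: (n_frames, N_mol, 3) individual molecular dipoles in C*m."""
    mu2 = np.mean(np.sum(mu_mols**2, axis=2))   # mean squared molecular dipole
    n_mol = mu_mols.shape[1]
    return np.mean(np.sum(M**2, axis=1)) / (n_mol * mu2)
```

Evaluating both functions on the classical and on the filtered structures gives the classical and GSTA entries of Tables 3 and 4, respectively.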
GSTA predicts small effects on these properties, which is in line with the experiments. The q-TIP4P/F force field predicts the smallest NQE among the PIMD simulations, and there the difference between the dielectric constants is smaller than the uncertainty (Table 3).

Dynamical Properties. The investigation of NQEs on dynamical properties is less straightforward than on static properties. Even classical simulations show drastic differences between H₂O and D₂O, simply because deuterium moves more slowly than protium due to the mass difference. Standard PIMD calculations cannot model time-dependent processes, but there are several approximate PIMD-based methods, such as RPMD, CMD, or LSC-IVR, which can imitate quantum dynamics. 13,18,97 Hence, it is challenging to separate quantum and classical effects in isotope substitutions, and this makes it difficult to validate new methods like GSTA on dynamical NQEs.

Self-Diffusion Coefficient. The self-diffusion coefficient D_s can be determined from the zero-frequency value of the VDOS. Since the weight function w_U(ν) is always 1 at zero frequency, the self-diffusion coefficient does not change upon application of the GSTA method (see eqs 8 and 77). The self-diffusion coefficient can also be calculated from the mean-square displacement of the atoms using the Einstein relation,

D_s = lim_{t→∞} ⟨|r(t) − r(0)|²⟩/(6t).

From this equation it is also evident that one obtains the same self-diffusion coefficient after filtering the classical trajectory, since GSTA perturbs the classical coordinates by only a few tenths of an Å, which becomes negligible compared to the total displacement of the atoms at sufficiently long times. Thus, the previous classical self-diffusion coefficients collected in Table 5 are identical with the GSTA values. PIMD-based approximate quantum dynamics methods like CMD, RPMD, or LSC-IVR do change the classical value of the self-diffusion coefficient; the quantum effect (the relative change with respect to the classical value) varies between 180% and −30%, depending on the water model and the quantum dynamics method. Although GSTA does not show any nuclear quantum effect on the self-diffusion coefficient, classical simulations can also reproduce the experimental isotope effect. The experimental self-diffusion of heavy water is 23% slower than that of light water, and in classical simulations this varies between 3% and 25%. We have found only two quantum simulations on heavy water, with the q-TIP4P/F and TTM2.1-F water models, and the calculated isotope effects (−29% and −18%) are in good agreement with the experiment, as are the absolute values.
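A minimal sketch of the Einstein-relation estimator; discarding the first half of the time window as the short-time ballistic part is a common but arbitrary choice, made here only for illustration:

```python
import numpy as np

def self_diffusion(msd, t):
    """Einstein relation: fit the long-time linear part of the MSD,
    <|r(t)-r(0)|^2> = 6 D t, and return D (in units of msd/t)."""
    half = len(t) // 2                      # skip the short-time ballistic regime
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0
```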
Infrared Absorption Spectrum. In classical simulations, the infrared spectrum of the SPC/Fw water model fails to reproduce the experimental spectrum even qualitatively, because of the harmonic intramolecular potential and the fixed point charges. 28,106 This is why we calculated the infrared absorption spectrum of the AMOEBA14 water model, which is polarizable and has anharmonic OH bonds. We determined the infrared absorption spectra both from the classical and from the filtered trajectories, at different levels of theory (see details in Appendix B). For a classical simulation, the absorption cross section is obtained from the classical line shape function; applying the harmonic quantum correction factor to the classical line shape function yields the quantum-corrected expression, 107,108 and a third estimate of the absorption cross section is calculated directly from the filtered trajectories. Interestingly, almost identical spectra were obtained from the filtered trajectories and from the classical simulation with the harmonic quantum correction factor applied (see the orange and blue lines in Figure 9). This implies that the dipole moment changes linearly with the coordinates during the filtration. The absorption spectrum of the classical trajectory without quantum correction shows much lower intensities (green line); the relative intensities are also different, but the positions of the peaks are the same as in the quantum-corrected spectra. Comparison with the experimental spectrum (black line) shows that the quantum-corrected peaks deviate by ±100 cm⁻¹, and the relative intensities are reproduced qualitatively. 109 The peaks of the symmetric and antisymmetric OH stretches partially overlap, due to the anharmonicity of the AMOEBA14 water model. The calculated spectrum also reproduces the smaller peaks at around 200 and 2200 cm⁻¹, which are completely missing from the spectra of nonpolarizable water models. In previous dynamic simulations, the quantum spectra deviated significantly from the ones computed from classical simulations: the classical peaks are always blue-shifted compared to the peaks of quantum simulations, the largest difference being the ∼100 cm⁻¹ shift of the OH stretch. 9,24,28,64,97,110−114 Assuming that GSTA does not change the positions of the peaks, the spectra of the filtered trajectories are always blue-shifted relative to the spectra of quantum simulations, independently of the water model. The explanation is that in a classical simulation the atoms vibrate at the bottom of the potential well and experience less anharmonicity than in real quantum dynamics, where the atoms reach higher potential energies because of the zero-point energy. As mentioned in the Self-Diffusion Coefficient section, the filtration does not change the frequencies of the vibrations, only their intensities. Since GSTA is based on a harmonic approximation, it cannot reproduce the exact positions of the vibrational frequencies, as Marx has already shown for the harmonic quantum correction. 107 In previous simulations, imaginary frequencies were also identified in the vibrational spectrum of water. 112,115 GSTA cannot reproduce these imaginary frequencies, because in classical simulations the VACF is an even function; therefore, only real frequencies appear in the VDOS after the Fourier transform. Real-time autocorrelation functions can also be determined by the maximum entropy analytic continuation (MEAC) method for the calculation of quantum properties. 116−119 In MEAC, an imaginary-time correlation function, generated from PIMD simulations, is converted into a real-time correlation function. MEAC needs an approximate real-time function, a so-called prior, which can be a general function (i.e., a flat prior) or a more specific one coming from a CMD, RPMD, or LSC-IVR calculation. The input of GSTA is only a classical trajectory, in the traditional sense that it propagates in real time as a single bead. MEAC is useful well below room temperature and is therefore not used for liquid water; typically, it is applied to liquid parahydrogen. 117
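The quantum-corrected line shape discussed above can be sketched as follows; the code assumes the standard harmonic quantum correction factor βħω/(1 − e^(−βħω)) and a precomputed dipole autocorrelation function, and all names are illustrative:

```python
import numpy as np
from scipy.signal.windows import blackmanharris

KB = 1.380649e-23      # J/K
HBAR = 1.054571817e-34 # J*s

def ir_lineshape_qc(acf, dt, T):
    """Spectrum of the total-dipole autocorrelation function, with a four-term
    Blackman-Harris window applied before the Fourier transform and the
    harmonic quantum correction factor applied afterwards."""
    windowed = acf * blackmanharris(len(acf))
    spec = np.abs(np.fft.rfft(windowed))
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(acf), d=dt)
    x = HBAR * omega / (KB * T)
    x_safe = np.where(x > 0, x, 1.0)
    qcf = np.where(x > 0, x_safe / (1.0 - np.exp(-x_safe)), 1.0)
    return omega, qcf * spec
```

Running the same routine with qcf set to 1 on the filtered trajectory's dipole ACF would give the GSTA spectrum for comparison with the blue and orange lines of Figure 9.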
4.4. Computational Cost and Accuracy. According to the convolution theorem, the same results should be obtained for the calculation of thermodynamic properties in the time and in the frequency domain; the Fourier transform should not affect the results. This is true for continuous periodic signals, but in MD simulations a finite number of data points are represented at discrete times. This is why the calculated quantities carry numerical errors, and different estimators of the same property can give different values. Here, we illustrate that the same correction applied to the VDOS and to the VACF can give significantly different results. The VDOS is generally determined via the Fourier transform of the VACF, and this Fourier transform can be carried out with different numerical integrators. The most common technique corresponds to the left Riemann integral, which is probably the default algorithm in most simulation programs; alternatively, the trapezoidal rule can be used for the discrete Fourier transform. The calculated 1PT heat capacities are collected in Table 6: with the left Riemann integration the underestimation is 9%, whereas with the trapezoidal rule all methods give the same results within 0.01%.

In order to sample the low-frequency motions for the calculation of the VDOS, a long simulation time is necessary; as a rule of thumb, the length of the VACF should be at least as long as the time period of the lowest-frequency motion. As was already shown in the Theory section (2.1), the calculation and the correction of the VDOS are performed with a double integral (eqs 2 and 16), which is more CPU-demanding than the correction of the VACF, which requires only a single integral (eq 23). One might expect the calculations to be more accurate if all data are used, i.e., if the length of the VACF equals the simulation length. However, the γ_cV weight function and the g_cV kernel function converge exponentially to zero; therefore, the integration can be truncated. We can give an upper limit for the maximum time separation (t_max) by calculating the heat capacity of an ideal monatomic gas. The velocities can be considered constant between two collisions, so the normalized autocorrelation function is 1; in this case there is no quantum effect, and the exact heat capacity is 3/2 k_B. The heat capacity obtained with the filtration of the velocities, integrating up to t_max, involves the hyperbolic tangent (tanh) function, and an analogous expression is obtained for the VACF correction. Both estimators lead to the exact classical result for long t_max. The convergence of these functions is shown in Figure 10 at T = 298.15 K: the exact result is reached within 50 fs with an error smaller than 0.01%. This means that a 50 fs long VACF is sufficient for the quantum correction, and it is not necessary to calculate the VACF over several ps. Calculation of the VACF consumes a considerable amount of memory, which scales with t_max, while the number of mathematical operations scales with the square of t_max.
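The two integrators discussed at the beginning of this section differ only in how the endpoints are weighted, which matters because the VACF is largest at t = 0; a minimal sketch of both (the cosine-transform formulation is an illustrative simplification):

```python
import numpy as np

def dft_cos_left_riemann(f, dt, nu):
    """Cosine transform with the left Riemann sum (a common default)."""
    t = dt * np.arange(len(f))
    return dt * np.sum(f * np.cos(2.0 * np.pi * nu * t))

def dft_cos_trapezoid(f, dt, nu):
    """Same transform with the trapezoidal rule: endpoints get half weight."""
    g = f * np.cos(2.0 * np.pi * nu * dt * np.arange(len(f)))
    return dt * (np.sum(g) - 0.5 * (g[0] + g[-1]))
```

For a decaying VACF, the difference between the two estimates is dominated by the t = 0 term, which is consistent with the systematic underestimation reported above for the left Riemann sum.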
CONCLUSIONS

In summary, we have proposed a new method, GSTA, for the quantum correction of classical and BOMD simulations. Our qualitative findings regarding the capability and accuracy of the GSTA method compared to other methods are summarized in Table 7 (ratings in Table 7: −, no effect; +, rough estimation; ++, good approximation; +++, exact for a particular potential). A clear advantage of GSTA compared to the 1PT or 2PT methods is that the effect of anharmonicity can be determined rigorously using the work of smoothing defined by eq 53. Another novelty is that structural NQEs can be investigated via the filtration of the coordinates. In more advanced methods, where anharmonicity can be described, the classical dynamics itself is modified to incorporate NQEs (e.g., ZPEs are added to the different normal modes of vibration). The good agreement with the experiments indicates the plausibility of our smoothing technique. Zero-point vibrations are automatically taken into account by the proper enhancement of the high-frequency motions in the classical trajectories. The necessary simulations are orders of magnitude faster than with the gold-standard technique, PIMD. GSTA reproduced large NQEs for the heat capacity and for structural properties. In contrast to PIMD computations, GSTA does not change the IR absorption spectrum or the dielectric constant significantly, and the self-diffusion coefficient remains exactly the same as the classical value. The main reason is that the input of GSTA is a classical trajectory: the quantum fluctuations are added after the simulation, so the filtration does not change what the initial and final states were or how long it took to get from one to the other. In contrast, in PIMD calculations these properties deviate notably from the classical values. While PIMD requires reparametrized or ab initio models to avoid double counting, GSTA can be applied to empirically derived force fields and model potentials. Our method offers an alternative way to estimate NQEs routinely in theoretical investigations. In addition, force fields and water models can be improved using GSTA. The proposed method can easily be combined with molecular modeling programs to perform simulations and analyze the trajectories; our source code is available on GitHub. 120 Here, we have determined the heat capacity and some structural and dynamical properties for various systems as a proof of concept, but in subsequent studies we will show that other thermodynamic quantities, such as the entropy and the free energy, which are strongly related to the heat capacity, can also be estimated by GSTA. We also intend to test the applicability and the limitations of our method on other NQEs, like tunneling, and on further spectroscopic and dynamic properties.

■ APPENDIX A

Change of Potential and Kinetic Energy upon Smoothing. A trajectory can be described as a Fourier series with coefficients a_n,

x(t) = Σ_{n≥1} a_n cos(nt).

Without loss of generality, we assume that x(t) is an even function, that the periodicity of the trajectory is 2π, and that a₀ = 0. The a_n coefficients follow from the usual Fourier integrals. The filtered trajectory can then be written by scaling each Fourier component with the corresponding weight; here n/2π is the actual frequency appearing as the first argument of the w weight function in the Fourier series. The smoothed kinetic energy and the work of smoothing at a specific time follow from this expansion, and, after some trivial transformations and using eqs 93 and 60, the equality of the average changes of the kinetic and the potential energy is obtained.

■ APPENDIX B

The classical line shape function is the Fourier transform of the autocorrelation of the classical total dipole moment, and the classical infrared spectrum is obtained from it as the classical absorption cross section (eq 99); this classical absorption spectrum can be seen in Figure 9. The simplest incorporation of quantum effects is the application of the harmonic quantum correction factor (eq 100). 107,108 The quantum-corrected absorption cross section is the product of eqs 99 and 100.
Research on Technological Development Status of New Energy Vehicles Based on Hydrogen Fuel Cells

Abstract: As an electric energy conversion medium with economic value, hydrogen energy, which is storable and highly adaptable to new energy sources, can be used as an energy storage medium to ensure a stable output of energy and the continuous running of vehicles, in the context of the continuing deterioration of the global environment and the steady decline of traditional fossil fuel reserves. This paper summarizes the development status of hydrogen fuel vehicles and the development characteristics of the corresponding technologies, and studies the technological development status of hydrogen fuel cell vehicles (HFCEVs) through an analysis of hydrogen energy for vehicles and through summaries of the development of hydrogen fuel vehicles and the key technologies of hydrogen fuel cells.

1. Introduction

Hydrogen vehicles are cars with hydrogen as the energy source; the energy contained in hydrogen is applied directly or indirectly to the automotive field in different forms. Hydrogen vehicles can be divided into two types by the use of hydrogen energy: first, hydrogen internal combustion engine vehicles (HICEVs), which are driven by the power generated by hydrogen combustion in an internal combustion engine; second, fuel cell vehicles (FCEVs), which are driven by a motor propelled by the current formed by electrons from the reaction of hydrogen, or hydrogen-containing substances, with oxygen from the air in fuel cells. HICEVs are the more common modification, as a hydrogen storage device is simply added to a traditional vehicle. However, the limited capacity of the hydrogen storage device means that such hydrogen fuel vehicles require frequent refueling. Meanwhile, because they are modified from traditional vehicles, CO₂ emissions are unavoidable and there is a safety problem with hydrogen storage, which is not in line with the current trend of environmental development. Therefore, HICEVs cannot become the mainstream of future development. In comparison with HICEVs, FCEVs use hydrogen energy to generate electricity, and the electric energy is converted into mechanical energy to drive the vehicle. Hydrogen fuel cells truly achieve zero emission and zero pollution, based on the characteristics of hydrogen energy [1]. Hydrogen is an ideal energy source in the new energy system: in comparison with other new energy sources, it avoids the disadvantage of intermittency and has a higher calorific value, its gravimetric energy density (140 MJ/kg) being nearly three times that of typical solid fuels (50 MJ/kg). Moreover, the combustion product of hydrogen is water, which causes zero pollution to the environment.
Hydrogen energy is the most environmentally friendly energy carrier. It can be stored in the gaseous phase in high-pressure tanks; it can also be stored in the liquid phase and in the solid state, for example in porous materials, metal hydrides and complex hydrides. The application of hydrogen energy as an energy carrier in the automotive field has the following advantages: 1) high-efficiency conversion between hydrogen and electric energy can be achieved through water electrolysis technology; 2) compressed hydrogen and hydrogen stored in the solid state have a high energy density; and 3) zero pollution can be realized based on the characteristics of hydrogen energy, making hydrogen fuel vehicles new energy vehicles in the true sense.

Development status of hydrogen fuel vehicles

Low-carbon transition and green development have become the main theme of future economic development, according to analyses of global economic and energy development trends. President Xi Jinping stated that "China will increase its intended nationally determined contributions, work out more powerful policies and measures and strive to reach the peak of CO₂ emissions by 2030 and achieve carbon neutrality by 2060" in his speech at the High-level Meetings to Commemorate the 75th Anniversary of the United Nations. The development of hydrogen energy and the promotion of hydrogen fuel vehicles have become an important part of future economic development in various countries [2].

Summary of the development status of global hydrogen fuel vehicles

The United States, Japan, the European Union, South Korea and other countries and regions have successively released national hydrogen development plans, clarified the development direction of hydrogen energy, and continuously strengthened both the research and development of hydrogen-fueled new energy vehicles and the support for enterprises. Hydrogen-fueled new energy vehicles worldwide will thus usher in a period of rapid development. According to the latest research results released by the Hydrogen Council, it is estimated that by 2050 there will be 30 million hydrogen-related jobs worldwide and that the use of hydrogen energy will avoid about 6 billion tons of CO₂ emissions. Hydrogen energy could create a market value of USD 2.5 trillion, accounting for up to 18% of the global energy system. As the most strategically significant of the new energy resources, hydrogen energy has received more and more attention from organizations, regions and countries. According to official statistics, eighteen economies, accounting for 70% of the world's total economy, have formulated hydrogen-related energy development strategies [3]. Hydrogen fuel vehicles were put into operation and more than 80 hydrogen refueling stations had been built as of 2020. A new situation has emerged in the development of hydrogen fuel vehicles in an environment of continuously released policies [4]. Hydrogen energy was first mentioned in the 2019 government work report, and was included in the definition of energy for the first time in the Energy Law of the People's Republic of China (draft for comments) in April 2020, an important reference point for the development strategy and supporting policies for hydrogen fuel cells in China.
In November of the same year, the Development Plan for the New Energy Vehicle Industry (2021-2035) clearly stated that efforts would be made to achieve the commercial use of hydrogen fuel vehicles, with hydrogen fuel cells as an important component, by 2035 [5]. The Notice on Pilot Application of Fuel Cell Vehicles was officially released in September 2020, proposing to master the key technologies of hydrogen fuel vehicles and to build a complete industrial chain gradually by 2025. More than 40 provinces and cities have made development plans for hydrogen fuel to date. Based on the current state of development, China has initially established a hydrogen industrial chain for hydrogen fuel cell vehicles, with grey hydrogen as the main hydrogen source and high-pressure gaseous hydrogen as the main carrier. Hydrogen fuel vehicles in China now have the foundation for high-quality development, mainly manifested in the following: 1) the top-level design of the industry of hydrogen energy for vehicles has gradually become clear, and government support has been continuously improved; 2) the demonstration operation of hydrogen fuel vehicles in key development regions has achieved good results, and fuel cell vehicles have strong demonstration and promotion capabilities; 3) continuous breakthroughs have been made in key technologies: fuel cell materials are now produced domestically, and hydrogen production technology based on renewable energy has been improved continuously.

Hydrogen production technologies for hydrogen fuel vehicles

Hydrogen production technologies include fossil fuel reforming, decomposition, photolysis and water electrolysis. Currently, global annual hydrogen consumption is on the order of tens of millions of tons, and more than 95% of this hydrogen is obtained from fossil fuels. On the one hand, traditional fossil fuels have the disadvantage of being non-renewable, and their CO₂ emissions have a significant impact on environmental issues such as global warming; therefore, hydrogen production based on traditional fossil fuels does not fit the future industrial development trend. On the other hand, hydrogen produced via water electrolysis has a purity as high as 99%, and when this hydrogen is used, the only product is water, which is environmentally friendly and in line with the current global trend of technological change. Moreover, renewable energy such as solar and wind energy can be integrated into the hydrogen production process, achieving green hydrogen production with zero emissions and zero pollution; this also facilitates the consumption and storage of renewable energy. In terms of its principle, water electrolysis uses electric energy to decompose water into hydrogen and oxygen. Water electrolysis is classified by electrolyte into alkaline water electrolysis, proton exchange membrane water electrolysis with a solid polymer electrolyte (PEM water electrolysis), anion exchange membrane water electrolysis with a solid polymer electrolyte (AEM water electrolysis) and solid oxide electrolysis (SOEC). The four hydrogen production technologies share the same principle but differ in the materials applied, as well as in their storage and operating conditions [6]. The technology of hydrogen production via alkaline water electrolysis is quite mature.
It is a low-temperature electrolysis technology in which KOH or NaOH solutions are used to generate hydrogen and oxygen under the action of a direct current. Its efficiency can be up to 60%. However, the technology has problems such as a long start-up time and poor adaptability to fluctuating loads, and is therefore more suitable for large-scale, steady-state operation. PEM water electrolysis is a hydrogen production technology that developed from PEM fuel cell technology and catalyst technology; with a higher operating current density and fast start-up and shutdown, it is widely recognized as one of the most promising hydrogen production technologies. AEM water electrolysis and SOEC have not yet been commercialized, although they have a high conversion rate; therefore, they are both called new water electrolysis technologies. SOEC has an efficiency as high as 90% when waste heat is comprehensively utilized, but the technology is still in the research and development stage. In addition to SOEC, AEM water electrolysis, which combines the advantages of traditional alkaline liquid electrolyte water electrolysis and PEM water electrolysis, has significant development potential for the future. The four methods of hydrogen production are compared as follows on the basis of public data.

Hydrogen storage technologies for hydrogen fuel vehicles

Owing to safety regulation problems that urgently need to be settled, low-temperature liquid hydrogen storage is more suitable for aerospace engineering, national energy layout and other major applications; it cannot be adopted for hydrogen storage in hydrogen fuel cell applications. 3) Solid-state hydrogen storage technology. Under this technology, hydrogen is stored in hydrogen-containing compounds and released from the solid-state compounds through a reaction during use. Solid-state hydrogen storage, with hydrogen storage materials as the medium, is recognized as a promising hydrogen storage technology, and the technology is now maturing. Commercially viable hydrogen storage materials mainly include rare-earth AB5 alloys, Ti-Fe AB alloys, Ti-Mn AB2 alloys, Ti-V solid solutions and magnesium-based hydrogen storage materials. The development of this technology is one of the key factors for the commercialization of hydrogen fuel cell vehicles; its research and development focus on the efficiency of hydrogen storage, process safety and material feasibility, and the research and development of hydrogen storage materials is the core factor restricting the progress of solid-state hydrogen storage. Hydrogen carrier technology and composite hydrogen storage and transport technology have emerged with the rapid development of hydrogen energy; however, as new hydrogen storage technologies with a short development history, they still require a long development and demonstration process.

Technology of fuel cells for vehicles

Fuel cells are different from traditional electric energy storage batteries: they convert the chemical energy of the fuel inside them directly into electric energy, and they have a high power generation efficiency. Most of the byproducts of this power generation are safe and environmentally friendly, and hydrogen fuel cells consume only hydrogen and oxygen. Therefore, hydrogen fuel cells have a broader market application prospect than traditional electric energy storage batteries. In fuel cell stacks, the selection and design of catalysts are undoubtedly key factors for improving the performance of fuel cells.
Considering environmental protection and other factors, commonly used catalyst products and the key factors in their selection are summarized from publicly available enterprise data. Among fuel cells, hydrogen fuel cells mainly include molten carbonate fuel cells (MCFCs), solid oxide fuel cells (SOFCs), polymer electrolyte fuel cells (PEFCs) and phosphoric acid fuel cells (PAFCs). PEFCs are the current research hotspot, with advantages such as a low operating temperature and a short start-up time.

Conclusion

The continuing "hydrogen heat" has facilitated the development of the global hydrogen fuel cell industry, and hydrogen fuel vehicles are now the focus and hotspot of research in the automotive field. Many companies around the world have designed and put into operation demonstration projects, and the application of hydrogen fuel cells to large vehicles such as buses is being researched and developed in addition to their application to ordinary household cars. According to the analysis of current conditions, hydrogen fuel cell vehicles in China are gradually entering a stage of quality improvement. In particular, they have developed rapidly in the past two years under the continuous stimulation of policies, but problems still exist in key links, such as incomplete hydrogen storage materials, low
Wage Gaps between Native and Migrant Graduates of Higher Education Institutions in the Netherlands

In the Netherlands, the share of immigrants in the total population has steadily increased in recent decades. The present paper examines wage differences between natives and migrants who are equally educated, which reduces potential skill biases in our analysis. We apply a Mincer equation to estimate the wage differences between natives and migrants. In our study we analyze only young graduates, so that conventional human capital factors cannot explain the differences in monthly gross wages. Therefore, we focus on "otherness" factors, such as parents' roots, to find an alternative explanation. Our empirical results show that acquiring Dutch human capital, Dutch-specific skills, language proficiency, and long-term integration (second generation with a non-OECD background) are not sufficient to overcome wage differences in the Dutch labor market, especially for migrants with parents from non-OECD countries.

Introduction

The share of the foreign-born population has greatly increased in recent years in most developed countries. This has prompted much research on the social and economic impacts of immigrants on the host society. Such impacts may refer to job creation (or loss), wage changes, welfare and growth effects, trade and tourism flows, or new business formation. A broad review of migration impact assessment matters can be found in Nijkamp et al. (2012). An important and recurrent question is whether a migration inflow may widen the wage differences between natives and migrants. The present paper examines in particular the wage gap between natives and migrants with a higher education diploma in the Netherlands. In the Netherlands, the share of immigrants in the total population has risen substantially in recent decades; Figure 1 describes the immigration situation over the past ten years. As can be seen, the share of younger immigrants is higher than that of the older categories, indicating that migrants who came to the Netherlands during that period were mostly young people. The 20-30 age group in particular is large, and its share increases as we move toward 2010. Some of these migrants completed their education in their country of origin; others in the Netherlands.¹ According to Eurostat (2010), there were 1.8 million foreign-born residents in the Netherlands, corresponding to 11.1 per cent of the total population. Of these, 1.4 million (8.5 per cent) were born outside the EU and 0.428 million (2.6 per cent) were born in another EU Member State. Immigration and the immigrants' economic impact on the host society have long been a sensitive topic in the economic literature. As migrants are heterogeneous in terms of skills and socio-demographic characteristics, their impact on the host country's labor market can also differ. There is much evidence of a wage gap between migrant workers and native workers (Groot, 2013; Behtoui, 2004). The aim of the present paper is, first, to examine the gross salary of students who have graduated from Dutch higher professional education, and then to make a comparison between migrants and natives in the Dutch labor market. In doing so, we employ the Mincer equation for graduates of Dutch higher professional education. Moreover, this paper contributes to the emerging literature on wage differences between migrants and natives in the following ways.
First of all, in our analysis the role of skill bias is suppressed: natives and migrants in our sample have largely obtained the same degrees from a higher education institution. Secondly, we also control for parents' roots, and our empirical results reveal that wage discrimination is related to individuals' roots: graduates with roots in non-OECD countries receive relatively low wages. Furthermore, immigrants who invest in their education at later ages earn lower wages; age structure therefore likely plays a role in the payment of different wages in the labor market. After reviewing the literature on wage differences between immigrants and natives, the paper presents empirical results using data from Maastricht University for 2007 to 2010 on graduates of higher professional education. We find that there is no wage difference between natives and second-generation migrants, but that the wage gap between first-generation migrants and natives is -3 per cent. Furthermore, we also find that migrants coming from outside the OECD zone receive a lower gross salary than OECD-zone migrants. This study also demonstrates that the most important factor in the wage gap between immigrants and natives is in fact not strongly related to their human capital endowment, but probably more to the effect of "otherness". The remaining part of the paper is organized as follows. Section 2 provides a literature review. Section 3 describes our data set and offers a descriptive analysis. Section 4 presents the empirical results, and Section 5 concludes.

Literature review

In recent years, special attention has been devoted to the impact of immigrants in general, and of highly educated and skilled immigrants in particular. Several studies (see Groot, 2013; Friedberg, 2000) have revealed that, although developed countries are in desperate need of skilled and highly educated immigrants, immigrants and even the children of immigrants (the so-called second generation) do not enjoy equal job opportunities and wages. According to human capital theory, differences in labor market outcomes are related to an individual's investment in education and job training (Becker and Becker, 1993; Mincer, 1974). Education and job training increase an individual's productivity, which in turn has a positive impact on a person's earnings. On the basis of this theory, individuals with the same labor supply characteristics are expected to have the same wages and employment opportunities. The conventional human capital model, however, cannot fully explain the differences in wages and employment opportunities between migrants and natives. Some additional adjustments have therefore been made to the model, for example, distinguishing whether an individual's human capital was accumulated in the country of origin or in the country of destination. The same holds true for years of work experience, especially for immigrants from non-OECD countries (Coulon, 2001; Friedberg, 2000), and for the lack of host-country-specific skills, language and knowledge. In due time, however, after immigrants have lived for a number of years in the host country, they steadily acquire the host country's specific knowledge and language. Consequently, their labor market performance increases, and in the course of time their wage difference with respect to natives tends to diminish (Friedberg, 2000; Borjas, 1985; Chiswick, 1978).
In our study, we focus on immigrants who have graduated from Dutch higher education institutions and therefore have the same educational qualifications as the natives. If that were the only relevant factor, there would be no wage difference between immigrants and natives, and in particular none between second-generation immigrants and natives. At the same time, the concept of social capital indicates that social ties produce transferable value and can lead people toward better employment opportunities and possibly higher-paid jobs. According to Bourdieu and Wacquant (1992, p. 119), "social capital is the sum of the resources that accrue to an individual or a group by virtue of possessing a durable network of more or less institutional relationships of mutual acquaintance and recognition." This entails two important elements of social capital: 1) the strength of the social network (the total number of connections) that one can depend on, and 2) the sum of the resources (capital, human and cultural) that each member of the network possesses. Studies find that a person with a better-connected network has more chances in job-matching channels, which may also be associated with higher incomes (Granovetter, 1995; Sprengers et al., 1988). As personal relationships are homogeneous within different groups (e.g. ethnic, religious), job opportunities acquired via personal relationships can cause inequalities in society (Behtoui, 2004). Campbell et al. (1986) indicate that networks are essentially resources and, like many other resources, they are not distributed evenly. Sprengers et al. (1988) studied 242 Dutch men, aged 40-55, who became unemployed in or before 1978; they conclude that those with better social capital found a job within a year, especially those with access to social capital through weak ties. Furthermore, Lin et al. (1981) found that a person who uses information from, and enjoys the influence of, powerful, wealthy or prestigious people is more likely to find a good job than someone without such connections. There are two neoclassical economic models that can explain the labor market gaps between immigrants and natives from the demand side. The first is the taste model developed by Becker (1957), and the second is statistical discrimination, pioneered by Phelps (1972) and Arrow (1973). According to Becker's model, discrimination is fundamentally a problem of taste, meaning that there is a disamenity value in employing a person from a particular group; according to Phelps and Arrow, it is due to a lack of information about the productivity of individuals, which gives firms an incentive to use observable characteristics, such as race or gender, to infer the expected productivity of applicants. The second model, however, is not free of criticism (for an overview, see Aigner and Cain, 1977). As it is difficult to measure discrimination empirically² in the labor market, scholars adopt a conventional discrimination measure, namely the effect of "otherness" on wages and employment, to explain the differences between immigrants and natives (Chiswick, 1978; Behtoui, 2004). A foreign background is negatively related to employment and wages, especially for those from outside the OECD circle (Miles, 1993). In this paper, we divide the immigrants first into first and second generation, and then into two groups, namely those with roots in OECD countries and those with roots in non-OECD countries.
The motivation behind this selection is the cultural similarity of OECD countries to the Netherlands, and the cultural distance between non-OECD countries and the Netherlands. Through this distinction we may be able to capture the possible risk of suffering from discrimination (Miles, 1993). Furthermore, having a foreign background is associated with lower wages and employment, especially for those from non-OECD countries (Behtoui, 2004). And finally, we also examine the effect of having a foreign-born father or mother from an OECD or a non-OECD country for second-generation migrants, to test the hypothesis of Chiswick (1977) and the related hypothesis of Behtoui (2004). In our study, we focus on highly educated migrants who completed their studies together with natives in the same year and then entered the labor market; thus, there is hardly any skill bias in our analysis. Before presenting the empirical results, we discuss the data set and present some descriptive analyses. The next section presents a brief description of the data we used. Graduates were surveyed approximately 18 months after they had completed their studies, and information was collected not only on their discipline of study and other aspects of their background, but also on their current job. Spatial information was collected as well. The average response rate was 37 per cent for each year. Furthermore, we focus on graduates who had obtained their degree and had a full-time job; we dropped from our analysis those graduates who had part-time jobs, were self-employed, were still students, or whose answer sheets had missing information.

Data source and descriptive analysis

For the students who have graduated from higher education, data are available on a series of variables including: personal characteristics (such as gender, age and ethnicity), subject of study, mode of study (full-time vs. part-time), degree results at the time of graduation, and whether individuals are employed in small firms (1-9 employees), medium-sized firms (10-99 employees), or large firms (>=100 employees). Graduates were also asked to give information about their place of residence: where they lived when they were 16 years old, where they lived during their course of study, and where they were living now. Through an analysis of these questions, we were able to generate four variables, namely: those who lived in Noord Holland, Zuid Holland, Utrecht (NH, ZH, U); those who moved to (NH, ZH, U); those who left (NH, ZH, U); and those who moved in between (NH, ZH, U). Each of these provinces hosts at least one of the major Dutch cities (Amsterdam, Rotterdam, The Hague, and Utrecht), which are all located in the Dutch Randstad. Table 1 presents the personal characteristics of graduates with a higher professional education. The gender composition is 53 per cent male, and the mean age of the graduates is 27 years. The share of second-generation migrants (8.6 per cent) is higher than that of the first generation (3.4 per cent). We also added three dummies to capture the differences between natives, OECD nationals and non-OECD nationals. As can be observed from Table 1, the share of non-OECD nationals (7.6 per cent) is higher than that of OECD nationals (4.4 per cent). Regarding the graduation score, Table 2 presents descriptive statistics for natives and for first- and second-generation migrants. The share of first-generation migrants with high graduation marks is slightly higher than that of the second generation.
This suggests that the first-generation migrants are more talented than the second-generation migrants. A possible reason for the higher marks of the first-generation migrants might be that some of these students came to the Netherlands already holding a degree from their country of origin, and, because their original degree is not considered to be equivalent to a Dutch degree, they have to re-study for a couple of years. Figure 2 below shows the supply and wage ratios of immigrants to natives in different age categories of graduates with a higher professional education. As expected, the supply ratio of first-generation migrants is low in the younger age groups (20-24), but, interestingly, they get higher wages. As we move further along the age line, the supply ratio of first-generation migrants to natives increases, and the wage ratio falls below 1, indicating that older migrants are not paid as much as natives of the same age in the labor market. For the second-generation immigrants, there is no wage difference with natives, and even at older ages the second-generation migrants receive slightly higher wages compared to natives. (Figure 2: supply and wage ratios by age group for first-generation and second-generation immigrants.)

Estimation of the Mincer equation

The Mincer equation (Mincer, 1974) is often used in economics to analyse wage variation. This equation relates wages to a series of personal, work, and regional characteristics, and performs well in explaining the positive relationship between ability (proxied by years of education) and earnings. In the Mincer equation, it is assumed that the logarithm of earnings is a nonlinear function of experience, which, according to the model, can be measured as age minus years of schooling, minus the school starting age (5 years). In this study we do not have information on total years of education. Therefore, we use age and age squared as proxies for experience. Furthermore, we also include the subject of study in the form of 7 dummies for graduates with a higher professional training 4. We next introduce a dummy variable taking a value of 1 if the individual is responsible for controlling other employees, i.e. he/she is a 'supervisor', and 0 otherwise. Furthermore, to control for the language of the graduates, we use a dummy taking the value of 1 if a language other than Dutch is spoken inside the household. The regression equation for graduates with higher professional education is written as:

ln(w_{i,t}) = β0 + β1 X_{i,t} + β2 Z_{i,t} + ε_{i,t}

where w_{i,t} is the gross monthly salary of individual i in year t; X_{i,t} represents the explanatory variables, which include the graduation score 5, age (a proxy for experience), age squared to capture nonlinear effects, and dummies for gender, field of study, and residential location; Z_{i,t} is a dummy for immigrant status; and ε_{i,t} is the error term. We use residential and time fixed effects to cope with spatial and temporal heterogeneity. We estimated the Mincer equation in four variants. In the first variant, we include the main variables, while in the second variant we separate age and age squared for the first- and the second-generation immigrants. In the third variant, we add the interactions of the first- and second-generation migrants with different firm sizes. And, finally, in the fourth variant we add dummies for the field of study.

4 For more information on the descriptive statistics, we refer to Appendix A. 5 For details see Footnote 2.
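To illustrate how such a specification can be estimated, the sketch below fits the wage equation with OLS in Python. It is a minimal sketch: the file name (graduates.csv) and column names (monthly_wage, first_gen, second_gen, and so on) are hypothetical stand-ins for the survey variables described above, not the actual data set.

# Hedged sketch of a Mincer-type wage regression (illustrative, not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("graduates.csv")            # hypothetical file, one row per graduate
df["log_wage"] = np.log(df["monthly_wage"])  # logarithm of gross monthly salary

# Age and age squared proxy for experience; dummies capture gender, immigrant
# status, firm size, and supervisor role; C(...) terms give the fixed effects.
model = smf.ols(
    "log_wage ~ age + I(age**2) + female + first_gen + second_gen"
    " + medium_firm + large_firm + supervisor + C(field_of_study)"
    " + C(region) + C(grad_year)",
    data=df,
)
result = model.fit(cov_type="HC1")           # robust standard errors
print(result.summary())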
Empirical evidence

Table 3 shows the empirical results. There is a wage gap between equally educated men and women: male graduates receive 8 per cent more gross salary per month than their female counterparts, although the gap narrows somewhat (to 7 per cent) when we control for the field of education. We have only considered full-time jobs, and therefore the gender difference cannot be explained by differences in working hours. The age variable, which is used as a proxy for experience, is positively related to our dependent variable and is highly significant in all variants. The estimated coefficients are comparable with the values generally found in the literature. Furthermore, as the descriptive analysis shows (Section 2), the first-generation migrants experience a difference in their gross salary per month if they graduate at later ages. To capture this age effect, we separated age and age squared for the first- and the second-generation immigrants; the interpretation of our result is presented in Figure 3 below. As can be observed from Figure 3, there is no significant wage difference in any age category between the second-generation immigrants and the native graduates. On the other hand, if we compare native graduates with the first-generation immigrants, we can observe that the older the age category of the first-generation immigrants, the lower the wages. This indicates that, for the first-generation immigrants who are investing in their human capital at later ages, the return to their education is smaller compared to natives and second-generation immigrants of the same age. The human capital measure indicates that talented graduates receive higher wages in the labor market compared with our reference case (where the graduation score is below 7.5). Graduating with marks between 9 and 10 increases the monthly gross salary by 5 per cent compared to our reference category, ceteris paribus. For those who graduated with scores between 7.5 and 8.5, the difference is 3 per cent. The social structure variables, which contain various variables of interest, indicate that first-generation migrants earn lower wages, with a 3 per cent wage gap between natives and the first-generation migrants. Our finding for the first-generation migrants is in line with the literature: that is, the wage gap is mostly related to language and social skills (Chiswick, 1978). Our result confirms previous study findings: for example, Algan et al. (2010) find for France, Germany and the United Kingdom that first-generation migrants living and working in these countries earn significantly less than natives, and that the wage gap increases further for those who come from developing countries. Our empirical result for non-OECD countries indicates that the wage gap between graduates from OECD members and non-OECD countries is 1 per cent. Furthermore, a possible reason for the wage difference between OECD and non-OECD graduates could be that graduates from non-OECD countries accept lower paid jobs to remain in the Netherlands. This is confirmed by a recent study by Bijwaard and Wang (2013), who find that graduate students from less developed countries accept low-paid jobs to remain in the country and to find better job opportunities. An important factor that affects wages according to efficiency wage theory is the size of the firm (Akerlof, 1982; Bulow and Summers, 1986). Our empirical finding shows that wages increase with firm size.
Medium-sized and large firms pay, respectively, 2 and 6 per cent more gross salary than our reference category (small firms). Second-generation immigrants earn higher wages in both medium-sized and large firms compared to second-generation immigrant graduates employed in small firms. Furthermore, employees with more responsibility receive higher wages compared to those without. As indicated above in the data section (Section 2), we created four variables for residential location to determine whether residential location has an impact on the gross salary of these graduates. The results indicate that those who lived in the provinces Noord Holland, Zuid Holland and Utrecht (NH, ZH, U) receive 4 per cent more gross salary compared to our reference category (those living and continuing to live in other provinces). Furthermore, those who moved into the mentioned provinces also receive higher wages: their gross monthly salary is 3 to 4 per cent higher. Interestingly, for those graduates who moved between the aforementioned provinces, the gross monthly salary is 6 per cent higher in comparison to our reference category. Venhorst (2012) studied the wages of college and university graduates in the Netherlands, and found that wages are higher for those graduates who work in larger labor markets and expensive regions. Furthermore, those who moved away from the aforementioned provinces have a higher gross salary compared with the reference group. These results are in line with the literature indicating that graduates who change their location fare better than those who do not (Abreu et al., 2011). We also controlled for the field of education, and our results indicate that those who are involved in technical studies are paid the most (17 per cent more) compared with our reference category (language and arts). All coefficients for the field of study are positive and significant, which indicates that graduates of language and arts courses are employed in less well-paid jobs.

The impact of parents' roots

Following the conventional discrimination measures applied by Chiswick (1977), having a native-born mother contributes more to language skills than having a native-born father, and, as a result, individuals can earn higher wages. However, Behtoui (2004), with reference to a Swedish case, finds that since fathers can occupy higher positions in the labor market than mothers, a native-born father can pass on a more valuable social network to the children than a native-born mother. We have tested both hypotheses by categorizing individuals' parents as originating from either OECD or non-OECD countries. Through this distinction we can observe differences in culture, language, and the quality of the parents' education, and their impact on the productivity of individuals in the labor market. Table 4 shows the share of each category of immigrants in terms of their parents' roots (i.e. country of origin). Among the first-generation immigrants, the share of graduates from non-OECD countries is higher compared with the other categories (OECD, father from OECD, mother from OECD), and this share is the second highest for the second-generation immigrants. This is not surprising because, after the Second World War, the Netherlands hosted a large number of guest workers from non-OECD countries.
Table 4 also shows that the share of children born from marriages between Dutch nationals (both male and female) and non-OECD nationals is relatively high compared with the share from marriages with OECD nationals. Table 5 presents the results concerning the wages of higher education graduates, correcting for their parents' roots. We estimated the wages in two variants, but because of space limitations we report only the parents' roots variables here. The first variant does not control for field of education, while the second does. Our result for the second generation of immigrants indicates that individuals who have one native parent, in combination with a non-OECD mother (-2.6 per cent) or an OECD father (-2.5 per cent), earn lower wages compared to our reference category (where both parents are Dutch nationals). The results suggest that having either a native father or a native mother, and access to their social capital, does not equalize the labor market outcomes of these young graduates with those of graduates whose parents are both natives. The difference between having a native mother or a native father is very small in our estimation, but our result still confirms Behtoui's (2004) finding that graduates with a native father perform better (the difference is between 0.0045 and 0.0059 per cent) than those with a native mother, even though they probably speak a different language at home. The difference between those young graduates who have roots in OECD countries and those with roots in non-OECD countries shows that having non-OECD parents decreases their wages by 2 per cent compared with the reference case (where both parents are Dutch natives), ceteris paribus. The finding for OECD and non-OECD parents captures the culture and language differences on the one hand, and the parents' quality of education on the other. The first-generation immigrants follow a pattern similar to what we have just described for the second-generation migrants. Young graduates with roots in non-OECD countries experience labor market disadvantages twice as high as those of young graduates with roots in OECD countries. Furthermore, this also highlights the effect of "otherness" due to one's first name and family name. We may conclude that acquiring Dutch human capital, Dutch-specific skills, language proficiency, and long-term integration (second generation) does not remove discrimination in the labor market, especially for people from non-OECD countries.

Notes to Table 5: Robust standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1. The reference category is Dutch parents. Included variables are: age, age squared, gender, medium-sized firm, large firm, supervisor position, graduation score, lives in (NH, ZH, U), left (NH, ZH, U), moved to (NH, ZH, U), moved between (NH, ZH, U), field of study (second variant only), and time fixed effects.

Robustness check

In order to check the robustness of our OLS regression on the wage difference between the first- and second-generation immigrants and natives, we employed two different methods. Firstly, we dropped some of the variables, such as firm size, graduation score, and supervisor position, from our analysis, because these variables may be endogenous to our dependent variable. Table 6 presents our results in two variants: in the first variant, the dependent variable is monthly gross salary, while in the second variant it is gross hourly wage.
As can be observed, the results are similar to the ones we found in Table 3. Secondly, we ran a quantile regression, which also appears to confirm our OLS results. The first-generation immigrants in fact receive a gross monthly salary that is lower by an order of magnitude of 3 per cent. As can be observed from Figure 4 below, the confidence interval of the quantile regression coefficients for both first- and second-generation immigrants does, for the most part, not move outside the confidence interval of the OLS regression. Therefore, we can conclude that the quantile regression results are not significantly different from the OLS results.

Conclusion

In this study we have investigated wage differences between immigrants (first- and second-generation) and natives, and the extent to which an immigrant background has an impact on the labor market outcomes of graduates with a higher professional education who have full-time jobs. Our empirical results indicate that, even when migrants are as well educated as natives, there is still a wage gap between migrants and natives in the Netherlands. Our empirical findings reject the human capital hypothesis that people with the same qualifications and supply characteristics would have the same labor market outcomes, especially for the first-generation migrants and immigrants with roots in non-OECD countries. Furthermore, graduation age plays a significant role in wage discrimination, in particular for the first-generation immigrants. The first-generation immigrants who start to invest in their human capital at a later age experience more wage discrimination compared with those who invest at a younger age. We also find that there is a monthly gross income gap between males and females. This is even larger than the wage gap between the first- and the second-generation migrants and the natives. Female graduates who are employed full-time and graduated with scores equal to those of their male counterparts receive between 7 and 8 per cent less monthly gross salary than male graduates with the same labor market supply characteristics. The literature indicates that graduates who change their location fare better than those who do not. Our results confirm the findings of those previous studies, and also add new information to the emerging literature regarding people who move from one big province to another. These people earn higher wages compared with the rest of the relevant categories. We have also compared individuals according to their parents' roots: those who have roots in OECD countries and those who have roots in non-OECD countries. We found that for the second-generation immigrants, having roots in non-OECD countries (mainly referring to those individuals with both parents from non-OECD countries) is negatively related to wages. So, when both parents are from outside the OECD, wages are lower by approximately 2 per cent. This indicates that neither the parents' acquisition of Dutch-specific labor market knowledge due to a long duration of residence, nor the graduates' acquisition of Dutch-specific human capital, is able to overcome labor market wage differences. The same result is found for the first-generation immigrants with roots outside OECD countries. Further research on the effect of social capital, and specifically on the parents' roots, is needed to divide both the first- and the second-generation immigrants into more detailed socio-economic groups.
However, in our research context it was difficult to pursue this categorization because of the limited number of observations.
6,802.6
2017-10-01T00:00:00.000
[ "Economics" ]
On a fast discrete straight line segment detection

Detecting lines is one of the fundamental problems in image processing. Moreover, for real-time applications, detection should be achieved in real time. In this paper we investigate the use of fast trigger processor technology used in high energy physics experiments. We propose a method for detecting discrete straight line segments in binary images based on a simple resistor network trigger processor.

I. Introduction

Detecting lines is one of the fundamental problems in image processing. For real-time applications there is an additional computational time constraint, so fast straight line segment detection is essential. For example, in augmented reality (AR) systems, the augmentation of artificial information is conventionally done in real time in order to provide real-time vision for human-computer interaction. The weighting resistor matrix (WRM) was developed as a fast trigger processor in high energy physics [1], where detecting a signal (pattern) in data at high speed is crucial. A first parallel test processor has been developed and tested. It consists of 7 interconnected boards, where each board is able to detect line segments of a certain slope. The two main advantages of the WRM chip are:

• It is extremely fast: basically no computations are performed; rather, the interconnections of the resistor network give the desired response. This depends only on the signal propagation delay inside the circuit.
• The WRM naturally, using the voltage propagation inside the circuit, performs a best-fit correlation instead of relying on perfect instances, allowing it to overcome imperfections in the input data.

In this paper we describe a miniaturized serial version of the WRM, as well as its usage as a fast discrete straight line detector in binary images. The rest of the paper is organized as follows: Section II contains basic definitions as well as an introduction to the WRM circuit; Section III explains the working principles of the WRM; and Section IV illustrates the usage of the WRM data for discrete straight line segment detection.

II. Basics

Since in binary images we are dealing with discrete planes, and there is no such thing as the straight lines of the continuous plane, it is fundamental to define what a straight line is in the discrete case. A binary image can be viewed as a finite grid inside Z². Given a point (n, m) in Z², its 4-neighbors are the pixels (n ± 1, m) and (n, m ± 1), and its 8-neighbors additionally include the four diagonal pixels (n ± 1, m ± 1). Given two points A, B ∈ Z², we say A and B are 8-adjacent if B is one of the 8-neighbors of A. A discrete straight line (DSL) is given by L = (a, b, µ, ω) ∈ Z⁴ with gcd(a, b) = 1, where µ is the intercept of the line and ω its thickness; it is the set of points

L = {(x, y) ∈ Z² : µ ≤ ax - by < µ + ω}.

We consider in this paper ω = max(|a|, |b|), to work on 8-connected, 1-pixel-thick discrete lines. As we said in the introduction, the WRM does not look for a DSL pattern that perfectly matches the predefined patterns. Data are fitted with the WRM RODs, and the ROD that maximizes the likelihood is taken into consideration. That is to say, even if a certain DSL does not match any of the predefined patterns, it will still be detected with the best fit. A key point for doing that is the natural voltage diffusion inside the WRM chip. Each point of the input binary image can be viewed as a 1-Volt input, and it causes a one-dimensional voltage diffusion inside the resistor circuit, as shown in Figure 4. The effect of this convolution, which is performed naturally by the WRM, is to smooth the binary image data, reinforcing linear correlation.
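To make the definition concrete, the following sketch enumerates the pixels of such a naive discrete line on a small grid; the function name and the example parameters are illustrative choices, not taken from the paper.

# Minimal sketch of the DSL definition L(a, b, mu, omega) given above:
# the set of (x, y) in Z^2 with mu <= a*x - b*y < mu + omega,
# where omega = max(|a|, |b|) yields an 8-connected, 1-pixel-thick line.
from math import gcd

def dsl_pixels(a: int, b: int, mu: int, width: int, height: int):
    """Return the pixels of L(a, b, mu, omega) inside a width x height grid."""
    assert gcd(a, b) == 1, "the slope (a, b) must be in lowest terms"
    omega = max(abs(a), abs(b))
    return [(x, y)
            for x in range(width)
            for y in range(height)
            if mu <= a * x - b * y < mu + omega]

# Example: the naive line of slope 2/3 through the origin on a 16 x 16 grid.
print(dsl_pixels(2, 3, 0, 16, 16))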
III. 8 × 8 patch DSL segment detection

In this section we explain how the binary image data is processed inside the WRM chip to detect small DSL segments. Since this is the basic functionality of the WRM circuit, the processing and the detection are performed almost instantly. Let S = (s_{i,j}), 1 ≤ i ≤ 8, 1 ≤ j ≤ m, be the 8 × m matrix of data produced by the convolution of an 8 × m binary image, as explained in the previous section. At each pixel column j, 1 ≤ j ≤ m, a series of sums is performed using N prefixed patterns p_k, of the form

r_k(j) = Σ_{i=1..8} Σ_{l=1..8} p_k(i, l) · s_{i, j+l-1},

that is, the correlation of each pattern with the current 8 × 8 window. In our case, these sums are performed following the 8 × 8 WRM RODs, as illustrated in Figure 3. Given a threshold th, a DSL pattern is considered to fit where the derivative of the ROD output peaks above th; the derivatives are performed along the j (column) direction. Data are loaded into the WRM serially. At the beginning, 8 × 16 pixels are loaded into the circuit; then, at each hardware clock, this 8 × 16 window is shifted one pixel to the right inside the 8 × n pixel data. The above sums are computed, in a very fast way, by means of resistors. This hardware clock can be seen as the column index j of the image data. Once the 8 × n data are processed, the derivatives are computed to detect patterns. (Figure: the grids in the image to the left correspond to the prefixed DSL patterns that are tested against the input data; the plots to the right correspond to the 2nd derivative computed over the ROD outputs. The peaks in the data correspond to where the ROD patterns fit the input data well, detecting in this way the 5 discrete straight lines in the input data.) As can be seen from the above, the peak amplitudes vary according to how well the RODs fit the data at a specific hardware clock. The more linearly correlated the pixel points are, the greater the amplitude will be.
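The correlation-and-derivative scheme just described can be emulated in software, as in the sketch below. The pattern set, window size, and threshold are illustrative assumptions; the real device computes these sums through its resistor network rather than in code.

# Software emulation of the WRM ROD matching (illustrative only).
import numpy as np

def rod_responses(image, patterns):
    # Correlate each 8x8 ROD pattern with every 8x8 window of an 8xN image:
    # r_k(j) = sum_{i,l} p_k[i, l] * image[i, j + l].
    n_cols = image.shape[1] - 7
    out = np.empty((len(patterns), n_cols))
    for k, p in enumerate(patterns):
        for j in range(n_cols):
            out[k, j] = np.sum(p * image[:, j:j + 8])
    return out

def detect_peaks(responses, th):
    # Locate (pattern, clock) positions where the 2nd derivative marks a peak.
    d2 = np.diff(responses, n=2, axis=1)   # discrete 2nd derivative along j
    ks, js = np.where(-d2 > th)            # strong negative curvature = a peak
    return list(zip(ks, js + 1))           # shift index back to response coords

# Example: one diagonal ROD pattern against a synthetic 8x32 binary strip.
diag = np.eye(8)                                         # a slope-1 DSL pattern
img = np.zeros((8, 32))
img[np.arange(8), np.arange(8) + 10] = 1.0               # a diagonal segment
print(detect_peaks(rod_responses(img, [diag]), th=2.0))  # peak at clock j = 10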
IV. DSL segment detection in images

In this section we explain how to use the WRM output for detecting arbitrary DSL segments in images. The first required step is to transform RGB images into binary images; the natural way is to use an edge detector [4]. The RGB image is usually transformed into a grayscale image; an edge detection method then looks for sharp changes to produce the final binary image. Gradient-based methods consist of thresholding the gradient to detect edges. Since the change in the gradient happens over more than one pixel, thresholding gives thick edges, which are post-processed by a thinning algorithm. On the contrary, a zero crossing of the 2nd derivative occurs at the one-pixel level, producing 1-pixel-thick edges, an ideal input for the WRM. Another important consideration for our choice is that the zero crossing edge detector can be implemented easily in electronics. This is usually required when the edge detector needs to provide data to the WRM at a very high speed. In less speed-demanding cases, an FPGA implementation can be considered. Up to this point we have a processing chain that starts from an RGB image, goes through an edge detector, and then feeds the edge-detected binary image into the WRM chip to detect small segments. Since the WRM takes an 8 × n input format, the full image is sampled into strips in order to scan it fully. At the end of the chain, the WRM produces different sets of small 8-pixel DSL segments. Each set corresponds to DSL segments that are detected within certain inclinations. A set can be seen as a 2D parametric space, where the parameters are the sample number and the hardware clock. These disjoint parametric spaces give overall descriptors of the image's DSL segments. For example, for DSL segments whose inclinations range between given bounds, we can consider RODs that cover three areas. Since inside each parametric space the points follow a specific direction, detecting segments of arbitrary length consists of joining these points from sample to sample. A pseudo-code for detecting DSLs from the WRM sampled data set is sketched below; at the end, a list of segments is given, where each segment is provided by its starting point (x1, y1) and its ending point (x2, y2).
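The paper's original pseudo-code did not survive extraction, so the following is a hedged reconstruction of the joining step it describes: detections are taken as (sample, clock) points within one parametric space, and points from consecutive samples are chained into segments with explicit end points. The data layout, the sample step, and the clock tolerance are assumptions of this sketch.

# Hedged reconstruction of the DSL joining step (the paper's pseudo-code was lost).
from collections import defaultdict

SAMPLE_STEP = 8   # assumed vertical distance between consecutive 8-row samples
CLOCK_TOL = 2     # assumed maximal clock drift between samples on one segment

def join_segments(detections):
    """detections: (sample, clock) peaks for one ROD direction.
    Returns segments as ((x1, y1), (x2, y2)) in image coordinates."""
    by_sample = defaultdict(list)
    for s, c in detections:
        by_sample[s].append(c)
    segments, open_chains = [], []   # chain = [s_start, c_start, s_last, c_last]
    for s in sorted(by_sample):
        next_chains = []
        for c in by_sample[s]:
            for ch in open_chains:
                if ch[2] == s - 1 and abs(ch[3] - c) <= CLOCK_TOL:
                    ch[2], ch[3] = s, c            # extend an existing chain
                    next_chains.append(ch)
                    break
            else:
                next_chains.append([s, c, s, c])   # start a new chain
        for ch in open_chains:                      # close chains not extended
            if ch not in next_chains:
                segments.append(((ch[1], ch[0] * SAMPLE_STEP),
                                 (ch[3], ch[2] * SAMPLE_STEP)))
        open_chains = next_chains
    for ch in open_chains:
        segments.append(((ch[1], ch[0] * SAMPLE_STEP),
                         (ch[3], ch[2] * SAMPLE_STEP)))
    return segments

print(join_segments([(0, 5), (1, 6), (2, 7), (4, 20)]))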
V. Experiments

In this section we present experimental results on the speed of our method for detecting DSLs. To give a baseline for the speed of our implementation, we compare our results with the standard classical method for detecting DSLs, which is the Hough Transform (HT). The Hough Transform is implemented on an Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz Linux machine. The last part of our chain, the final DSL detection algorithm, is implemented in C on the same hardware. Our input data are 800 × 600 pixel binary images produced by the zero crossing edge detector from RGB images. We do not count the time needed for the edge detector to produce the binary image, as it is a common cost for both methods. The experiment shows that with our method we are able to process about 60 frames/s. Using the Hough Transform method, we were able to reach a maximum of 8 frames/s. The experiment also shows that our method was more successful in filling gaps in DSLs, showing very good performance, as illustrated in Figure 10.

VI. Discussion

The process of creating different sampled parametric spaces from the input data is similar to the voting procedure of the Hough Transform [2]. The voting procedure in our case selects only points which have a strong local linear correlation along a specific ROD direction. The computational complexity and the storage requirements are the main problems of the voting procedure of the HT, and many approaches have been taken to address these problems in HT-based methods. It is important to mention that methods based on the Hough Transform for detecting straight line segments also need to perform post-processing; that is to say, no end points or lengths are given [3], but only straight line coefficients. In some sense, the final part of the DSL segment detection is a problem common to the Hough-based methods and to the WRM method. The main advantages of the WRM method regarding this last step are:

• The WRM sampled data is smaller than the input data, which reduces computational complexity.
• The sampled data are separated into disjoint parametric spaces for each inclination; thus the problem is highly parallelizable.

In addition, our method is able to overcome gaps thanks to the voltage diffusion that reinforces linear correlation. A very high speed in processing binary images can be reached by using the WRM, which perfectly fills the requirements for real-time processing.

VII. Conclusion

In this paper we discussed the adaptation of a fast trigger processor board for digital image analysis. We showed a formal view of what the device is actually doing, and we proposed an algorithm that is able to construct, from the WRM samples, a list of the DSLs present in the image. The main advantage of this device is that it is extremely fast and highly parallelizable.

Figure 10: The original grayscale image is processed by the zero crossing edge detector. The binary edge-detected data is then processed by the WRM. Finally, the reconstructed segments are detected and overlaid on the grayscale image.
2,476.6
2014-09-25T00:00:00.000
[ "Physics", "Computer Science" ]
Load Balanced Routing for Lifetime Maximization in Mobile Wireless Sensor Networks

The challenge of efficient protocol design for energy-constrained wireless sensor networks is usually addressed through application-specific cross-layer designs. This design approach, along with strong design assumptions, limits the application of protocols in universal scenarios and affects their practicality. With the proliferation of embedded mobile sensors in consumer devices, a changed application paradigm requires generic protocols capable of managing greater device heterogeneity and mobility. In this paper, we propose a novel lifetime maximization protocol for mobile sensor networks with uncontrolled mobility, considering the residual energy, traffic load, and mobility of a node. Being generic, the protocol is equally applicable to heterogeneous, homogeneous, static, and mobile sensor networks. It can handle event-driven as well as continuous traffic flow applications. Simulation results show that the proposed scheme outperforms minimum hop routing and greedy forwarding in terms of network lifetime, data packet latency, and load balance, while maintaining comparable throughput.

Introduction

Wireless sensor networks (WSN) have a wide range of applications in many areas of daily life [1]. Routing protocols for WSN are generally application-specific, and a cross-layer design approach is adopted to achieve efficiency. Application-specific cross-layer protocols have strong design assumptions and are not suitable for universal scenarios. Their improved performance comes at the cost of design modularity, stability, and robustness. Cross-layer design involves complex interactions among multiple network layers, ranging from the physical to the application layer. Suitable models to describe these interactions are still being investigated; unless such models are available, cross-layer architectures will find little acceptance in a universal context. On the contrary, in a layered architecture, complex problems are easily solved by breaking them into simple ones. The layered architecture leverages modular, loosely coupled, adaptable designs and has secured deeper acceptance in industry.

WSN are traditionally considered no- or quasi-mobility networks. However, mobility can leverage greater benefits in terms of improved coverage with sparse sensor deployment, healing of topological defects, energy efficiency, and increased application domains. A few areas utilizing mobility are urban sensing, assisted living and residential monitoring, industrial automation, and mobile sensor based wide area monitoring. WSN mobility is characterized as controlled or uncontrolled. Controlled mobility is used for efficient data collection and the healing of topological defects. Mobility is deemed to trade off delay to achieve energy and resource efficiency; the approach is less suitable for applications with hard real-time constraints. Uncontrolled mobility is relatively less researched, but is important in the context of the proliferation of sensors in consumer devices, mobile phones, personal data assistants, and special purpose platforms. The use of mobile sensors with uncontrolled mobility in routing tasks is so far rather limited. These sensors can be utilized in applications like people-centric urban sensing and assisted living. In an urban sensing [2] environment, data can be passed back very speedily to static infrastructure using sensors embedded into user devices carried by robots or vehicles.
A sensor network with uncontrolled mobility represents a very large heterogeneous network comprising mobile as well as static networks. The static networks are connected through mobile devices carried by robots or vehicles. This heterogeneous network is connected to backhaul infrastructure to form a very large cooperative network, in contrast to small-scale application-specific sensor implementations. Because of the overwhelming mobility and heterogeneity of the involved devices, managing interactions among the network elements is a very complex task.

In mobile wireless sensor networks, due to the heavier maintenance cost, static or preconfigured routing is less suitable. Also, a proactive approach is not feasible for event-driven sensor networks, where the information generated by sensor nodes is not known a priori and depends on the arbitrary occurrence of events. In [3], the authors argue that data packet sizes in WSN are smaller than in other computer networks, and the signalling overhead in this case becomes significant compared to the data traffic. On-demand protocols have lower maintenance cost but generate a lot of signalling traffic for the discovery of new paths. This assertion can be true in the case of scalar data but is not valid for high-end futuristic as well as multimedia sensors.

In this work, we propose an on-demand routing scheme for mobile sensor networks with uncontrolled mobility. The protocol considers, in its path selection, the residual energy, traffic load, and mobility of a node. The main design objective of the proposed routing scheme is to maximize network lifetime. The protocol can be applied in static as well as mobile scenarios. It keeps practical limitations in view and does not compromise efficiency. The protocol suits event-driven as well as continuous monitoring applications equally. Simulation analysis shows that the proposed scheme can effectively handle sensor network mobility, increase network lifetime, and decrease data packet latency.

The rest of the paper is organized as follows: Section 2 summarizes related work; in Section 3, we present the network model; Section 4 describes the design of the proposed routing scheme and its operation; performance evaluation and analysis of results are given in Section 5; Section 6 concludes the paper and highlights future work directions.

Related Work

In this section, we present related work covering routing in mobile sensor networks (MWSN), load-balanced routing, and energy-aware routing.

Research in MWSN has gained momentum in recent years, especially in mobility-assisted data collection and urban sensing. A comprehensive survey of routing protocols for MWSN is available in [4]. Moreover, surveys of mobility-based communication techniques are available in [5][6][7], and mobility models can be found in [8]. In [6], the requirements, merits, and demerits of three mobility-based schemes are compared. A comprehensive survey of data collection techniques using mobile elements is presented in [5]. The authors categorize mobile elements as relocatable nodes used to heal topological defects, mobile data collectors for data collection, and mobile peers for sensing and routing tasks.
The authors in [9] study the use of mobile relays as a resource provisioning method to extend the lifetime of a large, dense sensor network. They conclude that the use of one energy-rich mobile relay can extend network lifetime up to four times that of a static network. The work in [10][11][12][13] investigates the lifetime maximization problem using a mobile sink; in particular, issues related to finding the optimum sink route or trajectory are addressed. These proposals utilize controlled mobility for efficient data collection and do not take into account nodes embedded into mobile platforms with uncontrolled mobility.

Developing a large-scale general purpose sensor network in an urban setting for the general public is studied in [2]. The authors propose a network architecture based on the opportunistic sensor network paradigm, capable of supporting urban sensing with widespread people-centric applications and heterogeneity in devices. Sensor speed, direction of movement, and location are used to select a sensor for delegation or tasking.

The authors in [14] propose a mobility-aware routing protocol where mobility is used to form the sink cluster and during the route discovery process. A cluster-based routing protocol for a low mobility homogeneous sensor network is presented in [15]. The nodes are considered to follow a random mobility model. The zone head is elected based on a mobility factor, which is taken as the ratio of zone changes to position changes within a zone. The scheme considers node speed and location information for determining the mobility factor. In our routing scheme, the mobility factor is likewise one of the factors considered in selecting the next hop node. However, our technique for determining the mobility factor considers node speed and does not require the exchange of node location information. The scheme in [15] tries to balance energy consumption by considering the number of times a node has acted as zone head, whereas in our scheme balance is achieved by considering the traffic load a node receives.

In [16], the authors have studied the problem of reducing energy consumption by flow augmentation to balance energy utilization across the network. This scheme uses the residual energy of nodes as the basic admission control criterion. Selecting nodes on this criterion can balance energy consumption but results in longer source-to-destination paths and increases the latency of information delivery. One of the earliest proposals on energy-aware routing [17] considers a clustered network topology and utilizes topological information for this purpose. The performance of such protocols is severely affected by mobility, as the topology constantly changes, resulting in significant topology maintenance overhead.
Load balance and local congestion control are investigated in [17]. It considers two metrics, that is, the maximum connections per relay and the overall relay load. These metrics help to increase the lifetime of relay nodes and avert packet loss by preventing overcommitted nodes from becoming relays. However, limiting the maximum connections per relay node can result in coverage issues across the network. E-WLBR [18] is a proactive routing protocol in which load balance is achieved by distributing traffic among the next hop neighbors according to their load handling capacity, determined in terms of residual energy levels. Each node notifies its load handling capacity during the initialization phase. The protocol in its present form is less suitable for a network with mobile nodes and bears the disadvantages of proactive protocols for scalable networks. The authors in [3] show that sending traffic on multiple paths can significantly reduce energy consumption. Candidate paths for forwarding traffic are determined based on multiple weighted factors. However, the existence of completely disjoint multiple paths can enhance performance, but it is totally dependent on the network topology. Also, in mobile networks, using multiple paths will increase route maintenance overhead, as mobility can affect all paths. In another scheme [19], one or two next hop neighbors are selected according to a hybrid routing metric, and traffic is then distributed among these selected neighbors in a round-robin or weighted round-robin manner. Round robin achieves per-packet load balancing, whereas in weighted round robin traffic is distributed according to assigned weights. In mobile networks, candidate neighboring nodes can change frequently, and maintaining even two-hop node information can result in increased energy overhead. LEAR [20] considers the number of active routes through a relay node for load balancing and routes multimedia traffic on fully or partially disjoint paths. However, LEAR does not consider the realistic traffic load on a relay node but assumes that each flow is identical, having the same data rate. Reference [21] surveys load-balanced routing strategies and highlights that this issue still requires significant research.

Network Model

In this section, the network model is presented, highlighting the assumptions and terminologies used in the proposed routing scheme. Moreover, techniques for utilizing and estimating node mobility, energy, and load are described.

The target network is of a heterogeneous nature, consisting of a mix of high-end and low-end sensors. The high-end sensors possess relatively better processing and energy resources, whereas the low-end sensors are constrained in these resources. The network nodes are assumed to be deployed according to a flat or random topology, as depicted in Figure 3. In the target network, the majority of the network nodes are static while the remaining are mobile. The nodes have an inherent mobility detection mechanism in place. Mobility is not used for resource provisioning or data collection; rather, the sensors are onboard a mobile platform, for example, a sensing robot, a vehicle, a consumer device, or an aerial platform.

The terminologies used in this work are defined as follows.

(i) Static Sensor Network. A wireless sensor network where all nodes are static.

(ii) Mobile Node. A sensor node embedded in a sensing robot, a vehicle, a consumer device, or an aerial platform. The node not only carries out sensing tasks but also relays messages from other nodes.
(iii) Mobility Detection. The node is capable of detecting mobility, and for this purpose it has either GPS or a relative position detection mechanism in place.

(iv) Low Mobility Network. A network which consists of a majority of static and some mobile sensor nodes. The mobile nodes may follow random or group mobility models.

(v) Medium Mobility Network. A network which consists of equal numbers of mobile and static sensor nodes. The mobile nodes follow random or group mobility models.

Sensor Network Mobility. Besides benefits, mobility also poses challenges in protocol design, as it affects route stability and route maintenance cost. For efficient protocol design, mobility must be taken into account to avoid establishing routes through mobile nodes, thus conserving the energy spent on frequent maintenance. Node mobility is characterized in terms of a mobility factor and is estimated based on location information using the approaches discussed below. These schemes have varying degrees of computational complexity, accuracy, and need for information exchange. For WSN, a less complex scheme requiring no additional information exchange is the better choice. However, accuracy may be improved by taking a moving average of the mobility measure over a certain period of time. The mobility prediction approaches are as follows.

Transitions Count. This approach assumes that the sensor network is divided into zones, which may be defined according to a specific criterion. Node mobility is measured in terms of the number of transitions of a mobile node across different zones. The scheme has limitations in the case of group motion: although the nodes move across different zones, they may still maintain association or links with their neighbors. This approach also requires location information exchange.

Remoteness. To capture the notion of relative mobility, the concept of remoteness is introduced in [22]. Here the mobility factor is determined in terms of the rate of link change. If the nodes in a zone are in group motion, the average link change is minimal; node movement in such scenarios does not affect the association of a node with a zone or a link. So the remoteness of a node from its neighbors can be treated as a measure of mobility and is given as

D(t) = (1/N) Σ_{j=1}^{N} d_j(t), for N ≠ 0,

where d_j(t) is the distance at time t of the node from its j-th neighbor and D(t) is the average distance of the node from its N neighbors. By considering the average distance over successive time intervals, the link change rate can be determined. The approach requires the exchange of location information among nodes.

Speed. Node speed may also be used as a measure of node mobility. However, such a representation has limitations, especially in the case of group motion. If the nodes are in group motion at constant speed, they do not break links despite the motion. In other cases, a node itself may be static, with the least mobility factor, but its neighbors may move out, breaking the link. However, this approach does not require any location information exchange and can be calculated very easily:

V(t) = (1/n) Σ_{k=1}^{n} v(t-k), for n ≠ 0,

where V(t) represents the sensor node velocity averaged over an interval and n is the number of old samples being considered.
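As an illustration, the speed-based mobility factor above is simply a windowed moving average of the node's sampled speed; the sketch below shows the computation. The window length is an arbitrary choice for the example, not a value prescribed by the paper.

# Minimal sketch of the speed-based mobility factor: a windowed moving average
# of locally sampled speed, requiring no location information exchange.
from collections import deque

class MobilityEstimator:
    def __init__(self, n=8):
        self.samples = deque(maxlen=n)   # keep only the n most recent speeds

    def add_speed(self, v):
        self.samples.append(v)

    def mobility_factor(self):
        # Average speed over the last n samples; 0 for a node never seen moving.
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)

est = MobilityEstimator(n=4)
for v in [0.0, 1.2, 1.0, 1.1, 0.9]:      # speeds in m/s from the node's own sensor
    est.add_speed(v)
print(est.mobility_factor())              # average of the last 4 samples: 1.05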
Energy Aware and Lifetime Maximization Based Routing. WSN have extreme energy constraints; therefore, energy efficiency is the main design objective during protocol design. Several approaches for reducing energy consumption are in practice [16,23,24]. One of the well-known metrics for decreasing energy consumption and latency is the selection of minimum hop paths. However, this results in the premature death of those nodes that are frequently used on minimum hop routing paths. On the contrary, energy-balanced algorithms utilize suboptimal paths to maximize network lifetime [16,23].

Load Balanced Routing. Network load balance is another important factor for lifetime maximization [3,17,19,20]. Load-aware routing helps to conserve energy by avoiding collisions and overcomes delays caused by local congestion. Load balance is achieved by spreading traffic either on multiple paths or by avoiding overcommitted nodes as relays. Multipath routing has related overheads and energy costs, whereas the latter approach is simple and can be implemented without global knowledge. This metric helps in achieving longer network lifetime by preventing overloaded nodes from participating in routing. The traffic load of a node is given as

L(t) = (1/n) Σ_{k=1}^{n} c(t-k), for n ≠ 0,

where c(·) is the counting function over a time interval and n is the number of old samples being considered.

Proposed Routing Scheme

The scheme uses a hybrid cost function for routing decisions. The hybrid metric is formed based on the factors discussed in Section 3, summarized as follows.

(1) Mobility Factor. Competing approaches to estimate mobility have been discussed in Section 3.1. Considering low mobility, the energy expenditure for location information exchange, and the lower computation cost, the mobility factor based on node speed is used in this scheme. Node mobility is given as

M = V(t) = (1/n) Σ_{k=1}^{n} v(t-k), for n ≠ 0,

where V(t) represents the sensor node velocity over an interval and n is the number of old samples being considered. A windowed exponential moving average of node speed helps smooth transients over a period of time. A node with the least mobility is considered a better candidate for the next hop. This helps increase link lifetime and adds to longer network lifetime by saving the energy required for frequent maintenance of broken routes.

(2) Residual Energy. Energy-aware routing and network lifetime have been discussed in Section 3.2. This scheme considers the residual energy level of a node when selecting it as the next hop. Initially, while energy levels are high, this factor has little role to play in routing decisions. However, as energy depletes, it becomes a dominating consideration. This metric allows minimum hop routing initially and thus improved network delays. In our case, a node with greater residual energy is the preferred choice as the next hop node.

(3) Node Load. By considering the state of the load being handled by a node, energy wastage due to collisions and delays in servicing packets can be reduced. Load balancing helps in achieving better network lifetime and is discussed in Section 3.3. The load at a node is given as

L(t) = (1/n) Σ_{k=1}^{n} c(t-k), for n ≠ 0,

where c(·) is the counting function over a time interval and n is the number of old samples being considered. A windowed moving average of load helps smooth transients over a period of time. A node with the least load is preferred as the next hop node. This helps in increasing network lifetime.
(4) Path Length Constraint. The proposed scheme allows the use of suboptimal paths in favour of balanced energy consumption and longer network lifetime. However, the likely pitfall of forming extremely non-optimal paths is prevented by using a path length constraint. A minimum cost routing path is selected only if it is within δ hops of the shortest path; otherwise, shortest path routing is performed. Here δ is the maximum allowed hop deviation from the minimum or shortest path between source and destination; it depends on network size and node density. In [20], the authors have shown, based on experimental results, that a deviation of up to four hops from the shortest path can give better throughput in an average-size network if the shortest path is not suitable due to high traffic load.

Cost Function. The hybrid routing metric, based on the factors discussed here and also in Section 2, is

C = α (M / v_max) + β (L / l_max) + γ (1 - E / E_0),    (6)

where C is the hybrid routing metric used for routing decisions in the proposed scheme, M is the mobility factor, L is the traffic load, and E is the residual energy of a node (E_0 its initial energy; the energy term is written so that a lower residual energy increases the cost). α, β, and γ are weights for the mobility factor, traffic load, and residual energy, respectively. These weights can be selected according to the type of network and to alter the contribution of a particular factor in the overall decision making. For example, in a high mobility network, in order to increase network lifetime, α can be made comparatively bigger than β and γ. Similarly, if energy constraints are severe, then γ might be made bigger. v_max and l_max are the maximum node speed and the application reporting rate; these factors are used to normalize node speed and load. The value of v_max can be estimated for a particular application. However, the maximum traffic handled by a node in a mobile multihop network is quite difficult to estimate, especially because of the forwarding load component. Therefore, the normalized load factor may not lie in the 0 to 1 interval and may result in biased routing decisions. In order to overcome such a situation, dynamic adjustment of weights based on either fuzzy logic or the analytical hierarchical process (AHP) may be used; however, it would entail additional processing cost. In other cases, these weights can be selected experimentally for a particular application.

Each node in the network calculates its routing metric based on its residual energy, load, and mobility factor. This figure is updated after short intervals of time. The selection of the metric update interval affects the selection of the optimal route as well as network lifetime. Each node shares its metric with other nodes by packing it in route request (RREQ) and route reply (RREP) messages. The node with the least C is considered the better next hop node. Initially, while the residual energy is high, routing decisions are based mainly on sensor load and mobility factor. Once the energy is depleted, the residual energy factor becomes significant; it thus ensures that nodes with less energy are not selected for relaying packets.

Route Discovery. Once a node has to send data, or it receives an RREQ for another node for which it does not have an active route, it broadcasts an RREQ message to its neighbors. Besides other information, it inserts the value of C in the RREQ packet. Based on the metric value received in the RREQ, a node decides whether or not to select the source node as the next hop. The route discovery scheme is given in Algorithm 1 and is also depicted in Figure 1.
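A minimal sketch of how a node might evaluate this cost for its neighbors is given below. The functional form follows the reconstruction in (6), and the weights, the normalization constants, and the (1 - E/E_0) energy term are illustrative assumptions rather than the paper's exact parameters.

# Illustrative next-hop selection using the hybrid cost (6):
# C = alpha*(M/v_max) + beta*(L/l_max) + gamma*(1 - E/E0).
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    speed: float    # windowed average speed (m/s)
    load: float     # windowed average packets per interval
    energy: float   # residual energy (J)

ALPHA, BETA, GAMMA = 0.2, 1.5, 0.5   # example weights: mobility, load, energy
V_MAX, L_MAX, E0 = 2.0, 20.0, 2.0    # example max speed, reporting rate, initial energy

def hybrid_cost(nb):
    return (ALPHA * nb.speed / V_MAX
            + BETA * nb.load / L_MAX
            + GAMMA * (1.0 - nb.energy / E0))   # low residual energy raises the cost

def pick_next_hop(neighbors):
    # The neighbor with the least cost C is preferred as the next hop.
    return min(neighbors, key=hybrid_cost)

nbrs = [Neighbor(1, speed=0.0, load=12.0, energy=1.0),
        Neighbor(2, speed=1.1, load=4.0, energy=1.8)]
print(pick_next_hop(nbrs).node_id)   # node 2: mobile, but lightly loaded and energy-rich

The example weights echo the setting that the evaluation section below identifies as performing best.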
Operation. The load balanced routing (LBR) protocol is an on-demand routing protocol designed to maximize network lifetime for sensor networks with mobile elements. Routing decisions are made based on the hybrid routing factor given in (6). A node with the least C is preferred as the next hop node.

When a node has data to transmit, it broadcasts an RREQ message to its neighbors. At a neighbor node, three cases are possible. In the first case, the neighbor node may be the destination itself, so it sends a route reply (RREP) message and records the value of C besides other essential information (source and destination addresses, hop count, RREQ ID and sequence number, etc.) in its routing table. In the second case, the node may not have received the RREQ message previously; it then broadcasts the RREQ to its neighbors and records the essential information in its routing table. In the third case, where the node has already received the RREQ, it drops it; however, if the old value of the metric is bigger than the newly received one, it updates the value of C in its routing table and also sets up a backward pointer to this node. If the hop count received in the RREQ is more than δ hops above the minimum hop path, then the backward pointer is not reset even if the value of C is lower than that of the previously set path. In this way, the least mobility, highest residual energy, and least busy path from source to destination is established, subject to the path length constraint. The path establishment operation is depicted for one source and sink in Figure 2.

Performance Evaluation

In this section, we report the performance of LBR compared to shortest path routing and greedy forwarding protocols; for this purpose, the AODV and GPSR protocols are used. We study the effect of different weights, network size, traffic load, and mobility on protocol performance. The traffic load is increased gradually by increasing the application reporting rate, that is, the number of packets per second. To capture the effect of mobility, the evaluation is done in a static network, a low mobility network with 25% mobile nodes, and a high mobility network with 50% mobile nodes. The evaluation metrics, experimental setup, simulation parameters, results, and their analysis are presented in this section.

Performance Metrics. The metrics of throughput, data latency, network lifetime, and load balance are used to measure performance. These evaluation metrics are defined as follows.

Throughput. The measure of the average number of bits per second of application data received at the sink during the entire simulation period.

Latency. The end-to-end delay of data packets is the average time taken by data packets to flow from source to sink. It also includes the time taken during the discovery and establishment of the route to the sink.

Network Lifetime. Network lifetime is determined in a number of ways, including the time till the death of the first node, of a certain percentage of nodes, or of all of the nodes. During this evaluation, we consider sensor deaths over time and plot the remaining alive nodes over the simulation duration. Moreover, nodes with energy less than 0.001 joules are considered dead because of their inability to transmit sensed data due to the low energy reserve.

Load Balance. This metric represents the distribution of traffic load per node or across segments of the sensing field. In our study, we consider the number of packets per node and plot the per-node load normalized by the total number of packets successfully received at the sink.
Experimental Setup. Network simulator NS-2 is used to evaluate protocol performance. The simulations are conducted over five sensing fields of 200 by 200 meters. Each contains one sink and 99 randomly deployed nodes. The sink is considered to be static, while the other nodes are a mix of static and mobile nodes. The sensing field with randomly deployed nodes is shown in Figure 3. During this evaluation, one hundred random events, scattered over the sensing field and staggered randomly over the simulation time, are considered. After occurrence, an event is assumed to be reported for 60 seconds at the specified rate. A summary of the simulation parameters is given in Table 1, and the key parameters are described as follows.

Energy. The experiments are conducted assuming homogeneous networks, and all sensors are set to have an initial energy of 2 joules, except the sink, which is assumed to have no energy limitation. The performance of the proposed scheme is assumed to be even better in the case of a heterogeneous network due to energy-aware routing.

Mobility. Protocol performance is evaluated in a static network and in networks with 25 and 50 percent mobile nodes. The sink is assumed to be static, although the protocol imposes no such limitation, and the relative performance measures obtained for a static sink are equally applicable to a mobile sink. The mobile nodes are assumed to follow the random waypoint mobility model at an average speed of 1.11 m/s.

Weight Selection. Average throughput for different β and γ is shown in Table 2. The maximum average throughput is obtained with β = 1.5 and γ = 0.5, so these weights are taken as the reference during further simulation. Protocol performance with different weights is depicted in Figure 4. It can be seen that, if more weight is given to load, for example β = 2 and γ = 0, the protocol performs better for applications with higher reporting rates compared to β = 0 and γ = 2, which shows better results for lower reporting rates as the load component is not considered. The results for all weights start to decline for reporting rates above 20 packets per second because, in the case of the 802.15.4 MAC, the application data rate in NS-2 is approximately 120 kbps [25,26]; this limit is even lower for the multihop case, so, with an event reporting rate of 20 packets per second, the application data rate handled by the sink may reach the maximum capacity (6.7 × 20 × 100 × 8 = 104.7 kbps, where 6.7 is the approximate number of simultaneous events, the reporting rate is 20 packets per second, and packets are 100 bytes). Because of this, an unpredictable throughput spike is observed at 24 packets per second.

Routing Overhead and Packet Delivery Ratio. The route request message of LBR is similar to that of AODV, except for an additional field for piggybacking the value of the hybrid routing metric as per (6), which would require a few additional bytes depending upon the platform where LBR is implemented. However, since route request flooding is suppressed in LBR, its routing overhead is lower compared to other reactive protocols. For the packet delivery success rate, five random deployment scenarios and one hundred events reported at eight packets per second are considered. Performance under no, medium, and high mobility settings is shown in Figure 5. LBR achieves a better delivery ratio for static, medium, and high mobility networks compared to AODV because of its congestion and mobility resilience.

Performance versus Network Size. For this simulation, networks consisting of 25, 50, 75, and 100 nodes are taken. Each experiment is repeated five times with different random deployment scenarios. In each scenario, 50% of the nodes are made mobile, and 10 traffic flows spanning 500 seconds are used. Average results in each case are shown in Table 3. With an increase in network nodes, the average throughput increases; however, beyond a threshold, throughput starts to decrease as contention for medium access increases, resulting in packet drops.

Evaluation of Results

Throughput. The data throughput is shown in Figure 6. LBR has better throughput than AODV and GPSR for higher mobility and traffic loads, whereas for lower traffic loads and mobility it has comparable results. As the traffic load increases and congestion occurs, LBR, being congestion resilient, avoids overcommitted nodes, thus achieving greater throughput. Similarly, with an increase in mobility, link lifetime decreases and maintenance cost increases, but LBR can still maintain better data throughput because of its mobility awareness.

Latency. Average end-to-end data latency is shown in Figure 7.
5.4. Routing Overhead and Packet Delivery Ratio. The route request message of LBR is similar to that of AODV except for an additional field for piggybacking the value of the hybrid routing metric as per (6), which requires a few additional bytes depending upon the platform where LBR is implemented. However, since route request flooding is suppressed in LBR, its routing overhead is lower than that of other reactive protocols. For the packet delivery success rate, five random deployment scenarios and one hundred events reported at eight packets per second are considered. Performance under no, medium, and high mobility settings is shown in Figure 5. LBR achieves a better delivery ratio for static, medium, and high mobility networks compared to AODV because of its congestion and mobility resilience.

5.5. Performance versus Network Size. For this simulation, networks consisting of 25, 50, 75, and 100 nodes are taken. Each experiment is repeated five times with different random deployment scenarios. In each scenario, 50% of the nodes are made mobile and 10 traffic flows spanning 500 seconds are used. Average results for each case are shown in Table 3. With an increase in network nodes, the average throughput increases; however, beyond a threshold, throughput starts to decrease as contention for medium access increases, resulting in packet drops.

5.6. Evaluation of Results

5.6.1. Throughput. The data throughput is shown in Figure 6. LBR has better throughput than AODV and GPSR for higher mobility and traffic loads, whereas, for lower traffic loads and mobility, it has comparable results. When congestion occurs as the traffic load increases, LBR, being congestion resilient, avoids overcommitted nodes, thus achieving greater throughput. Similarly, with an increase in mobility, link lifetime decreases and maintenance cost increases, but LBR can still maintain better data throughput because of its mobility awareness.

5.6.2. Latency. Average end to end data latency is shown in Figure 7. LBR has better average data latency because load balanced routing helps to avoid overloaded nodes, improves packet service time, and thus reduces the resultant delays. For lower traffic and mobility, LBR has comparable results, but, with an increase in traffic load, its congestion resilience lets it avoid overcommitted nodes, and thus better latency figures are obtained.

5.6.3. Network Lifetime. LBR achieves a much longer network lifetime than minimum hop routing and greedy forwarding; the results are shown in Figure 8. For both AODV and GPSR, network partitioning occurs much earlier, as indicated by the constant number of alive nodes. LBR performs better in terms of both the first node death and the number of dead nodes. Because of comparable throughput and much longer network lifetime, the new scheme can transfer much more data than both other protocols before network partitioning.

5.6.4. Load Balance. The network load distribution of the three protocols is shown in Figure 9. LBR achieves a more even distribution of load despite greater content transfer. Owing to load aware routing, the proposed scheme avoids repeatedly taking the same route and distributes load across nodes. This even distribution contributes towards the better lifetime and lower latency figures achieved by the proposed scheme. Although the per node load for GPSR is lower than that of the other two protocols, this can be attributed to premature network partitioning and less transferred data. Therefore, the results do not reflect a better load balance, and the same is evident from the very short network lifetime as well.

Conclusion and Future Work

Sensor network routing protocols are traditionally designed for specific environments and applications to achieve efficiency, and the assumptions generally made by designers are rather strong, limiting protocol application in generic scenarios. Secondly, mobility, in general, is used for data collection and resource provisioning only. With the proliferation of embedded sensors in consumer devices, a greater variety of applications and the accompanying challenges will need to be addressed. In this changed scenario, the application paradigm is likely to transform from an application specific smaller scale to a larger or global one. To address these challenges, generic protocols capable of handling wide ranging applications, device heterogeneity, and uncontrolled mobility will be needed. The proposed protocol utilizes mobility in a novel manner and is intended to address these new challenges. The scheme is energy efficient, load balanced, and congestion resilient and can handle variety in sensor mobility. The protocol is equally suitable for static and mobile sensor networks and can handle event driven as well as continuous traffic flows. Simulation results show that LBR outperforms minimum hop routing and greedy forwarding in terms of network lifetime, load balance, and data latency, and it has comparable results as far as throughput is concerned. In this work, weight selection is performed using a simulation method, which may not be optimal for handling variety in mobility and load across different applications. Therefore, weight selection based on fuzzy logic or the analytic hierarchy process (AHP) may be studied as future work. Moreover, in the case of mobile sensor networks, link quality may vary more rapidly than in static networks, so the addition of reliability to the routing metric should improve protocol performance and may be another dimension for future research.
Figure 2: Dotted edges show the forward dissemination of RREQ from source S to sink D, whereas solid (red) edges represent the established data route. The number on each edge shows the value of the routing metric. Instead of the shortest path through node R, a suboptimal path along nodes T and P is established.

Figure 3: Sensing field with randomly deployed nodes.

Figure 7: Average end to end delay of data packets; the graph shows the logarithm of delay.

Figure 8: Average number of alive nodes over time.

Route selection algorithm (fragment):
Require: value of the routing metric and hop count (HopCnt)
Ensure: select a path with maximum residual energy, least mobility, and least congestion
if source and RREQ ID are not in the routing table then
    set up reverse path with source; broadcast packet to neighbors
else if metric < previous metric and HopCnt < min HopCnt + allowed deviation from the shortest path then
    update routing table and reset backward pointer

Table 1: Simulation parameters. Notes: (a) metric update interval; (b) samples for moving average; (c) path length constraint.

Table 2: Weights selection. Weight values are shown as pairs, for example, (2, 0). Notes: (a) set as 0.2; (b) averaged over 5 scenarios and 6 application reporting rates in a high mobility network.

Table 3: Performance versus network size. Notes: (a) averaged over five scenarios and six application reporting rates in a high mobility network.
Capital mobility in Latin American and Caribbean countries: new evidence from dynamic common correlated effects panel data modeling

This study investigates the degree of capital mobility in a panel of 16 Latin American and 4 Caribbean countries during 1960 to 2017 against the backdrop of the Feldstein-Horioka hypothesis by applying recent panel data techniques. This is the first study on capital mobility in Latin American and Caribbean countries to employ the recently developed panel data procedure of the dynamic common correlated effects modeling technique of Chudik and Pesaran (J Econ 188:393–420, 2015) and the error-correction testing of Gengenbach, Urbain, and Westerlund (Panel error correction testing with global stochastic trends, 2008; J Appl Econ 31:982–1004, 2016). These approaches address the serious panel data econometric issues of cross-section dependence, slope heterogeneity, nonstationarity, and endogeneity in a multifactor error-structure framework. The empirical findings of this study reveal a low average (mean) savings-retention coefficient for the panel as a whole and for most individual countries, as well as indicating a cointegration relationship between saving and investment ratios. The results indicate that there is a relatively high degree of capital mobility in the Latin American and Caribbean countries in the short run, while the long-run solvency condition is maintained, which is due to reduced frictions in goods and services markets causing increased competition. Increased capital mobility in these countries can promote economic growth and hasten the process of globalization by creating a conducive economic environment for FDI in these countries.

Introduction

Economists and policymakers have recently been studying the dynamic role of capital mobility in economic growth, especially in emerging countries in general and in Latin American and Caribbean countries in particular, which are experiencing large inflows of capital from abroad. One source is the recent unconventional monetary expansion in the United States through large-scale quantitative easing (QE), undertaken through extensive purchases of assets by its Federal Reserve. Lower U.S. interest rates and other phenomena, such as an appetite for increased global risk, improvements in these developing countries' macroeconomic fundamentals, and the rapid progress in information technology, have all contributed to the increase in capital flows to these economies. Ford and Horioka (2016), in explaining the Feldstein-Horioka puzzle (Feldstein and Horioka 1980; Feldstein 1983), state that net transfers of capital among countries depend not only on the integration of financial markets but also on the integration of the goods and services markets. Ko and Funashima (2019) find evidence that large markets have higher correlations between savings and investments compared with mid- and small-sized countries. Eaton et al. (2016) present empirical evidence that financial friction in the goods and services markets reduces the degree of capital mobility. In fact, they contend that removing the frictions in the goods and services markets considerably reduces the dependence of domestic investment on domestic saving, leading to a greater degree of capital mobility in the observed Feldstein-Horioka relationship, which is estimated by the following equation:

(I/Y)_it = α + β_it (S/Y)_it + ε_it,

where I/Y is the investment ratio to gross domestic product (GDP) in country i and period t.
S/Y is the saving ratio as a percentage of GDP, and β_it is the saving-retention coefficient, which indicates the capital mobility level in country i; ε_it is the error term of the regression model. In countries where capital mobility is high, the savings-retention coefficient is expected to be low, reflecting a low level of correlation between domestic investment and savings. However, Feldstein and Horioka (1980) illustrate empirically that the correlation between investment and saving ratios is high in developed countries, where it is expected to be low, which has created the Feldstein-Horioka puzzle. Eaton et al. (2016) argue that, given that there is no "home-country bias" and an absence of financial friction in the goods and financial markets, recipient countries obtain many benefits from an inflow of capital where there is a low level of domestic savings as well as crowding-out effects of a budget deficit. These factors have been instrumental in enabling a developing economy to pursue the most profitable investment opportunities and acquire foreign funds to finance domestic investment projects. This then limits the tax burden on relatively immobile domestic factors of production, smooths domestic consumption, and, ultimately, improves resource allocation and economic welfare in the recipient countries (Obstfeld and Rogoff 2010; Boschi 2012; Ghosh et al. 2012; Ahmed and Zlate 2013; Koepke 2019; Zheng et al. 2019; Al-Jassar and Moosa 2020). At the same time, an economy that is witnessing a steady capital inflow, despite some benefits, may experience many undesirable economic consequences and distortions, the most important being the inability to implement independent monetary policy actions. A review of capital inflow data reveals that many Latin American countries, unlike in the 1970s and 1980s, have experienced larger capital inflows recently. External shocks, like low interest rates or economic slowdowns in developed countries, "push" investors to emerging markets, like Latin America, and are considered key factors in attracting foreign investments (Calvo et al. 1993; Fernandez-Arias 1996; Aizenman and Binici 2016; Kang and Kyunghun 2019; Koepke 2019; Eller et al. 2020). However, Baek (2006) argues that, together with low international interest rates, which are push factors, the most important factor attracting foreign investment to Latin America is strong domestic economic growth, which is considered a pull factor for foreign investments. Fomina (2021), with the help of systematic and structural-logical techniques, modeled the stages forming pull factors that increase the competitive advantages of Latin America and improve its investment attractiveness. To analyze country and regional international case studies in this context and derive current and future policy implications, a reliable and econometrically robust quantitative measure of the prevailing degree of capital mobility, such as an estimate of the savings-retention coefficient, is warranted. The main objectives of this study are to test whether capital mobility exists and to quantify its degree using the size of the savings-retention coefficient by investigating a panel of 16 Latin American and 4 Caribbean countries during the period 1960-2017. To investigate capital mobility in Latin American and Caribbean countries, Murthy (2009) applies the panel group fully modified ordinary least squares (FM-OLS) estimator developed by Pedroni (2000, 2001) for the period 1960-2002.
Our study employs panel data over a longer period (1960 to 2017) by using a recently developed robust panel data estimation technique, the dynamic common correlated effects mean group (DCCEMG) estimator of Chudik and Pesaran (2015). To test whether our results are robust, we also report the savings-retention coefficients estimated from applying another panel data estimator, Pesaran's (2006) common correlated effects mean group (CCEMG). Furthermore, to test whether there is a cointegrating relationship between the saving and investment ratios in the presence of cross-sectional dependence, we conduct the Gengenbach et al. (2008, 2016) error-correction tests at the panel and individual country levels. To the best of our knowledge, no other study in the literature has employed these panel data estimations along with a relatively long sample period to examine Latin American and Caribbean countries. The rest of the paper is organized as follows. The literature survey section briefly reviews the literature on capital mobility; the model specification and data section provides the specification of the employed model and the data; the empirical results section presents the results of the empirical analysis conducted in this study; and the last section concludes with a discussion of policy implications.

Literature survey

A literature survey on capital mobility shows many studies investigating the prevalence of capital mobility using the Feldstein-Horioka hypothesis and framework in both developed and developing countries. As explained in the introduction, the Feldstein-Horioka hypothesis is a major puzzle in the literature on capital mobility in international macroeconomics. It should be noted that Feldstein and Horioka were the pioneers in statistically determining whether capital mobility existed in 16 Organization for Economic Cooperation and Development (OECD) countries during 1960 to 1974. As explained, using a simple regression model, they find some econometric evidence, contrary to the expected theoretical notion, that capital mobility is absent in this group of countries. To test the presence of capital mobility in the 16 OECD countries, they specify an econometric model (the FH model). According to the FH model, if the estimated savings-retention coefficient is not statistically different from zero, then there is perfect capital mobility. In contrast, if the value of the estimated coefficient β is close to one and is statistically significant, then the evidence supports capital being immobile. In the economic literature on capital mobility, the reasoning is that statistically significant lower values of β denote the prevalence of a reasonable or moderate degree of capital mobility, although some development economists suggest a special cutoff savings-retention coefficient value of 0.60 (see Murphy (1984)), especially for developing countries, implying the presence of a moderate degree of capital mobility. In their econometric study, Feldstein and Horioka (1980) find an estimated statistically significant savings-retention coefficient of 0.887, with a computed t-value of 12.67, and a coefficient of determination (R-squared) for the model as a whole of 0.91. Therefore, they conclude that in these countries, contrary to the expected theoretical notion of capital mobility, the empirical evidence reflects the absence of capital mobility. Hence, the name for this unexpected phenomenon is the Feldstein-Horioka puzzle.
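The FH regression itself is straightforward to sketch. The snippet below runs country-by-country OLS of the investment ratio on the saving ratio and averages the slopes, using synthetic data in place of the WDI series; it illustrates the mechanics only, not the paper's estimates.

```python
# Sketch: country-by-country FH regressions and a mean-group average of the
# savings-retention coefficients (synthetic data; illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, T = 20, 58                                 # 20 countries, 1960-2017

betas = []
for i in range(N):
    s_y = rng.normal(20, 4, T)                # saving ratio (% of GDP)
    true_beta = rng.uniform(0.1, 0.4)         # heterogeneous country slopes
    i_y = 5 + true_beta * s_y + rng.normal(0, 2, T)  # investment ratio
    res = sm.OLS(i_y, sm.add_constant(s_y)).fit()
    betas.append(res.params[1])               # savings-retention coefficient

beta_mg = np.mean(betas)                      # mean-group estimate
print(f"mean-group beta: {beta_mg:.2f}")
```

A coefficient near zero would signal high capital mobility; a coefficient near one, as Feldstein and Horioka found, signals immobility.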
Since the publication of their important research papers, many econometric studies have attempted to determine empirically the degree of capital mobility in various countries, in different groups of countries, and over different time periods, using both time-series and panel data. Further, many studies have attempted to solve the FH puzzle for developed countries. However, as in our case, others have tested the FH puzzle in developing countries, where large investment inflows are essential for economic growth. As may be expected in studies of developing countries, Narayan (2005) and Rocha (2006) find a high saving-investment correlation, indicating restricted capital mobility in these countries. However, other studies find increasing capital mobility in developing countries, for example, Holmes (2005). A review of these studies indicates that only a small number, with the exception of Murthy (2009), focus on the degree of capital mobility exclusively in Latin American countries. Holmes (2005), using data for 1979 to 2001 and applying an FM-OLS estimator to a panel of 13 Latin American countries, finds a savings-retention coefficient of 0.33. Murthy (2009), conducting a panel cointegration analysis using data for 1960 to 2002 and employing the Pedroni group FM-OLS estimator, reports a savings-retention coefficient of 0.46 with incorporated common time dummies and 0.48 without common time dummies for 14 Latin American countries and 5 Caribbean countries. Most studies on capital mobility that apply panel data methods suffer from several econometric shortcomings. These studies, with the exceptions of Hernandez (2015) and Bibi and Jalil (2016), do not address some important econometric issues that plague panel data, such as the existence of observed and unobserved common effects, cross-sectional dependence, parameter heterogeneity, and endogeneity in a multifactor dynamic error framework (Pesaran 2006; Eberhardt and Bond 2009; Cavalcanti et al. 2011; Pesaran and Tosetti 2011; Chudik and Pesaran 2015; Ditzen 2016). Hernandez (2015) and Bibi and Jalil (2016) employ Pesaran's (2006) static CCEMG. While Bibi and Jalil (2016) include a large panel consisting of 88 widely diverse countries, Hernandez (2015) uses panel data consisting of 18 emerging economies, looking at quarterly data from 2000Q1 to 2012Q4. Whereas Bibi and Jalil (2016) apply the static Pesaran CCEMG, Hernandez employs the augmented mean group (AMG) estimator (Eberhardt and Bond 2009), which controls for cross-sectional dependence in a static multifactor error structure framework. Eyuboglu and Uzar (2020) employ both the static CCEMG and the AMG estimators. The CCEMG estimator, although a robust panel data method, has been shown not to yield consistent estimates in a dynamic multifactor error framework (Chudik and Pesaran 2015; Everaert and De Groote 2016; Ditzen 2016). Therefore, to overcome these econometric shortcomings, we apply the DCCEMG estimation procedure to test whether capital mobility exists, examining a panel of 20 Latin American and Caribbean countries during 1960 to 2017. The choice of countries and the time period is dictated by the availability of reliable and complete data sets.

Model specification and data

Our study tests the economic hypothesis of the FH puzzle. We do so by applying the DCCEMG estimator. The DCCEMG is a modified estimator for handling the dynamic and heterogeneous coefficients of a panel model that incorporates lagged dependent and weakly exogenous regressors.
Following Chudik and Pesaran (2015) and Ditzen (2016, 2018), the small sample time-series bias is controlled by using the recursive correction method. Using notations similar to those of Ditzen (2016, 2018), we specify the model in a multifactor error structure framework, as shown in Eqs. (1)-(3) (Pesaran 2006; Chudik and Pesaran 2015; Baltagi 2015, 2020; Eberhardt and Teal 2011, 2013; Cavalcanti et al. 2011):

(I/Y)_{i,t} = α_{1i} + β_i (S/Y)_{i,t} + μ_{i,t},  (1)

μ_{i,t} = τ_i' f_t + ε_{i,t},  (2)

where (I/Y)_{i,t} and (S/Y)_{i,t} are the ratio of gross capital to GDP and the ratio of gross domestic savings to GDP, respectively. β_i denotes the country-specific heterogeneous slope showing the effect of a change in (S/Y) on (I/Y) and is defined as the savings-retention coefficient. A statistically significant low value of β indicates a relatively high degree of capital mobility; a savings-retention coefficient of one implies zero capital mobility, as investment is then financed entirely by domestic savings. In (1), α_{1i} is a country-specific intercept. Furthermore, the disturbance term μ_{i,t} in (1) consists of unexplained components of the investment ratio influenced by a set of common factors f_t in (2), with heterogeneous factor loadings τ_i that may comprise country-specific fixed effects and heterogeneous country-specific deterministic trends, and the residuals ε_{i,t}. In Eq. (2), the ε_{i,t} are idiosyncratic disturbance terms distributed with zero mean and finite variances. Furthermore, in Eq. (2), it is highly likely that f_t may induce cross-sectional dependence between the error terms and the explanatory variable (S/Y)_{i,t}. Since both the dependent and the explanatory variables are affected by the same unobservable processes f_t, the problem of endogeneity may occur in model (1). To avoid this simultaneity problem and address the heterogeneity of slope coefficients, the CCEMG estimator adds cross-sectional averages of both the dependent and the explanatory variables to approximate the unobservable factors when running the OLS regression to estimate model (1). Since the CCEMG estimator, unlike in a static situation, does not yield consistent estimates of either β_i (the individual slope estimates) or β_CCEMG (the average slope coefficient) in a dynamic setting, Chudik and Pesaran (2015) incorporate extra lags (P_T = cube root of T) of the cross-sectional averages of the lagged dependent and explanatory variables as:

(I/Y)_{i,t} = α_{1i} + λ_i (I/Y)_{i,t-1} + β_i (S/Y)_{i,t} + Σ_{p=0}^{P_T} δ_{i,p}' z̄_{t-p} + ε_{i,t},  (3)

where β_i represents the DCCEMG estimator, β_DCCEMG. The estimator β_DCCEMG controls for a dynamic panel with a lagged dependent variable and weakly exogenous explanatory variables, common effects, and heterogeneity. P_T is the number of incorporated lags, and λ_i and β_i are stacked into π_i = (λ_i, β_i). The mean group estimates of DCCEMG are computed as π̂_DCCEMG = (1/N) Σ_{i=1}^{N} π̂_i (Ditzen 2016, 2018). The vector of cross-sectional averages z̄_t is denoted as:

z̄_t = (1/N) Σ_{i=1}^{N} ((I/Y)_{i,t}, (S/Y)_{i,t})'.

Although our main objective is to test the presence and degree of capital mobility by applying the DCCEMG estimator, for comparison, we also report the estimates of the savings-retention coefficients, β_i, employing the CCEMG. As shown by Chudik and Pesaran (2015), the CCEMG and DCCEMG approaches offer several econometric advantages in panel data model estimations. Monte Carlo simulations have shown that extending the CCEMG approach to dynamic panels having multifactor error structures performs well, judging by the criteria of bias, size, power, and root mean square error (RMSE), even in samples with low dimensions of N and T (Chudik and Pesaran 2015).
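A schematic rendering of the DCCE mean-group recipe described above is sketched below: each country's regression is augmented with current and lagged cross-sectional averages (lag order P_T = ⌊T^(1/3)⌋), estimated by OLS, and the slope coefficients are averaged. This illustrates the idea under simplifying assumptions; it is not a substitute for a production implementation such as Ditzen's xtdcce2 Stata command.

```python
# Schematic DCCE mean-group estimation: augment each country's regression with
# current and lagged cross-sectional averages, estimate by OLS per country,
# then average the slope coefficients (illustrative, not Ditzen's xtdcce2).
import numpy as np
import statsmodels.api as sm

def dcce_mg(I_Y, S_Y):
    """I_Y, S_Y: (T, N) arrays of investment and saving ratios."""
    T, N = I_Y.shape
    P = int(np.floor(T ** (1 / 3)))                  # lag order P_T
    ybar, xbar = I_Y.mean(axis=1), S_Y.mean(axis=1)  # cross-sectional averages

    betas = []
    for i in range(N):
        y = I_Y[P + 1:, i]
        cols = [np.ones_like(y),        # country-specific intercept
                I_Y[P:-1, i],           # lagged dependent variable
                S_Y[P + 1:, i]]         # saving ratio (slope of interest)
        for p in range(P + 1):          # averages at lags 0..P
            cols += [ybar[P + 1 - p:T - p], xbar[P + 1 - p:T - p]]
        X = np.column_stack(cols)
        betas.append(sm.OLS(y, X).fit().params[2])
    return np.mean(betas), betas        # mean-group estimate and country slopes
```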
Furthermore, as shown by Stock and Watson (2008), the DCCEMG and CCEMG approaches are robust to the presence of structural breaks in the data generating processes (for details, see Eberhardt and Bond 2009; Eberhardt and Teal 2011). Of note, the CCEMG panel data estimator controls for the existence of cross-sectional dependence that may be found in a panel due to many observed and unobservable common factors, such as aggregate demand and aggregate supply shocks, oil shocks, financial crises, global and local technological shocks, global and local spillover effects, terrorist events, and business cycles. As shown by Kapetanios et al. (2011), the CCEMG estimator, in a static multifactor error structure framework, and the DCCEMG estimator, in a dynamic setting, in addition to handling heterogeneity in slope coefficients and the impact of unobserved common factors, yield consistent estimates regardless of whether the common factors are stationary or non-stationary. Furthermore, the CCEMG and DCCEMG approaches to the estimation of panel data are robust to the presence of heteroskedasticity, serial correlation, and structural breaks (Pesaran 2006). In the literature, it has been observed that, in terms of statistical power, size properties, and RMSE, the CCEMG, DCCEMG, and Gengenbach et al. (2008, 2016) estimators perform well; see, for example, Chudik and Pesaran (2015) and Westerlund and Urbain (2015). Therefore, the DCCEMG is considered here the preferred panel data estimator. The data on (I/Y)_{i,t}, gross capital as a percentage of GDP, and (S/Y)_{i,t}, gross domestic saving as a percentage of GDP, for the 20 Latin American and Caribbean countries for 1960 to 2017 are gathered from the World Bank's World Development Indicators (WDIs), 2018 (World Bank 2018). The countries analyzed are Argentina, Bolivia, Brazil, Barbados, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Guyana, Honduras, Jamaica, Mexico, Nicaragua, Paraguay, Peru, Trinidad and Tobago, and Uruguay. In developing countries, there are no available data on investment as a ratio of GDP; instead, we use gross capital as a percentage of GDP. This approach to measuring (I/Y) is consistent with previous studies of the FH puzzle in developing countries. In addition to the 2018 WDIs, we use other annual World Bank WDIs as needed. Table 1 presents important descriptive statistics on our panel data. Only 4 of the 20 countries show negative savings in certain years of the period, having spending that depends on borrowing: El Salvador, Jamaica, Nicaragua, and Guatemala. Applying the panel normality test, the Sktest, equivalent to the Jarque-Bera test in panel data, we fail to reject the null hypothesis of normality for the I/Y and S/Y series. The results of the Pesaran (2007) cross-sectional dependence tests and of the Pesaran and Yamagata slope homogeneity test (Pesaran and Yamagata 2008) reveal cross-sectional dependence and slope heterogeneity in the series. Generally, panel data estimators such as fixed effects (FE) and random effects (RE) assume that the slope coefficients are homogeneous; applying the FE and RE estimators to our data set would therefore lead to biased and inconsistent results. Therefore, we apply the DCCEMG and CCEMG to estimate the average (mean) savings-retention coefficient for both the panel as a whole and the individual countries in the sample.
Empirical results

Furthermore, to ascertain empirically the integration order of the variables in levels, we conduct the cross-sectional Pesaran (2007) unit root test (CIPS). The CIPS test is a second-generation panel unit root test that allows for cross-sectional dependence in the data and is robust to the presence of common factors and serial correlation (see Breitung and Pesaran 2008; Pesaran 2007). The CIPS test results are reported in Table 2. The observed p-values of the test statistics for various deterministic terms show that both the I/Y and S/Y series are non-stationary in levels and stationary in first differences at 1% significance; thus, they are integrated of order one, I(1). Once we find that the I/Y and S/Y variables are non-stationary in levels, we test whether there is any long-run economic equilibrium relationship between these variables, that is, whether they are cointegrated in the presence of cross-sectional dependence. To that end, we conduct the Westerlund (2007) and Gengenbach et al. (2008, 2016) cointegration tests; the results are reported in Table 3. Westerlund's cointegration test, in addition to allowing for cross-sectional dependence, is flexible enough to accommodate a large degree of heterogeneity in both the short-run and long-run dynamics (see Westerlund 2007; Persyn and Westerlund 2008). Of note, the Westerlund (2007) cointegration test allows for multiple structural changes in the data generating process and maintains the null hypothesis of no cointegration. Specifically, it consists of two sets of tests: the group-mean tests (G_τ and G_α), with the alternative hypothesis that the variables are cointegrated for at least one panel member, and the panel tests (P_τ and P_α), with the alternative hypothesis that I/Y and S/Y are cointegrated for the panel as a whole. The Gengenbach et al. (2008, 2016) test is a panel error correction estimation and has many advantages over Westerlund's (2007) method. This procedure, based on structural dynamics, allows for cross-sectional dependence, non-stationary common factors, and parameter heterogeneity in a multifactor error structure framework for both the individual countries and the panel. In these tests, the maintained null hypothesis is no cointegration (no error correction), as opposed to the alternative hypothesis of cointegration, or the existence of error correction; for technical details, see Gengenbach et al. (2008, 2016). The results of the Westerlund (2007) tests are reported in the first panel of Table 3 and the results of the Gengenbach et al. (2008, 2016) tests in the second panel. The results of the Westerlund (2007) tests indicate that the observed Z, P, and robust P-values are significant at 1%, indicating that the null hypothesis of no cointegration is clearly rejected. Thus, we find that the I/Y and S/Y variables do form a long-run link in the panel. (Notes to Table 3: *, **, and *** denote statistical significance at the 1%, 5%, and 10% levels, respectively; observed p-values in parentheses; critical values are taken from Tables 1 and 3 of Gengenbach et al. (2008), with m = 1 and no deterministic term.) Of note, especially for Westerlund's group-mean test, the alternative maintains that S/Y and I/Y have a stationary relationship in at least one country in the panel. This implies that in the long run, the solvency condition is satisfied for the panel as a whole (see Coakley et al. 1996; Coakley and Kulasi 1997).
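For readers who want to see the mechanics of the CIPS test used above, a bare-bones version is sketched below: each unit's Dickey-Fuller regression is augmented with the lagged cross-sectional average and its difference, and the t-ratios on the lagged level are averaged. Augmentation lags and deterministic-term variants, which a real application would include, are omitted here.

```python
# Sketch of Pesaran's (2007) CIPS statistic: average the t-ratios on the
# lagged level from CADF regressions augmented with cross-sectional averages.
import numpy as np
import statsmodels.api as sm

def cips(y):
    """y: (T, N) panel in levels. Returns the CIPS statistic."""
    T, N = y.shape
    ybar = y.mean(axis=1)
    dy, dybar = np.diff(y, axis=0), np.diff(ybar)
    tstats = []
    for i in range(N):
        X = np.column_stack([np.ones(T - 1),
                             y[:-1, i],    # lagged level (unit-root term)
                             ybar[:-1],    # lagged cross-sectional average
                             dybar])       # differenced average
        res = sm.OLS(dy[:, i], X).fit()
        tstats.append(res.tvalues[1])
    return np.mean(tstats)  # compare with Pesaran's simulated critical values
```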
The results of the Gengenbach et al. (2008, 2016) tests illustrate that, for the panel as a whole, the observed error correction term rejects the null hypothesis of no cointegration at 1% significance. The observed error correction terms are statistically significant at the 1% level for five panel members and at the 5% level for four countries. The long-run savings-retention coefficient we find, reported in Table 3, is much smaller than what has been reported in the literature (see Murthy 2009). As stated, we use the CCEMG and DCCEMG estimators to measure the degree of capital mobility in the panel and the individual countries. Table 4 presents the empirical results of these panel data estimators, which take into account common factors, cross-sectional dependence, and slope heterogeneity. The mean savings-retention coefficient of 0.21 and its significant difference from one indicate a relatively high degree of capital mobility in the short run in the panel countries. This finding also shows that in the long run the current account balance is maintained, as the countries in the panel cannot finance their deficits forever. According to Corbin (2004), "...the existence of a one-to-one relationship between the saving and investment rates does not rule out the possibility of lags in the adjustment process of the current account imbalances, which can be viewed as evidence of the existence of capital mobility in the short run" (p. 271). At the outset, we observe that the magnitudes of the savings-retention coefficients using the CCEMG and DCCEMG estimators are consistently similar, around 0.20, and statistically significant, indicating a relatively high degree of capital mobility in the panel countries. The fact that these estimates of the savings-retention coefficient are similar in magnitude and sign indicates that our results are robust to various assumptions. The size and direction of the savings-retention coefficient are reasonable given the economic reforms, improvements in macroeconomic fundamentals, increased financial and economic integration of these countries due to globalization, disinflation, and the strong fiscal and monetary policies in effect to create macroeconomic stability. Furthermore, we observe that our estimated savings-retention coefficient, reported in Table 4, is much lower than those reported in the literature. For comparison, Murthy (2009), using the Pedroni panel group-mean FM-OLS, reports a savings-retention rate of 0.46 with common dummies and 0.48 without common dummies. Cavallo and Pedemonte (2015), employing the FM-OLS estimator on a panel of 24 Latin American and Caribbean countries for the period 1980-2013, report a savings-retention coefficient of 0.39. Payne and Kumazawa (2005), using the panel mean group (PMG) estimator for 19 Latin American countries, report a savings-retention coefficient of 0.35. Our estimate of the savings-retention coefficient, using robust panel econometric procedures, is the lowest among these studies. This is broadly consistent with the financial and global integration that has been taking place in the Latin American and Caribbean countries. It is also notable that the extent of cross-sectional dependence, as measured by the Pesaran CD tests reported in Table 4, has dropped remarkably and is barely significant at the 1% level.
The presence of a relatively persistent low degree of CD can be attributed to the larger time-series dimension in the data matrix compared with the number of cross-sections. The RMSE associated with all the estimators is relatively low, although it is lowest for the CCEMG. Furthermore, a low degree of residual CD may be due to the ever-increasing financial, technological, and economic integration among the countries included in the study. To control for small sample bias, we use a bias correction method, the recursive mean adjustment (see Ditzen 2016; Chudik and Pesaran 2015), in applying the DCCEMG technique for the savings-retention coefficient estimation. The results of both the CCEMG and DCCEMG estimations of the savings-retention coefficients for individual panel members are reported in the second panel of Table 4. The econometric results for the group as a whole are satisfactory. The savings-retention coefficients for many individual countries are found to be positive and not statistically significant at the 5% level employing both techniques. In panel estimation, what really matters is that the panel (group) coefficient is significant. In conclusion, the estimated β coefficients from the Gengenbach et al. (2008, 2016) procedure, the CCEMG, and the DCCEMG estimators are all below 0.21, indicating a relatively high degree of capital mobility. The main contribution of our study lies in how we test the presence of capital mobility in a panel of 16 Latin American and 4 Caribbean countries by estimating statistically discernible heterogeneous savings-retention coefficients. Using a panel model with a multifactor error structure consisting of both cross-sectional dependence and unobserved common factors, we apply the DCCEMG estimator. For the degree of capital mobility, the presence of heterogeneity and cross-sectional dependence matter, as stated by Coakley et al. (2004): "It seems that when country heterogeneity, CS dependence, and permanent shocks are explicitly accommodated in a panel framework, the traditional FH puzzle results are completely overturned" (p. 587). Our study attempts to follow Coakley et al.'s (2004) suggestion in testing capital mobility in Latin American and Caribbean countries by applying panel estimation methodologies that allow for slope heterogeneity, cross-sectional dependence, non-stationarity, and endogeneity.

Conclusion

In this paper, we discuss our investigation of capital mobility in 16 Latin American and 4 Caribbean countries for 1960 to 2017, employing the CCEMG and DCCEMG panel data techniques. These estimators, unlike the widely applied FE and RE methods, are efficient, unbiased, and consistent in the presence of cross-sectional dependence, slope heterogeneity, simultaneity, and non-stationarity. The results of the various tests shared in the paper point out that our panel suffers from slope heterogeneity, cross-sectional dependence, simultaneity, and non-stationarity. Unlike previous studies on capital mobility in Latin American and Caribbean countries, we show that the magnitude of the savings-retention coefficient is small, indicating a relatively high degree of capital mobility, a sign of an integrated capital market. The evidence of cointegration shows that, for the countries in the sample, the long-run solvency condition is satisfied. Giannone and Lenza (2008) have demonstrated that when heterogeneous propagation and transmission of global shocks take place, the size of the savings-retention coefficient decreases.
However, it is important to understand that the estimated savings-retention coefficient represents an average across countries and over the time-series dimension; the coefficient has declined in the current period compared with earlier periods due to changes in the financial policies of the Latin American and Caribbean countries. The average value of the coefficient suggests that, in the current period, it may be even lower than 0.21, taking into account its high level in earlier periods. The low savings-retention coefficient reported here implies a higher degree of capital mobility, which is consistent with the changing economic environment in these countries as reflected by economic reforms, structural adjustments, the removal of capital controls, and the rapid global development of information technology. Our basic results are further verified to be robust by applying an alternative estimator of panel error correction modeling. Our results suggest that in recent years frictions in the financial and goods markets and the degree of "home country bias" in these countries have decreased. In Latin America, significant liberalization has been underway since 1990 (Estevadeordal and Taylor 2013). Some of the prominent liberalization policies include the slashing of tariff and non-tariff barriers, export diversification, deregulation, and ambitious preferential trade agreements, such as the 2012 Pacific Alliance agreement between Chile, Colombia, México, and Peru, the Central American Free Trade Agreement (CAFTA), the creation of the Customs Union consisting of Argentina, Brazil, Paraguay, Uruguay, Bolivia, and Venezuela, NAFTA (among the United States, Mexico, and Canada), and MERCOSUR (agreements among Brazil, Argentina, Uruguay, and Paraguay). These financial integration-augmenting efforts have pushed the liberalization of the movement of goods, services, and capital within the region, in addition to generating externalities that create a conducive environment for increased inflows of foreign direct investment (FDI). Furthermore, trade agreements stimulate exports and provide legal protection for the property of enterprises under international law. These agreements also increase competition among firms, engendering cheaper and higher quality goods for consumers, and reduce risks related to a potential escalation of tariffs and the expropriation of foreign-owned investments. In fact, many economists contend that trade deals and agreements, by reducing frictions that impede trade in goods and services, encourage specialization based on the principle of comparative cost advantage. Ponce (2006), in his econometric study dealing with the impact of free trade agreements (FTAs) on FDI in Latin American countries during 1985 to 2003, finds the coefficient of FTA highly significant at the 1% level, implying that as countries sign more FTAs, they attract more FDI. The Pacific Alliance, founded in 2011, has been instrumental in increasing its members' global trade from $876 million in 2010 to $1.03 trillion in 2016. Such measures have reduced the friction in trading goods and services, increased productivity, and resulted in the opening up of the economies of Latin American countries. Experts on Latin America, such as Estevadeordal and Taylor (2013), have shown that liberalization has led to a median increase in the region's trade-to-GDP ratio of 28%.
They also contend that liberalization is estimated to have increased Latin America's GDP per capita growth rate by 0.60 to 0.70 percentage points. In fact, contrary to the Heckscher-Ohlin hypothesis of the substitutability of trade and capital flows, the economic performance of several Latin American countries, especially after the liberalization movement, has shown that capital flows and the consequent FDI are complements rather than substitutes (for detailed effects of liberalization on Latin American countries, see Bown et al. 2017). Therefore, we reason that the econometric finding of a high degree of capital mobility in the Latin American and Caribbean countries during the period under investigation is due in part to reduced frictions in goods and services markets, leading to increased competition and better access to financial markets, facilitated by regional free trade agreements and other liberalization efforts. Increased capital mobility in these countries can promote economic growth and hasten the process of globalization by creating a conducive economic environment for FDI. With increased capital mobility, these countries need not be constrained by a low level of domestic savings resulting from the low incomes of most citizens. Of course, increased capital mobility, in addition to facilitating a more efficient allocation of both physical and financial resources by promoting credit and risk-sharing across international borders, is not without shortcomings. It might induce more volatility in the prices of securities, interest rates, and exchange rates. Moreover, countries cannot simultaneously accomplish the three major economic objectives of free capital mobility, a fixed exchange rate, and an independent monetary policy. In light of increased capital mobility in these countries, central bankers may not be able to pursue monetary policy actions that simultaneously attain their external and internal economic targets, such as the exchange rate and the interest rate, respectively. However, increased capital mobility would compensate for certain negative outcomes by providing the overall benefits of supplementing insufficient domestic savings and lowering the cost of capital, ultimately leading to increased economic growth and greater financial integration. Our empirical findings support the prevailing theoretical notion that if the magnitude of the statistically significant savings-retention coefficient is small, then capital in these countries is relatively mobile. The evidence also points to the absence of the FH puzzle. Our study focuses on the estimation of capital mobility in Latin American and Caribbean countries employing recently developed robust panel estimation approaches over the longest period employed in the literature on capital mobility estimation for these countries. However, the employed panel estimation technique does not take into account structural breaks that may exist in developing countries. A potential area for further research is the consideration of structural breaks in capital mobility estimation for Latin American and Caribbean countries.
A Complex Network Model for Analysis of Fractured Rock Permeability

State Key Laboratory for Geomechanics and Deep Underground Engineering, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China
Mechanics and Civil Engineering Institute, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China
Fractured Coal Masses Laboratory of Mine Cooling and Coal-Heat Integrated Exploitation, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China

Introduction

The networks of rock fracture are formed by structural deformation and physical diagenesis [1]. Within a rock stratum, naturally formed fracture networks expand and redistribute randomly, with varying degrees of fracture development, and are always difficult to identify. Researchers usually use the dip angle and azimuth to determine the spatial orientation of fractures; these structural characteristics are consistent with the two directional attributes of structural geology, tendency and trend. In geological objects, complex trace analysis is used to calculate the obliquity estimation of the three-dimensional data body, i.e., the steering cube [2], from which the dip angle and azimuth information of each data point is obtained. The permeability of fractured reservoirs is very low, and the fracture network controls the fluid flow [3]. Hence, it has an important influence on oil and gas exploitation [4] and geothermal energy extraction [5]. In recent years, researchers around the world have studied the permeability characteristics of fracture networks and put forward corresponding models [6][7][8]. Snow [9] established the parallel plate model and obtained a tensor analytical formula for the permeability of a fracture network. Koudina et al. [10] studied the permeability of fracture networks in three-dimensional space by means of numerical simulation; the fracture network was composed of polygons, the flow of fluid in each fracture satisfied Darcy's law, and the results were compared with Snow's model. Xia [11] established a dynamic model of the permeability and aperture of fracture networks under different confining pressures. Van Stappen et al. [12] also connected the seepage model with fracture aperture by determining the relationship between fracture permeability and confining pressure. Li et al. [13,14] broke away from the traditional practice of treating fractured reservoirs as dual media and established a percolation model with an equivalent continuum suitable for low-permeability fractured shale reservoirs. De Dreuzy et al. [15] studied the permeability of randomly generated two-dimensional fracture networks by numerical and theoretical methods and compared them with natural fractures to verify the accuracy of the model. Klimczak et al. [16] used the parallel plate model to obtain the permeability formula of a single crack under the condition that the fracture length and aperture satisfy a power-law relationship and verified the accuracy of the model through numerical simulations. Wei et al. [17] derived a forecasting model of permeability using the electrokinetic relationship between fluid flow and current in microfractures and analyzed the influence of connectivity between fractures on permeability. Li [18] proposed a new model considering fracture connectivity according to the hydraulic fracture morphology of raw coal, the "matchstick" seepage model, and the cubic law.
However, the above models do not quantitatively relate the permeability of the fracture network to the porosity, surface density, and microstructural parameters of the fracture network, such as fracture connectivity, aperture, dip angle, and azimuth. The randomly distributed fracture networks in rocks have been shown to have statistical self-similarity, which is a basic feature of fractals; interested readers may consult [19][20][21][22][23][24][25][26] for details. Watanabe and Takahashi [5] used fractal theory to study the permeability of fracture networks and the extraction of heat from hot dry rocks, but they did not put forward a permeability expression with micro parameters. Yu et al. [27], based on the study of the seepage characteristics of porous media in fracture networks using fractal methods, put forward an explicit expression with micro parameters, such as the structure of the fracture network and porosity, and then gave the scaling relationship between permeability and the structure of the fracture network. Li et al. [28] established a mathematical model of equivalent permeability tensors in fractured reservoirs, based on fracture statistics, the simulation technique of fracture networks, and an equivalent flow assumption, and then obtained the equivalent permeability tensor of fractured media by using the boundary element method. Jafari and Babadagli [29] obtained fractal permeability expressions for random fractures by using multiple regression analysis based on logging data, but their empirical relationship contained many empirical constants. Recently, Miao et al. [6] obtained an analytical expression for fracture network permeability according to basic fractal theory. This model quantitatively connected the fracture length, aperture, dip angle, and azimuth with the permeability of fractured rocks and did not include any empirical constant. Most of the above models start from the statistical parameters of isolated fractures and macroscopic homogenization; the connectivity of fracture networks is not considered, particularly the influence of the connectivity of a small number of local fractures (maximum degree) on the overall permeability. Starting from the topological structure of the fracture network and based on complex network theory, this paper establishes a network permeability model of fractured rocks and probes the internal mechanism of the influence of the structural parameters of the fracture network on permeability, including the fracture porosity ∅_M, the fracture density D, the power index d_k, and the maximum node degree k_max.

2.1. Degree Distribution of Hierarchical Networks. In order to illustrate the modularity, local clustering, and scale-free topological characteristics of many complex network systems, it is necessary to assume that the modules generate a hierarchical network in some iterative way [30]. Recent studies show [31] that some topological modules are well organized hierarchically in the network. The hierarchical network has a very conspicuous feature: the local part is similar to the whole in a certain sense, i.e., self-similarity. A hierarchical network integrates a scale-free topology with an internal module structure. Song et al. [32] further reveal that self-similarity and the scale-free degree distribution hold true at all coarse-grained stages of the network by adopting a renormalization procedure, and the degree distribution P(k) of the renormalized network is invariant under renormalization.
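The scale-free character just described is easy to illustrate numerically. The sketch below generates a Barabási-Albert graph, one convenient scale-free generator (not the hierarchical model of [30, 32]), and tabulates its degree distribution, whose tail decays roughly as a power law:

```python
# Sketch: generate a scale-free network and inspect its degree distribution,
# which should follow P(k) ~ k^(-c) as in Eq. (1). Barabasi-Albert is used
# here only as a convenient scale-free generator; it is not the paper's model.
import collections
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)
degree_counts = collections.Counter(d for _, d in G.degree())
for k in sorted(degree_counts)[:8]:
    print(k, degree_counts[k])   # counts fall off roughly as a power law in k
```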
The power-law relationship can be expressed as follows [32]:

P(k) ∝ k^(−c),  (1)

where P represents the total number of nodes in the network with degree k, k represents the number of other nodes connected to a node, and c represents the self-similarity index, with a range of 1-3, which is transformed by the exponential formula [32].

2.2. Basic Features of Fractals. Most fracture trace lengths satisfy a power-law (scale-free) distribution [33,34]. The fractal power-law distribution refers to the fact that fracture lengths in nature are random and disordered, showing the characteristics of similarity and fractality. The power-law expression is [35]

N(L ≥ l) ∝ l^(−D_f),  (2)

where D_f is the fractal dimension of the fracture length, l is the trace length of a fracture, and N is the total number of fractures. This is the basic expression of the fractal scaling law and the basis of the box counting method.

2.3. Power-Law Expression of Fracture Complex Networks. Covariant analogy is also known as mathematical similarity analogy. Power-law relations (1) and (2) have obviously similar functional forms, and equation (1) multiplied by k can be made analogous to equation (2). The number of edges of the complex network is associated with the number of fractures in the fracture network, and expression (3) is obtained. The relationship between c in power-law distribution formula (1) and D_f in power-law expression (2) is given by equation (4) [32], where the power exponent d_k is 1.5 times D_f [32]. Substituting equation (4) into equation (3) yields the proportional relationship (5) between fracture length and node degree.

The parallel plate model is usually used to represent the effective aperture of the fracture, and the relationship between crack length and effective aperture has also been studied by a large number of researchers [36,37]. This relationship is given by

a = β l^n,  (6)

where β is the proportionality coefficient, which is related to the mechanical properties of the medium around the fracture and lies in the range 10^(−3) to 10^(−1) [16], a is the effective aperture of the fracture, and n is the power exponent. When the power exponent n = 1, the fracture network has the characteristics of self-similarity and fractality [37]. So, for a fracture network with self-similarity [16], equation (6) can be rewritten as

a = β l.  (7)

Equation (1) can be rewritten as

M(k) = α k^(−c),  (8)

where M represents the number of network edges and α is the proportionality coefficient. Differentiating equation (8), we can get the number of edges whose node degrees are in the range k to k + dk:

−dM(k) = α c k^(−(c+1)) dk,  (9)

wherein the negative sign indicates that the number of edges of a complex network decreases with the increase of node degree, which is in line with the actual situation, and −dM(k) > 0. The probability density of an edge with node degree k is expressed as

f(k) = −dM(k)/(M_t dk) = (α/M_t) c k^(−(c+1)),  (10)

where M_t represents the total number of edges the network has, and f(k) is the probability density function of an edge with node degree k, which satisfies the normalization principle:

∫ from k_min to k_max of f(k) dk = 1.  (11)

Thus, it can be obtained that

(α/M_t)(k_min^(−c) − k_max^(−c)) = 1.  (12)

Evidently, when k_min ≪ k_max, equation (12) can be expressed as

α ≈ M_t k_min^c.  (13)

Generally, k_min/k_max ≤ 10^(−2) can be taken, and complex networks in nature usually meet this requirement. Yu [38,39] studied the power-law relation of the fractal distribution of pores in porous media. Likewise, Majumdar and Bhushan gave the cumulative size distribution of islands on the Earth's surface [40]:

N(S ≥ s) = (s_max/s)^(D/2),  (14a)

where N is the total number of islands with area greater than s, s_max is the largest island area, and D is the fractal dimension of the size distribution of islands.
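A quick numeric check of the k_min ≪ k_max simplification behind equation (13), using the density form of (10) and the values (c = 1.65, k_min = 1, k_max = 300) that appear later in the surface-density comparison:

```python
# Numeric check of the k_min << k_max simplification: integrate the truncated
# power-law edge density k^-(c+1) and compare the exact normalizer with its
# k_max -> infinity approximation. Parameter values follow the paper's later
# worked example (c = 1.65, k_min = 1, k_max = 300).
import numpy as np
from scipy.integrate import quad

c, k_min, k_max = 1.65, 1.0, 300.0

exact = quad(lambda k: k ** -(c + 1.0), k_min, k_max)[0]
approx = k_min ** -c / c                 # limit as k_max -> infinity
print(f"exact={exact:.4f}  approx={approx:.4f}")  # relative gap well under 1%
```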
Equation (14a) indicates that there is a largest island on the Earth's surface. In addition, Majumdar and Bhushan [40] used this power-law formula to describe the contact points on engineering surfaces, where the maximum point area is s_max = gλ_max² and a point area is s = gλ², with λ being the diameter of a point and g a geometry coefficient. Since self-similarity is one of the basic characteristics of fractals, the self-similarity of porous media with fractures needs to satisfy a certain power-law relationship [41]. Hence, equation (14a), used to describe islands on the Earth's surface and points on engineering surfaces, can be extended to describe the size distribution of nodes in a fracture network. In complex network theory, the characteristic size of a single node includes the out-degree and in-degree [30]:

N(K ≥ k_o k_i) = (k_omax k_imax / (k_o k_i))^(D/2),  (14b)

where k_omax k_imax represents the maximum node size, with k_omax and k_imax being the maximum out-degree and maximum in-degree, respectively, and k_o k_i is a node size with out-degree k_o and in-degree k_i. When the direction of the degree is ignored, equation (14b) can be simplified to the single-degree form (14c). From equation (14c), the cumulative number of nodes whose degrees are greater than k can be expressed as

N(K ≥ k) = (k_max/k)^D,  (14d)

where N is the cumulative number of nodes in a fracture network. From equation (14d), the total number of nodes in a complex fracture network is obtained:

N_t = (k_max/k_min)^D.  (15)

Because the contribution of one edge to degree is 2, the average degree of complex networks is

⟨k⟩ = 2M_t/N_t.  (16)

Inserting equation (15) into equation (16) gives (17), and inserting equation (17) into equation (13) gives the proportionality coefficient (18). Then, inserting equation (18) into equation (9) gives (19), an important power-law distribution relation for edges with a given node degree in complex networks. Furthermore, by the same logic, the average degree of complex networks can be obtained as (20).

2.4. Surface Porosity of Fracture Networks. Self-similarity is closely related to fractals. Yu and Li [42] deduced the relationship between porosity and fractal dimension in porous media based on fractal theory:

φ = (λ_min/λ_max)^(d_E − D_p),  (21)

where λ_min and λ_max are, respectively, the minimum and maximum pore diameters, D_p is the fractal dimension of the pores, and d_E is the Euclidean dimension: in two dimensions, d_E = 2; in three dimensions, d_E = 3. Equation (21) is appropriate not only for precise fractal geometry but also for statistical fractal geometry. As long as the pores of the porous medium fall within the self-similar range λ_min to λ_max, forming a fractal set, equation (21) holds regardless of the shape of the pores. Therefore, in a hierarchical complex network embedded into the matrix as fractures, forming a network model with fracture properties, the edges of the complex network, that is, the fractures incident to a node, satisfy equation (21) within the self-similar range k_min to k_max, independent of the shape of the node. It can be rewritten as (22), where ∅_M is the effective porosity of fractures in the rock and k_min and k_max are the minimum and maximum node degrees, respectively. On the cross section of the representative elementary volume, the surface porosity of the fracture network is defined as [6]

∅_s = A_PM/A_M,  (23)

where A_M represents the cross-sectional area of the representative elementary volume in which the fracture network is located, and A_PM represents the total area of fracture pores on this cross section.
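The Yu-Li relation (21) is simple enough to evaluate directly; the sketch below does so for illustrative values that are not taken from the paper:

```python
# Quick evaluation of the Yu-Li porosity relation, Eq. (21):
# phi = (lambda_min / lambda_max) ** (d_E - D_p)
# (parameter values below are illustrative, not taken from the paper).
def yu_li_porosity(lam_min, lam_max, D_p, d_E=2):
    """Porosity of a fractal pore population in d_E Euclidean dimensions."""
    return (lam_min / lam_max) ** (d_E - D_p)

print(yu_li_porosity(lam_min=0.01, lam_max=1.0, D_p=1.5))  # -> 0.1
```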
According to equations (5), (7), and (19), we can get the total cross-sectional area of the fractures, equation (24) [6]. Inserting equation (22) into equation (24) yields equation (25), where the porosity φ_M is used in the two-dimensional form of equation (22), i.e., d_E = 2. Relationship between Surface Density and the Self-Similarity Index. According to equation (5), the total length of the fractures on the cross section of the representative elementary volume follows as equation (26). Inserting equation (22) into equation (26) yields equation (27). In a two-dimensional fracture network, the surface density refers to the density of the fractures (not of a single fracture) on the cross section of a unit cell, which is defined by [43]

D = L_total/A_M, (28)

where D is the surface density of the fractures and L_total is the total length of all fissures on the cross section of the representative elementary volume, which is related to the complex network model. Equations (23), (25), and (27) are inserted into equation (28) to get the surface density of the fractures, equation (29). Equation (29) shows that the surface density of a two-dimensional complex fracture network is a function of the fracture porosity φ_M, the self-similarity index c, the power index d_k related to the fractal dimension, the proportionality coefficient β, and the degree k_max of the largest node. In order to study the relationship between the surface density of fracture networks and the self-similarity index, the surface densities predicted for the complex fracture network are compared with those of the four random fracture networks generated by Zhang and Sanderson [43] with the numerical method of self-avoiding walks. In their simulation, the critical fractal dimensions lie in a narrow range from 1.22 to 1.38 (average 1.30) for the critical clusters, with the lower limit of length varying from 0.005 to 1.5 m, the dispersion angle of the fracture direction from 0 to 50°, and the exponents from 1.2 to 1.8. Therefore, in the calculation, the minimum node degree is taken to be 1 and the maximum degree to be 300, according to the same ratio coefficient. Meanwhile, the average power index is 2 and the average self-similarity index is 1.65 through equation (4), and the average porosity φ_M is calculated through equation (22). From Figure 1, it can be observed that the predicted results are in good agreement with the numerical simulations. Meanwhile, Figure 1 shows that the surface density of the fracture network increases with the self-similarity index. Figure 2 shows the relationship between the fracture surface density and the porosity when the maximum node degree k_max is 300 and β is 0.006. It can be observed from Figure 2 that the surface density of the fracture network increases with the fracture porosity. This is because the larger the porosity, the greater the pore area of the fracture network; and, for a given β, the longer the total length of the fractures, the stronger the connectivity, as mentioned above. This result is consistent with the simulations of Miao et al. [6]; changing the numerical values of the parameters does not affect the general trend. Complex Network Model of Permeability of Fractured Rocks. Generally, the production of low-permeability reservoirs often depends on the seepage system of the fracture network. When there are differences of temperature and pressure in the system, there will be fluid flow or heat transfer within the fracture network.
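Since the explicit form of equation (29) is not reproduced here, the sketch below illustrates only the definition (28): the surface density as total trace length per unit cross-sectional area. The toy numbers are our own.

```python
import numpy as np

def surface_density(trace_lengths, cell_area):
    """Surface density D = L_total / A_M, equation (28): total fracture
    trace length per unit cross-sectional area of the representative
    elementary volume."""
    return np.sum(trace_lengths) / cell_area

# ten traces of 0.5-2.0 m in a 1 m^2 cell
rng = np.random.default_rng(1)
print(surface_density(rng.uniform(0.5, 2.0, 10), 1.0), "m/m^2")
```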
In these processes, however, the laws of mass, momentum, and energy transfer among fluids are very complicated. Moreover, the geometric properties of the fractures, including density and surface roughness, cannot be determined deterministically. For fractured reservoirs, we can proceed from the macro-heterogeneity, because the degree distribution, length, aperture, and orientation of the fractures are often random and disordered. Complex networks provide an effective method for representing such irregular objects. The topological model of a complex network normally considers the positional relationship between nodes but not their shape and size. Therefore, the network space determined by the azimuth and the dip angle has an important influence on the seepage characteristics of the fracture network. Nevertheless, the spatial orientation of fractures is usually random, and the number of fractures in space is so large that it is almost impossible to express the orientation of each fracture precisely [44]. Generally, the statistical method used in the engineering field is adopted to represent the position of the fracture network, namely, taking the average values of the fracture dip angle and the fracture azimuth [45]; this is often done in petroleum engineering, shale gas exploitation, and geothermal energy extraction. Therefore, in this paper, we assume that the average dip angle of the complex fracture network is θ and the average azimuth of the fractures is α, as shown in Figure 3. The cubic law for a single fracture is based on the parallel-plate model, and it has become the basic theory of network seepage in fractured rocks; it is usually considered simple and effective. The flow rate through a fracture along the flow direction can be described by the famous cubic law [46,47]:

q = a^3 l Δp/(12 μ L_0), (30)

where L_0 denotes the length of the representative elementary volume, a denotes the fracture aperture, l denotes the fracture trace length, Δp denotes the pressure drop across a fracture along the flow direction, and μ denotes the dynamic viscosity of the fluid. If the spatial orientation of the fracture is considered, the flow rate of a single fracture can be expressed with an orientation factor in α and θ, equation (31) [6,48]. The total flow rate of fluid through a set of complex fracture networks, equation (32), is obtained by integrating equation (31) from the minimum degree to the maximum degree over a unit cross section. In general, k_min ≪ k_max; according to equation (4) and [35], 1 < c < 2.3 in the two-dimensional plane, so (k_min/k_max)^(4/d_k − c + 1) ≪ 1, and consequently equation (32) can be simplified to equation (33). Equation (33) shows that the total flow rate of fluid in the complex fracture network is related to the self-similarity index c, the fractal-dimension-related power index d_k, the fracture azimuth α, and the fracture dip angle θ, and that the flow rate is very sensitive to the maximum node degree k_max. Darcy's law for Newtonian fluid flow in porous media is given by [6]

Q = K A Δp/(μ L_0), (34)

where K is the permeability and A is the cross-sectional area. The permeability of the complex fracture network, equation (35), is obtained by inserting equation (33) into equation (34). Figure 3: the average orientation of fractures in three-dimensional space; the plane of the coordinate axes is the horizontal plane, and the direction of water flow is along the x-axis. The angle α between the fracture direction and the y-axis is the azimuth of the fracture; the angle θ between the fracture plane and the horizontal plane is the dip angle of the fracture.
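A minimal numerical transcription of the cubic law (30); the water-like parameter values are our own illustrative choices, and SI units are assumed.

```python
def cubic_law_flow(a, l, dp, mu, L0):
    """Cubic law, equation (30): volumetric flow rate through a single
    parallel-plate fracture of aperture a and trace length l, driven by
    a pressure drop dp over the cell length L0 in a fluid of dynamic
    viscosity mu."""
    return a**3 * l * dp / (12.0 * mu * L0)

# 0.1 mm aperture, 1 m trace, 1 kPa drop over 1 m, water (mu ~ 1e-3 Pa*s)
q = cubic_law_flow(a=1e-4, l=1.0, dp=1e3, mu=1e-3, L0=1.0)
print(f"q = {q:.2e} m^3/s")  # ~8.3e-8 m^3/s
```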
By inserting equations (28) and (29) into equation (35), the permeability of the fracture network can be expressed in terms of the surface density of the fractures, equation (36). Equation (36) suggests that the permeability of a medium formed by a complex fracture network is a function of the self-similarity index c, the power index d_k, the structural parameters (the maximum node degree k_max, the fracture surface density D, the fracture azimuth α, and the fracture dip angle θ), and the fracture porosity φ_M. Equation (36) further reveals that the permeability depends strongly on the maximum node degree k_max: the higher the node degree, the stronger the connectivity of the fracture network, and the fluid-carrying capacity increases with the number of flow paths, leading to a higher permeability. Therefore, this model has advantages over traditional models and can better explain the influence of node failure on fluid flow in the fracture network. Results and Discussion. Jafari and Babadagli [49] analyzed 22 different natural fracture networks. The digitized fracture patterns were exported to commercial fracture-modeling software (FRACA) to calculate their equivalent fracture-network permeability. A 3D model with a grid-block size of 100 m × 100 m × 10 m was constructed. Each digitized 2D fracture pattern (i.e., the digitized fracture traces mapped from outcrops) was imported into the 3D model in such a way that all fractures were considered to touch the top and the bottom of the layer vertically; the maximum fracture length is 2 m and the dip angle of the fractures is 0°. Therefore, in the calculation, the minimum node degree is 1 and the maximum degree is 6. Furthermore, since the parallel-plate model depends mainly on the effective aperture of a single fracture, the actual tortuosity of the fractures is not considered in this simplified model. Via equations (4) and (20), the average power index and the average node degree are calculated. All the structural parameters used in the theoretical calculations are listed in Table 1. Figure 4 shows that the predicted values of our model are in good agreement with the results of the numerical simulations. We now discuss the influence of the model parameters on the permeability. From equation (36), it is observed that the decisive parameters mainly include the fracture porosity φ_M, the fracture dip angle θ, the fracture surface density D, the power index d_k, and the maximum node degree k_max. Figure 5 shows the relationship between the permeability and the fracture porosity of the complex network model at different dip angles. In the calculation, the maximum degree of a fracture node is taken as k_max = 398 (at β = 0.006). It can be seen from Figure 5 that the permeability of the fracture network increases with the fracture porosity. In addition, at the same porosity, the larger the fracture dip angle, the smaller the permeability of the fracture network, because the flow resistance of the fluid increases with the fracture dip angle. Figure 6 shows the relationship between the permeability and the fracture surface density in the complex network model. In this calculation, the maximum node degree k_max = 398, the fracture dip angle θ = 45°, the fracture azimuth α = 0°, and β = 0.006 are taken. It can be seen from Figure 6 that the permeability of the fracture network increases with the fracture surface density. This is because as the surface density of the fractures increases, the porosity of the fractures also increases.
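Because the closed form of equation (36) is not reproduced here, the sketch below instead chains the cubic law (30) with Darcy's law (34) to back out an equivalent permeability for a set of fractures, which is the operational content of equations (32)-(35). The aperture-length coupling uses a = βl from equation (7); all numbers are our own toy values.

```python
import numpy as np

def equivalent_permeability(lengths, beta, dp, mu, L0, A):
    """Sum cubic-law contributions (equation (30)) of fractures with
    apertures a = beta * l (equation (7)), then invert Darcy's law
    Q = K * A * dp / (mu * L0) (equation (34)) for K."""
    lengths = np.asarray(lengths, dtype=float)
    apertures = beta * lengths
    Q = np.sum(apertures**3 * lengths * dp / (12.0 * mu * L0))
    return Q * mu * L0 / (A * dp)

# toy network: 50 traces of 0.1-2 m, beta = 0.006, 1 m^2 cell
rng = np.random.default_rng(2)
K = equivalent_permeability(rng.uniform(0.1, 2.0, 50),
                            beta=0.006, dp=1e3, mu=1e-3, L0=1.0, A=1.0)
print(f"K = {K:.2e} m^2")
```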
Hence, the permeability of the fracture network increases. Figure 7 shows the relationship between the permeability of the complex network model and the power index. In the calculation, the minimum and maximum degrees of the fracture-network nodes are taken as 1 and 6, respectively; equations (4) and (20) are used to calculate the average self-similarity index c_av = 1.67 and the average node degree k_av = 4; and we take the fracture dip angle θ = 0° and β = 0.001. It can be seen from Figure 7 that the permeability of the fracture network decreases slowly as the power index increases. Miao et al. [6] verified that the permeability of the fractal fracture-network model increases slowly with the fractal dimension. Considering equation (4), that is, the internal correlation between the scale-free property of complex networks and the fractal scaling law, it can be concluded that there is a competition between the inhibition of seepage by the power index and the promotion of seepage by the fractal dimension; hence, a discontinuous phase does not always occur. Figure 8 shows the relationship between the permeability of the complex network model and the maximum node degree. When a node has multiple edges connected to it, the number of those edges is the degree of the node, regardless of their direction. In the calculation, we take the fracture dip angle θ = 45° (out of the range 0° to 180°), the fracture azimuth α = 0°, β = 0.006 (within its range of 0.001-0.1 [16]), and the average surface density D = 10 m/m². It can be seen from Figure 8 (the relationship between fracture permeability and the maximum node degree) that the permeability of the fracture network increases sharply with the maximum node degree. Since the connectivity of the entire fracture network depends strongly on the maximum node degree, the corresponding node is equivalent to the connection hub of the entire complex network. When a small number of edges are removed from the network, the overall connectivity of the network is not greatly affected; thus, the complex network has a high robustness against random node destruction. At the same time, if the node with the maximum degree is deliberately attacked, the entire network is quickly paralyzed and the fluid can only flow through a few paths. This is the vulnerability of complex networks to deliberate attacks on nodes. Conclusion. This paper applies complex network theory and a topological model to fractured rocks, describing the fracture network as a hierarchical network with self-similarity. Meanwhile, the surface-density model of the fracture network is obtained based on the power-law distribution relation of the network edges. Then, the permeability model of fractured rocks is deduced in accordance with the famous cubic law, Darcy's law, and complex network theory. Compared with the existing numerical simulations, the predicted results show that the above models are accurate. Besides, the effect of the structural parameters on the permeability of fractured media is also discussed. The permeability of fracture networks increases with increasing porosity and surface density, and it grows with the maximum node degree as a power law with exponent 3/d_k. Data Availability. The data (numerical simulations) used to support the findings of this study are included within the article.
Conflicts of Interest. The authors declare that they have no conflicts of interest regarding the publication of this paper.
6,322.4
2020-10-29T00:00:00.000
[ "Geology", "Engineering" ]
Temporary Loading Prevents Cancer Progression and Immune Organ Atrophy Induced by Hind-Limb Unloading in Mice Although the body's immune system is altered during spaceflight, the effects of microgravity (μG) on tumor growth and carcinogenesis are, as yet, unknown. To assess tumor proliferation and its effects on the immune system, we used a hind-limb unloading (HU) murine model to simulate μG during spaceflight. HU mice demonstrated significantly increased tumor growth, metastasis to the lung, and greater splenic and thymic atrophy compared with mice in constant orthostatic suspension and standard housing controls. In addition, mice undergoing temporary loading during HU (2 h per day) demonstrated no difference in cancer progression and immune organ atrophy compared with controls. Our findings suggest that temporary loading can prevent the cancer progression and immune organ atrophy induced by HU. Further space experiment studies are warranted to elucidate the precise effects of μG on systemic immunity and cancer progression. Introduction To date, over 500 astronauts have traveled to space, with long-term stays of 6 months to 1 year in the International Space Station (ISS) likely to become possible. In the near future, manned space missions are scheduled to reach beyond low Earth orbit, such as return expeditions to the Moon, or to Mars. A mission to Mars will require spending approximately 2.5 years in space: 6 months to travel there, 1.5 years on the surface, and 6 months to return. Space travel is no longer merely a dream. For safe long-term stays in space, it is urgent that we evaluate any detrimental effects on human physiological, behavioral, and psychological health to ensure astronaut health and performance under the specific conditions of outer space. Space radiation, including heavy ions, is one of the main health hazards of spaceflight. Exposure to space radiation on long-duration and exploration spaceflights may lead to an increased risk of cancer [1,2], tissue degeneration, and development of cataracts [3,4], and may affect the central nervous system [5][6][7], cardiovascular system [8], and immune functions [9]. Several factors, including microgravity (µG) [10], are large uncertainties in the projection of these risks and prevent the evaluation of the effectiveness of possible countermeasures. Exposure to µG was found to reduce bone [11], muscle [12], and ventricular [13] masses, and "immune problems" were also associated with spaceflight [14][15][16][17][18][19]. In space shuttle experiments, spleen and thymic masses were reduced in flight mice [20], and significant changes in thymopoiesis were reported in healthy flight astronauts in association with a defined physiological, emotional, and physical stress event [21]. Immune system dysregulation has now been demonstrated to occur during spaceflight and to persist during 6-month orbital spaceflights [17][22][23][24]. These results suggest that immune system aberrations caused by stressors associated with space travel should be included when estimating the risk for pathologies such as cancer. Hind-limb unloading (HU) of rodents was developed in the 1980s to enable the study of mechanisms, responses, and treatments for the adverse consequences of spaceflight.
Although it is used to investigate the effect of weightlessness on the musculoskeletal system, several studies have suggested that HU has a similar impact on other physiological functions, including the immune system, to that experienced during anti-orthostasis and inactivity [25][26][27][28]. Although immunodeficient mice showed no difference in tumor growth, normal mice demonstrated significantly increased tumor growth and greater splenic atrophy during HU compared with controls [29]. In this study, we assessed metastasis in HU mice to investigate cancer progression under µG. In addition, we verified how to prevent cancer progression during HU. Change of Body Weight by Four Suspension Conditions Mice in the suspension groups (HU, temporary loading during HU (TL), and orthostatic suspension (OS)) demonstrated reduced body weight compared with the standard housing group (Con). At 3 days after suspension, there were no statistically significant differences in body weight between the suspension groups. In addition, there were no significant differences in body weight between the HU and TL groups at 21 days after the inoculation of cancer cells (Table 1). Temporary Loading Prevents Immune Organ Atrophy by Hind-Limb Unloading The spleen and thymus in HU mice were shrunken compared with the other experimental groups (Figure 1). Because it was thought that weight loss influenced the size of these organs, we calculated the fresh weight of the spleen and thymus relative to body weight.
The weights of these organs in HU mice were significantly lower than those of the Con and OS groups, although there was a positive correlation between body weight and organ weight (Figure A1A). However, no significant differences in splenic or thymic mass were seen between TL mice and the Con or OS groups (Figure 1). Table 1 notes: Con, standard housing control; HU, hind-limb unloading; TL, temporary loading during HU (2 h per day); OS, orthostatic suspension. Weight changes (% ± standard error) were calculated using the body weight before and after treatment. #, ANOVA test; †, Kruskal-Wallis test. NS, not significant. Figure 1 legend: circles, standard housing control group (Con); diamonds, hind-limb unloading (HU); triangles, temporary loading during HU (2 h per day) (TL); inverted triangles, orthostatic suspension (OS). Error bars indicate standard errors. #, ANOVA test; †, Kruskal-Wallis test. * p < 0.05; NS, not significant. Temporary Loading Prevents Acceleration of Tumor Growth by Hind-Limb Unloading Tumor growth in the HU group was significantly accelerated compared with that of the other experimental groups. TL mice had slower tumor growth compared with HU mice. In addition, there were no statistically significant differences in tumor growth between the TL and Con or OS groups (Figure 2; legend as in Figure 1). Temporary Loading Prevents Acceleration of Metastasis by Hind-Limb Unloading The number of metastatic nodules was higher in HU mice compared with that of the other experimental groups. The TL group demonstrated 32.1% fewer metastatic nodules compared with HU, and there were no statistically significant differences in the number of metastases between the TL and Con groups. Although there were no statistically significant differences between the Con and OS groups, the number of metastases in the OS group was significantly lower than in the other suspension groups (Figure 3). Additionally, a negative correlation between immune organ weight and cancer progression was also identified (Figure A1B). Discussion In this study, we demonstrated the effects of HU on immune organ atrophy (Figure 1) and the accelerated tumor growth of osteosarcoma in vivo (Figure 2). Our data agree with a previous report using spindle cell carcinoma in the HU mouse model [29]. To clarify the potential for metastasis under HU, we used LM8 cells with high metastatic potential to the lung [30]. The increased lung metastasis during HU in our experiment can almost certainly be explained by changes in anti-tumor immune responses (Figure 3). There was also a negative correlation between immune organ weight and indicators of cancer progression, such as tumor volume and number of metastases (Figure A1B). Immune organ atrophy may be caused by hormones such as sclerostin and osteopontin through the loss of mechanical loading to the bones [26,31]. It was reported that the multifunctional hormone osteopontin plays diverse roles in bone biology, immune regulation, and cancer metastasis [26].
Many studies have investigated virus infection in relation to immune system dysregulation during spaceflight or HU [32][33][34], but there is currently very little data regarding cancer progression [35]. The immune system usually protects the body, from tumor initiation to metastatic progression, by the destruction of abnormal cells [36]. The current study suggests the possibility that prolonged µG during a long-term stay in space may increase the risk of cancer incidence and mortality. Space radiation is a cause of increased cancer risk [1,2]. During a long-term deep space mission outside Earth's protective magnetic field, astronauts will be constantly exposed to galactic cosmic rays (GCRs) and occasionally to particles from large solar particle events. Because the energy of some GCR particles is very high, it is difficult to protect astronauts using conventional materials [37]. This phenomenon may increase the risk of cancer development in conjunction with extended µG duration. Importantly, cancer risk assessment for space radiation based on dose-response data from static radiation conditions, while disregarding the influence of µG, might underestimate the potential risk posed to astronauts. In the near future, astronauts and civilians who might harbor undetectable micro-cancers may undertake long-term stays in space. Therefore, such increased cancer risk poses a significant problem. This finding raises another unresolved question: How can we prevent cancer progression induced by µG? To answer this, we investigated the effect of TL on lymphoid organ atrophy and cancer progression. We found significant differences between the TL and HU groups using the Student's t-test. This new finding indicates that TL prevents the negative effects of µG. Interestingly, astronauts routinely undertake physical exercise for an average of 2 h per day, incorporating both strength and aerobic training, to counteract the reductions in muscle strength, mass, and cardiorespiratory fitness that occur because of prolonged periods in µG spaceflight (Figures 1-3). It was reported that an additional benefit of performing exercises in space is that it has profound effects on the normal function of the immune system [19,38]. Indeed, exercise was shown to increase the release of certain "myokines", such as IL-7, which is essential for maintaining thymic function and stimulating the release of new T-cells [39]. Our HU method has a significant limitation: HU may not represent a perfect model of µG. Therefore, it will be necessary to verify these results in space-based experiments after feasibility studies have been performed. The space experimental environment was well regulated using newly developed mouse habitat cage units, which were installed in the Multiple Artificial-gravity Research System on the ISS and enabled mice to be exposed to µG, partial gravity, and 1G conditions [11]. These space experiments are critically important to clarify the possibility of cancer progression induced by immune system dysregulation, and to increase our knowledge and promote technological advances to counteract human adaptation during and after prolonged deep spaceflight. Mice Female C3H/HeNJcl mice (7 weeks old) were obtained from Clea Japan, Inc. (Tokyo, Japan). Mice were housed in individual cages in a temperature- and humidity-controlled (23 ± 1 °C and 60 ± 5% relative humidity) room with a 12 h (6 am-6 pm) light-dark cycle.
All experimental animals were procured, maintained, and used in accordance with the Recommendations for Handling of Laboratory Animals for Biomedical Research, following the guidelines of the Animal Care and Experimentation Committee of Gunma University, Showa Campus (No. 18-023; application date: 19 March 2018). Tail Suspension Tail suspension is the most commonly used animal model of µG in outer space. Prior to the tail suspension experiments, mice were allowed to acclimatize to being housed individually in single cages (width 200 × depth 300 × height 130 mm). Briefly, a small (35 × 13 mm) metallic rotary hook (PandaHall, Guangdong, China) was linked to a nylon thread of 0.29 mm diameter (CN500, DUEL Co., Inc., Fukuoka, Japan) by puncturing the sacrococcygeal joint of the mouse with a 23G needle (Terumo Corp., Tokyo, Japan) (Figure 4A,B). The hook was then attached to a small swivel key chain that was connected to an electric suspension device with a digital power supply timer (AD-001, Adachi Factory, Maebashi, Japan). Mice could move along the y-axis and rotate 360 degrees, and therefore had access to all areas of the cage. Tail suspension is widely performed by placing adhesive tape around the tail [40,41]; however, necrosis of the tail end often occurs if the blood flow is inhibited by the tape. Therefore, we used the hook and key chain method, because such detrimental effects were not observed in response to this type of hind-limb suspension (Ohira et al., unpublished data). Mouse hind-limbs were maintained just off the cage floor, with the body of the mouse at an angle of approximately 30° from the cage floor. The mice could move freely, and the angle and height of the mice were checked daily (HU group, Figure 4C(b)). The TL mice were released from suspension for 2 h (8 pm-10 pm) per day using the electric suspension device's digital power supply timer (TL group). The orthostatic suspension mice were separated into individual cages under conditions identical to those of the unloading groups, but without tail suspension (OS group, Figure 4C(a)). As a control experiment, mice were kept under standard housing conditions without introduction of a thread into the tail (Con group).
Experimental Schedule A schematic of the workflow for the experiments is shown in Figure 5. The mice were divided into four groups: Con, HU, TL, and OS. LM8 cells (2 × 10^6 cells in 50 µL culture medium without FBS, administered by subcutaneous injection) were inoculated into the lower right abdominal region on day 3 after tail suspension. We measured tumor size regularly throughout the experiment, and quantified the number of lung metastatic nodules at 21 days post-inoculation. To assess whether the spleen and thymus were diminished at 3 days after tail suspension (n = 25 in total) and at 21 days after LM8 inoculation (n = 37 in total), the spleens and thymi were weighed. Measurement of Tumor Growth The diameters of the tumors (length and width) were measured using a vernier caliper at the time of treatment, and twice a week thereafter. The lengths and widths obtained by superficial two-dimensional measurements were recorded. Tumor volume (TV) in mg was calculated according to the formula TV = (4/3) × π × L × W², where L and W are the length and width in mm, respectively (see the sketch below).
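A minimal transcription of the tumor-volume formula above; the function name and the example caliper readings are our own, while the formula is used exactly as stated in the Methods.

```python
import math

def tumor_volume_mg(length_mm, width_mm):
    """Tumor volume TV = (4/3) * pi * L * W^2 (in mg), with caliper
    length L and width W in mm, as defined in the Methods."""
    return (4.0 / 3.0) * math.pi * length_mm * width_mm ** 2

print(f"{tumor_volume_mg(10.0, 6.0):.0f} mg")  # 10 mm x 6 mm tumor -> ~1508 mg
```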
Measurement of Lung Metastatic Nodules Bilateral lungs were fixed in Bouin solution overnight at day 21 after subcutaneous tumor cell implantation into the mice. Pulmonary metastatic nodules on the surfaces of all the pulmonary lobes were counted macroscopically. Statistical Analysis All values were expressed as the mean ± standard deviation (SD), with n indicating the number of independent experiments. EZR (Easy R) free software (version 1.37) was used for the statistical analysis [42]. The Bartlett test was used to analyze the normal distribution of the data. Differences among the groups were analyzed using one-way analysis of variance (ANOVA) or the Kruskal-Wallis test (the non-parametric equivalent of ANOVA) for continuous variables, in accordance with the data normality. Tukey's test was used for the post-hoc analysis of parametric variables analyzed using ANOVA, and post-hoc comparisons for non-parametric variables analyzed using the Kruskal-Wallis test were made using the Steel-Dwass multiple comparison test. A p value of less than 0.05 was considered statistically significant. Conclusions Our study demonstrates the induction of cancer progression and lymphoid organ atrophy by HU. Of note, temporary loading prevented these adverse effects. This finding may have important implications for long-term space travel. It is necessary to verify these findings by performing experiments in space. Figure A1 legend: (A) positive correlation between body weight and organ weight; (B) negative correlation between number of metastases or tumor volume and thymus weight. Black dots, standard housing control (Con) (n = 13); red dots, hind-limb unloading (HU) (n = 10); green dots, temporary loading during HU (2 h per day) (TL) (n = 6); blue dots, orthostatic suspension (OS) (n = 8). R, correlation coefficient.
5,731.6
2018-12-01T00:00:00.000
[ "Biology" ]
Gauge-invariant formulation of time-dependent configuration interaction singles method We propose a gauge-invariant formulation of the channel orbital-based time-dependent configuration interaction singles (TDCIS) method [Phys. Rev. A 74, 043420 (2006)], one of the powerful ab initio methods to investigate electron dynamics in atoms and molecules subject to an external laser field. In the present formulation, we derive the equations of motion (EOMs) in the velocity gauge using gauge-transformed orbitals, not fixed orbitals, that are equivalent to the conventional EOMs in the length gauge using fixed orbitals. The new velocity-gauge EOMs avoid the use of the length-gauge dipole operator, which diverges at large distances, and allow one to exploit the computational advantages of the velocity-gauge treatment over the length-gauge one, e.g., a faster convergence in simulations with intense and long-wavelength lasers, and the feasibility of exterior complex scaling as an absorbing boundary. The reformulated TDCIS method is applied to an exactly solvable model of a one-dimensional helium atom in an intense laser field to numerically demonstrate the gauge invariance. We also discuss the consistent method for evaluating the time derivative of an observable, relevant, e.g., in simulating high-harmonic generation. I. INTRODUCTION The time-dependent configuration interaction singles (TDCIS) method is one of the powerful ab initio methods to investigate laser-driven electron dynamics in atoms and molecules. In the TDCIS method, the time-dependent electronic wavefunction is given by the configuration interaction (CI) expansion,

Ψ(t) = C_0(t) Φ + Σ_ia C_ia(t) Φ_ia, (1)

where Φ is the ground-state Hartree-Fock (HF) wavefunction, and Φ_ia is a singly-excited configuration-state function (CSF), replacing an occupied HF orbital φ_i in Φ with a virtual (unoccupied in Φ) orbital φ_a; the electron dynamics is described through the time evolution of the CI coefficients, C_0 and {C_ia}. Compared to more involved ab initio wavefunction-based approaches [25], such as time-dependent multiconfiguration self-consistent-field (TD-MCSCF) methods [26][27][28][29][30][31][32][33], time-dependent R-matrix based approaches [34][35][36], or the time-dependent reduced density-matrix approach [37,38], distinct advantages of the TDCIS method include a low computational cost and conceptual simplicity in analyzing simulation results. Furthermore, an equivalent, effective one-electron theory with coupled channels has been developed [2], which introduces an orbital-like quantity, called the channel orbital, and rewrites the EOMs for the CI coefficients as those for channel orbitals {χ_i(r, t)} with no reference to virtual orbitals. This reformulation removes the bottleneck of the CI coefficient-based TDCIS method, namely the need to compute all (or at least sufficiently many, including bound and continuum) virtual orbitals prior to the simulation, and is thus particularly useful in grid-based simulations. Despite this advantage, numerical applications of the channel orbital-based TDCIS method have been limited to Refs. [2,14,15] for a one-dimensional Hamiltonian and Ref. [1] for noble-gas atoms with a Hartree-Slater potential, as far as we know, and the vast majority of applications to date have adopted the CI coefficient-based approach, except for the use of {χ_i} as intermediate quantities in evaluating photoelectron spectra [18].
The preference for the CI coefficient-based approach might be partially due to the high symmetry of atomic systems, for which the stationary Hartree-Fock operator decouples for different angular momenta [4], making it a relatively feasible task to obtain all virtual orbitals (within given radial grids or radial basis functions) for the lowest few angular momenta. The channel orbital-based approach would be better suited, on the other hand, to simulations of electron dynamics with intense and/or long-wavelength laser fields, which require a much longer angular-momentum expansion [39][40][41], and moreover to grid-based molecular applications, where obtaining a sufficient spectrum of virtual levels could be unacceptably expensive. However, the TDCIS method, in either the CI coefficient-based or the channel orbital-based formulation, suffers from a lack of gauge invariance, as a general consequence of relying on a truncated CI expansion with fixed orbital functions. Previously, the length gauge (LG) has been employed, e.g., in Refs. [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16], and the velocity gauge (VG) in Refs. [17][18][19][20][21][22][23][24]. Although the gauge dependence of the TDCIS method using fixed orbitals was noted already in Ref. [2], a comparative assessment of the LG and VG treatments (within grid-based TDCIS) has not been reported, to the best of our knowledge, except for being briefly mentioned in Ref. [42]. In particular, the channel orbital-based approach [2] has been applied only in the LG [1,2], and, as shown below in this paper, the VG treatment with fixed orbitals is not very appropriate for applications to high-field phenomena. This is a serious drawback, since for an efficient simulation of molecules it is highly desirable to take advantage of the velocity-gauge treatment, e.g., the feasibility of exterior complex scaling [43,44] as an absorbing boundary, to reduce the computational cost related to the number of grid points. In the present work, we propose a gauge-invariant reformulation of the channel orbital-based TDCIS method. To this end, instead of applying the fixed-orbital TDCIS ansatz to the velocity-gauge time-dependent Schrödinger equation (TDSE), we adopt the formulation using unitarily rotated orbitals φ̄_p(t) = Û(t)φ_p, where Û(t) is the gauge-transformation operator connecting the (exact) solutions of the TDSE in the LG and VG. The resulting EOMs in the reformulated VG are equivalent to the LG ones with fixed orbitals by construction, and at the same time allow one to exploit the advantages of velocity-gauge simulations mentioned above. This paper proceeds as follows. In Sec. II, after defining the target Hamiltonian and the gauge transformation in Sec. II A and reviewing the TDCIS method using fixed orbitals in both the CI coefficient-based [Sec. II B] and channel orbital-based [Sec. II C] approaches, we present the gauge-invariant reformulation in Sec. II D, and a consistent method for evaluating the time derivative of one-electron observables in Sec. II E. Then, in Sec. III, we apply the channel orbital-based TDCIS method, using the LG with fixed orbitals, the VG with fixed orbitals, and the reformulated VG, to a model one-dimensional (1D) Hamiltonian, compare the results of the various TDCIS approaches with numerically exact TDSE results, and demonstrate the importance of the non-Ehrenfest method for computing the dipole acceleration. Finally, concluding remarks are given in Sec. IV. Hartree atomic units are used throughout unless otherwise noted.
A. System Hamiltonian and gauge transformation

Let us consider an atom or a molecule consisting of N electrons interacting with an external laser field. In this work, we restrict our treatment to the clamped-nuclei approximation and to the electron-laser interaction within the electric dipole approximation. Then the exact description of the system dynamics is given by the solution Ψ_L(t) of the TDSE,

i (∂/∂t) Ψ_L(t) = H_L(t) Ψ_L(t), (3)

with the system Hamiltonian H_L(t) = H_0 + H^ext_L(t), where H_0 is the field-free electronic Hamiltonian

H_0 = Σ_k h(r_k, p_k) + Σ_{k<l} 1/|r_k − r_l|,

where r_k and p_k = −i∇_k are the coordinate and canonical momentum of an electron, and h(r, p) = p²/2 + v_n(r), with v_n being the electron-nucleus interaction. Here we are considering the LG treatment, where the electron-laser interaction H^ext_L is given by

H^ext_L(t) = E(t) · Σ_k r_k,

where E(t) is the laser electric field. As is well known, the system dynamics is equivalently described in the VG, whose wavefunction Ψ_V is connected with the LG one through Ψ_V(t) = Û(t) Ψ_L(t), with the unitary transformation

Û(t) = exp[ −i A(t) · Σ_k r_k + (i N/2) ∫^t dt′ |A(t′)|² ], (7)

where A(t) = −∫^t dt′ E(t′) is the vector potential, and we arbitrarily include the second term in the exponential, which is a c-number, to avoid the appearance of terms proportional to |A|² in the subsequent equations. Then the VG TDSE reads

i (∂/∂t) Ψ_V(t) = H_V(t) Ψ_V(t), (8)

with H_V(t) = H_0 + H^ext_V(t) and H^ext_V(t) = A(t) · Σ_k p_k. One should carefully note that the present proof of the equivalence of the LG and VG treatments, Eqs. (3) and (8), with the transformation of Eq. (7), applies only to the exact solution of the TDSE. See, e.g., Refs. [45][46][47] for deeper discussions of the gauge transformation within the TDSE, and Ref. [25] for the gauge invariance of TD-MCSCF methods. For a compact presentation of the many-electron theory, we rewrite the system Hamiltonians in second quantization,

Ĥ_L(t) = Ĥ_0 + E(t) · r̂, Ĥ_V(t) = Ĥ_0 + A(t) · p̂, (10a, 10b)

where {ĉ†_pσ} and {ĉ_pσ} are the creation and annihilation operators, respectively, for the set of spin-orbitals given as the direct product {φ_p} ⊗ {s↑, s↓} of orthonormal spatial orbitals {φ_p} and up-spin (down-spin) functions s↑ (s↓). The one-electron operators ĥ, r̂, and p̂ are defined, respectively, as

ĥ = Σ_pq Σ_σ h_pq ĉ†_pσ ĉ_qσ, r̂ = Σ_pq Σ_σ r_pq ĉ†_pσ ĉ_qσ, p̂ = Σ_pq Σ_σ p_pq ĉ†_pσ ĉ_qσ,

where h_pq, r_pq, and p_pq are the matrix elements of h, r, and p, respectively, in terms of {φ_p}, and Ĥ_0 collects ĥ and the two-electron Coulomb interaction. The TDSEs of the LG, Eq. (3), and of the VG, Eq. (8), read

i (∂/∂t)|Ψ_L⟩ = Ĥ_L|Ψ_L⟩, i (∂/∂t)|Ψ_V⟩ = Ĥ_V|Ψ_V⟩, (14a, 14b)

with the transformation

|Ψ_V⟩ = Û(t)|Ψ_L⟩, Û(t) = exp[ −i A(t) · r̂ + (i/2)(∫^t dt′ |A(t′)|²) N̂ ], (16)

where N̂ = Σ_µσ ĉ†_µσ ĉ_µσ is the number operator. In this work, we consider a closed-shell system with an even number of electrons, and choose as {φ_p} the time-independent Hartree-Fock (HF) orbitals satisfying the canonical restricted HF equation

ĥ φ_p + Σ_j [ 2 Ŵ_{φ_j φ_j} φ_p − Ŵ_{φ_j φ_p} φ_j ] = ε_p φ_p,

where ε_p is the orbital energy, and Ŵ_{φφ′} is the electrostatic potential of the product φ*(r)φ′(r) of given orbitals, defined in real space as

Ŵ_{φφ′}(r) = ∫ dr′ φ*(r′) φ′(r′)/|r − r′|.

As usual, we separate the full set of HF orbitals {φ_p} into the occupied orbitals {φ_i}, which are occupied in the HF ground-state wavefunction |Φ⟩ (also referred to as the reference), and the virtual orbitals {φ_a}, which are unoccupied in |Φ⟩. B. Review of CI coefficient-based TDCIS with fixed orbitals We write the second-quantized version of Eq. (1), for the LG case, as

|Ψ_L(t)⟩ = C_0(t)|Φ⟩ + Σ_ia C_ia(t)|Φ_ia⟩, (19)

where |Φ_ia⟩ = (1/√2) Σ_σ ĉ†_aσ ĉ_iσ |Φ⟩. The equations of motion for the CI coefficients have been derived [2] by inserting Eq. (19) into the LG TDSE, Eq. (14a), and closing from the left with the reference and the singly-excited CSFs,

i (d/dt) C_0 = ⟨Φ|Ĥ_L|Ψ_L⟩, i (d/dt) C_ia = ⟨Φ_ia|Ĥ_L|Ψ_L⟩. (20)

A conceptually more proper derivation of Eqs. (20) is based on the Dirac-Frenkel variational principle, which considers the Lagrangian

L_L(t) = ⟨Ψ_L| (i ∂/∂t − Ĥ_L) |Ψ_L⟩ (21)

and requires ∂L_L/∂C*_0 = ∂L_L/∂C*_ia = 0. Substituting Ĥ_L of Eq. (10a) into Eqs.
(20), using the Slater-Condon rules for the Hamiltonian matrix elements, and noting the canonical condition f_pq = ε_p δ_pq, the EOMs for the length gauge, Eqs. (22), are derived as in Ref. [2], where the action of the operator F̂_i appearing there on a given orbital φ is defined by Eq. (23). References [17][18][19][20][21][22][23][24] have used the same expansion in terms of fixed CSFs also in the VG case,

|Ψ_V(t)⟩ = D_0(t)|Φ⟩ + Σ_ia D_ia(t)|Φ_ia⟩, (24)

and required Eqs. (20) to hold, with Ĥ_L, C_0, and C_ia replaced by Ĥ_V, D_0, and D_ia. This is equivalent to considering the following Lagrangian:

L_V(t) = ⟨Ψ_V| (i ∂/∂t − Ĥ_V) |Ψ_V⟩, (25)

whose stationarity conditions yield the VG EOMs for D_0 and {D_ia}, Eqs. (26). C. Review of channel orbital-based TDCIS with fixed orbitals An interesting reformulation of the above-described TDCIS method, as mentioned in Sec. I, has been proposed in Ref. [2], which introduces the time-dependent channel orbitals

|χ_i(t)⟩ = Σ_a C_ia(t)|φ_a⟩, (27)

which collect all the single excitations originating from an occupied orbital |φ_i⟩, and rewrites the EOMs in terms of C_0 and {|χ_i⟩}, Eqs. (28), with no reference to virtual orbitals, where P̂ = 1̂ − Σ_j |φ_j⟩⟨φ_j|. According to these EOMs and the initial conditions [C_0(t → −∞) = 1, and {C_ia(t → −∞) = 0} ⇔ {χ_i(t → −∞) ≡ 0}], the channel orbitals |χ_i⟩ become gradually populated through the laser-electron interaction, measuring the excitation of an electron out of |φ_i⟩. See Ref. [2] for interesting properties of the channel orbitals. It is also possible to formulate the channel orbital-based scheme on the basis of the velocity-gauge TDCIS using fixed orbitals, although this has not been considered previously. We therefore introduce the analogous channel orbitals built from the VG coefficients {D_ia}, Eq. (29), and rewrite Eqs. (26) as Eqs. (30). Hereafter, we refer to the method based on Eqs. (28), i.e., the channel orbital-based TDCIS in the length gauge with fixed orbitals, simply as the LG method, and to that based on Eqs. (30), i.e., the channel orbital-based TDCIS in the velocity gauge with fixed orbitals, as the VG method, for notational brevity. D. Channel orbital-based TDCIS in the velocity gauge with rotated orbitals The gauge dependence of the LG and VG treatments, Eqs. (28) and (30), results from the fact that the ansätze of Eqs. (19) and (24), both using fixed orbitals, cannot be connected by the transformation of Eq. (16), as is generally the case for a truncated CI expansion using fixed orbitals. For a method to be gauge invariant, the underlying Lagrangians in the LG and VG cases should be numerically the same when evaluated with the solutions of the respective EOMs; this does not hold in the present case, L_L(t) ≠ L_V(t), with Eqs. (21) and (25). Thus we define the total wavefunction |Ψ̄_V(t)⟩, transformed from |Ψ_L(t)⟩ to the velocity gauge, as

|Ψ̄_V(t)⟩ = Û(t)|Ψ_L(t)⟩ = C_0(t)|Φ̄⟩ + Σ_ia C_ia(t)|Φ̄_ia⟩, (31)

with |Ψ_L(t)⟩ constructed from the solution of the CI coefficient-based EOMs in the LG, Eqs. (22). Here |Φ̄⟩ = Û(t)|Φ⟩ and |Φ̄_ia⟩ = Û(t)|Φ_ia⟩ = (1/√2) Σ_σ c̄†_aσ c̄_iσ |Φ̄⟩ are the reference and singly-excited CSFs constructed with the unitarily rotated orbitals, i.e., |φ̄_p⟩ = Û|φ_p⟩ and c̄_pσ = Û(t) ĉ_pσ Û^(−1)(t). It should be noted that |Ψ̄_V⟩ cannot, in general, be rewritten in the form of Eq. (24). Associated with this wavefunction, we consider the following Lagrangian:

L̄_V(t) = ⟨Ψ̄_V| (i ∂/∂t − Ĥ_V) |Ψ̄_V⟩. (32)

The equivalence of this approach to the LG treatment is readily confirmed by seeing that L̄_V(t) = L_L(t). One may naively expect that L̄_V of Eq. (32), which differs from L_V of Eq. (25) only by the replacement of Ψ_V with Ψ̄_V, leads to the EOMs of Eqs. (26). This is not the case, however, due to the time dependence of the rotated CSFs, e.g., ⟨Φ̄|(d/dt)|Φ̄_ia⟩ = iE(t) · ⟨Φ̄|r̂|Φ̄_ia⟩, and after extracting this time dependence, Eq. (32) takes the form of Eq. (34), in which the derivative ∂^c_t time-differentiates the CI coefficients only. Now requiring ∂L̄_V/∂C*_0 = ∂L̄_V/∂C*_ia = 0, or, equivalently, substituting the back transformation |φ_p⟩ = Û^(−1)|φ̄_p⟩ into Eqs.
(22), derives the CI coefficient-based EOMs in the rotated frame, Eqs. (35), where the operator F̂_i is replaced by its rotated counterpart, given by Eq. (23) with {φ_j} replaced by {φ̄_j}. Equations (35) are the CI coefficient-based TDCIS EOMs based on the Lagrangian of Eq. (32). Although this approach is guaranteed to be equivalent to the CI coefficient-based LG TDCIS, it brings no numerical gain over Eqs. (22), since it peculiarly includes both E·r̂ and A·p̂ and requires an extensive gauge transformation of all occupied and virtual orbitals. Nonetheless, a useful method can be derived if one switches to the channel orbital-based scheme by defining the rotated channel functions,

|χ̄_i(t)⟩ = Û(t)|χ_i(t)⟩. (36)

Then we use dÛ/dt = i(E·r̂ + N|A|²/2)Û and note the corresponding transformation of the operators, which yields the rVG channel-orbital EOMs, Eqs. (37). Although several terms in these EOMs still involve the dipole operator, they all act on the (rotated) occupied orbitals, which are localized around the nuclei, thus posing no difficulty in enjoying the same advantages of VG propagation of the orbitals [39][40][41]. E. Evaluation of the time derivative of an observable Let us next consider how to compute the expectation value of a one-electron operator, ⟨Ô⟩(t) = ⟨Ψ(t)|Ô|Ψ(t)⟩, and its time derivative d⟨Ô⟩/dt. For the exact solution of the TDSE, |Ψ̇⟩ = −iĤ|Ψ⟩, the time derivative is given by

d⟨Ô⟩/dt = ⟨Ψ̇|Ô|Ψ⟩ + ⟨Ψ|(∂Ô/∂t)|Ψ⟩ + ⟨Ψ|Ô|Ψ̇⟩ (38a)
        = ⟨Ψ|(∂Ô/∂t)|Ψ⟩ − i⟨Ψ|[Ô, Ĥ]|Ψ⟩, (38b)

the latter known as the Ehrenfest expression. For an approximate method, however, the Ehrenfest theorem, Eq. (38b), generally does not hold, and one should explicitly evaluate the time derivative as in Eq. (38a). Important exceptions include theories using time-dependent orbitals evolving so as to satisfy the time-dependent variational principle, such as time-dependent Hartree-Fock (TDHF), TD-MCSCF, and time-dependent density functional theory. See Ref. [41] for more details. The TDCIS expectation value of a one-electron operator Ô is given [2], in terms of C_0, {φ_j}, and {χ_j}, by Eq. (39) in the LG case. That for the VG is given by replacing C_0 with D_0 in the same expression, and that for the rVG by replacing {φ_j, χ_j} with {φ̄_j, χ̄_j}. The expression for the time derivative in the LG case, Eq. (40), is derived by using Eqs. (28) in Eq. (38a). The VG expression is also given by the same equation with C_0 replaced by D_0, and that for the rVG is Eq. (41). Although Eqs. (40) and (41) look rather complicated, their evaluation is straightforward given the time derivatives of the working variables C_0, {χ_i}, etc., which are necessary in any case to propagate the EOMs. III. NUMERICAL EXAMPLES In this section, we numerically apply the channel orbital-based TDCIS method in the LG, VG, and rVG to the 1D model helium atom, using a computational code developed by modifying an existing TDHF code used in our previous work [30,33,48]. The field-free electronic Hamiltonian is given by a soft-core model for the two electronic coordinates z_1 and z_2, and the laser-electron interactions E(t)·r and A(t)·p are replaced by E(t)z and A(t)p_z = −iA(t)∂/∂z, respectively, in Eqs. (28), (30), and (37). The orbitals are discretized on equidistant grid points with spacing Δz = 0.4 within a simulation box −1000 ≤ z ≤ 1000, with an absorbing boundary implemented by a mask function of cos^(1/4) shape at the 10% side edges of the box. Each EOM is solved by the fourth-order Runge-Kutta method with a fixed time step (1/10000 of an optical cycle). Spatial derivatives are evaluated by the eighth-order finite-difference method, and spatial integrations are performed by the trapezoidal rule. We consider a laser electric field E(t) that is nonzero for 0 ≤ t ≤ τ and E(t) = 0 otherwise, with a wavelength λ = 2πc/ω_0 = 750 nm, a foot-to-foot pulse length τ of three optical cycles, and a peak intensity I_0 = E_0², for I_0 = 5 × 10^14 W/cm² and I_0 = 10^15 W/cm².
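A minimal sketch of the grid and absorbing-mask setup described above, assuming the stated parameters (Δz = 0.4, box [−1000, 1000], cos^(1/4) mask over the outer edges) and the soft-core potential v_nuc(z) = −2/√(z² + 1) implied by the gradient quoted in the following section. The sin² pulse envelope and the exact mask width are our own assumptions of common choices, since neither is reproduced explicitly here.

```python
import numpy as np

# grid: dz = 0.4 on [-1000, 1000] (atomic units)
dz = 0.4
z = np.arange(-1000.0, 1000.0 + dz, dz)

# soft-core nuclear potential of the 1D helium model,
# v_nuc(z) = -2 / sqrt(z^2 + 1); its gradient 2 z (z^2 + 1)^(-3/2)
# enters the dipole-acceleration expression
v_nuc = -2.0 / np.sqrt(z**2 + 1.0)

# cos^(1/4) absorbing mask on each side of the box (strip width taken
# here as 10% of the half-box, one reading of the stated setup);
# applied multiplicatively to the channel orbitals after each time step
mask = np.ones_like(z)
edge = 0.1 * (z[-1] - z[0]) / 2.0
left, right = z[0] + edge, z[-1] - edge
lo, hi = z < left, z > right
mask[lo] = np.cos(0.5 * np.pi * (left - z[lo]) / edge) ** 0.25
mask[hi] = np.cos(0.5 * np.pi * (z[hi] - right) / edge) ** 0.25

# three-cycle 750-nm pulse with an assumed sin^2 envelope:
# E(t) = E0 * sin(pi t / tau)^2 * sin(w0 t) for 0 <= t <= tau
w0 = 45.5634 / 750.0              # photon energy in a.u. for 750 nm
tau = 3 * 2.0 * np.pi / w0        # foot-to-foot length, three cycles
E0 = np.sqrt(5e14 / 3.51e16)      # field amplitude for 5e14 W/cm^2

def efield(t):
    return E0 * np.sin(np.pi * t / tau) ** 2 * np.sin(w0 * t) if 0 <= t <= tau else 0.0
```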
The 1D Hamiltonian, computational details, and the applied laser field are the same as those used in Ref. [48], to facilitate comparison with the TDSE results reported there. First, we compare the time-dependent dipole moment $\langle z\rangle(t)$ obtained with the TDCIS approaches with that of the TDSE in Fig. 1, which immediately reveals a strong gauge dependence of the fixed-orbital approaches, i.e., the large difference between the LG and VG results. One should note that the comparison of the LG and VG results alone tells nothing about the preference for either approach; the TDCIS method in both the LG and VG is the first approximation in the hierarchy of CI expansions, which, at the full-CI limit, would be gauge invariant. The point here is that the LG scheme outperforms the VG scheme in comparison to the exact TDSE result, as clearly seen in Fig. 1, which establishes an empirical preference for the LG treatment. On the other hand, the results of the LG and rVG agree perfectly within the graphical resolution, numerically demonstrating the theoretical gauge invariance.

Next, we consider the dipole acceleration $\langle a\rangle(t)$ defined as the time derivative of the kinematic momentum, $\langle a\rangle(t) = d\langle\hat{\pi}\rangle/dt$, where $\hat{\pi} = \hat{p}_z$ for the LG and $\hat{\pi} = \hat{p}_z + A(t)$ for the VG. In the exact TDSE case, applying Eqs. (38) for $\hat{O} = \hat{\pi}$ (also taking into account the trivial, explicit time dependence of $\hat{\pi}(t)$ in the VG case) derives Eqs. (44) and (45), where $\partial v_{\rm nuc}/\partial z = -\partial/\partial z\,[2(z^2+1)^{-1/2}] = 2z(z^2+1)^{-3/2}$ for the 1D Hamiltonian. Numerically achieving the theoretical equivalence of Eqs. (44) and (45) is, however, not guaranteed for an approximate method. We compare the dipole acceleration evaluated according to Eq. (44) and according to Eq. (45) with that of the TDSE in Fig. 2, which clearly shows a better agreement of the results of the former approach with that of the TDSE. From this result, and also from the fact that basing the evaluation on Eq. (44) guarantees that the HHG spectra obtained from the velocity $\langle\pi\rangle(t)$ and the acceleration $\langle a\rangle(t)$, at convergence, properly relate to each other [45], we consider that Eq. (44), together with Eq. (40) or Eq. (41), should be adopted as a consistent method for evaluating the dipole acceleration.

Then we compare the time evolution of the dipole acceleration [Fig. 3] and the HHG spectrum [Fig. 4], obtained as the modulus squared of the Fourier transform of the dipole acceleration, computed with the TDCIS method in the LG, VG, and rVG [based on Eq. (44)], with those of the TDSE. We observe that (1) the LG and rVG results are identical to within the scale of the figure, (2) they also show a good agreement with the TDSE results, and (3) in contrast, the VG results strongly deviate from all the other results. [Fig. 4: HHG spectrum of 1D-He exposed to a laser pulse with a wavelength of 750 nm and an intensity of (a) $5\times10^{14}$ W/cm$^2$ and (b) $1\times10^{15}$ W/cm$^2$; comparison of the TDCIS results in the LG, VG, and rVG with that of the TDSE.] Especially, Fig. 4 shows a remarkable agreement of the TDCIS spectra in the LG and rVG with the TDSE one, suggesting that the TDCIS method would be a useful computational method for studying the HHG process in more complex atoms and molecules, in particular when the present rVG treatment is combined with advanced, velocity gauge-specific computational techniques.

IV. CONCLUSIONS

In this work, we propose a gauge-invariant formulation of the channel orbital-based TDCIS method for ab initio investigations of electron dynamics in atoms and molecules.
Instead of using fixed orbitals in both length-gauge and velocity-gauge simulations, we adopt, in the velocity-gauge case, the EOMs derived with unitarily rotated orbitals $|\tilde{\phi}_p(t)\rangle = \hat{U}(t)|\phi_p\rangle$, using the gauge-transforming operator $\hat{U}(t)$, which replaces the length-gauge operator $E \cdot r$ appearing in the length-gauge EOMs with the velocity-gauge counterpart $A \cdot p$, while keeping the equivalence to the length-gauge treatment. This makes it possible to take advantage of the velocity-gauge simulation over the length-gauge one, e.g., the faster convergence of simulations of atoms interacting with an intense and/or long-wavelength laser field with respect to the maximum angular momentum included to expand the orbitals, and the native feasibility of advanced absorbing boundaries such as exterior complex scaling. Applications to real atoms and molecules with the three-dimensional Hamiltonian will be presented elsewhere.
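As a small post-processing note to Sec. III: spectra of the Fig. 4 type are obtained as the modulus squared of the Fourier transform of the dipole-acceleration time series. The helper below is a hypothetical sketch (the paper does not specify windowing or normalization conventions).

```python
import numpy as np

def hhg_spectrum(a_t, dt, omega0):
    """Return (harmonic order, |FFT of a(t)|^2) for a real-valued dipole
    acceleration sampled with time step dt; omega0 is the carrier frequency."""
    spectrum = np.abs(np.fft.rfft(a_t)) ** 2
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(a_t), d=dt)
    return omega / omega0, spectrum
```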
5,026
2018-02-07T00:00:00.000
[ "Physics" ]
Pathway and protein engineering for biosynthesis

Sustainable biosynthesis of chemicals and efforts to create new molecules of interest require efficient enzymes and pathways, as well as comprehensive tools and technologies to implement this rewiring. Enzymes are the key components for the construction of efficient biosynthetic pathways: enzyme characterization and engineering can help to identify key enzymes and regulatory factors for the construction of biosynthetic pathways, as well as improve enzyme performance. Pathway engineering can help construct biosynthetic pathways and balance metabolic networks to improve biosynthetic efficiency. Tools and technologies facilitate the engineering of enzymes, pathways, and whole cells. This special issue, focusing on "Pathway and Protein Engineering for Biosynthesis", comprises eight review articles and nine original research articles, which highlight and showcase current progress on pathway and protein engineering and their application for biosynthesis.

Enzyme characterization and engineering

Enzymes are the basic components for biosynthesis. For example, microbial synthesis is considered a feasible approach for sustainable terpenoid production, which relies on terpenoid synthases as catalytic enzymes. Ma et al. identified a (−)-bornyl diphosphate synthase from Blumea balsamifera and applied it to the biosynthesis of (−)-borneol in yeast [1]. Corpuz et al. reviewed current progress on protein-protein interface analysis of non-ribosomal peptide synthetases (NRPSs), providing insights for engineering these mega-enzymes [2]. Similarly, Guzman et al. summarized how to use fragment-antigen binding domains as protein crystallization chaperones for the structural study of assembly-line polyketide synthases (PKSs), which are of interest for synthesizing an unusually broad range of medicinally relevant compounds [3]. Glycosyltransferases (GTs) catalyze the transfer of nucleotide-activated sugars to specific acceptors during the biosynthesis of natural product glycosides. He et al. discussed recent progress in the identification and engineering of novel GTs for the synthesis of plant natural products [4]. Cytochrome P450 enzymes (CYPs) catalyze a series of C-H and C–C oxygenation reactions for the biosynthesis of desired chemicals or pharmaceutical intermediates; a review article by Yan et al. provided a comprehensive overview of CYP function in C-H and C–C oxygenation reactions, as well as various strategies for achieving higher selectivity and enzymatic activity [5]. Vitreoscilla hemoglobin (VHB) has been widely used to enhance cellular oxygen transfer and metabolite synthesis in fermentation. Zhang et al. optimized the expression cassette of VHB to improve poly-γ-glutamic acid production in Bacillus licheniformis [6].

Pathway engineering

Even with efficient enzymes, biosynthetic pathways must be carefully balanced to enhance net reaction flux. 3-Hydroxypropionic acid (3-HP) is an important platform chemical that can be easily transformed into other valuable compounds such as acrylic acid, acrylamide, and 1,3-propanediol. Lai et al. optimized the 3-HP biosynthetic pathway and central metabolism in E. coli, which enabled efficient production of 3-HP from syngas-derived acetic acid [7]. Cyanobacteria can utilize CO2 to produce a variety of high value-added products through photosynthesis, which involves complex electron transfer processes. Fan et al.
showcased that enhancing the cellular content of plastoquinone, an important electron carrier, improved the photosynthesis and respiration rates, as well as the cellular lipid and protein contents [8]. Ethanol is predominantly used as a renewable 'drop-in' transportation fuel and a feedstock for the production of other compounds. van Aalst et al. reviewed pathway engineering strategies for improving the ethanol yield of anaerobic fermentation of sugars [9]. For heterologous production of spinosad in Streptomyces albus, An et al. engineered the polyketide skeleton and precursor supply, which resulted in the highest spinosad titer (70 mg/L) reported in a heterologous Streptomyces species [10]. Complex peptide natural products exhibit diverse biological functions and can serve as drug candidates. Wenski et al. gave an overview of biosynthetic pathways and engineering strategies for two main classes of complex peptides: ribosomally synthesized and post-translationally modified peptides, and non-ribosomal peptides [11].

Tools and technologies

Synthetic biology tools and advanced technologies can accelerate the engineering of pathways and enzymes in a high-throughput manner. Two review articles included in this special issue summarize recent progress on technological developments to improve the stress tolerance of microorganisms [12] and on the engineering of pathways and genomes [13], respectively. Base editing technology has opened a new avenue for genome engineering; however, it still suffers from the limited availability of editable sites in the target bacterial genome. Chen et al. developed a broad-spectrum DNase-inactive Cpf1 (dCpf1) variant from Francisella novicida through directed evolution, which enabled specific C-to-T mutations at multiple target sites in the E. coli genome without compromising cell growth [14]. Construction and balancing of biosynthetic pathways require the expression of multiple genes, which is normally realized with different promoters of various strengths. Yan et al. systematically characterized a variety of native promoters and also constructed artificial promoters for metabolic engineering of the methylotrophic yeast Ogataea polymorpha [15], which will help to construct yeast cell factories for methanol biotransformation. For engineering of Saccharomyces cerevisiae, Ambrosio et al. designed and characterized 41 synthetic guide RNA sequences to expand the CRISPR-based genome engineering capabilities, and characterized 20 native promoters and 18 terminators at high temporal resolution [16]. As mentioned above, engineering of methylotrophic yeasts can help to establish methanol biotransformation processes for chemical biosynthesis, but the complex regulation of methanol metabolism hinders rational engineering. Hou et al. carried out a comparative proteomics analysis of Pichia pastoris cultivated on glucose and methanol, which identified several genes that play important roles in methanol utilization [17]. We thank all contributing authors for making this special issue on "Pathway and protein engineering for biosynthesis" possible, and also the reviewers for their time and constructive comments throughout the reviewing process to improve the manuscripts. We hope that readers find these articles interesting and inspiring for their own research.
1,285.2
2022-06-01T00:00:00.000
[ "Biology" ]
Scanning and Splicing Atom Lithography for Self-traceable Nanograting Fabrication

Atom lithography is a unique method to fabricate self-traceable pitch standards and angle standards, but extending its structure area to the millimeter level for applications is challenging. In this paper, on the one hand, we put forward a new approach to fabricate a full-covered self-traceable Cr nanograting by inserting and scanning a Dove prism in the Gaussian beam direction of atom lithography. On the other hand, we extend the structure area along the standing-wave direction by splicing two-step atom deposition. Both nanostructures manufactured via scanning atom lithography and splicing atom lithography demonstrate good pitch accuracy, parallelism, continuity, and homogeneity, which opens a new way to fabricate centimeter-level full-covered self-traceable nanogratings and lays the basis for the application of square rulers and optical encoders at the nanoscale.

Introduction

The utilization of fundamental natural constants plays a key role in the advancement of metrology toward precise, reliable, reproducible, and accurate measurements with improved measurement uncertainties [1]. The recent revolutionary redefinition of the SI units in 2019, based on fundamental constants, satisfies the increasing accuracy demands of science and industry, allowing the development of quantum effect-based measurement techniques and providing in situ traceability to SI units for a wide range of users [2]. Self-traceability characteristics based on natural constants are also useful for accuracy improvements in nanometrology. For example, one-dimensional nanogratings, two-dimensional nanogratings, critical dimension (CD) structures, step heights, and particles are the main kinds of length transfer standards at the nanoscale. Recently, the Si lattice constant has been used intensively to fabricate self-traceable CD standards [3][4][5][6] and step height standards [7][8][9]. It has proven to be an alternative approach to realizing traceability in nanoscale dimension metrology [10][11][12]. Similarly, atom lithography has been demonstrated as a unique way to fabricate self-traceable pitch standards [13][14][15] and angle standards [16]. Atom lithography, or so-called laser-focused atomic deposition, yields an accurate pitch whose value is determined by the absolute transition frequency between two energy levels of an element, such as Cr [13], Fe [17], Al [18], Yb [19], and Ar [20]. Taking Cr atom lithography as an example, a series of one-dimensional nanogratings with different accurate pitches can be fabricated, such as 212.8 nm (λ/2) [15,21], 106.4 nm (λ/4) [22], and 53.2 nm (λ/8) [21]. For the most investigated pitch of 212.8 nm, a previous diffraction measurement yielded a mean pitch measurement uncertainty of 0.0069 nm, which agrees well with the theoretical mean pitch uncertainty limit of 0.0049 nm [15]. Recently, it was approved as a first-class standard reference material (GBW13982) in China. Meanwhile, two-step Cr atom lithography is promising as a natural square ruler at the nanoscale [16]. It is expected to hold a natural orthogonal angle whose error is as small as 0.0027°. Moreover, a precision displacement measurement system based on self-traceable grating interference in the Littrow configuration has been proposed, which shows directly traceable displacement measurement capabilities comparable to those of laser interferometers [23].
Extending the structure area of atom lithography nanogratings is challenging, yet necessary for their application as pitch standards. Generally, the structure area of an atom lithography nanograting is determined by the shape parameters of the focusing laser field and the atom flux source. To guarantee the power density required for focusing, the laser field radius at $1/e^2$ along the Gaussian direction is typically hard to extend beyond 250 μm [13,24,25]. To ensure the collimation of the atomic beam effusion, the width of the atomic beam deposition along the standing-wave direction is generally set to 1 or 2 mm. This area size limits the application of the grating as a natural square ruler and optical encoder. Although an elliptical laser standing-wave field has been introduced to fabricate a millimeter-level nanostructure [26], this method coarsens the line edges of the Cr nanograting, which increases the uncertainty of the pitch standard. Meanwhile, most of the atom flux along the Gaussian direction is not used during the deposition, which wastes the collimated deposition material to some degree.

Motivated by the aspects above, here we propose a new approach to realize a millimeter-level self-traceable nanograting while making full use of the standing-wave energy and atom flux at the same time. Analogous to e-beam lithography, we scanned the standing wave along the Gaussian direction to form a full-covered nanograting. Based on scanning atom lithography, the structure width along the Gaussian direction was extended from 500 to 1500 μm. Based on splicing atom lithography, the structure width along the standing-wave direction was extended from 3 to 4.8 mm. Both methods produced highly parallel, smooth, and unbroken self-traceable lines on the substrate. As expected, scanning and splicing atom lithography open a new way to fabricate self-traceable nanogratings with millimeter-level and even centimeter-level areas. Hence, they lay the basis for highly homogeneous two-dimensional square ruler structures and self-traceable grating interferometers.

Theories

In Cr atom lithography, the dipole force imposed on Cr atoms in the standing wave focuses the atom beam into lines to form self-traceable nanogratings. Their pitch is directly traceable to half of the laser wavelength, which is strictly locked to a specified atomic level transition. The energy level transition of the chromium atom from the ground state $^7S_3$ to the excited state $^7P_4^\circ$ corresponds to the vacuum laser wavelength $\lambda = 425.55$ nm, with a natural linewidth $\Gamma = 5$ MHz. Although the excited state $^7P_4^\circ$ may also decay to the metastable states $^5D_3$ and $^5D_4$, the system can be regarded as a two-level system because of the extremely small probability of these decays. The potential energy function of a two-level system can be written as [27,28]
$$U(x,y,z) = \frac{\hbar\Delta}{2}\ln\!\left[1 + \frac{I(x,y,z)/I_s}{1 + (2\Delta/\Gamma)^2}\right], \qquad (1)$$
where $\hbar$ is the reduced Planck constant, $\Delta$ is the detuning of the laser frequency from the atomic resonance, $I(x,y,z)$ is the light intensity of the laser, and $I_s$ is the saturation intensity of the atomic transition (for the chromium atom, $I_s = 85$ W/m$^2$). The light intensity of the standing-wave field formed by a Gaussian laser propagating along the x-axis and reflected by the mirror can be expressed as [29]
$$I(x,y,z) = I_0 \sin^2(kx)\exp\!\left(-\frac{2y^2}{w_y^2} - \frac{2z^2}{w_z^2}\right), \qquad (2)$$
where x is the distance from the mirror surface; k is the wavenumber of the laser; $w_y$ and $w_z$ are the beam waists of the laser in the y and z directions, respectively; and $I_0$ is the light intensity of the standing-wave field at the anti-nodes. As shown in Eqs.
(1) and (2), the potential energy of the chromium atom vanishes at the nodes of the standing-wave field ($I(x,y,z) = 0$) and takes its extreme value at the anti-nodes on the x-axis ($I(x,y,z) = I_0$). Thus, for blue detuning ($\Delta > 0$), chromium atoms converge at the nodes of the standing-wave field; conversely, for red detuning, the chromium atoms converge at the anti-nodes. In particular, in Eq. (2), if x = 0, that is, on the surface of the mirror, $I(x,y,z) = 0$. This shows that the mirror surface always coincides with a node of the standing-wave field. If we keep the substrate and mirror relatively fixed in atom lithography, the positions of the standing-wave field nodes and anti-nodes on the substrate do not change. Hence, when we translate the standing-wave field up and down parallel to itself, the trajectories of the nodes and anti-nodes remain parallel to the mirror.

Experimental Design for Scanning Atom Lithography

The working element of atom lithography is chromium, obtained from our lab. The experimental system is shown in Fig. 1, and the laser configuration is demonstrated in Fig. 2. The entire experimental setup was placed on an optical platform. To obtain the 425.55-nm blue laser for the experiments, we used laser products from Coherent, Inc. to form a laser light source system with the following components: a diode-pumped all-solid-state laser (Verdi G-12), a continuously tunable single-frequency Ti:sapphire laser (MBR-110), and a frequency doubler (MBD-200). Among them, the Verdi G-12 outputs a 532-nm continuous green laser with a maximum power of 11.5 W. The MBR-110 outputs an 851-nm near-infrared laser with a maximum power of 1.9 W under the pump of the Verdi G-12, with a tunable range of 700-1000 nm. The MBD-200 outputs a 425.55-nm laser with a maximum power of 500 mW after frequency-doubling the output of the MBR-110. The chromium powder evaporated from an effusion cell, which was heated to 1625 °C. The chromium atom flux passed the stabilization laser, the cooling laser, and the standing wave in that order. Collimated Cr atoms were focused to the nodes or anti-nodes of a standing wave grazing across the substrate surface. As mentioned above, when we translate the standing-wave field up and down parallel to itself, the trajectories of the nodes and anti-nodes remain parallel to the mirror. Therefore, we propose a method called scanning atom lithography, which is illustrated in Fig. 3. The main difference of scanning atom lithography from the normal setup is that we insert a Dove prism for the standing-wave adjustment. The Dove prism was produced by Union Optic, Wuhan, China (model DVP0110); the movable height in the vertical direction is 10 ± 0.2 mm, the bottom angle is 45°, the angle error is less than 3′, and the surface roughness is 0.6-1.5 nm. Generally, a Dove prism is designed to invert an image. Here, it was used to change the height of the standing wave in parallel and to "scan" the whole Cr atom beam flux after a slit. In the experiment, the slowly moving Dove prism scanned the whole Cr atom flux to form a full-covered nanograting. Figure 4 shows the standing-wave configuration at different locations along the Gaussian direction. Because the mirror surface always remains a node of the standing wave, scanning does not change the parallelism of the nanograting.
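The focusing potential of Eqs. (1) and (2) can be evaluated in a few lines. The sketch below assumes the standard two-level dipole-potential form used above and the stated Cr parameters; it is an illustration, not the authors' code.

```python
import numpy as np

HBAR = 1.054571817e-34      # reduced Planck constant (J s)
LAM = 425.55e-9             # Cr 7S3 -> 7P4 transition wavelength (m)
K = 2.0 * np.pi / LAM       # laser wavenumber (1/m)
GAMMA = 2.0 * np.pi * 5e6   # natural linewidth (rad/s)
I_SAT = 85.0                # saturation intensity for Cr (W/m^2)

def intensity(x, y, z, I0, wy, wz):
    """Standing-wave intensity, Eq. (2): a node sits at the mirror surface x = 0."""
    return I0 * np.sin(K * x) ** 2 * np.exp(-2.0 * (y / wy) ** 2
                                            - 2.0 * (z / wz) ** 2)

def potential(x, y, z, I0, wy, wz, delta):
    """Two-level dipole potential, Eq. (1) (standard form assumed);
    delta is the detuning in rad/s. Blue detuning (delta > 0) gives
    potential minima at the nodes, focusing Cr atoms into lines there."""
    s = intensity(x, y, z, I0, wy, wz) / I_SAT
    return 0.5 * HBAR * delta * np.log(1.0 + s / (1.0 + (2.0 * delta / GAMMA) ** 2))
```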
Moreover, the prism motion hardly changes the optical path of the standing wave, and only the mirror surface waviness influences the position of each grating profile. Because the surface roughness of the mirror coating can be controlled at the sub-nanometer level, the distance between the mirror and the laser beam waist does not change during scanning. By keeping the superposition of the mirror and the laser beam waist stable, the self-traceable nanograting maintains good uniformity across scales.

Experimental Design for Splicing Atom Lithography

Splicing atom lithography is a method for extending the structure area along the standing-wave direction. It comprises the steps illustrated in Fig. 5. The direction of the incident laser stays unchanged during both processes. In this study, however, we moved the main lens, substrate, and mirror along the standing-wave direction by a certain distance and adjusted the reflection to coincide exactly with the incident laser. The two atom lithography processes form an overlapping area, so that the grating lines are "spliced" in the overlapping area, thus extending the length of the line area along the standing wave. Repeated splicing operations can subsequently be performed as needed, with each operation consistent with the splicing procedure described above. For splicing atom lithography, it is extremely important to control the grating line error in the overlapping area, including the grating line parallelism and the grating pitch consistency. Fortunately, the basic principles of atom lithography guarantee that this splicing is perfect in theory. During the whole process of splicing atom lithography, we moved the substrate, standing-wave beam, and reflection mirror as a whole, and the mirror surface always remained at a node of the standing-wave field. Therefore, for every deposition, the phase of the standing-wave field at a specific position away from the mirror was exactly the same and fixed, as long as there was no relative movement between the reflection mirror and the substrate. This condition lays the theoretical foundation for perfect splicing. In the overlapping area, the result of perfect splicing is that the height of the grating increases, but this has no effect on the calculation of the mean pitch. The length of the grating along the standing-wave direction is thus extended, which provides a solution to the problem of range expansion when the grating is used as an encoder.

Scanning Atom Lithography Along the Gaussian Direction

Next, we conducted the scanning atom lithography experiment as designed. The slit size was 3 mm along the standing-wave direction and 1.5 mm along the Gaussian beam direction. We fixed the Dove prism to a vertical stage (model KSMV6-40ZF) with a total travel range of 6 mm and a sensitivity of 2 μm. During the scanning process, we moved the standing wave vertically by 100 μm every 20 min. The total scanning time was 5 h. For the laser beams, the powers of the stabilization, cooling, and focusing laser beams were 8, 25, and 60 mW, respectively. The cutting proportion of the focusing laser beam was approximately 45% during the whole process. Figure 6 shows the monitor record of the laser frequency before frequency doubling during the process. In Fig.
6, the laser wavelength stabilized at 851.10691 nm throughout the deposition process. Although the laser wavelength fluctuated rapidly, it was quickly locked back to the position of the atomic transition frequency by the frequency-locking system. During the whole process, the laser wavelength error affecting the atom lithography grating should not exceed 4 × 10⁻⁶ nm. Since the nanograting pitch is only a quarter of this wavelength, the pitch difference across the deposition area varied by approximately 1 × 10⁻⁶ nm. This extremely high frequency stabilization lays the basis for the fabrication of a highly uniform nanograting.

The parallelism and continuity of the nanograting are also the main characteristics of scanning atom lithography. We randomly selected a "mapping" measurement reference line along the Gaussian direction for the scanning electron microscope measurements. Figure 7a shows a typical image of the Cr nanograting, where all the lines are highly parallel and show no adhesion. Figure 7b shows the peak-to-valley height (PTVH) distribution along the Gaussian direction. After a careful examination of all the images, no misplacement or bridging phenomena were found. Scanning atom lithography lowers the production efficiency to some degree. In fact, however, if the structure size and the vacuum system pumping time are taken into account, it increases the production efficiency. In subsequent studies, we will continue to optimize the fabrication process, for example by increasing the deposition rates and the laser power density.

Splicing Atom Lithography Along the Standing-Wave Direction

Similarly, we arranged the splicing atom lithography process based on the previous experimental design, and two atom lithography depositions were performed. Figure 8 shows an optical image of the grating region formed by splicing atom lithography and AFM images from P₁ to P₇ along the grating area in the standing-wave direction. First, in the optical image, there is an overlapping deposition area between the two atom lithography processes. The deposition color in the overlapping area is significantly brighter than that in the areas deposited only once, mainly due to the increased thickness of the atomic deposition film. Second, in the AFM images, the grating lines of the first deposition area, the second deposition area, and the overlapping deposition area are all of good quality, and the pitches are all the same. This finding preliminarily shows that perfect stitching occurs in the process of splicing atom lithography. Further evidence for the perfect stitching is the height of the deposited gratings in the different regions. For the optical image in Fig. 8, we plotted the PTVH of the grating as a function of position along the standing-wave direction in 50-µm steps, as shown in Fig. 9. Typical PTVH values for the first and second atom lithography gratings are 15-22 nm. Typical PTVH values for the grating in the overlapping area are 23-32 nm. This value range illustrates two key points: On the one hand, the grating height of the overlapping part is very close to the sum of the first and second grating heights. Hence, the second deposition grows on the basis of the first deposition grating, and the peaks and troughs correspond one-to-one. On the other hand, the height of the grating in the overlapping part is slightly smaller than the sum of the heights of the first and second deposition gratings, and the half-height width is slightly larger than both.
As a result, the grating is widened by the shading effect in addition to the height increase, which is consistent with our previous observations. Based on these two aspects of the analysis, the proposed splicing atom lithography technology achieves, with confidence, the perfect expansion of the grating area in the direction of the standing-wave field.

Pitch Error Analysis

In previous studies, Jabez McClelland's group performed an error analysis of the atom lithography deposition [15]. By contrast, our splicing and scanning atom lithography involve multiple atom lithography depositions; the standing-wave field moves along the mirror in scanning atom lithography, and the position change in splicing atom lithography results in additional pitch errors. Therefore, it is necessary to perform an error analysis. Here, we mainly analyze three error sources brought about by scanning and splicing atom lithography: the non-parallelism of the mirror surface, the non-coincidence of the incident and reflected standing-wave beams, and the Gaussian beam phase shift.

Error Caused by the Non-parallelism of the Mirror Surface

In scanning atom lithography, the standing wave moves continuously along the mirror surface. Because the mirror surface is not absolutely flat, there is a pitch error due to mirror non-parallelism, as illustrated in Fig. 10. The error caused by the mirror non-parallelism is given by Eq. (3):
$$\Delta p = p\,(1 - \cos\theta) \approx \frac{p\,\theta^2}{2}, \qquad (3)$$
where θ is the angle caused by the uneven mirror surface and p is the pitch of the grating. The non-parallelism of the mirror we used is < 5′, resulting in a pitch error of 2.2 × 10⁻⁴ nm.

Error Caused by the Non-coincidence of the Standing Wave

Similar to the mirror's non-parallelism, the Dove prism also has geometric defects, which change the angle of incidence of the incident Gaussian laser and cause the reflected light not to coincide with the incident light. A diaphragm was installed 1 m from the mirror to observe the coincidence of the incident and reflected light. In the process of scanning atom lithography, the movement of the Dove prism hardly changed the non-coincidence of the reflected light with the incident light. We estimated the pitch error based on Ref. [15]; the pitch error caused by the non-coincidence of the incident and reflected light was as small as 2.5 × 10⁻⁵ nm.

Error Caused by the Gaussian Beam Phase Shift

Gaussian beams acquire a phase shift in the direction of propagation that differs from the phase shift of plane waves propagating at the same optical frequency. This difference is known as the Gaussian beam (Gouy) phase shift. The phase shift of a Gaussian beam is given by Eq. (4) [30]:
$$\psi(z) = \arctan\!\left(\frac{z}{z_R}\right), \qquad (4)$$
where z is the axial distance from the beam waist, $z_R = \pi w_0^2/\lambda$ is the Rayleigh range of the Gaussian beam, λ is the wavelength, and $w_0$ is the $1/e^2$ beam radius at the waist. The pitch error produced by the Gaussian beam phase shift in a single atom lithography deposition follows from the accumulated displacement, Eq. (5) [15]:
$$\Delta z = \frac{\lambda}{2\pi}\left[\arctan\!\left(\frac{z_2}{z_R}\right) - \arctan\!\left(\frac{z_1}{z_R}\right)\right]. \qquad (5)$$
For our splicing atom lithography experiment, $z_2 = 6.8$ mm and $z_1 = 2$ mm; the beam waist $w_0$ was 70 µm, measured using a laser beam profiler (LBP2-HR-VIS2, Newport, California, USA). Thus, $z_R = 36.2$ mm. The total error resulting from the Gaussian beam phase shift was Δz = 8.84 nm, and the corresponding pitch error over 4.8 mm was 3.9 × 10⁻⁴ nm. In addition, due to the geometric defects of the Dove prism, such as its surface roughness (0.6-1.5 nm) and angular error (less than 3′), additional optical path differences were created during the movements.
The optical path difference brought about by moving the Dove prism vertically over 1.5 mm was less than 0.2 mm, so the pitch error caused by the Dove prism over 4.8 mm was less than 1.6 × 10⁻⁵ nm. We summarize the additional errors generated by scanning atom lithography and splicing atom lithography in Table 1. As can be seen from the table, splicing and scanning atom lithography together introduce a mean pitch error of 4.5 × 10⁻⁴ nm. Because this error is negligible compared to the previously calculated 0.0049 nm [15], the theoretical pitch of our system's atom lithography grating remains 212.7787 ± 0.0049 nm. Hence, nanostructures manufactured via both scanning and splicing atom lithography demonstrate good pitch accuracy, parallelism, continuity, and homogeneity.

Conclusions

To summarize, in this paper we put forward the scanning atom lithography method and the splicing atom lithography method to extend the structure area of self-traceable nanogratings along the Gaussian direction and the standing-wave direction, respectively. By inserting and scanning a Dove prism in the Gaussian beam direction, the Cr nanograting area was extended from 500 to 1500 μm. By splicing two atom lithography areas together, we achieved an essentially perfect expansion of the grating area from 3 to 4.8 mm in the direction of the standing-wave field. Based on the experimental results and the pitch error analysis, nanostructures manufactured via both scanning and splicing atom lithography demonstrated good pitch accuracy, parallelism, continuity, and homogeneity. Therefore, scanning and splicing atom lithography open a new way to fabricate centimeter-level full-covered self-traceable nanogratings, which lays the basis for useful applications of square rulers and optical encoders at the nanoscale.
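The numbers in the pitch error analysis above can be reproduced directly; the short check below assumes Eq. (5) accumulates the Gouy phase between $z_1$ and $z_2$.

```python
import numpy as np

lam = 425.55e-9       # laser wavelength (m)
w0 = 70e-6            # 1/e^2 beam radius at the waist (m)
pitch = 212.7787e-9   # nominal grating pitch (m)

z_R = np.pi * w0**2 / lam                    # Rayleigh range
z1, z2 = 2e-3, 6.8e-3                        # deposition region limits (m)

# Accumulated Gouy-phase displacement over [z1, z2], Eq. (5)
dz = lam / (2.0 * np.pi) * (np.arctan(z2 / z_R) - np.arctan(z1 / z_R))

L = 4.8e-3                                   # spliced grating length (m)
mean_pitch_error = dz * pitch / L            # error averaged over all periods

print(f"z_R = {z_R * 1e3:.1f} mm")                            # ~36.2 mm
print(f"dz = {dz * 1e9:.2f} nm")                              # ~8.84 nm
print(f"mean pitch error = {mean_pitch_error * 1e9:.1e} nm")  # ~3.9e-4 nm
```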
5,077.8
2022-04-26T00:00:00.000
[ "Physics" ]
Decision Support for Oropharyngeal Cancer Patients Based on Data-Driven Similarity Metrics for Medical Case Comparison

Making complex medical decisions is becoming an increasingly challenging task due to the growing amount of available evidence to consider and the higher demand for personalized treatment and patient care. IT systems for the provision of clinical decision support (CDS) can provide sustainable relief if decisions are automatically evaluated and processed. In this paper, we propose an approach for quantifying the similarity between new and previously recorded medical cases to enable significant knowledge transfer for reasoning tasks at the patient level. Methodologically, 102 medical cases with oropharyngeal carcinoma were analyzed retrospectively. Based on independent disease characteristics, patient-specific data vectors including the relevant information entities for primary and adjuvant treatment decisions were created. Utilizing the φK correlation coefficient as the methodological foundation of our approach, we were able to determine the predictive impact of each characteristic, thus enabling a significant reduction of the feature space and allowing further analysis of the intra-variable distances between the respective feature states. The results revealed a significant feature-space reduction from initially 19 down to only 6 diagnostic variables (φK correlation coefficient ≥ 0.3, φK significance test ≥ 2.5) for the primary and 7 variables (from initially 14) for the adjuvant treatment setting. Further investigation of the resulting characteristics showed non-linear behavior of the corresponding distances at the intra-variable level. Through the implementation of a 10-fold cross-validation procedure, we were further able to identify 8 (primary treatment) matching cases with an evaluation score of 1.0 and 9 (adjuvant treatment) matching cases with an evaluation score of 0.957, based on their shared treatment procedure as the endpoint for similarity definition. Based on those promising results, we conclude that our proposed method of using data-driven similarity measures in medical decision-making can offer valuable assistance to physicians. Furthermore, we consider our approach universal with regard to other clinical use-cases, which would allow an easy-to-implement adaptation to a range of further medical decision-making scenarios.

Introduction

According to the global cancer statistics (GLOBOCAN 2018), nearly 93,000 new cases of oropharyngeal squamous cell carcinoma (OPSCC) were reported worldwide in 2018 [1]. Lately, the incidence of OPSCC has been increasing significantly in many countries worldwide, particularly due to human papillomavirus (HPV)-positive OPSCC [2]. HPV, primarily type 16, is recognized as a risk factor and an important prognostic factor alongside tobacco and alcohol consumption [3]. Nevertheless, the actual therapeutic decision for OPSCC is currently not differentiated according to HPV status. Instead, it is essentially based on the individual situation of the patient and his or her anatomical and biomedical conditions. While early-stage OPSCCs are usually treated by surgery or radiation therapy, more advanced stages require multimodal therapeutic concepts depending on the pathological indication. These may include invasive surgical procedures as well as adjuvant radiation or combined radiochemotherapy [4,5]. In cases of unresectable tumors, definitive radiochemotherapy is indicated.
For recurrent or metastatic disease, new therapeutic options in the field of checkpoint immunotherapy have been approved. These represent a valuable addition to established conventional chemotherapies by blocking inhibitory immune checkpoint signaling pathways to reactivate the immune response against cancer [6]. Activation of the PD-1 protein, which can be expressed by T cells, in response to PD-L1 leads to inhibition of the immunological response of T cells and serves as a mechanism by which the tumor bypasses the immune system. Anti-PD-1/PD-L1 immune checkpoint inhibitors (ICIs) can inhibit suppressive signaling through the PD-1/PD-L1 pathway and enhance antitumor immune activity [7,8].

Due to individual tumor characteristics, differences in resectability, and comorbidities that may conflict with radio- or, even more so, chemotherapy, a personalized view of the diagnostic and therapeutic process becomes necessary. This includes adjusted diagnostics and individualized decision-making to provide optimal outcomes and a valuable quality of life for the individual patient. To consider all personal patient-related factors, ideal treatment strategies for OPSCCs are currently evaluated in interdisciplinary tumor boards. In these meetings, specialists from different disciplines evaluate the available options in order to find the best possible therapy for a specific patient case. The following disciplines are usually represented: otorhinolaryngology, head and neck surgery, maxillofacial surgery, pathology, radiology, radiation therapy, as well as medical oncology [9].

Making such complex clinical decisions involves a set of individual considerations. The particular knowledge required to act in the patient's favor comes from various sources of information such as learned expertise, specialist publications, and individual experience [10]. Verifiable results from significant medical studies or clinical trials are considered a level of safety, as they represent the current state of clinical evidence [11]. This evidence also serves as a foundation for the preparation of clinical practice guidelines (CPG), which are provided by several medical associations. This overall process, also known as evidence-based medicine (EBM), represents one current baseline for making medical decisions [12,13]. Although the concept of EBM integrates medical science and research, it provides general practice recommendations. It is therefore not an individual "instruction manual", but must be applied to the individual patient according to the specific circumstances. Therefore, the clinical experience that a clinician accumulates during his or her professional career should not be underestimated in the diagnostic and decision-making process. Most judgments concerning specific criteria of the patient are made based on the clinician's individual knowledge, training, and experience. According to Lakoff et al., experience does not refer to memory, i.e., the result of interaction with the environment, but characterizes the immediate encounter, i.e., the process of repeated sensorimotor interaction with the environment in the sense of a repetitive action [14]. This progressively shapes and links the functional neuron groups involved in this process more effectively. Experience thus changes the neuronal connection patterns of the brain. This implies that the diagnosis and therapy finding for current patient cases are cognitively compared with similar patient and diagnostic profiles from the past.
For very unusual, rare, and complex cases, for which even highly trained clinicians may lack the experience, this process of decision-making reaches its limit and can no longer guarantee the optimal strategy for an individual patient [15]. Similarity analysis and comparison with previous cases could therefore form a valuable part of selecting an optimal diagnostic and therapeutic strategy. By means of an IT-supported process, it should be possible to access a broad knowledge base of patient cases. Mirroring the human cognitive process, an automatic search function can be used to evaluate specific diagnostic results of comparable patient cases and their courses for the current research question. The idea of comparing a new problem with a similar previous situation had its beginnings in the 1980s, and attempts to establish it have continued since then [16]. As a cognitive analog to clinical decision-making based on expertise, coupled with the duality of subjective and objective knowledge, the term case-based reasoning (CBR) was introduced, with the main principle: "similar problems have similar solutions" [16]. Despite the enormous potential of CBR for automated systems in clinics, this capability has yet to be realized with suitable technologies, even though other fields already utilize similar approaches. In the medical context, similarity analysis is used for DNA and protein analysis, for example, but it is also used in many other domains [17]. It already forms an omnipresent and indispensable part of recommendation systems. Based on the analysis of user behavior, suggestions for online shopping ("customers who bought this item also bought..."), music and movie streaming, or e-learning applications already influence decisions in our everyday life. To make recommendations, many member profiles with similar preferences and tastes are matched with the current user profile, and the most suitable objects are recommended in a personalized catalog according to the collaborative filtering technique of recommendation systems [18]. This already established concept should now also relieve medical staff in their everyday work.

The use of computational similarity analysis for patients is a well-known approach that has been thoroughly investigated throughout the years [19]. Especially since the advent of algorithmic analysis and machine learning (ML), methods such as k-nearest neighbors (kNN) and associated solutions have been applied to this problem with large success [20][21][22][23]. However, while those similarity metrics are well suited for the identification of similar (vector-based) abstractions of patients, they only account for differences at the variable level (i.e., two patients with the same gender or almost equally distributed expressions in the blood count); they do not consider the distances between individual variable states (i.e., between two categorical states that are not equidistant regarding their influential factors, e.g., general performance status (ECOG) or other medical staging systems). While multiple measures that address this modality (also known as the overlap measure) exist, they only account for categorical variables. Since medical data sets often contain mixed variable types, solutions that are able to process those diverse entities are required.
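To make the stated limitation concrete, the toy sketch below contrasts a plain overlap distance with intra-variable state-pair weights; the features and weights are purely hypothetical and not taken from this study.

```python
# A plain overlap measure treats any mismatch as equally different ...
def overlap_distance(a, b):
    """Fraction of features in which two cases differ (categorical only)."""
    return sum(a[f] != b[f] for f in a) / len(a)

# ... whereas state-pair weights can encode that ECOG 0 vs 1 matters far
# less than ECOG 0 vs 2 (illustrative values only).
ECOG_WEIGHT = {(0, 1): 0.05, (0, 2): 0.90, (1, 2): 0.70}

case_a = {"gender": "m", "ecog": 0}
case_b = {"gender": "m", "ecog": 1}
case_c = {"gender": "m", "ecog": 2}

print(overlap_distance(case_a, case_b))          # 0.5
print(overlap_distance(case_a, case_c))          # 0.5 -> indistinguishable
print(ECOG_WEIGHT[(0, 1)], ECOG_WEIGHT[(0, 2)])  # 0.05 vs 0.9 -> distinguishable
```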
To overcome those current methodological limitations in similarity search among patients, we present a novel approach that considers the intra-variable similarity of clinical cases based on mixed-type variables by using the φK correlation coefficient [24]. With this procedure, we also introduce a novel real-world application of the stated φK metric and evaluate its suitability for the task of patient matching. Accordingly, the main aim of the presented method is to contribute to comprehensive and objective (unbiased) assistance in case-based reasoning and thus, in the long term, also to the therapy decision process. In conclusion, this methodology made it possible to identify an objective selection of decisive diagnostic features and their individual impact on primary and adjuvant treatment decisions in the head and neck tumor board.

Information Modeling: Creating a Patient-Specific Vector

In order to adequately compare OPSCC patient cases, it is necessary to determine the context-specific variables (features) that are considered relevant to decision-making in relation to a corresponding endpoint (see Figure 1). In the present case, this endpoint relates to the primary and adjuvant treatment decision. Thus, relevant and specific characteristics were initially identified from the diagnostic results using the hospital's internal clinical information system and then transferred into patient- and diagnosis-related features. For primary treatment, in the patient category, age, severe pre-existing conditions, and the ECOG score, a general performance measure, are decisive factors for diagnostic and therapeutic management (see exemplary patient data in supplementary Table S1). As diagnostic features, factors such as tumor size, infiltration of certain structures, possible metastases, as well as histo- and molecular-pathologic characteristics are important in assessing whether the tumor is resectable or chemotherapy is tolerable (see Table S2). Provided that surgical therapy is evaluated as successful in terms of achieving complete tumor resection with clear margins and an optimal quality of life expected postoperatively, potential adjuvant treatment is discussed in a postoperative tumor board based on the definitive pathologic findings. The histopathological report of the surgical resection should include tumor localization, tumor size, histological tumor type and grading, lymph vessel invasion, blood vessel invasion and perineural invasion, locally infiltrated structures, number and size of affected lymph nodes, presence of extracapsular extension, and the resection status (see Table S3). In addition, immunohistochemical scores such as the combined positive score (CPS) and the tumor proportion score (TPS) are acquired to estimate PD-L1 expression. The CPS evaluates the number of PD-L1-positive cells (tumor cells, lymphocytes, macrophages) relative to all viable tumor cells. The TPS assesses the percentage of PD-L1-positive tumor cells in proportion to all viable tumor cells [25]. The result of this process is a patient-specific vector of independent information entities, which represents the relevant medical factors influencing therapy decisions in a structured format; an example of such a vector is sketched below. In this context, Tables S1 and S2 together form the characteristic constellations for the primary decision scenario, and Tables S1 and S3 for the adjuvant therapy.
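As a minimal illustration of such a vector, the sketch below models a primary-treatment case as a typed record; the field names are illustrative stand-ins for the entities of Tables S1 and S2, not the study's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PrimaryCaseVector:
    """Hypothetical patient-specific vector for the primary treatment scenario."""
    age: int                   # patient category (interval/ordinal)
    ecog: int                  # general performance status, 0-2 (categorical)
    severe_comorbidity: bool   # severe pre-existing conditions
    t_state: int               # tumor size category (1-4)
    n_state: int               # lymph node involvement
    m_state: int               # distant metastasis (0/1)
    tongue_infiltration: bool  # infiltration of the tongue musculature
    hpv_positive: bool         # HPV status

case = PrimaryCaseVector(age=61, ecog=1, severe_comorbidity=False,
                         t_state=3, n_state=2, m_state=0,
                         tongue_infiltration=True, hpv_positive=True)
```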
To establish the dataset, 102 patient cases with OPSCC from the University Hospital of Leipzig were retrospectively analyzed. All of them had previously been discussed by a team of interdisciplinary experts in the head and neck tumor board. Only complete patient records without missing information were captured during this process.

Data-Driven Reduction of the Feature Space Using the PhiK Correlation Coefficient

Since not every feature is equally important in the context of making a therapeutic decision, a data-driven metric for expressing the individual weight of that information needed to be derived from the data set. To achieve this, we first split the data into a training set (81 patients) and a test set (21 patients) to enable later verification of our approach with previously unseen data (patient data that was not used to derive the feature-space reduction and intra-variable analysis). Based on the training set, we then calculated the individual correlation of each feature in relation to the recorded treatment decision using the PhiK (φK) package (version 0.9.12) in a Jupyter notebook Python environment [24]. The φK coefficient is based on a refined version of Pearson's χ² contingency test to evaluate the independence of two or more variables through an algorithmic calculation without restriction to a single variable type. Thus, it enables the parallel consideration of categorical, ordinal, and interval variables, which is a crucial characteristic when dealing with medical data that is usually represented in mixed-type data columns, e.g., age (ordinal), ECOG (categorical).
In contrast to more traditional metrics, such as Pearson's r, it also accounts for non-linear behavior between variables, which is another important characteristic of medical data that includes artificial scoring systems to express certain medical modalities (e.g., TNM staging, ECOG). The PhiK package allows for the calculation of a correlation matrix using its own φK coefficient as the associated metric. While there is currently no gold standard regarding the correlation threshold, we defined scores greater than 0.3 as significant for our analysis. The coefficient itself ranges from 0 to 1. In a second step, we then evaluated the resulting features in terms of their statistical significance using the integrated PhiK significance test, which is based on a modified p-value calculation [24]. The algorithm calculates a Z-value for each possible feature constellation, which can then be obtained in a matrix-based representation analogous to the previous correlation matrix. For the performed analyses, we determined a Z-value greater than 2.5 to be significant.

Analysis of Intra-Variable Behavior to Enable Granular Similarity Scoring

From a clinical point of view, there may be a difference in terms of treatment capacity between a patient with ECOG = 1 and one with ECOG = 2, whereas no relevant distinction is usually made between the ECOG = 0 and ECOG = 1 states. Therefore, we further refined our analysis to account for intra-variable behavior in the remaining features (after reduction), with the goal of quantifying the individual differences between the respective variable states. We therefore performed the same φK-based correlation and significance tests in relation to the therapy target variable while limiting the respective input feature states to every possible pairwise permutation scheme, e.g., ECOG 0/ECOG 1, ECOG 0/ECOG 2, and ECOG 1/ECOG 2. In this way, we were able to calculate the numeric differences between the resulting clusters, allowing us to derive the amount of similarity or distance that results from looking at the individual states rather than the overall feature.

Consolidation of the Findings into a Similarity Metric

Based on the inter- and intra-variable analysis of the considered features, we were able to construct a weight matrix that integrates the φK correlation coefficients for all possible state permutations. Based on this, we suggest implementing the derived findings as additional factors in the calculation of similarity in the following way:
$$S(a, b) = 1 - \frac{1}{n}\sum_{i=1}^{n} w_i(a_i, b_i).$$
Thereby, n represents the number of features in a patient vector that is considered for similarity analyses with a range of other same-type vectors in an iterative way. The weight factor w represents the associated values from the weight matrix, accounting for the respective correlation of each constellation of variable states. Due to w, a relatively small φK correlation coefficient also results in a small distance, as it has been shown that the deviation between both factors is of less importance for the respective decision scenario. Consequently, if the normalized sum of all feature correlations is small, it follows that the distance between two patient vectors is small, which then results in a high similarity value S.

Evaluation of the Approach

To verify our approach, we further implemented an initial evaluation process by performing similarity searches of the test set (new and unseen patient data) against the training set.
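The feature-space reduction described above can be sketched with the phik package's pandas accessors; the file name and column names below are placeholders rather than the study's actual schema.

```python
import pandas as pd
import phik  # noqa: F401 -- registers .phik_matrix / .significance_matrix on DataFrames

# One row per patient; 'therapy' holds the recorded treatment decision.
df = pd.read_csv("cases_primary.csv")

corr = df.phik_matrix(interval_cols=["age"])            # phi-K correlation matrix
signif = df.significance_matrix(interval_cols=["age"])  # significance Z-values

# Keep features that clear both thresholds used in the study
selected = [c for c in corr.columns
            if c != "therapy"
            and corr.loc[c, "therapy"] >= 0.3
            and signif.loc[c, "therapy"] >= 2.5]
print(selected)
```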
In this scenario, we considered a difference in variable states as relevant if the calculated weight surpassed a score of 0.5. All other constellations were thus considered similar. If one or more matches (defined as patients that are the same or similar in all considered features) were found for a case in the test set according to this procedure, we then checked whether the corresponding therapy selection was equal among all findings. For example, if our approach identified cases B and C as two similarity matches for case A, and all cases were treated equally, this would result in a perfect evaluation score of 1.0. If differences were found in the recorded treatments, the score would decrease accordingly. Finally, we calculated a final evaluation score by summing up the individual results and dividing them by the total number of found matches. To account for unrepresentative effects caused by a one-time random selection of cases in the train-test split, we implemented a 10-fold cross-validation with randomly assigned cases in the respective test (n = 21) and training (n = 81) cohorts during each fold. The overall evaluation metric (evaluation score) is thus defined as the mean of the individual per-run outcomes.

Statistical Description of the Data Set

With an average age of 60.4 years and a male share of 74.5% (76 patients), the data set represents typical patients with OPSCC. A total of 25.5% (26 patients) of the documented patients are female. This corresponds to a female-to-male ratio of 1:2.9. Overall, 50% (51 patients) had ECOG status 0, which means normal, unrestricted activity as before the disease, whereas 41.2% (42 patients) already had minor physical limitations, encoded by an ECOG status of 1. The remaining patients had further restrictions and an ECOG status of 2, which means that the therapy options for invasive procedures may be limited (see Table 1). At the time of diagnosis, 81.4% (83 patients) of the cases already had affected lymph nodes, and 59.8% (61 patients) had a tumor size of more than 4 cm (see Table 2). In 8.8% (9 patients), distant metastases were already detectable at the time of diagnosis. Remarkable are the tumor infiltrations into neighboring structures: in 58.8% (60 patients) into the tongue musculature, in 8.8% (9 patients) into the nasopharynx, and in 23.5% (24 patients) into the hypopharynx. In particular, the involvement of non-lymphatic structures, including the internal jugular vein (IJV), spinal accessory nerve (SAN), and sternocleidomastoid muscle (SCM), determines the surgical management of the neck in OPSCC [26]. The frequencies for these are distributed as follows in our dataset: IJV: 18.6% (19 patients), SAN: 5.9% (6 patients), and SCM: 13.7% (14 patients). Risk factors as well as possible indicators for adjuvant treatment are included in the final histopathological findings (see Table 3). For instance, in our dataset, extracapsular spread of lymph node metastases was observed in 37.3% (38 patients). Positive resection margins were detected in 7.8% (8 patients). Perineural and lymphatic invasion were each found in 76.5% (78 patients) of the pathological examinations, and vascular invasion in 6.9% (11 patients). A total of 27.5% (28 patients) were treated with definitive radiochemotherapy. The remaining 5.9% (6 patients) received best supportive care, which is not a curative approach but has the main aim of relieving symptoms and achieving the best possible quality of life (see Table 4).
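A compact sketch of the weighted similarity and matching rule described above follows; the weight lookup and the exact aggregation are assumptions mirroring the text, not the study's reference implementation.

```python
def similarity(a, b, weights):
    """Similarity S between two patient vectors (dicts feature -> state).
    weights maps (feature, state_1, state_2) to the phi-K derived
    intra-variable weight; equal states contribute zero distance."""
    dist = 0.0
    for f in a:
        if a[f] != b[f]:
            key = (f,) + tuple(sorted((str(a[f]), str(b[f]))))
            dist += weights.get(key, 1.0)  # unknown pairs: maximally different
    return 1.0 - dist / len(a)

def is_match(a, b, weights, threshold=0.5):
    """Cases match if no differing feature carries a weight above threshold."""
    return all(
        a[f] == b[f]
        or weights.get((f,) + tuple(sorted((str(a[f]), str(b[f])))), 1.0) <= threshold
        for f in a
    )
```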
Identification of Diagnostic Factors for the Primary Treatment Scenario

Regarding the primary treatment scenario, the utilization of φK-based correlation and significance analysis identified six diagnostic factors with a representative correlation coefficient above 0.3 and a Z-value above 2.5 (see Figure 2). Those included the T-state, the M-state, the ECOG status, and the identified infiltrations of neighboring structures. For primary therapy, tumor size and the infiltration of certain structures play a decisive role in the diagnostic process, as this influences resectability. The ECOG status constitutes a clinical assessment of a patient's general performance and therefore correlates with the tolerability of invasive procedures such as surgery, radiotherapy, and, even more so, chemotherapy.
Analysis of Intra-Variable Distances

Based on the fact that the M-state as well as the identified infiltrations are represented as binary states that can either be present or not present (0 or 1 for the M-state, respectively), those factors did not qualify for intra-variable investigation. Thus, the overall correlation of these features can be considered directly. In terms of the T-state, our analysis showed a non-linear behavior which closely matches clinical expectations (see Figure 3). Consequently, extreme differences in staging (i.e., T1 to T4) also correspond to extreme deviations, while smaller distances have a smaller impact on the therapeutic decision and thus yield more similarity during case comparison. In a similar way, the analysis of ECOG provided equally comprehensible results that clearly show the value of considering intra-variable distances to derive medical case similarity (see Figure 4). While a deviation between ECOG 0 and ECOG 1 showed almost no impact on assessing two individuals as different during therapy decision-making, larger distances (i.e., ECOG 0 and ECOG 2) carry tremendous differences. This behavior would not have been obvious from considering the overall feature correlation of 0.65 (see Figure 2) during similarity calculation.

Implementing the previously introduced 10-fold cross evaluation approach for similarity-based case matching on unseen test data, we were able to identify a median of eight cases from the testing cohort with one or more identified matches from the training cohort. Since all those identified matches shared an equal treatment modality with the corresponding test case, we achieved a perfect evaluation score of 1.0.

Identification of Diagnostic Factors for the Adjuvant Treatment Scenario

In clinical practice, the decision to conduct postoperative (adjuvant) therapy is based on CPG, such as the NCCN guidelines, which specify exactly which characteristics require adjuvant therapy and, if so, which particular strategies [27]. Thus, for example, a patient who has undergone a complete R0 resection with sufficient margins after surgery, along with an N0 status, would not require adjuvant therapy in many cases. However, in clinical practice, patients are still offered the option of adjuvant therapy, for example when certain risk factors such as extended tumor size (T3 and larger) or lymphatic (L1), venous (V1), or perineural invasion (Pn1) are identified. Our analysis identified seven diagnostic factors as relevant for the adjuvant therapy decision (see Figure 5). Those included primary therapy (correlation: 0.71, significance: 8.09), ECOG status (correlation: 0.60, significance: 2.69), lymphatic invasion (correlation: 0.57, significance: 2.88), perineural invasion (correlation: 0.68, significance: 3.72), vascular invasion (correlation: 0.70, significance: 3.94), extracapsular spread (correlation: 0.92, significance: 7.54), as well as resection margin (correlation: 0.51, significance: 6.29). The factors identified by the application are also consistent with the clinical approach.
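The intra-variable analysis applied to the T-state and ECOG in both treatment scenarios can be sketched as follows; the DataFrame and column names are illustrative assumptions:

```python
# Illustrative sketch: phi_K against the therapy target, restricted to every
# pairwise combination of a feature's states (e.g., ECOG 0/1, 0/2, 1/2).
from itertools import combinations
import pandas as pd
import phik  # noqa: F401  (registers the .phik_matrix() accessor)

def pairwise_state_phik(df: pd.DataFrame, feature: str, target: str) -> dict:
    """phi_K correlation between feature and target, computed on the
    sub-cohort limited to each pair of the feature's observed states."""
    states = sorted(df[feature].dropna().unique())
    result = {}
    for s1, s2 in combinations(states, 2):
        sub = df[df[feature].isin([s1, s2])][[feature, target]]
        # interval_cols=[] treats numerically coded states as categorical
        result[(s1, s2)] = sub.phik_matrix(interval_cols=[]).loc[feature, target]
    return result

# e.g., pairwise_state_phik(cases, "ECOG", "adjuvant_therapy")
# yields per-pair scores (between 0.06 and 0.32 in the adjuvant scenario)
```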
The interdisciplinary tumor board for post-surgery treatment also considers various characteristics that determine the adjuvant therapy. Firstly, results from pathological diagnostics, such as a positive resection margin or extracapsular spread of the lymph node metastasis, are an indication for adjuvant therapy. While the presence of invasion in the distinct anatomical structures is again represented as binary expressions, we further investigated the ECOG status variable according to our presented approach. From the numbers (see Figure 6), we again perceive a non-linear behavior, which ranges between a correlation score of 0.06 and 0.32. For our approach, this means that changes across those states need not be considered during similarity matching; this distinction would have been lost if state deviations in the variable had been treated uniformly at the overall feature level (correlation: 0.60, see Figure 5). Further analysis would also have been necessary for the resection margin variable according to our stated method. However, since the presence of state R2 was only found once in the data, it was not possible to find matches in the train-test split. Thus, we considered this variable at the overall level (correlation: 0.51) during evaluation and only considered absolute matches throughout the respective states to be similar. Based on the 10-fold cross evaluation procedure, we were able to identify a median of nine cases from the testing cohort with one or more matches from the training cohort. The results revealed a mean evaluation score of 0.957 (minimum: 0.71, maximum: 0.96), which corresponds to a very high accuracy in the identification of patients that received equal adjuvant therapeutic procedures.

For practical application in supporting the diagnostic and therapeutic process, the model emphasizes a detailed determination of the extracapsular extension of lymph nodes. For a patient with pT2 pN1 M0 (according to the TNM classification 2017 [28]) OPSCC and HPV-16 positivity, there are critical differences in treatment decision-making. In particular, the absence of extracapsular extension has to be considered, which practically indicates radiotherapeutic adjuvance alone, whereas the presence of extracapsular extension requires combined adjuvant chemoradiation. Another example points out the significance of the ECOG status. In a patient with pulmonary metastasis from an OPSCC, the model refers to the evaluation of the ECOG status to estimate chemotherapy tolerability: a patient with ECOG 1 is likely to tolerate systemic chemotherapy, whereas a similar patient with ECOG 3 will probably not tolerate conventional chemotherapy. Another finding points to the diagnostic assessment of nasopharyngeal infiltration of OPSCC regarding the treatment decision, as indicated in Figure 5. How case comparison could be used in a clinical setting is also shown in Figure A1 in Appendix A.
Discussion

Based on our approach, an initial objective selection of crucial diagnostic features and their individual impact regarding primary and adjuvant therapy decisions in the head and neck tumor board could be established. Nevertheless, it should be noted that the determination of the introduced metrics is highly dependent on the underlying database. It must therefore be assumed that the results of our retrospective analysis of 102 patient cases are limited to some extent and would be more significant with the integration of more or other data sets. This research therefore serves as a proof-of-concept study. The outcomes presented in this paper should be considered as a starting point that needs to be further analyzed and verified by including additional case data. The precision of the decision is then proportional to the amount of case evidence provided. However, based on the trends and effects revealed by the utilized algorithms, we were able to agree, from the perspective of clinical professionals, with the resulting weighting of the diagnosis-related factors. This indicates that the presented approach is likely to adapt to causal implications in the real-world setting (e.g., lowering the need for adjuvant treatment when an R0 resection with clear margins was achieved). In this work, we exclusively focused on the utilization of the φK-correlation coefficient to perform feature-space reduction and similarity scoring.
While this methodological choice was mainly based on the fact that the integrated data set inherited a mixed-type variable constellation, the resulting feature sets for both treatment decision scenarios were purely categorical. This would have allowed for benchmarking our approach against other methodological solutions that are also capable of considering state differences among variables (i.e., the Goodall measure or probability-based methods). However, since the main goal of our work was to present a novel solution to the problem of case-based reasoning, an in-depth comparison of our approach to other potential solutions was out of scope but should be considered in future works, also using further data sets to evaluate the generalizability. While this might go along with different outcomes regarding the resulting feature set (e.g., by integrating numerical variables such as laboratory measures), the otherwise unprocessable complexity of analyzing the state permutations between two such variables would require pre-processing, e.g., by transforming the respective values to z-scores.

The analysis performed in this study considers only patients who were assessed and treated in a single hospital. Thus, the aspect of institutional bias cannot be completely dismissed. However, due to the generalized description and design of the methodological process, a simple transferability to a multicenter application is feasible. This could not only lead to a minimization of bias but could also make the process of identifying similar patient cases even more useful by extending the associated search area accordingly. However, this implementation is associated with a correspondingly high organizational and technical effort in practice, as it would require the provision of a central repository for the structured input and storage of medical case data. It would also be necessary to ensure terminological consistency. This standardization also applies to the initial evaluation of the individual information entities for prior classification.

Furthermore, it should be noted that for certain patient cases, there may be more than one possible treatment option and that the patient's will should not be disregarded. This may lead to deviations between the tumor board decision and the intervention that is performed and documented in the electronic health record. Consequently, it is possible that patients who might be identified as similar by our approach might have received another treatment option than the one initially suggested to the patient. In a future setting that integrates our approach towards similarity calculation for case-based reasoning, this very likely circumstance should be addressed.

Although the methodological approach was presented and evaluated using the example of OPSCC, it requires very little adaptation for further use cases in both oncological and non-oncological contexts. For this purpose, the presented processes only need to be mapped to the respective domain and the results need to be interpreted and evaluated accordingly. This method may also be suitable for very rare and complex cases, where decision-making is further complicated when the available information and experience are limited. Indeed, misdiagnosis and incorrect treatment are more likely in rare and complex diseases due to insufficient knowledge and awareness [29].
A concise identification of objective, decisive diagnostic features and an analysis of similarity to previous cases can answer individual questions with the aim of determining the best possible diagnosis and treatment strategy for the patient. This adds quality and granularity to the decision-making process and potentially improves patient outcomes. In addition, the analyses provided may contribute to the training and expertise of health professionals. Particularly, beginners may benefit from this, which also enhances objectivity and quality control in hospital diagnostic and treatment processes. While the provided solution is intended to offer rational and intuitive assistance in clinical decision-making, it still needs to be considered that medical cases exhibit enormous diversity and should not be exclusively evaluated by a set of features. However, our primary aim is to provide proper assistance in identifying relevant cases as a further source of evidence in the therapy decision process, not the specification of the decision itself.

Conclusions

In this paper, we developed and evaluated a novel approach to provide data-driven similarity analysis for medical cases to support the diagnosis and treatment process in clinical practice. By calculating the individual φK-correlation of each diagnostic feature in relation to the registered treatment decision and evaluating its significance, it was possible to identify both patient- and diagnosis-related factors that are consistent with the clinical assessment of experts and the clinical practice guidelines. Based on the implemented procedure, we were able to evaluate a novel real-world application that benefits from the theoretical works by Baak et al. [24] and the resulting φK correlation coefficient in a meaningful way. This allows an individualized diagnostic assessment of the patient, potentially reducing the patient's waiting time for treatment proposals and enabling the application of the most effective treatment method. Since the collective expertise of a tumor board highly depends on the individual participants, the presented method can introduce a new layer of competence by enabling case comparison. This helps to tackle uncertainty and decision bias, thus providing sufficient support to the diagnosis and treatment process in order to improve patient outcomes.
9,034.2
2022-04-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Denoising Neural Network for News Recommendation with Positive and Negative Implicit Feedback

News recommendation is different from movie or e-commerce recommendation, as people usually do not grade the news. Therefore, user feedback for news is always implicit (click behavior, reading time, etc.). Inevitably, there are noises in implicit feedback. On one hand, the user may exit immediately after clicking the news because he dislikes the news content, leaving noise in his positive implicit feedback; on the other hand, the user may be recommended multiple interesting news at the same time and only click one of them, producing noise in his negative implicit feedback. Opposite types of implicit feedback can construct more complete user preferences and help each other to minimize the noise influence. Previous works on news recommendation only used positive implicit feedback and suffered from the noise impact. In this paper, we propose a denoising neural network for news recommendation with positive and negative implicit feedback, named DRPN. DRPN utilizes both kinds of feedback for recommendation, with a module that denoises both positive and negative implicit feedback to further enhance the performance. Experiments on a real-world large-scale dataset demonstrate the state-of-the-art performance of DRPN.

Introduction

Online news platforms, such as Google News and Microsoft News, have attracted a large population of users (Wu et al., 2020b). However, the massive number of news articles emerging every day on these platforms makes it difficult for users to find appealing content quickly (Wu et al., 2019b). To alleviate the information overload problem, recommender systems have become integral parts of these platforms. A core problem in news recommendation is how to learn better representations of users and news (Hu et al., 2020b). Early works include collaborative filtering (CF) based methods (Das et al., 2007), content-based methods (IJntema et al., 2010) and hybrid methods (De Francisci Morales et al., 2012) that combine the two. These methods usually suffer from the cold-start problem when exposed to the sparsity of user-item interactions (Zhu et al., 2019). Recently, deep learning methods have been proposed to learn better user and news representations. The techniques evolve from using recurrent neural networks (Okura et al., 2017) and attention mechanisms (Zhu et al., 2019; Wu et al., 2019c) to graph neural networks (Wang et al., 2018a; Hu et al., 2020b,a; Qiu et al., 2022). These methods usually recommend news for users based on their historical feedback. Implicit feedback is more commonly collected than explicit feedback for news because the users usually do not grade the news. Hence, current news recommendation methods naturally use positive implicit feedback, like click behavior, as the historical feedback to model user interests. However, there are gaps between positive implicit feedback and users' real preferences (Wang et al., 2018b). For example, the click behaviors do not fully reflect the user's preferences. The user may exit the news immediately after clicking, which introduces noise into the positive feedback. Additionally, some news that users did not click may also attract them later. Ignoring them also impacts the recommendation performance. Our observation is that using both positive and negative implicit feedback can better model user interests.
Besides, positive and negative implicit feedback can help to denoise each other through inter-comparison and intra-comparison. If a news story in one feedback sequence is more similar to the news in the opposite feedback sequence than to the news in the same sequence, it is very likely that this news story constitutes noise. We can remove this news when building user interests. This idea is shown in Figure 1. In this paper, we propose the Denoising neural network for news Recommendation with Positive and Negative implicit feedback, named DRPN. It first introduces a news encoder to represent the news in the two implicit feedback sequences. Then two parallel aggregators are used to extract user representations from both positive and negative historical feedback: (1) a content-based aggregator, which selects the informative news in the feedback sequences to represent the user; (2) a denoising aggregator, which finds and reduces the noise in the feedback sequences. In addition to the semantic information, we introduce a graph neural network to incorporate collaborative information to further enrich the user representation. Finally, the user and candidate news representations are used to predict the clicking probability. The contributions of this paper are summarized as follows:

• We propose a novel neural news recommendation approach, DRPN, which jointly models both positive and negative implicit feedback sequences to represent the user and improve recommendation performance.

• In DRPN, to minimize the impact of the noise in the implicit feedback, denoising aggregators are designed to refine the two feedback sequences, which helps to further improve the recommendation performance.

• The experiments on a large-scale real-world dataset demonstrate that DRPN achieves state-of-the-art performance.

Related Works

Recommendation with Multi-type Feedback

Few works notice the noise problem in implicit feedback. (Zhao et al., 2018; Liu et al., 2020) use multiple types of feedback to improve recommendation. However, they ignore the noise in the implicit feedback. (Wang et al., 2018b) notices the noise problem but fails to use the meaningful semantic information in the news. (Wu et al., 2020a; Xie et al., 2020; Bian et al., 2021) use explicit feedback (such as reading time and like/dislike behaviors) to help denoise the implicit feedback. However, explicit feedback is harder to collect than implicit feedback. Differently, DRPN only depends on implicit feedback (click and non-click behaviors) to conduct the denoising and better model user preferences.

Graph Neural Network

Recently, graph neural networks (GNN) have received wide attention in many fields (Wu et al., 2020c). Convolutional GNNs can learn powerful node representations by aggregating the neighbors' features. Recently, some works have attempted to leverage graph information to enhance representation learning for news recommendation with GNNs. (Wang et al., 2018a) uses entities in news to build a knowledge graph and uses the entity embeddings to improve model performance. (Ge et al., 2020) combines the one- and two-hop neighbor news and users to enrich the representations of the candidate news and user, respectively. However, these methods also depend on positive implicit feedback to model user representations and ignore the noise problem.

Problem Formulation

The news recommendation problem in our paper can be illustrated as follows. Let U and R denote the entire user set and news set.
The feedback matrix for the users over the news is denoted as Z ∈ R^{l_u × l_r}, where z_{u,r} = 1 means user u gives positive implicit feedback to news r (e.g., u clicks r), z_{u,r} = −1 means user u gives negative implicit feedback to news r (e.g., u sees r but ignores it), and z_{u,r} = 0 means no feedback. l_u and l_r denote the numbers of users and news, respectively. For each specific user, his historical positive feedback sequence [p_1, ..., p_{l_p}] and negative feedback sequence [n_1, ..., n_{l_n}] can be gathered from the feedback matrix Z, where p_i, n_j ∈ R. Given the feedback matrix Z, the goal is to train a model M (i.e., DRPN). For each new pair of user and candidate news (u ∈ U, r ∈ R), we can use M to estimate the probability that u would like to click r.

Figure 2 shows the architecture of DRPN. It first employs the title encoder and an ID embedding layer to represent all news in the two feedback sequences and the candidate news. Then two separate encoders are employed to extract the user's semantic interest and collaborative interest information from both positive and negative implicit feedback sequences. Next, two fusion nets combine the multiple interest representations to represent the user. Finally, we use the user and candidate news representations to estimate the clicking probability. We will detail each component in the following subsections.

Input

The inputs of DRPN contain six parts: the titles of the positive feedback sequence [p^t_1, ..., p^t_{l_p}], the titles of the negative feedback sequence [n^t_1, ..., n^t_{l_n}], the candidate news title r^t_c, the IDs of the positive feedback sequence [p^o_1, ..., p^o_{l_p}], the IDs of the negative feedback sequence [n^o_1, ..., n^o_{l_n}], and the candidate news ID r^o_c. For each news title t, we convert its every word w to a d-dimensional vector w via an embedding matrix E^W ∈ R^{l_w × d}, where l_w is the vocabulary size and d is the dimension of the word embedding. Then, the title t is transformed into a matrix T. For each news ID o, we also convert it to a d-dimensional vector via a separate ID embedding matrix.

Title Encoder

The title encoder extracts the sentence-level semantic representation of the news title. It contains two sub-layers. We take the title T as an example to detail the encoding process. The first sub-layer is a multi-head self-attention layer, which can model the contextual representation of each word. Given three input matrices Q ∈ R^{l_q × d}, K ∈ R^{l_v × d} and V ∈ R^{l_v × d}, the attention function is defined as:

Attention(Q, K, V) = softmax(QK^⊤ / √d) V. (1)

The multi-head self-attention layer MH(·, ·, ·) further projects the input to multiple semantic subspaces and captures the interaction information from multiple views:

MH(Q, K, V) = [head_1; ...; head_{l_h}] W^O, where head_i = Attention(QW^Q_i, KW^K_i, VW^V_i), (2)

where W^Q_i, W^K_i, W^V_i and W^O are the parameters to learn, and l_h is the number of heads. Moreover, we employ the residual connection and the layer normalization function LN defined in (Ba et al., 2016) to fuse the original and contextual representations: T = LN(T + MH(T, T, T)). The second sub-layer is a gated aggregation layer (Qiu et al., 2020). It selects the important words to generate an informative title representation; the gated mechanism is employed to decide the weight of each word. Given the word embedding matrix T, its sentence-level semantic representation t is calculated as a gated weighted sum of the word vectors (Eq. (3)), in which the projection matrices and vectors are trainable parameters. Finally, we can use the title encoder to model the titles of all news in the two user feedback sequences to obtain P^t = [p^t_1, ..., p^t_{l_p}] and N^t = [n^t_1, ..., n^t_{l_n}]. For the candidate news, we can also obtain its title representation r^t_c via the same title encoder.
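As a concrete illustration, a minimal PyTorch sketch of this title encoder is given below. The gated aggregation layer is written in a common additive form, so the exact parameterization of the paper's Eq. (3) may differ; the hyperparameters follow the implementation details reported later (d = 300, l_h = 6, hidden size 200):

```python
# Illustrative sketch of the title encoder: multi-head self-attention with a
# residual connection and layer norm, followed by gated aggregation pooling.
import torch
import torch.nn as nn

class TitleEncoder(nn.Module):
    def __init__(self, d: int = 300, heads: int = 6, d_hidden: int = 200):
        super().__init__()
        self.mha = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm = nn.LayerNorm(d)
        # assumed additive form of the gated aggregation layer (Eq. (3))
        self.gate = nn.Sequential(
            nn.Linear(d, d_hidden), nn.Tanh(), nn.Linear(d_hidden, 1)
        )

    def forward(self, T: torch.Tensor) -> torch.Tensor:
        # T: (batch, words, d) word embeddings of one title
        ctx, _ = self.mha(T, T, T)                  # contextual word representations
        T = self.norm(T + ctx)                      # residual connection + LayerNorm
        alpha = torch.softmax(self.gate(T), dim=1)  # per-word gate weights
        return (alpha * T).sum(dim=1)               # (batch, d) title vector t
```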
Semantic Interest Encoder

The titles of the news with which the user interacted usually reflect the user's interests. Hence, we can learn user interest representations by encoding the semantic representations of the news. As shown in Figure 3, the semantic interest encoder leverages two aggregators, a content-based aggregator (CA) and a denoising aggregator (DA), to extract user preferences from both positive and negative feedback sequences.

Content-based Aggregator

Different news have different informativeness when representing users. For example, sports news are more informative than weather news in modeling user personality, since the latter are usually browsed by most users. The content-based aggregator (CA) first evaluates the importance of the different news in the feedback sequence from the content view and then aggregates the important news to represent the user. It contains two sub-layers. The first one is a multi-head self-attention layer, which can enhance the news representations by capturing their interactions. For the positive feedback sequence P^t, the multi-head self-attention layer generates P̃^t = LN(P^t + MH(P^t, P^t, P^t)). The MH is defined in Eq. (2) with independent parameters and LN is the layer normalization function. The second sub-layer is a gated aggregation layer that has the same structure as the one defined in Eq. (3). For P̃^t, it can select the more informative news to generate the user representation: p^t_s = Aggregate(P̃^t). We also use the content-based aggregator to generate another user representation, n^t_s, from the negative feedback sequence N^t.

Denoising Aggregator

The denoising aggregator conducts what we call a refining operation, which aims to mitigate the impact of the noise in the feedback when modeling the user interests. Intuitively, if news clicked by the user is more semantically relevant to the news in the positive feedback sequence, this news more likely reflects the user's true preference. Otherwise, if it is more semantically relevant to the news in the negative feedback sequence, it is more likely a noise for representing the user interest. As shown in Figure 3, for each news in the positive feedback sequence, we conduct intra-comparisons with the news in the positive sequence and inter-comparisons with the news in the negative sequence to decide its weight when representing the user. This module contains three sub-layers. The first sub-layer is an intra-attention layer. For news p^t_j ∈ P^t, this layer uses it as the query to aggregate all news in P^t except p^t_j by the attention mechanism, obtaining the sequence-level representation p̄^t_j. The second sub-layer is an inter-attention layer. For p^t_j, this layer uses it as the query to aggregate its relevant news in the negative feedback sequence N^t by the attention mechanism, yielding n̄^t_j. The third sub-layer is a gated aggregation layer. The weight α_j of the news p^t_j is decided by the semantic similarities between p^t_j and the two sequence-level representations, p̄^t_j and n̄^t_j, where the involved projection parameters and the scalar γ are learnable. Then, this layer aggregates all news according to their weights to obtain the denoised representation, p^t_h = Σ_j α_j p^t_j. For the negative feedback sequence N^t, we take a dual denoising process to obtain its final representation n^t_h.

Graph Neural Network

If two news, r_i and r_j, are co-clicked by user u_1 and r_i is also clicked by u_2, u_2 may also prefer r_j, based on the idea of collaborative filtering.
Hence, we can further enrich the user interest representations by modeling the collaborative information. Like a knowledge graph, we build a collaborative graph G = {(r_i, r_j) | r_i, r_j ∈ R} over the news set R based on the co-clicking relationships in the historical feedback matrix Z; (r_i, r_j) indicates that the two news are neighbors in the graph and have been clicked by the same user. To incorporate the collaborative information, we employ the graph transformer neural network (Shi et al., 2021) to model the news in the user feedback sequences. First, for each news node r^o in P^o and N^o, we compute the attention weights between it and its neighbors N(r^o) in G, where N(r^o) denotes the neighbor set of node r^o. Taking its neighbor r^o_k (k ∈ N(r^o)) as an example, the attention weight between r^o and r^o_k at the m-th head is calculated from their projected representations, where W^g_{m,*} ∈ R^{d × d/l'_h} are learnable parameters, l'_h is the number of heads, and d̃ is equal to d/l'_h. Next, each news node aggregates the information of its neighbors from multiple heads according to the attention weights. For the node r^o, the representation aggregated from its neighbors is a concatenation over the heads with trainable projection parameters, where ∥ denotes the concatenation operation over the l'_h heads. Finally, we update the representation of each node by fusing its aggregated and original representations through a gate, where W^f_1, W^f_2 ∈ R^{2d × d} are learnable parameters, ⊙ denotes the element-wise multiplication operation, and σ is the sigmoid function. We can use this graph neural network to encode all news in the user's positive and negative feedback sequences to obtain P^o and N^o.

Collaborative Interest Encoder

This module aims to model user interests by aggregating the representations of the two feedback sequences encoded by the graph neural network layer, which have incorporated the collaborative information. The structure of the collaborative interest encoder is similar to that of the semantic interest encoder and also contains two aggregators, a content-based aggregator and a denoising aggregator. The denoising aggregator has the same structure as the one in the semantic interest encoder. The only structural difference between the two content-based aggregators of the two encoders is that there is no multi-head self-attention operation in the content-based aggregator of the collaborative interest encoder. This is because the context information is already propagated by the graph neural network, which has a similar effect to the multi-head self-attention. The inputs of this encoder are the positive sequence representation P^o and the negative sequence representation N^o. The content-based aggregator generates two user representations, p^o_s and n^o_s, based on the two sequence representations, respectively. Similarly, the denoising aggregator denoises the two sequences and generates two user representations, p^o_h and n^o_h.

Fusion Net

There are two fusion nets, as shown in Figure 2. They are used to fuse the multiple user interest representations extracted by the two interest encoders into a comprehensive user representation. For different user-candidate news (u, r) pairs, the fusion net dynamically allocates different weights to the different interest representations. The two fusion nets have similar structures but different parameters. We take the one for the semantic interest encoder as an example to detail the fusion process. The fusion net first represents the (u, r) pair.
It should be unaffected by the two interest encoders and independently calculate the weights for their output representations. Hence, it uses the outputs of the title encoder to represent (u, r): f^t = [u^t_f; r^t_c], where u^t_f = Aggregate([P^t | N^t]). P^t, N^t and r^t_c are the title representations of the news in the user's positive and negative feedback sequences and of the candidate news, extracted by the title encoder. Then, this module leverages four different fully connected layers to calculate the weights for the four representations extracted by the semantic interest encoder (i.e., p^t_s, n^t_s, p^t_h and n^t_h). For example, the weight β^p_s of p^t_s is calculated by a fully connected layer applied to f^t, whose weight and bias parameters are learnable. The weights β^n_s, β^p_h and β^n_h of the representations n^t_s, p^t_h and n^t_h are calculated in the same way as in Eq. (9). Finally, the user content-view representation is calculated as the weighted combination u^t = β^p_s p^t_s + β^n_s n^t_s + β^p_h p^t_h + β^n_h n^t_h. The other fusion net is used to fuse the four interest representations extracted by the collaborative interest encoder and has a similar structure to the one above. The only difference is that it uses the outputs of the news ID embedding layer to represent the (u, r) pair.

Prediction

Following (Wu et al., 2019c), the clicking probability score ŷ is computed by the inner product of the user representation and the candidate news representation: ŷ = u^t⊤ r^t_c + u^o⊤ r^o_c, where u^t⊤ r^t_c stands for the score calculated from the title information and u^o⊤ r^o_c stands for the score calculated from the collaborative information.

Training

Following (Wu et al., 2019c), for each positive sample, we randomly select l_k negative samples from the same user to construct a (l_k + 1)-way classification task. Each output of DRPN for a classification sample has the form [ŷ^+, ŷ^−_1, ..., ŷ^−_{l_k}], where ŷ^+ denotes the clicking probability score of the positive sample and the rest denote the scores of the l_k negative samples. We define the training loss (to be minimized) as follows:

L = − Σ_{i ∈ P} log( exp(ŷ^+_i) / (exp(ŷ^+_i) + Σ_{j=1}^{l_k} exp(ŷ^−_{i,j})) ),

where P denotes the set of positive samples (see the code sketch after the implementation details below).

Computation Complexity

The time complexity of the title encoder is O(L²d + Ld²), where L is the title length and d is the embedding size. The time complexity of each interest encoder is O((l_p + l_n)d² + (l²_p + l²_n + (l_p + l_n)²)d), where l_p and l_n are the lengths of the positive and negative feedback sequences. The time complexity of the GNN is O(|G|d), where |G| denotes the number of edges in the collaborative graph. Hence, the overall time cost is O((l_p + l_n)(Ld² + L²d) + (l²_p + l²_n + (l_p + l_n)² + |G|)d). During the inference phase, we can compute the news representations in advance and the computation complexity becomes O((l²_p + l²_n + (l_p + l_n)²)d).

Dataset

There is no off-the-shelf dataset in which the user profile includes both positive and negative historical feedback sequences. Therefore, we use the MIND dataset (whose original user profiles only contain positive feedback) to re-build one for our experiments. The original MIND dataset contains user impression logs. An impression log records the news displayed to a user when visiting the news website homepage at a specific time, together with the click behaviors on that news list. We re-build the dataset based on MIND's impression logs as follows: (1) Select the impression logs of the first 5 days of the original training set. Then we add the news that a user has seen but did not click to his negative feedback sequence, and add the news he clicked to his positive feedback sequence.
In this manner, the user profile includes both positive and negative historical feedback sequences. (2) Training set: the impression logs of the 6th day of the original training set. (3) Validation set: the first 10% of the chronological impression logs of the original validation set. (4) Testing set: the last 90% of the chronological impression logs of the original validation set. The training, validation, and testing sets use the same user profiles built in Step (1). Since the user profiles are only built in Step (1), which precedes Steps (2)-(4), there is no label leakage into the validation and testing sets. Moreover, as in the original MIND dataset, 44.6% of the validation set users and 48.7% of the test set users do not appear in the re-built training set. Table 1 shows some statistics of the re-built dataset.

Baseline Approaches and Metrics

We evaluate the performance of DRPN by comparing it with several baseline methods, including: (1) LibFM (Rendle, 2012), a factorization machine (FM); (2) DeepFM (Guo et al., 2017), which combines the FM and neural networks; (3) DKN (Wang et al., 2018a), which uses a CNN to fuse the entity and word embeddings to learn news representations; (4) LSTUR, which uses a GRU to model short- and long-term interests from the click history; (5) NPA (Wu et al., 2019b), which introduces the attention mechanism to select important words and news; (6) DEERS (Zhao et al., 2018), which uses a GRU to encode positive and negative feedback sequences; (7) DFN (Xie et al., 2020), a factorization-machine-based network which uses transformers to encode both positive and negative feedback sequences to enhance performance; (8) GERL (Ge et al., 2020), which constructs a user-news graph to enhance the performance; (9) NAML (Wu et al., 2019a), which uses multi-view learning to aggregate different kinds of information to represent news; (10) NRMS (Wu et al., 2019c), which uses multi-head self-attention to learn news and user representations; (11) NAML + TCE, which incorporates the denoising training strategy TCE into NAML; (12) NRMS + TCE, which improves NRMS by using TCE.

Implementation Details

For DRPN, the representation dimension d is set to 300. We use GloVe.840B.300d (Pennington et al., 2014) as the pre-trained word embeddings. The maximum title length is set to 15. The lengths of the feedback sequences, l_p and l_n, are set to 30 and 60. Padding and truncation are used to keep the sequence and word numbers the same. The head number l_h in multi-head self-attention is set to 6. The hidden size d' in the gated aggregation layer is set to 200. The head number l'_h in the graph neural network is set to 2. The negative sampling ratio l_k is set to 4. When preparing data for the graph neural network, we only input the sub-graph that contains the nodes in the user feedback sequences. Moreover, we pick at most 5 neighbor nodes for each node r in the user feedback sequences, namely those most frequently co-clicked with r. We have also released the source code at https://github.com/chungdz/DRPN. For NRMS, DKN, LSTUR, NPA, and NAML, we use the official code and settings. For the others, we reimplement them and set their parameters based on the experimental settings reported in their papers. For fair comparisons, all methods only use the news ID, title, category and subcategory as features. The validation set was used for tuning hyperparameters and the final performance comparison was conducted on the test set.
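For concreteness, the training objective defined above can be sketched as follows; since the (l_k + 1)-way softmax has the positive sample in a fixed column, the loss reduces to a standard cross-entropy (the tensor layout is an assumption of this sketch):

```python
# Illustrative sketch of the negative-sampling training loss.
import torch
import torch.nn.functional as F

def drpn_loss(scores: torch.Tensor) -> torch.Tensor:
    """scores: (batch, 1 + l_k), with the positive sample's score in column 0.
    Minimizes the negative log softmax probability of the positive sample."""
    labels = torch.zeros(scores.size(0), dtype=torch.long, device=scores.device)
    return F.cross_entropy(scores, labels)

# usage: scores = torch.cat([y_pos.unsqueeze(1), y_neg], dim=1)
#        loss = drpn_loss(scores)   # y_neg: (batch, l_k) negative scores
```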
Performance Evaluation

The experimental results of all models are summarized in Table 2. We make the following observations from the results. First, our proposed DRPN and the other deep neural methods perform better than the feature-based methods (e.g., LibFM and DeepFM). This performance improvement should be attributed to better news representation methods. Second, among the deep neural methods, NRMS+TCE achieves the best performance by using two levels of multi-head self-attention to learn user representations and using TCE to denoise the negative samples. Third, among the two baselines that use both positive and negative feedback, DFN performs worse than DEERS. The reason may be that the original DFN depends on explicit feedback, but the experimental dataset only contains implicit feedback. Compared with NAML, DEERS has a competitive performance even though its news encoder is a simple pooling layer. This also proves the effectiveness of the negative implicit feedback.

Ablation Study

To highlight the individual contribution of each module, we use the following variants of DRPN to run an ablation study: (1) DRPN-D, which removes the denoising aggregator; (2) DRPN-G, which removes the knowledge graph part; (3) DRPN-DG, which removes the knowledge graph part and the denoising aggregator; (4) DRPN-N, which only uses the positive feedback; (5) DRPN-P, which only uses the negative feedback. The results are shown in Table 3. First, DRPN-D and DRPN-G perform worse than DRPN, proving the effectiveness of the designed denoising module and the collaborative graph. Second, the results of DRPN-N and DRPN-P indicate the effectiveness of negative and positive feedback, respectively. Third, even without deliberate design, by using both positive and negative implicit feedback, DRPN-DG can achieve competitive performance compared with the strongest baseline NRMS+TCE. This further proves the effectiveness of the negative feedback.

Case Study

To intuitively illustrate the effectiveness of the denoising aggregator, we sample a user and visualize his historical feedback attention weights in the denoising aggregator of the semantic interest encoder. The upper part of Figure 4 shows the attention weights and ranks the news in descending order of attention weight (the figure lists, for each news item, its weight, category, and title; the sampled titles range from sports stories on the Seahawks' win over the 49ers to finance items such as "Dean Foods files for bankruptcy"). We can find that in the positive feedback sequence, the top 4 news are about sports and weather and the last 4 news are about music, movies, finance, and lifestyle. Meanwhile, in the negative feedback sequence, the top 4 news are about finance, music, politics, and lifestyle, and the last 4 news are all about sports. This indicates that the denoising aggregator believes that the user likes sports, and dislikes topics such as finance, music, movies, politics, and lifestyle. As shown in the lower part of Figure 4, based on the predicted user preferences, we can see that DRPN prefers to recommend sports news for this user. Moreover, in the validation data, we can observe that this user clicks the top 2 recommended news and ignores the last 2 news.
It suggests that the user preference extracted by the denoising aggregator is consistent with the user's real behaviors. In summary, the visualization results indicate that the denoising module can better capture the user's real preferences by conducting the inter- and intra-comparisons between the positive and negative implicit feedback sequences.

Conclusion

In this paper, we propose a novel deep neural news recommendation model, DRPN. In DRPN, we design two aggregators to extract user interests from both positive and negative implicit feedback. The content-based aggregator focuses on the contents of the news representations, and the denoising aggregator aims to mitigate the noise impact commonly existing in implicit feedback. Besides, apart from the title information, DRPN also exploits collaborative information via the graph neural network to further improve the recommendation performance. Experimental results on a large-scale public dataset demonstrate the state-of-the-art performance of DRPN. The further study results also show the effectiveness of the denoising module.

A.1 Limitations

In this paper, to better learn the representations, our method refines the historical behaviors of the user in a denoising manner. There are still some potential directions to further improve our approach. First, since the user profiles in the experimental dataset only contain the historical behaviors and no basic information (e.g., gender and age), our current approach does not support these features, although they are widely used in practice. Once such features are available, we can convert them to embeddings and fuse them with the semantic interest representations obtained by the two interest encoders to better represent the user. Second, news generally contains many forms of features besides the title (such as the cover image and author information), and our approach will explore how to involve more features to better represent the news.

A.2 Potential Risks

Our approach is based on collaborative filtering, which may lead to all of the recommended news being similar to what the user has seen. This is a common problem faced by the majority of recommender systems. The concentration of a large amount of similar information may narrow users' perspectives and result in an imbalance in the personal information structure (Li and Wang, 2019). Our method can be combined with rule- or human-based strategies (such as popularity-based recommendation) to improve the recommendation diversity and alleviate this problem.
7,144.2
2022-04-09T00:00:00.000
[ "Computer Science" ]
Application of a Dy3Co0.6Cu0.4Hx Addition for Controlling the Microstructure and Magnetic Properties of Sintered Nd-Fe-B Magnets

The focus of new technologies on the formation of inhomogeneous distributions of heavy rare-earth metals (REMs) in hard magnetic Nd–Fe–B materials is of scientific importance to increase their functional properties, along with preserving existing sources of heavy REMs. This paper focused on the coercivity enhancement of Nd2Fe14B-based magnets by optimizing the microstructure, which includes the processes of grain boundary structuring via the application of a Dy3Co0.6Cu0.4Hx alloy added to the initial Nd–Fe–B-based powder mixtures in the course of their mechanical activation. We have studied the role of alloying elements in the formation of the phase composition, microstructure, fine structure of grains, and hysteretic properties of hard magnetic Nd(R)2Fe14B-based materials. It was shown that the Dy introduction via the two-component blending process (the hydrogenated Dy3Co0.6Cu0.4 compound is added to a powder mixture) resulted in the formation of a core-shell structure of the 2–14–1 phase grains. The efficient improvement of the coercivity of Nd(RE)–Fe–B magnets, with a slight sacrifice of remanence, was demonstrated.

Introduction

Researchers have made many attempts to reduce the heavy rare-earth (RE) consumption of high-coercivity sintered Nd-Fe-B magnets. Some progress has been achieved using Dy and/or Tb in various forms to realize approaches named grain boundary diffusion (GBD) [1-3] and grain boundary structuring (GBS) [4-8]. The application of binary mixtures allows one to improve the structure of the boundary phases and grain boundaries of the main magnetic phase and to realize the diffusion of a required component of the alloy directly through the boundaries. It has been demonstrated that by controlling the process time and temperature of GBD processes, the coercivity of the magnet can be greatly enhanced without sacrificing the remanence. It was shown in our previous studies that hydrogenated Tb and Dy additions allowed us to enhance the coercivity with a slight decrease in the remanence [9] and to increase the stability of the magnet properties during low-temperature annealing [10], respectively. The grain boundary restructuring, with rare-earth-rich low-melting compounds added to low-alloyed Nd-Fe-B-based compositions in the course of technological processing, was realized using (Pr,Nd)6Fe13Cu [4], Dy32.5Fe62Cu5.5 [5], Dy69Ni31 [6], Dy88Mn12 (wt.%) [11], Pr34.4Co65.6 (wt.%) [12], and Dy82.3Co17.7 (wt.%) [13], the latter being a low-melting eutectic composition. It was shown that the intrinsic coercivity evidently increased when using Dy82.3Co17.7, and the maximum intrinsic coercivity was achieved when its content was 2 wt.%. At the same time, the remanence and maximum energy product decreased slightly as the Dy82.3Co17.7 content increased. By adding a small amount of Dy82.3Co17.7, the coercivity improved greatly, and the irreversible loss decreased sharply. The increase in the Curie temperature of the magnets suggests that Co atoms have been incorporated into the 2:14:1 main phase. A well-developed core-shell structure is formed in these magnets.
The experiments with REM-M-H compounds (rare-earth metal-transition metal(s)-hydrogen), which are added at the stage of mechanical milling and alloying, were performed to realize the optimum microstructure, a nano-heterogeneous distribution of heavy REMs (Dy or Tb) within a grain, and an economically alloyed composition of magnets, which assumes, in particular, the distribution of heavy REMs within the near-grain-boundary areas. Such a heavy-REM distribution allows us to (1) locally increase the coercive force and decrease the probability of the formation of reverse domains at grain boundaries; (2) limit the substitution of heavy REMs for neodymium in the matrix phase and, thus, decrease the probability of decreasing the magnetization and remanence; and (3) decrease the amount of heavy REMs required to reach a given increase in the coercive force. The latter circumstance determines the possibility of developing the physico-chemical and technological foundations of a resource-saving technology, of decreasing the material costs and prices of products manufactured from the new alloys, and of substantially widening the functionality of the materials. Thus, by applying compositions with a heavy rare-earth metal, the outer region of the Nd2Fe14B matrix grains was enriched during the sintering process, and substitutes for Nd were used in the matrix grains to form the (Nd,Dy)2Fe14B core-shell phase. This paper focused on optimizing the microstructure of the near-stoichiometric Nd2Fe14B-based magnet, which included the grain boundary diffusion and grain boundary structuring processes via the application of a hydrogenated Dy3Co0.6Cu0.4Hx composition added to a powder mixture.

Experimental

The strip casting technique was used for the preparation of the base Nd-24.0, Pr-6.5, Dy-0.5, B-1.0, Al-0.2, Fe-balance alloy (wt.%). The strip-cast alloy was subsequently subjected to a hydrogen decrepitation process, which was realized during heating to 270 °C in a hydrogen flow at a pressure of 0.1 MPa and holding at this temperature for 1 h. The Dy3(Co1−xCux) alloy with x = 0.4 was produced by arc melting of the starting components (distilled Dy of 99.9% purity, Co of ≥99.25% purity, and oxygen-free Cu of 99.95% purity) in an argon atmosphere using a water-cooled copper bottom and a non-consumable tungsten electrode. The ingot was homogenized at 600 °C for 90 h and subjected to hydrogenation under the conditions used for the strip-cast alloy, namely, upon heating to 270 °C in a hydrogen flow at a pressure of 0.1 MPa and subsequent 1 h holding at this temperature (Regime 1, used to manufacture the magnet), and upon heating to 700 °C in a high-purity hydrogen atmosphere and holding at this temperature for 1 h in a glass Sieverts-type apparatus (Regime 2, used for investigations). In the case of heating at 700 °C, hydrogenation up to the Dy3Co0.6Cu0.4Hx composition with x = 8.26 was realized. It is expected that such a hydrogen content accords with the complete hydrogenation of dysprosium to a dysprosium hydride. The mixture of the hydrogen-decrepitated strip-cast Nd(RE)-Fe-B alloy and the Dy3Co0.6Cu0.4Hx alloy (Regime 1) was milled for 40 min to an average particle size of 3 µm using a vibratory mill and an isopropyl alcohol medium. After wet pressing of the pulp in a transverse magnetic field of 1500 kA/m, compacts were sintered at 1080 °C for 2 h and optimally heat treated (HT) at 500 °C for 2 h.
Then, samples of the magnet were subjected to low-temperature heat treatment in the temperature range of 400-900 °C, with subsequent quenching in N2.
X-Ray Diffraction Analysis
The phase composition of the Dy3Co0.6Cu0.4 and Dy3Co0.6Cu0.4Hx (x = 8.26) alloys was investigated by X-ray diffraction (XRD) analysis using an Ultima IV (Rigaku) diffractometer (equipped with a "D/teX" detector, CuKα radiation) and a Philips X'Pert 1 diffractometer, respectively; the scanning step was 0.001°. The X-ray diffraction patterns were processed, and the phase composition of the alloy was determined, using PowderCell software. Data on the crystal structure type, lattice parameters, and crystallographic positions of atoms in the Dy-Co, Dy-Cu, and H-Dy systems [14][15][16] were used to simulate theoretical XRD patterns.
A Quanta 450 FEG high-resolution field emission gun scanning electron microscope (FEI Company, Fremont, USA) equipped with an energy-dispersive spectroscopy (EDS, EDAX Inc., Mahwah, USA) microprobe was used to investigate the structure, chemical composition, and distribution of magnet components (X-ray mapping) of the addition and the magnet sample. The mean particle size was evaluated by means of a MasterSizer 3000 laser diffraction particle size analyzer (Malvern Panalytical Ltd, Malvern, United Kingdom). The hysteretic properties of the magnet sample were measured at room temperature (RT) using an automatic hysteresis graph system MH-50 (Walker Scientific Inc., Worcester, USA). The differential thermal analysis (DTA) and thermogravimetric analysis were performed under an argon atmosphere with a heating/cooling rate of 30 °C/min using a STA 449 F3 Jupiter installation (Netzsch Holding, Selb, Germany).
Figure 1 shows the X-ray diffraction pattern of the Dy3Co0.6Cu0.4 alloy subjected to prolonged annealing in an argon atmosphere. The reflections belong to the main Dy3(Co,Cu) phase and the Dy(Cu,Co) phase based on DyCu [14,15]. The analysis of the crystal structures of the found compounds and of the theoretical XRD patterns constructed for the simulated structures allowed us to determine variations in the lattice parameters of the Dy(Cu1−yCoy) and Dy3(Co1−xCux) phases alloyed with Co and Cu, respectively (see Table 1). As seen, the alloying of the binary compounds with Co and Cu did not change the crystal structure type of the compounds. In accordance with the binary phase diagrams [14,15], the phases present in the alloy are alloyed compositions of the binary compounds.
The phase composition of the alloy was also confirmed by the EDS microanalysis, see Figure 2 and Table 2. The microstructure consisted of Dy3(Co1−xCux) (x ≈ 0.4) dendrites (point 1 in Figure 2) and a Dy(Cu1−yCoy) + Dy3(Co0.6Cu0.4) mixture (point 2 in Figure 2) found in the interdendritic regions. The composition of the Dy(Cu1−yCoy) phase cannot be accurately determined by the EDS analysis because of its small size, since the surrounding matrix is analyzed along with this very small inclusion. However, the increased content of copper is evident in this mixed area.
As is shown in Table 1, the substitution of Cu for Co in Dy3(Co1−xCux) (with regard to the solubility of Cu and Co in Dy3Co and DyCu, respectively) changed the lattice parameters: the lattice parameters b and c increased, as the radius of the Cu atom (0.128 nm) is larger than that of the Co atom (0.125 nm), whereas the lattice parameter a decreased. This is likely due to the fact that copper atoms substitute for cobalt atoms only at certain sites.
We assumed that the solidification of the alloy occurs via the primary formation of the Dy3Co-based phase by peritectic reaction; the DyCu-based compound is the secondary phase.
According to the Co-Dy phase diagram, the solidification path may include the formation of the Dy12Co7-based phase by peritectic reaction.
Interaction of Dy3(Co,Cu) Alloy with Hydrogen
The saturation of the Dy3Co0.6Cu0.4 alloy with hydrogen led to the embrittlement of the alloy (i.e., a powder material suitable for further introduction of the composition into the Nd-Fe-B magnetic alloy powder during cooperative milling was obtained). Figure 3a shows the X-ray diffraction analysis data for the Dy3Co0.6Cu0.4 alloy subjected to hydrogenation (Regime 2). The hydrogenated composition contained DyH2 [17] and DyH3 [18] hydrides. Other reflections corresponded to the Dy3(Co,Cu) phase; it is likely that small quantities of the Dy3(Co,Cu) and Dy(Cu,Co) phases did not react with hydrogen. After hydrogenation, copper and cobalt may be present in the form of a fine mixture.
Figure 3b shows the X-ray diffraction analysis data of the Dy3Co0.6Cu0.4Hx alloy subjected to thermal dehydrogenation (upon heating during DTA). The sample was heated up to 700 °C (Figure 3). After heating, the presence of DyH2 and of small quantities of the Dy3(Co,Cu) and Dy(Cu,Co) phases was detected; DyH3 was absent. The presence of a fine mechanical mixture of Cu and Co is also possible. According to the DTA data (Figure 4), the decomposition of DyH3 started at a temperature of ~314 °C, which agreed with the literature data [16]. Between ~314 °C and ~690 °C, no thermal effects were identified. Above ~690 °C, in accordance with the Dy-H diagram [16], the solid solution of hydrogen in dysprosium decomposed to form dysprosium. However, the thermal effects at temperatures above 600 °C can correspond to the melting of one of the metallic phases of the alloy; nevertheless, the thermal effect at ~690 °C is accompanied by a significant weight loss.
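A quick consistency check on the hydride assignment (our arithmetic, not the authors'): if essentially all of the absorbed hydrogen is bound to dysprosium, the quoted composition Dy3Co0.6Cu0.4Hx with x = 8.26 corresponds to

```latex
\frac{x}{n_{\mathrm{Dy}}} = \frac{8.26}{3} \approx 2.75 \;\; \text{H atoms per Dy atom,}
```

which lies between the stoichiometries of DyH2 and DyH3 and thus agrees both with the statement that dysprosium is completely hydrogenated and with the DyH2 + DyH3 mixture detected by XRD (Figure 3a).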
The observed formation of Dy hydrides indicates that the hydrogenated Dy3Co0.6Cu0.4 alloy can be used as an addition in manufacturing sintered Nd-Fe-B magnets.
Microstructure and Electron Microprobe Analysis of Sintered NdFeB-Based Magnet
In accordance with the microprobe analysis data shown in Table 3, the microstructure of a magnet prepared from the powder mixture with 2 wt.% Dy3Co0.6Cu0.4Hx (Regime 1) was characterized by the presence of four structural components differing in chemical composition, see Figure 5 (the phases are indicated by red numbers).
The chemical composition of the matrix grains (Phase 1 in Figure 5a) was close to the stoichiometric (Nd,R)2Fe14B composition. The presence of Dy in the matrix alloy did not allow us to unambiguously conclude the formation of the core-shell structure, but the presence of cobalt in the 2:14:1 phase grains demonstrates the possibility of micro-alloying through the use of hydrogenated low-melting Co-containing compounds (the melting temperature was lower than the sintering temperature of Nd-Fe-B magnets). The Nd-rich phase (Phase 2 in Figure 5a) was characterized by a variable composition. Phase 3 (Figure 5a) corresponded to the oxide phases. In accordance with the literature data [19,20], they may be based on NdO, Nd2O3, or NdO2. The presence of a phase based on Fe-Nb in triple junctions (TJ) was observed (Phase 4, Figure 5b). This fact may be related to impurities in the industrially prepared alloy matrix.
The distribution of the rare-earth elements, Co, and Cu in the matrix grains and in the intergranular Nd-rich phases (Phase 2 in Figure 5a) of the sintered magnets prepared from the powder mixture with 2 wt.% of the Dy3Co0.6Cu0.4Hx addition was also investigated by X-ray mapping (see Figure 6). A nonuniform Dy distribution within the 2:14:1 phase grains could be observed. The depletion of the triple junctions in Co and their enrichment in Cu should be noted in the case of the Dy3Co0.6Cu0.4Hx addition. The presence of reactive Dy powder (originating from DyH2 that decomposed during sintering) ensures the diffusion of Dy atoms into the 2:14:1 phase lattice, since the atomic radius of Dy atoms is smaller than that of Nd atoms. This led to the ousting of Nd atoms to peripheral areas. The diffusion coefficient of Nd atoms is lower than that of Dy atoms [21]; thus, the diffusion of Dy is more significant. Such an inequality of the diffusion flows of atoms caused lattice stresses and resulted in the inhomogeneous Dy and Nd(Pr) distribution over the 2:14:1 phase grains. The core-shell structure (Dy-enriched shell and Dy-depleted core) is evident in Figure 6.
Figure 6. Co, Cu, and Dy mapping in 2:14:1 phase grains and triple junction phases of the Nd-Fe-B sintered magnet prepared from the powder mixture with 2 wt.% Dy3(Co,Cu). The red circle indicates the depletion of a 2:14:1 phase grain in Dy (i.e., the formation of the core-shell structure).
The other components of the Dy3Co0.6Cu0.4Hx composition (i.e., Cu and Co) are also useful additions for Nd-Fe-B-based magnets. It is evident from Figures 6 and 7 that Co tended to be incorporated into the 2:14:1 phase grains, while Cu enriched the triple junction phases. The role of Cu in the grain-boundary restructuring and the positive effects of Co on the coercivity of Nd-Fe-B magnets were reported in our previous work [22] and were also considered in [23][24][25][26][27][28][29][30][31].
Dependence of the Coercive Force (jHc) on the Heat Treatment Temperature
The magnetic properties (jHc) of the magnets (see Table 4 and Figure 8) prepared with the hydrogenated Dy3Co0.6Cu0.4 alloy were lower than those obtained with the DyH2 addition [31]. One of the causes is the incomplete hydrogenation of the alloy (see Figure 3, XRD data) and, therefore, the incomplete occurrence of the grain boundary diffusion of the available Dy; a small quantity of the Dy3(Co,Cu) phase in the Dy3Co0.6Cu0.4 alloy remained unreacted after hydrogenation. However, the value of Br in the case of Dy3Co0.6Cu0.4Hx was higher than that in the case of DyH2, which may be due to a difference in the Dy content in the chemical composition of the 2:14:1 phase. The difference in the rare-earth metal and Cu contents in the Nd-rich phases provided a lower value of Hk in the case of the magnets with 2 wt.% Dy3Co0.6Cu0.4Hx.
The hysteretic properties of the Nd-Fe-B magnet without the hydride addition, after optimal HT, are also shown in Table 4 for comparison. We assumed that the optimal HT for magnets of this type was in the range of 475 to 500 °C, as in the case of the magnets considered in [32][33][34]. Subsequent HT in this temperature range, performed after the optimal heat treatment (500 °C), will lead to an increase in the coercive force of the magnets with 2 wt.% Dy3Co0.6Cu0.4Hx.
Table 4. Hysteretic properties of sintered magnets prepared from the powder mixtures with 2 wt.% Dy3Co0.6Cu0.4Hx and DyH2 and optimally heat treated at 500 °C for 2 h; Br = remanence of magnetic flux density; jHc = coercivity of magnetic polarization; Hk = parameter adopted as a criterion of coercivity (i.e., the magnetic field determined at 0.9 × Br); (BH)max = maximum energy product; HT = heat treatment.
Figure 9 shows the variation of the coercive force (jHc) with changing heat treatment (HT) temperature. As can be seen from the data, after low-temperature HT in the range of 475-500 °C, jHc demonstrated an abrupt increase.
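Table 4 above defines Hk as the field at which the polarization has fallen to 0.9 × Br on the demagnetization curve. As an illustration of how this criterion can be extracted from measured J(H) data, here is a minimal sketch; the function and the curve values are hypothetical placeholders of ours, not data from this study:

```python
import numpy as np

def knee_field(H, J, Br, level=0.9):
    """Return Hk: the (interpolated) field at which the polarization J
    drops to `level` * Br on the demagnetization curve.

    H : applied field values (kA/m), increasing in magnitude
    J : corresponding polarization values (T), decreasing from ~Br
    """
    target = level * Br
    # np.interp needs an increasing x-axis, so interpolate H as a
    # function of the (decreasing) polarization by flipping both arrays.
    return np.interp(target, J[::-1], H[::-1])

# Hypothetical second-quadrant curve: J stays near Br, then collapses.
H = np.array([0, 200, 400, 600, 800, 1000, 1200])         # kA/m
J = np.array([1.30, 1.29, 1.28, 1.26, 1.17, 0.80, 0.20])  # T
print(f"Hk ~ {knee_field(H, J, Br=1.30):.0f} kA/m")
```

The closer Hk lies to jHc, the squarer the demagnetization loop, which is why Hk serves here as a quality criterion for the coercivity.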
Conclusions
The phase composition of the Dy3Co0.6Cu0.4 alloy in the initial homogenized and hydrogenated states was studied. The alloy in the homogenized state was multiphase and contained the Dy3(Co,Cu) and Dy(Cu,Co) phases. During the hydrogenation of the alloy, a disproportionation or hydrogenolysis process took place, which, regardless of the multiphase composition of the initial alloy, resulted in the formation of DyH2-3 hydride and a fine (Co + Cu) mixture with trace quantities of Dy3(Co,Cu) and Dy(Cu,Co).
The study of the sintered Nd(RE)-Fe-B magnet prepared from the strip-cast alloy showed that Dy introduction via the two-component blending method (the hydrogenated Dy3Co0.6Cu0.4 compound was added to the powder mixture) resulted in the formation of the core-shell structure of 2-14-1 phase grains. The efficient enhancement of the coercivity of Nd(RE)-Fe-B magnets, with a slight sacrifice of remanence, was demonstrated.
The positive effect of hydrogenated REM-alloy additions to the Nd-Fe-B powder mixture allows the possibility of introducing various components into the permanent magnets (heavy REMs, elements structuring the grain boundaries, and elements restricting magnet grain growth) at the preparation stage, rather than at the alloy-melting stage.
This gives the possibility of using a unified initial alloy for the manufacture of magnets with improved (high-coercivity or high-performance) magnetic characteristics.
Funding: This study was carried out within the project "Development of physico-chemical and engineering foundations for the initiation of innovative resources-economy technology of high-power and high-coercivity (Nd,R)-Fe-B (R = Pr, Tb, Dy, Ho) low-REM permanent magnets", projects No. LTARF18031 funded by the Ministry of Education, Youth and Sports of the Czech Republic and No. 14.616.21.0093 (unique identification number: RFMEFI61618x0093) funded by the Ministry of Science and Higher Education of the Russian Federation. The SEM/EDS investigation and particle size analysis were performed using the research infrastructure of the Regional Materials Science and Technology Centre, VSB-Technical University of Ostrava (Czech Republic), and the XRPD analysis, DTA/TG analysis, and the study of magnetic characteristics were carried out at the Center of Collaborative Access for Functional Nanomaterials and High-Purity Substances, Baikov Institute of Metallurgy and Materials Science, Russian Academy of Sciences. The MasterSizer 3000 particle size analyzer (Malvern) was acquired within the "Development of research and development basis of RMSTC" project (No. CZ.1.05/2.1.00/19.0387) within the frame of the operational program "Research and Development for Innovations" financed by structural funds and the state budget of the Czech Republic.
Conflicts of Interest: The authors declare no conflict of interest.
8,118.6
2019-12-01T00:00:00.000
[ "Materials Science" ]
Mycological air contamination level and biodiversity of airborne fungi isolated from the zoological garden air — preliminary research
Abstract
The aim of this paper was to evaluate the degree of mycological air contamination and determine the taxonomic diversity of airborne fungi residing in the air of 20 different animal facilities in a zoological garden. The concentrations of fungi in the zoological garden were measured using a MAS-100 air sampler. The collected microorganisms were identified using a combination of molecular and morphological methods. The fungal concentration ranged from 50 to 3.65 × 10⁴ CFU/m³ during the whole study. The quantitative analysis of the fungal aerosol showed that the obtained concentration values were lower than the recommended permissible limits (5 × 10⁴ CFU/m³ for fungi). Environmental factors, including temperature and relative humidity, exerted a varying effect on the presence and concentration of the isolated fungi. Relative humidity was shown to correlate positively with the concentration of fungal spores in the air of the facilities studied (rho = 0.57, p < 0.0021). In parallel, no significant correlation was established between temperature and the total fungal concentration (rho = −0.1, p < 0.2263). A total of 112 fungal strains belonging to 50 species and 10 genera were isolated. Penicillium was the dominant genus, accounting for 58.9% of the total fungal strains, followed by Aspergillus (25.89%), Cladosporium (3.57%), Talaromyces (3.57%), Mucor (1.78%), Schizophyllum (1.78%), Syncephalastrum (0.89%), Alternaria (0.89%), Absidia (0.89%), and Cunninghamella (0.89%). Our preliminary studies provide basic information about the fungal concentrations, as well as their biodiversity, in the zoological garden. Further studies are needed to generate additional data from long-term sampling in order to increase our understanding of the airborne fungal composition in the zoological garden.
Supplementary Information: The online version contains supplementary material available at 10.1007/s11356-024-33926-2.
Introduction
Zoological gardens are one of the most popular attractions visited by tourists worldwide. We should perceive them not only as places where a large diversity of animals is kept, but also as places where people can admire both native and exotic species (Nekolný and Fialová 2018). Both visitors and zoo workers are potentially exposed to bioaerosols, which contain bacteria, viruses, pollens, fungi, and mycotoxins (Michalska et al. 2021; Nageen et al. 2023). Among the microorganisms in bioaerosols, fungi are the most numerous group of biological particles (Szulc et al. 2020). These microorganisms are abundant in the environment and play important roles as symbionts, saprotrophs, or parasites. It is estimated that airborne fungi constitute nearly 25% of the global biomass and, therefore, play a significant role in air pollution affecting human health (almost 150 fungal taxa are associated with allergies) (Nageen et al. 2023).
National and global research about the contamination and the biological diversity of fungi in breeding facilities has been carried out mainly in large-scale poultry houses, barns, and piggeries (Plewa and Lonc 2011; Pusz et al. 2015; Douglas et al. 2018; Seifi et al. 2018; Lee and Kim 2021). Against this background, research on fungal biodiversity and the degree of air and environmental pollution in zoological gardens is scarce (Rivas et al. 2018; Cateau et al. 2022; Álvarez-Pérez et al. 2023; Debergh et al. 2023).
The majority of these cited works primarily concentrate on fungi of the genus Aspergillus, with a particular focus on Aspergillus fumigatus, its sensitivity to azole drugs, and the detection of fungi solely in the environment of a single animal group (penguins). They do not usually consider the presence of fungi in other animal habitats or other groups of molds. Moreover, the only national study on microbial air contamination was carried out in Krakow's Zoo (Grzyb and Lenart-Boroń 2019, 2020). That research focused entirely on determining the occurrence of individual bioaerosol (bacterial and fungal) fractions, without taking into account the biodiversity of fungi and their potential toxicity. The studies conducted in animal breeding places other than zoos often showed high concentrations of this group of microorganisms, which can be harmful to human and animal health. Among the isolated fungi, the most frequently identified were fungi belonging to the genera Aspergillus (A. fumigatus, A. niger, A. flavus), Penicillium (P. citrinum, P. viridicatum), Cladosporium spp., Alternaria spp., and Scopulariopsis spp. (Plewa and Lonc 2011; Pusz et al. 2015; Seifi et al. 2018). Their presence poses a risk of disease in people with bronchial asthma, EAA (extrinsic allergic alveolitis), allergic rhinitis, and ODTS (organic dust toxic syndrome), while in animals it causes pulmonary aspergillosis and mycotoxicosis (Szulc et al. 2020).
Therefore, research in zoological gardens is important because these facilities are not only places for breeding animals but may also be a source of dangerous fungi. This is important information, taking into account the specificity of zoos, which are not only a working environment but also among the most frequently visited tourist places. Thus, the aim of this paper was to evaluate the degree of mycological air contamination and determine the taxonomic diversity of airborne fungi residing in the air of different animal facilities using a combination of microscopic and genetic analyses.
Study area
The study was conducted at the Zoological Garden in Wroclaw. This zoo has the largest collection of animals in Poland, with almost 1100 different species in an area covering 33 hectares (https://zoo.wroclaw.pl).
The measurements were carried out inside twenty facilities: two sites of the Monkey House, four sites of the Apes Pavilion (Pan troglodytes), the Papio Pavilion (Papio anubis), three cages with Barbary macaques (Macaca sylvanus), five sites in the Kongo Pavilion (with crocodiles, manatees, and numerous species of birds), and five sites in East Africa (with Hippopotamus amphibius, Orycteropus afer, and Heterocephalus glaber). The choice of study sites was guided by their convenient accessibility.
Sampling methods
Air samples were taken in October 2022 and in January, April, and June 2023 using a MAS-100 air sampler (Merck KGaA, Darmstadt, Germany). Three parallel samples (two incubated at 27 °C, one at 37 °C) were collected at the center point of each location at a height of 1.5 m above the ground and struck directly onto the surface of Sabouraud agar. The plates were incubated for about 7 days at 25 °C (two samples) and 37 °C (one sample). The number of fungal colonies was expressed as total colony-forming units (CFU/m³) and revised by the positive-hole correction equation Pr = N[1/N + 1/(N − 1) + … + 1/(N − r + 1)], where Pr, r, and N stand for the revised colony count, the number of viable colonies, and the number of sieve pores, respectively (Feller 1968). The concentration of airborne microorganisms (CFU/m³) was calculated according to the following formula: X = (a × 1000)/V, where "a" is the number of fungal colonies and "V" is the volume of air sampled (in litres; the factor of 1000 converts the count to 1 m³ of air) (Michalska et al. 2021).
Characterization of meteorological conditions
The air temperature and relative humidity were measured during each sampling session using a thermo-hygrometer (HI9565, HANNA, Poland). The air temperature ranged from 11.2 to 22.7 °C (autumn season), 17.1 to 24 °C (winter season), 18.1 to 24.6 °C (spring season), and 19.4 to 25.6 °C (summer season). Relative humidity was between 27.9 and 89.4% in the autumn season, from 30.8 to 95.5% in the winter season, from 35.5 to 90.8% in the spring season, and from 54.7 to 80.1% in the summer season.
Identification of fungi
Airborne fungi were identified based on their macro- and microscopic features using diagnostic keys (Samson et al. 2014, 2019; Yilmaz et al. 2014; Visagie et al. 2014). Then, the cultured fungi were subjected to molecular identification to confirm species identity. DNA extraction from the selected and dominant fungi was performed using the Tissue DNA Purification Kit (EURx, Gdańsk, Poland) according to the manufacturer's instructions. For the diagnostics of the airborne fungi, we used both morphological criteria and molecular analyses based mainly on the sequence of the internal transcribed spacer (ITS). In the case of several closely related strains, we sequenced β-tubulin and calmodulin fragments. Fragments of the ITS were amplified according to the methods described by White et al. (1990), using the primer pair ITS1 (5′-TCC GTA GGT GAA CCT GCG G-3′) and ITS4 (5′-TCC TCC GCT TAT TGA TAT GC-3′). PCR reactions were performed in a T100 Thermal Cycler (Bio-Rad, Warsaw, Poland) in a total volume of 12.5 µL. Each PCR reaction contained 6.25 μL of 2× PCR Mix Plus (A&A Biotechnology, Gdansk, Poland), 0.625 μL of each primer (10 mM), 4 μL of DNA template, and 1 μL of ddH2O. PCR conditions included an initial denaturation step of 95 °C (30 s); 34 cycles of 95 °C (45 s), 55 °C (60 s), and 72 °C (60 s); and a final elongation at 72 °C (3 min).
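To make the two air-sampling formulas above concrete, here is a minimal sketch of the Feller positive-hole correction and the CFU/m³ conversion. The colony count and sampled volume are invented for illustration, and the 400-hole plate is our assumption for a MAS-100-type impactor (the actual pore number should be taken from the sampler documentation):

```python
def positive_hole_correction(r, N=400):
    """Feller (1968) positive-hole correction for sieve impactors:
    Pr = N * (1/N + 1/(N-1) + ... + 1/(N-r+1)),
    where r is the number of viable colonies counted and N is the
    number of sieve pores (N=400 is an assumed value here).
    """
    if r > N:
        raise ValueError("counted colonies cannot exceed the pore number")
    return N * sum(1.0 / (N - k) for k in range(r))

def cfu_per_m3(colonies, volume_litres):
    """X = (a * 1000) / V, with V expressed in litres of sampled air."""
    return colonies * 1000.0 / volume_litres

# Hypothetical plate: 85 colonies counted from a 100 L air sample.
pr = positive_hole_correction(85)
print(f"corrected count: {pr:.1f} colonies")
print(f"concentration:   {cfu_per_m3(pr, 100):.0f} CFU/m^3")
```

The correction matters most on crowded plates, where several spores are likely to have passed through the same pore and grown as a single colony.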
All positive samples obtained in the PCR reaction were purified and sequenced (Macrogen, Amsterdam, the Netherlands) using the same primer pairs as in the PCR reaction. The obtained sequences were manually edited using DNA Baser Sequence Assembly software (Heracle BioSoft SRL, Romania), and consensus sequences were aligned and compared with sequences deposited in the National Center for Biotechnology Information's GenBank (NCBI, Bethesda, MD, USA) using the BLAST algorithm (http://www.ncbi.nlm.nih.gov/). The most representative sequences that form the basis of the phylogenetic tree have been included in the GenBank database (Table 1). Phylogenetic analysis was performed using the Maximum Likelihood method with the MEGA 7.0 software; bootstrapping was performed using 1000 replicates.
Statistical analysis
The data underwent statistical analysis using R version 4.3.1 in RStudio 2023.09.0. Linear regression models and Spearman correlations were computed to evaluate the correlation between fungal abundance and meteorological parameters (humidity and temperature). The Kruskal-Wallis test and Dunn's post hoc test for multiple comparisons were utilized to investigate the influence of qualitative factors, such as location and season, on fungal abundance. Results with p < 0.05 were considered statistically significant. Margalef's Index and Jaccard's Similarity Index were employed to assess the diversity of fungal species at specific research sites. The Jaccard Index was calculated pairwise, and then the average value for each site was obtained.
Fungal concentrations in air samples
A total of 240 air samples were collected from 20 locations in the zoological garden. Fungi were detected in 234 (97.5%) samples. The concentrations of the countable fungal aerosol are presented in Table 2 and ranged from 5.0 × 10¹ to 3.65 × 10⁴ CFU/m³ (for a temperature of 27 °C). The lowest concentration of fungi, 5 × 10¹ CFU/m³, was recorded in the Papio Pavilion (range 5 × 10¹-2.5 × 10² CFU/m³). It is worth noticing that this concentration was lower than those recorded at the other sites because the air sampling was done just before the morning cleaning of the room. The highest concentration of fungi was recorded in the Apes Pavilion and ranged between 2.02 × 10⁴ and 3.65 × 10⁴ CFU/m³. Similar concentrations were found in the Papio Pavilion (range 1.31 × 10⁴-1.32 × 10² CFU/m³) and the Kongo Pavilion (1.38 × 10⁴-2.61 × 10⁴ CFU/m³). Statistical analysis performed using the Kruskal-Wallis test showed statistically significant differences in the fungal CFU/m³ values between the sampling locations (Fig. 1).
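The statistical workflow described above (Spearman correlations for the meteorological parameters, Kruskal-Wallis tests across locations and seasons) maps directly onto standard SciPy calls. A minimal sketch follows; the data are randomly generated placeholders, not the study's measurements, and Dunn's post hoc test would additionally require the separate scikit-posthocs package:

```python
import numpy as np
from scipy.stats import spearmanr, kruskal

rng = np.random.default_rng(0)

# Placeholder data: fungal concentrations (CFU/m^3) loosely tied to
# relative humidity (%), mimicking the kind of table analyzed here.
humidity = rng.uniform(30, 95, size=40)
cfu = 200 * humidity + rng.normal(0, 2000, size=40)

rho, p = spearmanr(humidity, cfu)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# Hypothetical seasonal groups for the Kruskal-Wallis test.
spring, summer, autumn = cfu[:13], cfu[13:26], cfu[26:]
H, p_kw = kruskal(spring, summer, autumn)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p_kw:.4f}")
# Pairwise Dunn comparisons (as used for the seasonal contrasts)
# could then be run with scikit-posthocs on the same groups.
```

Spearman's rank correlation is the natural choice here because bioaerosol counts are strongly non-normal, which is also why the seasonal comparison uses a rank-based rather than an ANOVA-type test.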
There have been many studies on microbial and mycological air contamination in animal production premises (Plewa and Lonc 2011; Pusz et al. 2015; Matković et al. 2007). The air quality in some farming objects has been described by Radon et al. (2002). According to those authors, the total number of fungi in poultry houses and in pig farms was higher than the concentrations of airborne fungi noted in our research and ranged from 8.3 × 10⁴ to 1.1 × 10⁹ (Radon et al. 2002). Such large differences in the level of microbial contamination between the zoological garden and other breeding premises are most likely due to different methods of air sampling. The sampler used in our investigation has some limitations. This device is characterized by a sampling efficiency described by the cut-off size parameter D50, at the level of 1.7 μm (Górny et al. 2023). All particles above that size would be collected, and almost all particles smaller than that size may not be captured by this impactor. Airborne fungal spores typically have an aerodynamic diameter (dae) of 2 to 4 μm, and some species release fragments of 0.3 to 1.3 μm (Madsen et al. 2016). That is why the number of airborne fungi in our research may have been underestimated in relation to the actual concentration in the air. The results of the present study suggest that it is necessary to use the same instruments and methods to assess microbial contamination. The lack of standardization of air sampling methods may lead to incorrect conclusions.
The concentration of airborne fungi varies across different seasons and is dependent on environmental factors such as temperature or relative humidity (Yuan et al. 2022). The important factors that increase the spread of fungi in the air and sustain their growth are temperature and humidity (Pflieger et al. 2020). Therefore, our studies were conducted in a seasonal cycle (spring, summer, autumn, winter), and the environmental conditions (temperature and relative air humidity) were measured during each sampling season using a thermo-hygrometer (Table 3). The lowest recorded temperature (11 °C) was noted in autumn in the room of the Papio Pavilion, while the highest temperatures were noted in summer in the Kongo Pavilion (25.5-25.9 °C). Similarly, the highest humidity was observed in winter in East Africa (95.5%) and in the Kongo Pavilion. In general, the relative humidity during the studies varied from 29.3% (in the Apes Pavilion) to 95.5% (in the Kongo Pavilion). The environmental parameters (temperature and relative humidity) were correlated with the total number of CFU/m³ noted at all locations, based on Spearman correlation analysis. Relative humidity correlated positively with the total fungal concentration (rho = 0.57, p < 0.0021), while temperature showed no significant correlation with the total fungal concentration at all locations (rho = −0.1, p < 0.2263) (Figs. 2 and 3). The concentration of airborne fungi observed at each location varied significantly across the seasons. The highest concentration was recorded in the autumn, reaching a level of 3.65 × 10⁵ CFU/m³, while the lowest levels were observed in the spring and summer, with values of 1.5 × 10⁴ and 1.32 × 10⁴ CFU/m³, respectively. The lowest concentration of fungi (2.5 × 10² CFU/m³) was observed in the winter season. It has been demonstrated that an optimal temperature and high relative humidity can contribute to a sudden increase in the concentration of airborne fungi. The median fungal concentrations varied considerably by season, with the greatest variation noted between summer and autumn (p < 0.0018). A strong variation in airborne fungal concentrations was also noticed between spring and summer (p < 0.0056), while no statistically significant differences were observed between the other seasons (Fig. 4).
Fig. 2 The correlation between temperature and fungal conidia concentration in the air
Fig. 3 The correlation between relative humidity and fungal conidia concentration in the air
Interpretation of the results of microbiological air contamination is challenging because of the lack of acceptable limits for microbiological agents. The most commonly used measure of exposure to microbial air pollution is the degree of such pollution, expressed in terms of the number of colony-forming units (CFU) in 1 m³ of air (Górny et al. 2016).
The threshold limit values (TLVs) were used in the assessment of mycological air contamination. For microbiological agents in the air of occupational and non-occupational environments, TLVs were proposed by the Expert Group on Biological Agents at the Polish Interdepartmental Commission for Maximum Admissible Concentrations and Intensities for Agents Harmful to Health in the Working Environment (Górny et al. 2016). These values (5 × 10⁴ CFU/m³ for fungi) were developed as a result of volumetric measurements of environmental bioaerosols. Based on the TLVs proposed by the mentioned Expert Group, we can conclude that the quantitative analysis of the fungal aerosol showed lower concentration values than the recommended permissible limits. Nevertheless, this is not equivalent to the absence of microbial contamination in the facilities that were studied, particularly as some of the values were close to the TLVs. The US government agency, the Occupational Safety and Health Administration (OSHA), suggests that a value higher than 1.0 × 10³ CFU/m³ indoors may be an indicator of microbial contamination. However, a determination of bioaerosol biodiversity is needed to confirm a health hazard, as certain species may pose a greater health concern than others (OSHA 2015; Rivas et al. 2018).
Biodiversity of airborne fungi
The total number of airborne fungi varied by location (Table 4) and season (Table 5). Margalef's Index was used to analyze the biodiversity of airborne fungi throughout the study period and in the different locations. The highest value of Margalef's Index (0.93) was recorded at the Kongo Pavilion at location no. 18, and the lowest (0.00) was observed at the East Africa Pavilion at location no. 12. Concerning the seasons, the highest diversity of airborne fungi was recorded in the winter (34 strains), followed by the autumn (29) and spring (27), whereas the lowest number of strains was noted in the summer (21). The average Jaccard Index for all sites was quite low, at 0.135. The lowest average Jaccard Index, calculated pairwise for each research site, was for the Kongo Pavilion (0.12). This indicates that the Kongo Pavilion had the highest number of unique isolated species compared to the other research sites. Direct comparison showed the lowest species similarity between the Kongo Pavilion and the Apes Pavilion (Jaccard Index = 0.06) and between the Kongo Pavilion and the Papio Pavilion (0.08). The highest similarity was found between the Kongo Pavilion and the East Africa Pavilion (0.22).
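Both diversity measures used in this section have simple closed forms: Margalef's index D = (S − 1)/ln(n) for S species among n isolates, and the Jaccard index J = |A ∩ B| / |A ∪ B| for the species sets of two sites. A minimal sketch follows, with hypothetical species lists rather than the study's data:

```python
import math

def margalef(species_counts):
    """Margalef's richness index: (S - 1) / ln(n),
    where S is the number of species and n the total number of isolates."""
    S = len(species_counts)
    n = sum(species_counts)
    return (S - 1) / math.log(n)

def jaccard(site_a, site_b):
    """Jaccard similarity of two sites' species sets."""
    a, b = set(site_a), set(site_b)
    return len(a & b) / len(a | b)

# Hypothetical species lists for two pavilions (illustration only).
kongo = ["P. chrysogenum", "A. fumigatus", "C. cladosporioides", "Mucor sp."]
apes  = ["P. chrysogenum", "A. niger", "A. flavus"]

print(f"Jaccard(Kongo, Apes) = {jaccard(kongo, apes):.2f}")  # one shared species
print(f"Margalef (4 species, 8 isolates) = {margalef([3, 2, 2, 1]):.2f}")
```

A low pairwise Jaccard value, as reported for the Kongo Pavilion, simply means that few species names appear in both sites' lists, i.e., that site harbors a comparatively unique community.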
We used both morphological and molecular identification methods based on sequencing regions of the ITS, β-tubulin, and calmodulin. Using these methods, a total of 112 fungal strains belonging to 10 genera and 50 species were isolated during the whole 1-year-long study. The molecular identification corresponded well with the morphological diagnosis and proved to be a good tool for diagnosing strains that were not identified microscopically. By comparing the partial sequences of the ITS region of our own isolates with the sequences of other isolates available in GenBank, the results showed that the similarity percentage of nucleotide sequences ranged between 95 and 100% (Supplement 1). In most cases (87%), the sequence data allowed identification at the species level. The molecular analysis allowed us to identify 97 out of the 112 fungal strains. The diagnosis of thirteen strains belonging to the Penicillium genus, and of one strain each of the Cladosporium and Absidia genera, failed. Analysis of the ITS region revealed that they belong to closely related species. Even the sequencing of the less conserved regions (β-tubulin and calmodulin) did not allow for the identification of these fungi at the species level.
The diversity of the identified fungi varied between different locations and seasons. Unfortunately, there is a lack of research on the biodiversity of airborne fungi in zoological gardens. Therefore, our results could be compared to the studies obtained for livestock premises. For instance, in Chinese poultry houses, researchers observed a different genus composition than in our studies: they isolated mainly the Trichosporon, Candida, Aspergillus, Cladosporium, and Alternaria genera (Chen et al. 2021). In other studies, Aspergillus, Scopulariopsis, Penicillium, and Cladosporium were detected in the air of a swine house (Kumari et al. 2016). Results comparable to ours have been reported by Matković et al. (2009); these Croatian authors detected the Aspergillus and Penicillium genera in dwellings for dairy cows and laying hens.
Some of the fungi isolated in our research may be associated with allergic respiratory diseases (especially in people with a weakened immune system). Filamentous fungi can be a source of polysaccharides such as the β(1→3)-glucans. This compound may cause inflammatory airway reactions and also affect the immune system. There is increasing evidence that β(1→3)-glucans cause non-specific inflammatory reactions and can be responsible for bioaerosol-induced respiratory symptoms observed in both indoor and occupational environments (Pflieger et al. 2020). Other fungi are important producers of mycotoxins, which have been suggested as one of the major possible causes of health problems. For example, several detected fungi belonging to Aspergillus sections Circumdati (A. steynii and A. westerdijkiae), Flavi (A. flavus), and Nigri (A. carbonarius and A. niger) are well-known producers of ochratoxins, mycotoxins that are teratogenic, carcinogenic, immunosuppressive, and nephrotoxic in animals (Kagot et al. 2019). Other species, such as A. versicolor and A. flavus, produce sterigmatocystin and aflatoxins, respectively; aflatoxins are the most potent natural carcinogens (Lamoth 2016). Moreover, A. flavus, identified in our study, is recognized as an opportunistic animal and human pathogen, while A. fumigatus is the main causative agent of pulmonary invasive aspergillosis (Al-Shaarani et al. 2023).
The most abundant Penicillium genus isolated in our studies is listed among the most common allergenic fungal taxa and has been linked to asthma, whereas Schizophyllum commune, which accounted for 1.78%, may be the cause of ABPM (allergic bronchopulmonary mycosis) and of allergy-related bronchopulmonary infections and sinusitis (Oguma et al. 2023).
Conclusion
This is the first research on culturable airborne fungi at a zoological garden in Poland. The quantitative analysis of the fungal aerosol showed that the obtained concentration values were lower than the recommended permissible limits, but many of the detected fungi can be harmful to human and animal health. That is why our study emphasizes the necessity of air quality monitoring. Similar studies to ours are rarely conducted. Therefore, our preliminary research provides basic information about the fungal concentrations and their biodiversity in these touristic facilities. However, further long-term quantitative, qualitative, and mycotoxicological research is needed to fully understand the airborne fungal composition in the zoological garden and its potentially negative impact on human and animal health. Moreover, a quantitative and qualitative assessment of mycotoxins in the air is of great importance from the occupational exposure point of view. In addition, our preliminary results, along with planned long-term studies of mycological air contamination in the zoo, may contribute in the future to the development of microbiological air quality standards by relevant institutions.
Fig. 6 Maximum likelihood (ML) phylogenetic tree (Jukes-Cantor + G) for Aspergillus sp. based on sequences of the ITS gene fragment. Bootstrap values are shown above the branches. Sequences from this study are marked with solid circles. The dendrogram was constructed with 1000 replications using MEGA software.
Table 1 Molecular identification (using the ITS region) of fungi isolated from the air in the zoological garden
Table 2
Due to the lack of other studies on the mycological quality of air in zoos, the results of our own research can be compared to other breeding facilities, e.g., poultry houses and cowsheds. Matković et al. (2007) observed in the barn an average concentration of fungi ranging from 5.23 × 10⁴ CFU/m³ (at noon) to 8.35 × 10⁴ CFU/m³ (in the morning).
Table 5 Number of fungal species isolated from the air in the seasons
5,168.8
2024-06-18T00:00:00.000
[ "Environmental Science", "Biology" ]
Evaluation of PLAGA/n-HA Composite Scaffold Bioactivity in vitro
Qing Lv1,2, Xiaohua Yu1,4, Meng Deng1,2,4, Lakshmi S. Nair1-5 and Cato T. Laurencin1-6*
1Institute for Regenerative Engineering, University of Connecticut Health Center, School of Medicine, Farmington, CT 06030, USA
2The Raymond and Beverly Sackler Center for Biomedical, Biological, Physical and Engineering Sciences, University of Connecticut Health Center, School of Medicine, Farmington, CT 06030, USA
3Department of Biomedical Engineering, University of Connecticut, School of Engineering, Storrs, CT 06268, USA
4Department of Orthopaedic Surgery, University of Connecticut Health Center, School of Medicine, Farmington, CT 06030, USA
5Department of Chemical and Biomolecular Engineering, University of Connecticut, School of Engineering, Storrs, CT 06268, USA
6Department of Materials Science and Engineering, University of Connecticut, School of Engineering, Storrs, CT 06268, USA
Introduction
Scaffolds, as a key component in bone tissue engineering, should meet a series of criteria, such as an appropriate porous structure, bioactivity, an appropriate degradation rate with non-toxic degradation products, and cytocompatibility [1][2][3]. Towards the design and development of an ideal scaffold for optimal regenerative performance, a variety of scaffold fabrication techniques, such as solvent casting/salt leaching, phase separation, and freeze-drying, have been employed to generate scaffolds with distinct properties [4][5][6]. Although all these techniques have demonstrated their potential for bone tissue engineering applications, a microsphere-based approach to tissue engineering initially developed by Laurencin et al. appears to be extraordinarily attractive [7]. Scaffolds generated by this method can provide a highly interconnected 3D structure as well as sufficient mechanical properties, close to those of functional trabecular bone [8,9]. The primary choice of material for this technique has so far been poly(lactic acid-glycolic acid) (PLAGA), a biodegradable polymer which has been most widely investigated and is FDA approved for various biomedical applications [10,11]. While microsphere-based scaffolds made of PLAGA have been shown to support basic cellular activity in/on the scaffolds, such as cell attachment and proliferation [12,13], their bioactivity in terms of guiding cell differentiation towards the osteogenic lineage is still considered poor. In order to improve the biological performance of these scaffolds, the incorporation of bioactive components which provide cells with inductive cues is highly desirable in the context of bone tissue engineering.
Hydroxyapatite has been well established as an excellent osteoconductive material due to its similarity to the inorganic component of natural bone [14]. It has been widely used as an additive to form composite materials with both synthetic and natural polymers [15,16]. Recently, nano-hydroxyapatite (n-HA) has attracted significant attention, as the exceptionally large surface area of these particles provides extraordinary performance as a second phase added into the scaffolds [17,18]. We hypothesized that the incorporation of n-HA into sintered PLAGA microsphere-based scaffolds could greatly improve the bioactivity of the scaffold. Previously, PLAGA/n-HA composite scaffolds were successfully fabricated in our laboratory [19,20]. In this study, we focused on the evaluation of the bioactivity of these composite scaffolds compared to PLAGA scaffolds.
Both acellular mineralization and cell-based mineralization assays were employed to fully characterize the biological performance of the scaffolds in vitro. Materials and Methods Materials PLAGA (85:15 lactic acid to glycolic acid ratio, MW = 120 kDa) was obtained from Lakeshore Biomaterials (Wilmington, OH). n-HA particles with an average diameter of around 100 nm were purchased from Berkeley Advanced Biomaterials (San Leandro, CA). All other reagents were purchased from Sigma-Aldrich (St. Louis, MO) unless otherwise stated. Composite microsphere fabrication PLAGA/n-HA composite microspheres were prepared by a modified emulsion and evaporation method described previously [21]. Briefly, PLAGA was dissolved in dichloromethane to make a 20% (w/v) solution. n-HA particles were dispersed into this solution at a PLAGA/n-HA ratio of 4:1 (w/w) and vortexed overnight to form a homogeneous mixture. The composite microspheres were then formed by gradually pouring the PLAGA/n-HA mixture into a 1% (w/v) poly(vinyl alcohol) (PVA) solution stirring at 250 rpm (Figure 1). The emulsion system was kept stirring for 24 h to completely evaporate the organic solvent. The resultant microspheres were collected after filtering through ashless filter paper, washed with DDI water, and air dried. The dried microspheres were passed through standard sieves and stored in a desiccator for future use. PLAGA microspheres without n-HA were also prepared in the same way to serve as the control group. Composite scaffold fabrication and degradation characterization Cylindrical composite scaffolds (4.0 mm × 2.5 mm) were fabricated by a microsphere sintering method described previously. PLAGA/n-HA microspheres were packed into a stainless steel mold and sintered at 90ºC for 3 h. The PLAGA sintered microsphere control group was prepared by sintering PLAGA microspheres at 85ºC for 3 h. In vitro mineralization on PLAGA/n-HA composite scaffolds Simulated body fluid (SBF) was prepared as reported previously [22]. The reagents were added to distilled and de-ionized (DDI) water in the following order and concentrations: 142 mM NaCl, 5 mM KCl, 1.5 mM MgCl2, 0.5 mM MgSO4, 150 mM NaHCO3, 20 mM Tris, 2.5 mM CaCl2, and 1.0 mM Na2HPO4. The pH of the SBF was adjusted to 7.40 with HCl/NaOH at 37ºC. Both the composite and plain PLAGA scaffolds were incubated in SBF for 28 days. Control scaffolds for both groups were incubated in DDI water. The solutions were changed every other day. After incubation for 28 days, samples (n=3) were removed from the solutions, washed with DDI water, and air-dried for further analysis. The surface morphology and calcium deposition were examined by scanning electron microscopy (SEM). Calcium deposition was characterized by Alizarin red staining. In brief, the scaffolds were removed from solution, washed with DDI water 3 times, and transferred into new well plates. Scaffolds were stained with 10% Alizarin red (Sigma, St. Louis, MO) solution for 10 minutes. The scaffolds were then washed with DDI water 5 or more times until no further color could be washed off. Pictures of the stained scaffolds were taken with a stereo microscope (Discovery V12, Zeiss). Next, 1.0 mL of 10% cetylpyridinium chloride (CPC) (Sigma-Aldrich, St. Louis, MO) solution was added to each scaffold to dissolve the stain. The optical density of the solution was read at 550 nm with a TECAN SpectroFluo Plus reader (Boston, MA). The absorbance values of scaffolds in SBF were reported after subtracting the readings of the control groups in DDI water.
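For readers reproducing the SBF recipe above, the listed molar concentrations can be converted into per-litre reagent masses with a short script. The following is a minimal illustrative sketch, not part of the original protocol; the formula weights assume anhydrous salts, so hydrated reagents (e.g., MgCl2·6H2O) would need adjusted values.

# Convert the stated SBF recipe (mM) to grams per litre, assuming anhydrous salts.
recipe = {
    "NaCl": (142.0, 58.44),
    "KCl": (5.0, 74.55),
    "MgCl2": (1.5, 95.21),
    "MgSO4": (0.5, 120.37),
    "NaHCO3": (150.0, 84.01),  # concentration as stated in the text
    "Tris": (20.0, 121.14),
    "CaCl2": (2.5, 110.98),
    "Na2HPO4": (1.0, 141.96),
}
for salt, (mmolar, mol_weight) in recipe.items():
    grams_per_litre = mmolar / 1000.0 * mol_weight  # mol/L times g/mol
    print(f"{salt}: {grams_per_litre:.3f} g/L")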
RMSCs culture Rabbit mesenchymal stem cells (RMSCs) were harvested from New Zealand white rabbits under a protocol approved by the Animal Care and Use Committee, University of Virginia. Briefly, young male New Zealand White (NZW) rabbits (average 1.67 kg) were euthanized with an overdose of sodium pentobarbital. Rabbit bone marrow was then isolated from the tibial and femoral bones and re-suspended in 10 mL Dulbecco's Modified Eagle's Medium (DMEM). The bone marrow suspension was layered on 10 mL Ficoll-Paque Plus (Amersham Biosciences) reagent in a 50 mL conical tube and centrifuged at 1500 rcf for 35 min. The mononuclear (low density) layer was collected and washed twice with PBS. The mononuclear cells were plated in flasks in basal DMEM with 10% FBS and 1% penicillin/streptomycin. Non-adherent cells were removed when the medium was changed after 3 days. The culture medium was then changed 3 times a week. The RMSCs were used up to passage 7. Cell culture on PLAGA/n-HA composite scaffolds Scaffolds were soaked in 70% ethanol for 15 minutes, washed with sterile water twice for 15 minutes each time, and further sterilized by UV irradiation for 30 minutes on each side. RMSCs were seeded onto the scaffolds at a density of 5 × 10⁴ cells per scaffold in 96-well plates in basal media. After 24 hours, the scaffolds were transferred into DMEM supplemented with 0.1 µM dexamethasone, 50 µg/mL ascorbic acid and 10 mM β-glycerophosphate. The media were changed every 3 days, and the cultures were maintained for 21 days. On days 7, 14, and 21, scaffolds were taken out for further characterization. Cell proliferation was measured with the CellTiter 96™ Aqueous One Solution Cell Proliferation Assay (MTS assay) (Promega, Madison, USA). At predetermined time points, the scaffolds were taken out, washed with PBS, and transferred into a 24-well plate. The assay was performed by adding 200 μL of MTS reagent into each well containing one scaffold and 1 mL of cell culture medium. After a 2-hour incubation at 37°C, 250 μL of 10% sodium dodecyl sulfate (SDS) solution was added to stop the reaction. The mixture was diluted 5 times and then read at 490 nm with a Tecan SpectroFluo Plus reader (Boston, MA). Alkaline phosphatase (ALP) activity was measured as an early marker of the osteoblastic phenotype. At each time point, scaffolds were taken out, washed with PBS, and transferred into a new well plate. Cells were lysed by adding 1.0 mL of 1% Triton X-100 solution and then subjected to 3 freeze-thaw cycles. The resultant cell lysates were stored at −80°C until assayed. At the end of the culture period, the cell lysate samples were thawed together and assayed. A 100 μL sample was mixed with 400 µL of substrate solution (a mixture of p-nitrophenyl phosphate, diethanolamine buffer and DI water) and incubated at 37°C for 30 min. Then 100 µL of 0.4 M NaOH was added to stop the reaction, and the resultant solution was read at 405 nm using the TECAN reader. The ALP activity was reported as absorbance. The cell-mediated mineralization on both types of scaffolds was analyzed using Alizarin red staining and quantified as described in the previous sections. Results and Discussion PLAGA and PLAGA/n-HA sintered microsphere scaffolds were incubated in SBF to test their mineralization capability in vitro. After incubation in SBF for 28 days, samples were taken out and examined under SEM. The SEM micrographs showed differences in surface morphology between the PLAGA/n-HA and PLAGA scaffolds owing to the distinct extent of mineralization on the two scaffolds.
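The MTS, ALP, and Alizarin red readouts described above all reduce to background-subtracted absorbances averaged over replicates. Purely as an illustrative sketch (the replicate values and blank reading below are hypothetical, not data from this study), such readings can be summarized as follows.

import statistics

def summarize(readings, blank):
    # Background-subtract replicate absorbances; report mean and standard deviation.
    corrected = [r - blank for r in readings]
    return statistics.mean(corrected), statistics.stdev(corrected)

# Hypothetical OD490 MTS readings for one scaffold group (n = 3).
mean_od, sd_od = summarize([0.82, 0.79, 0.85], blank=0.05)
print(f"MTS OD490: {mean_od:.3f} +/- {sd_od:.3f}")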
A layer of apatite coating was clearly formed on the surface of the PLAGA/n-HA scaffolds. It should be noted that the apatite emerged as aggregations of nanoparticles on the scaffold surface, which makes them distinguishable from the original well-distributed n-HA particles incorporated in the scaffolds (Figures 2A-a/c). The size of the n-HA particles was within the range of 100-300 nm as shown by SEM. In contrast, no apatite coating was formed on the plain PLAGA scaffold, and the scaffold surface appeared smooth after immersion in SBF for 28 days (Figure 2b/d). Only at high magnification were a few apatite nanoparticles observed, indicating that the plain PLAGA surface did not induce mineralization in vitro. To further prove that the apatite was newly formed through precipitation from the surrounding SBF, composite and plain PLAGA scaffolds were subjected to Alizarin red staining. Both PLAGA/n-HA and plain PLAGA scaffolds incubated in DDI water for the same time period served as control groups. Alizarin red staining showed that PLAGA/n-HA scaffolds in DDI water exhibited no obvious staining, indicating that the incorporated n-HA was mainly embedded in the polymer matrix rather than exposed at the surface (Figure 3A). However, PLAGA/n-HA scaffolds exhibited clearly visible red staining, as shown in Figure 3B, after incubation in SBF for 28 days, suggesting the deposition of mineral on the surface. As for the plain PLAGA scaffold, no visible staining was observed in either the DDI water or SBF group, suggesting that this scaffold could not initiate mineralization on its surface in vitro. We next quantified the calcium deposition by dissolving the stain in 10% CPC solution. There was no significant difference between the two types of scaffolds during the initial incubation in SBF up to 14 days. From day 21, the calcium deposition on PLAGA/n-HA scaffolds increased remarkably and became significantly different from that on PLAGA scaffolds. The relative amount of calcium deposited on PLAGA/n-HA was over 5-fold the amount detected on PLAGA scaffolds (Figure 3B). These data indicate that incorporation of n-HA into the scaffolds can accelerate the mineralization of the composite scaffold and, in turn, improve its bioactivity in vitro. The capability of forming apatite mineral on a material's surface has been regarded as an important criterion in evaluating the bioactivity of biomaterials, especially bone substitute materials [23]. The accelerated mineral deposition observed on PLAGA/n-HA scaffolds implies that incorporation of hydroxyapatite into microsphere sintered scaffolds can improve their bioactivity in terms of mineral deposition. The newly formed apatite layer showed the characteristic morphology of tightly packed aggregations of nanoparticles, which has been observed by other researchers [24][25][26]. This unique morphology, combined with the Alizarin red staining, confirmed that the observed mineral layer was formed during SBF incubation rather than being the n-HA incorporated during scaffold fabrication. Although the n-HA did not appear on the composite scaffold surface, it could still promote surface mineralization by providing a locally calcium-rich microenvironment. It has been reported that hydroxyapatite undergoes a dynamic dissolution/re-precipitation process; thus the release of calcium ions during hydroxyapatite dissolution might create a high calcium concentration zone at the interface between the scaffold and the SBF [27].
This local environment might favor apatite nucleation on the scaffold surface, which then promoted mineralization on the scaffolds. The mineral-forming ability of a scaffold is of particular importance because it predicts the scaffold's performance in vivo, namely its ability to form direct bonds with natural bone. Thus, the incorporation of n-HA might improve the integration of the composite scaffolds with host bone in vivo. Cellular response of PLAGA/n-HA composite scaffolds in vitro RMSCs were seeded on both types of scaffolds to evaluate the cytocompatibility of the scaffolds with and without n-HA. Cell proliferation measured by the MTS assay showed that the cell numbers on both scaffolds increased with culture time, indicating that both scaffolds facilitated cell adhesion and growth. Importantly, the impact of n-HA incorporation into the scaffolds was clearly seen in cell proliferation. RMSCs seeded on PLAGA/n-HA scaffolds started to proliferate earlier than those on PLAGA scaffolds, and thus their proliferation also plateaued earlier. In contrast, RMSCs on PLAGA grew slowly in the first week, but the cell number continued to increase until day 14. The cell number on PLAGA/n-HA was substantially higher than on the PLAGA control (Figure 4A). Unlike the trend in cell number on PLAGA/n-HA scaffolds, the ALP activity on this scaffold exhibited an increasing tendency during the whole three-week culture period (Figure 4B). Combining these data with the cell proliferation results (Figure 4A), we conclude that this composite scaffold not only supported cell growth after seeding but also promoted early-stage differentiation of RMSCs towards the osteogenic lineage, as ALP is an early marker of osteogenic differentiation. As one of the most important functions of osteoblastic lineage cells, deposition of calcium onto the ECM is worth investigating as an indication of mineralization. Alizarin red staining was used to visualize the calcium deposited onto the scaffolds, as shown in Figure 4C. The plain scaffolds without cells (left column) exhibited nearly no color, indicating that almost no calcium was deposited on the scaffold. After 21 days of culture under static conditions, RMSCs on both PLAGA/n-HA and PLAGA scaffolds produced visible red staining on the surface, suggesting that calcium deposition took place on both scaffolds, but the staining on PLAGA/n-HA was more intense than on the PLAGA scaffold. We also quantified the calcium deposition by dissolving the dye in 10% CPC solution and measuring the OD at 550 nm. As shown in Figure 4D, calcium deposition gradually increased on both types of scaffolds, but the amount of calcium deposited, in terms of dye staining quantification, was much higher on PLAGA/n-HA at day 7 and day 21. This observation suggests that the incorporated n-HA could serve as a stimulatory cue to improve the mineralization of RMSCs grown on PLAGA/n-HA. The improved RMSC performance on PLAGA/n-HA can mainly be attributed to the incorporation of n-HA into the composite scaffolds. Biodegradable polymers used for tissue engineering scaffold fabrication, such as PLAGA, are generally considered non-osteoconductive; thus the introduction of hydroxyapatite into polymer scaffolds has been a common approach to improve the osteoconductivity of the scaffolds [2,28]. Here we observed that both RMSC proliferation and differentiation were promoted after addition of n-HA into the building blocks of our microsphere-based scaffolds.
In particular, RMSC proliferation on PLAGA/n-HA scaffolds was accelerated in the first week of cell culture. Since it has been well documented that hydroxyapatite has extremely high affinity for a diverse range of proteins and other biomolecules, RMSC proliferation might be stimulated by the enrichment of proteins and growth factors by n-HA on the scaffold surface [29,30]. We also found that the osteogenic differentiation of RMSCs on the composite scaffolds was greatly enhanced, as shown in Figure 4B and C. n-HA trapped in PLAGA microspheres might release a certain amount of calcium ions before reaching equilibrium with the local microenvironment, which might play an important role in RMSC differentiation. A recent report by Boer et al. found that free calcium ions in culture medium could trigger a calcium-induced signaling pathway and lead to osteogenic differentiation of MSCs [31]. Thus, we speculate that the release of calcium ions from n-HA might be the driving force for the enhanced differentiation of RMSCs towards osteoblasts. Therefore, our results suggest that the PLAGA/n-HA substrate enhanced MSC proliferation at early time points, while differentiation and mineralization were promoted at later time points. Conclusion We have evaluated the bioactivity of polymeric sintered microsphere scaffolds incorporating n-HA. Our results suggest that n-HA was incorporated into the PLAGA microspheres with high efficiency while the integrity of the composite scaffolds was maintained after sintering. The capability of inducing apatite formation in vitro was greatly enhanced in the composite scaffolds compared to plain PLAGA scaffolds. More importantly, PLAGA/n-HA composite scaffolds were shown to improve rabbit MSC proliferation, differentiation, and mineralization compared to control plain PLAGA scaffolds. Therefore, introducing n-HA as an additive into polymeric microspheres could be an efficient approach to improve the bioactivity of scaffolds designed for bone tissue engineering.
4,086.6
2014-11-03T00:00:00.000
[ "Materials Science", "Medicine", "Engineering" ]
Artificial Intelligence in Source Discrimination of Mine Water: A Deep Learning Algorithm for Water Source Discrimination With increasing coal mining depth, the source of mine water inrush becomes increasingly complex. The problem of distinguishing the source of mine water in mines and tunnels has been addressed by studying the hydrochemical components of the Pingdingshan Coalfield and applying an artificial intelligence (AI) method to discriminate the source of the mine water. A total of 496 mine water samples were collected. Six ions of mine water are used as the input data set: Na+ + K+, Ca2+, Mg2+, Cl−, SO4 2−, and HCO3−. The mine water in the Pingdingshan coalfield is classified into surface water, Quaternary pore water, Carboniferous limestone karst water, Permian sandstone water, and Cambrian limestone karst water. Each type of water is encoded with a number from 0 to 4, and the one-hot method is used to encode these numbers, forming the output set. On the basis of the hydrochemical data processing, a deep learning model was designed and trained on the hydrochemical data. Ten new samples of mine water were tested to determine the precision of the model; nine were predicted correctly. The deep learning model presented here provides significant guidance for the discrimination of mine water. Background With increasing coal mining depth, the source of mine water inrush becomes increasingly complex. Water inrush can lead to serious mine disasters anywhere in the world, and the complicated hydrogeological conditions found in parts of China are uncommon elsewhere in the world 1. Therefore, rapid and accurate discrimination of the source of water inrush is very important and necessary both for resuming production and for rescuing miners 2. The upcoming technological revolution has been termed Industry 4.0, and examples of the use of artificial intelligence (AI) already exist in parameter identification of groundwater systems, groundwater management and mine hydrogeology. Both today and in the foreseeable future, it is important to take advantage of new technological developments and innovations in the source discrimination of mine water inrush. The development that has taken the world by storm in the past few years is artificial intelligence, which has been widely adopted in many fields, such as computer vision, intelligent robots, natural language processing and data mining 3. As an important method of AI, deep learning is a hot topic in various fields because of its strong ability to automatically extract high-level representations from complex data, and it has been applied widely in the natural sciences, social sciences and engineering 4. The value of source discrimination of mine water in the prevention and control of mine water hazards has been well established over the past several decades 5. Hydrochemistry and mathematical methods are widely used to identify water sources in hydrogeology. The proportions of ions such as Na+, K+, Ca2+ and Mg2+ differ greatly between different aquifers in mines. However, even within the same aquifer, the content of hydrochemical ions can differ greatly 6. Therefore, mathematical geology methods, such as Bayesian discrimination and principal component analysis, are used for source identification of mine water. Characteristic ion contrasts and ion proportion coefficients were applied to aquifers with distinct chemical characteristics to establish a characteristic index discrimination system 7.
Because of its artificial neural network structure, deep learning excels at identifying patterns in unstructured data such as images, sound, video, and text. As a result, deep learning is rapidly transforming many industries, including healthcare, energy, finance, and transportation, and these industries are rethinking traditional business processes 8. Therefore, the study of source discrimination of mine water with artificial intelligence is of great importance. In this paper, artificial intelligence, in the form of deep learning algorithms, is used to process the main ionic composition of groundwater to better discriminate the source of mine water 9. The organization of the paper is as follows. Section 2 presents the geological and hydrogeological conditions of the study area. The source discrimination of mine water in the framework of the DNN model is introduced in detail in Section 3. The results of deep learning for the source discrimination of mine water are demonstrated in Section 5. The paper closes with some conclusions and final remarks. Outline of the coalfield The Pingdingshan coalfield (113°00′-114°00′E, 33°30′-34°00′N), located in the central and western parts of Henan Province, northern China (Fig. 1), is the third largest coal producer in China. The coalfield is approximately 40 km long E-W and 20 km wide N-S. The Pingdingshan coalfield lies in a low hilly area, which is divided into eastern and western parts by the Guodishan fault. Structurally, it is a large syncline with symmetrically, gently dipping limbs. The coal-bearing sediments are mostly Permian in age, comprising sandstone, siltstone and carbonaceous shale, which are overlain by Neogene, Paleogene and Quaternary deposits. The entire sequence is underlain by Cambrian karstic limestone (Fig. 1). Major structures The main coal-bearing measures are dominated by strike-parallel compressional structures. Of these, most folds and faults are concentrated within a narrow zone, known locally as a compressional or disturbance zone, which together with the Likou syncline has been interpreted as forming during late Triassic Indosinian orogenic compression. Strata The exposed strata from oldest to youngest in the Pingdingshan coalfield are the Archaean metamorphic rock series, Upper Proterozoic Sinian, Lower Paleozoic Cambrian-Ordovician, Upper Paleozoic Carboniferous to Permian, Mesozoic Triassic, and Cenozoic Paleogene, Neogene and Quaternary, as shown in Fig. 2. The main coal-bearing strata in the study area are Carboniferous-Permian. Hydrogeological background The research area is situated in a transitional zone from a warm temperate zone to a subtropical zone, with a long-term average precipitation of 747.4 mm/year, mainly concentrated between July and September. The geomorphology in the east and south is an alluvial plain with a 200 m~500 m thick layer; the ground elevation there is +75~80 m. With a surface elevation varying from 900 m to 1040 m, the topography elsewhere is low in the southeast and high in the northwest. Influenced by the topographic features, the surface water is mainly distributed in the south and north of the mining area, namely the Shahe River, Ruhe River, Zhanhe River and Baiguishan Reservoir. The Ruhe River and the Shahe River are perennial rivers that lie on the northern and southern margins of the study area. There are also some seasonal rivers and man-made ditches, such as the Zhanhe, the Beigan Canal and the Xigan Canal.
The riverbed cuts into Cambrian limestone or Neogene marl, which provides a certain amount of recharge to the limestone groundwater in the Qikuang mine in the southwest of the Pingdingshan coalfield. The main aquifer is the limestone aquifer of the Taiyuan Formation. On the basis of borehole pumping test data, the unit water inflow of the Taiyuan Formation karst aquifer is 0.00018~0.3569 L/(s·m), and the permeability coefficient is 0.0076~3.047 m/day. Data In the Pingdingshan coal mine, 496 mine water data points were collected. Because of the large amount of data, only some of the data are shown in Table 2. Table 2 clearly shows that the values span five orders of magnitude. Data are most valuable when they can be compared, and such comparisons are not helpful if the data are inconsistent or irrelevant. Data standardization is about ensuring that data are internally consistent, that is, that each data type has the same content and format. Standardized values are useful for tracking data that are not otherwise easy to compare. The raw data are therefore normalized column-wise according to Eq. (1), Z_ij = (x_ij − mean_j)/std_j, where the subscript i indexes the rows of the data matrix, the subscript j indexes the columns, Z_ij represents the data after standardization, x_ij represents the source data, and std_j represents the standard deviation of the related data 10. Table 2 Hydrochemical compositions and discriminant results of the water-filling aquifers (unit: mg/L; in the last column, the groundwater type (label) column, 0 represents surface water, 1 represents pore water of the Quaternary, 2 represents karst water of the Carboniferous limestone, 3 represents sandstone water of the Permian, and 4 represents karst water of the Cambrian limestone). In the datasets, the label column is categorical data (string values). These labels have no specific order of preference, and since they are string labels, the deep learning model cannot work on such data directly 11. One approach is label encoding, where we assign a numerical value to these labels, for example mapping surface water and pore water of the Quaternary to 0 and 1. However, this can add bias to the model, as it will start giving higher preference to the Quaternary pore water label because 1 > 0, whereas ideally both labels are equally important in the datasets. To address this issue, we use the one-hot encoding technique, which creates a binary vector of length 5. Here, the label 'surface water', which is encoded as '0', has the binary vector [0,0,0,0,1], as shown in Table 3. Deep learning basics A machine learning algorithm is an algorithm that is able to learn from data. Most modern deep learning models are based on artificial neural networks (ANNs), a class of supervised learning techniques that mimic biological neural networks (Fig. 3). An ANN is built from one or more layers containing a series of neurons 12. The weights and biases between different neurons are adjusted as learning proceeds with the aim of minimizing the loss between the predicted output and the actual output. Training an ANN is the process of adjusting these weights and biases, which is carried out by a backpropagation procedure. In this procedure, the gradient descent algorithm is used to update the weights and biases of the neurons by estimating the gradient of the loss function.
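The standardization and one-hot encoding steps described above can be sketched in a few lines of Python. This sketch assumes Eq. (1) is the usual column-wise z-score (only the std symbol survives in the extracted text) and uses the conventional identity-matrix encoding; note that the paper's own Table 3 maps label 0 to [0,0,0,0,1], i.e., the reversed order. The two example rows are hypothetical values, not records from the 496-sample data set.

import numpy as np

def standardize(X):
    # Column-wise z-score, Eq. (1): Z_ij = (x_ij - mean_j) / std_j
    return (X - X.mean(axis=0)) / X.std(axis=0)

def one_hot(labels, n_classes=5):
    # labels: integers 0-4 for the five water types
    return np.eye(n_classes, dtype=int)[labels]

# Six-ion samples (Na+ + K+, Ca2+, Mg2+, Cl-, SO4 2-, HCO3-), two hypothetical rows.
X = np.array([[120.0, 45.0, 12.0, 95.0, 310.0, 240.0],
              [15.0, 80.0, 30.0, 20.0, 60.0, 420.0]])
print(standardize(X))
print(one_hot(np.array([0, 3])))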
In the process of training, the weights and biases receive an adjustment proportional to the partial derivative of the loss function with respect to the current weights and biases. With an increasing number of layers, however, the problem of vanishing gradients makes ANNs hard to train 13. Typically, when training an ANN model, we have access to a training set; we can compute some error measure on the training set, called the training error, and we reduce this training error. Thus far, what we have described is simply an optimization problem. The training and test data are generated by a probability distribution over datasets 14. Deep learning architectures Deep learning is a subset of machine learning built on artificial neural networks with multiple layers, which solve complex problems by learning layered representations. The idea is that the additional levels of abstraction improve the capability of the network to generalize to unseen data and hence outperform traditional ANNs on data outside of the network training set. The learning process is deep because the structure of the artificial neural network consists of multiple input, output, and hidden layers. Each layer contains units that transform the input data into information that the next layer can use for a certain predictive task 15. While indisputably powerful tools, traditional artificial neural networks (ANNs) and more classical machine learning techniques rely on developers identifying the typical features that describe the problem. In this work, a deep learning approach is applied to the problem of source discrimination of mine water inrush. Deep learning further exploits the power of ANNs by relying on the network itself to identify, extract, and combine the inputs into abstract features that contain much more pertinent information for solving the problem, that is, predicting the output, as illustrated in Fig. 4. The input layer consists of the six ions Na+ + K+, Ca2+, Mg2+, Cl−, SO4 2− and HCO3−. Every neuron accepts inputs from neurons in the previous layer and applies a linear or nonlinear activation function (e.g., ReLU). The six ion contents are propagated from the input layer to the output layer, where the output layer corresponds to the classes to be predicted: surface water, pore water of the Quaternary, sandstone water of the Permian, karst water of the Carboniferous limestone and karst water of the Cambrian limestone. An ANN with three hidden layers and one output layer is shown in Fig. 5 (a minimal code sketch of this architecture follows below). Every layer constitutes a module through which one can backpropagate gradients. At every layer, we first compute the total input z to every unit, which is a weighted sum of the outputs of the units in the layer below. Then, a nonlinear function f is applied to z to obtain the output of the unit. For the sake of simplicity, the bias terms are omitted. The nonlinear function used in the hidden layers is the rectified linear unit (ReLU), f(z) = max(0, z). At the output layer, softmax is used to calculate the probability of each water source, as has become common in recent years 16. At every hidden layer, we calculate the error derivative with respect to the output of every unit, which is a weighted sum of the error derivatives with respect to the total inputs of the units in the layer above. Then, we convert the error derivative with respect to the output into the error derivative with respect to the input by multiplying it by the gradient of f.
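A minimal Keras sketch of the network just described (six input ions, three ReLU hidden layers, five-class softmax output) is given below. The hidden-layer widths and the optimizer are placeholders, since Table 4 with the actual model parameters is not reproduced in this text.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(6,)),               # Na+ + K+, Ca2+, Mg2+, Cl-, SO4 2-, HCO3-
    layers.Dense(64, activation="relu"),    # hidden layer 1 (width assumed)
    layers.Dense(64, activation="relu"),    # hidden layer 2
    layers.Dense(64, activation="relu"),    # hidden layer 3
    layers.Dense(5, activation="softmax"),  # five water-source classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])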
At the output layer, the error derivative with respect to the output of a unit is calculated by differentiating the cost function. This gives y_l − t_l if the cost function for unit l is (1/2)(y_l − t_l)^2, where t_l is the target value. Once ∂E/∂z_k is known, the error derivative for the weight w_jk on the connection from unit j in the layer below is simply y_j ∂E/∂z_k. The Python deep learning library Keras, with a TensorFlow backend and GPU acceleration, is used to train the ANN. TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that allows researchers to push the state of the art in DL and lets developers easily build and deploy DL-powered applications. The parameters of the DNN model are shown in Table 4. Results And Discussion The whole training process and its results can be inspected in TensorBoard, a browser-based application that helps to visualize the training parameters (weights, biases and metrics). It displays the distributions of tensors as histograms and is used to check whether the weights and biases change as expected in every epoch. We plot the histogram distribution of the weights of the first fully connected layer every 20 iterations. TensorBoard takes an arbitrarily sized and shaped tensor and compresses it into a histogram data structure consisting of many bins with widths and counts. The distributions view draws on the same data as the histograms, shown in a different form (Fig. 6). The distributions of the weights and biases of the first layer are shown in Fig. 6(a) and (c). The abscissa represents the number of training iterations, and the ordinate represents the range of the weights. This view shows the range of weight values over the training process as a whole, indicating whether the layer learns by optimizing its weights. The weights are spread almost uniformly between −0.8 and 0.8; some have slightly smaller or larger values, so the layer might not be using its full potential. By comparison, this simply looks as if the weights had been initialized using a uniform distribution with zero mean and a value range of −0.8 to 0.8 (Fig. 6(c) and (d)). The histogram of the layer forms a bell-curve-like shape: the values are centered around a specific value, but they may also be greater or smaller than that. Each slice in the histogram visualizer displays a single histogram. The slices are organized by step; older slices are further 'back' and darker, while newer slices are close to the foreground and lighter in color. The y-axis on the right shows the step number. Most values appear close to the mean of 0, but values do range from −1.3 to 1.2. With increasing training time, the color of the curves gradually becomes lighter from back to front. There are many slices in Fig. 6(b) and (d), and each slice represents the frequency distribution of the weights at that step. Accuracy and loss are unitless numbers that indicate how closely the classifier fits the validation data. A loss value of 0 represents a perfect fit; the further the loss is from 0, the poorer the fit. Separate loss plots are provided for the batches. The training accuracy and loss of the deep neural network and of the BP neural network were compared, plotted with Matplotlib (Fig. 7).
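Connecting the pieces above, the following hedged sketch trains the model with a TensorBoard callback and plots the accuracy and loss curves with Matplotlib; X_train and Y_train stand for the standardized features and one-hot labels from earlier, and the epoch count and batch size are assumptions rather than the settings of Table 4.

from tensorflow import keras
import matplotlib.pyplot as plt

history = model.fit(
    X_train, Y_train,
    validation_split=0.2,
    epochs=200, batch_size=32,  # assumed settings
    callbacks=[keras.callbacks.TensorBoard(log_dir="logs")],
)

plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["loss"], label="training loss")
plt.xlabel("iteration")
plt.ylabel("accuracy / loss")
plt.legend()
plt.show()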
In Fig. 7, the abscissa represents the number of forward and backward passes, and the ordinate represents the accuracy or loss. The blue curves show the training accuracy and loss of the deep neural network, and the red curves those of the BP neural network. It can be seen that the training accuracy of the deep neural network is higher than that of the BP neural network, and its training loss is lower, meaning that the deep neural network does a better job than the BP neural network in the source discrimination of mine water. The class probability, the fraction assigned to each class, is calculated by the softmax function (Fig. 8). Ten mine water samples were input into the trained DNN model to test its predictive accuracy; the same data were also input into the BP model. The prediction results are shown in Table 5. From the table, we can see that nine water samples were predicted correctly with the DNN model, and one water sample was predicted incorrectly. With the BP model, four water samples were predicted correctly, and six water samples were predicted incorrectly. The prediction results can also be seen in the 3-D histogram shown in Fig. 9. Conclusions And Outlooks In the research reported here, we applied deep learning methods to discriminate the source of mine water. (1) Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. The method has dramatically improved the state of the art in the source discrimination of mine water. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change the internal parameters that are used to compute the representation in each layer from the representation in the previous layer. (2) On the basis of hydrochemical data processing, a deep learning model was designed and trained on the hydrochemical data. Ten new samples of mine water were tested to determine the precision of the model, and nine were predicted correctly. The deep learning model presented here provides significant guidance for the discrimination of mine water. (3) This high predictive accuracy, combined with very low computational cost (execution of the full framework takes on the order of milliseconds), makes the developed networks very well suited for discriminating the source of mine water. (Figure captions: the geological section of the Pingdingshan coalfield; softmax layer as the output layer; 3-D histogram of the probability of the source of mine water.)
4,664.8
2021-12-01T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
Trichophyton Antigens Associated with IgE Antibodies and Delayed Type Hypersensitivity The dermatophyte fungus Trichophyton exhibits unique immunologic properties in its ability to cause both immediate and delayed type hypersensitivity. An 83-kDa Trichophyton tonsurans allergen (Tri t 4) was previously shown to elicit distinct T lymphocyte cytokine profiles in vitro. The homologous protein, Tri r 4, was cloned from a Trichophyton rubrum cDNA library, and the recombinant protein was expressed in Pichia pastoris. This 726-amino acid protein contained an arrangement of catalytic triad residues characteristic of the prolyl oligopeptidase family of serine proteinases (Ser-Asp-His). In addition, a novel Trichophyton allergen, encoding 412 amino acids, was identified by its human IgE antibody-binding activity. Sequence similarity searches showed that this allergen, designated Tri r 2, contained all of the conserved residues characteristic of the class D subtilase subfamily (41-58% overall sequence identity). Forty-two percent of subjects with immediate hypersensitivity skin test reactions to a Trichophyton extract exhibited IgE antibody binding to a recombinant glutathione S-transferase fusion protein containing the carboxyl-terminal 289 amino acids of Tri r 2. Furthermore, this antigen was capable of inducing delayed type hypersensitivity skin test reactions. Our results define two distinct antigens derived from the dermatophyte Trichophyton that serve as targets for diverse immune responses in humans. Dermatophyte fungi of the genus Trichophyton colonize keratinized tissues in humans including nails, hair shafts, and the stratum corneum of the skin. Trichophyton tonsurans, Trichophyton mentagrophytes, and Trichophyton rubrum are common causes worldwide of tinea capitis, athlete's foot, and onychomycosis (infection of the nail beds) (1). An estimated 30-70% of adults are asymptomatic carriers of these pathogens, and the incidence of symptomatic disease increases with age (2). The immune response to antigens derived from Trichophyton is unique in that both immediate hypersensitivity (IH)1 and delayed type hypersensitivity (DTH) skin test reactions are induced. Studies suggest that the nature of the underlying immune response to Trichophyton antigens is related to the severity of dermatophytosis; IH skin tests are associated with chronic recurrent infections characterized by low-grade inflammatory lesions and the presence of IgE antibodies (Ab) (4-7). In contrast, DTH reactions are associated with highly inflamed lesions that resolve spontaneously and with resistance to re-infection (4, 8-13). The implication of these findings is that cell-mediated immune responses to Trichophyton are more effective at eradicating infection and may confer protection. Chronic dermatophytosis has been associated with allergic disease of the respiratory tract in individuals with immediate hypersensitivity (14-17). Furthermore, exposure to Trichophyton proteins may result in bronchial sensitization and symptomatic asthma that can be controlled with systemic antifungal therapy (7,18,19). Experimental mouse models support a role for distinct T lymphocyte helper subsets in fungal infections (20). Furthermore, there is mounting evidence that a dichotomy in the immune response to a variety of pathogens, including Trichophyton, exists in humans and that these responses are regulated by distinct CD4+ T cell subsets (20-28).
Characterization of antigens derived from Trichophyton provides a model system for studying both IgE antibody- and cell-mediated immune responses in humans; elucidation of the amino acid sequences of these antigens is relevant to structural analyses of the intrinsic antigenic properties governing diverse immune responses and to the identification of antigenic determinants associated with immediate and delayed type hypersensitivity. Furthermore, elucidation of the biologic function of these unique antigens may define a role in fungal pathogenicity. We previously demonstrated that an 83-kDa T. tonsurans antigen (Tri t 4) elicited IH and DTH skin test reactions in different individuals (27). IH skin tests were associated with IgG, IgE, and IgG4 Ab specific for Tri t 4, whereas DTH reactions were associated with only low levels of IgG Ab. In addition, short-term T cell lines specific for Tri t 4 had distinct cytokine profiles characteristic of Th1 and Th2/Th0 phenotypes that correlated with skin test reactivity in vivo (28). Here we describe the molecular cloning and expression of the Tri t 4 homologue, Tri r 4, produced by T. rubrum and define its limited sequence identity to the prolyl oligopeptidase family of serine proteinases. In addition, we characterize a novel T. rubrum allergen (Tri r 2) that has a high degree of sequence identity to the subtilase enzyme family; this protein exhibits human IgG and IgE Ab binding properties and the ability to induce DTH skin test reactions. EXPERIMENTAL PROCEDURES cDNA Cloning-Cultures of T. rubrum, T. mentagrophytes, and T. tonsurans were established in 25 ml of Sabouraud dextrose broth, and culture filtrates were screened using an assay for Protein IV as described previously (27). The T. rubrum cultures produced the highest concentration of Protein IV and were selected for construction of a cDNA library. Natural Protein IV was previously isolated from T. tonsurans, and this protein is now correctly termed Tri t 4 in keeping with allergen nomenclature. Thus, the homologous protein produced by T. rubrum is Tri r 4. Six grams of T. rubrum cells harvested on day 7 were washed in phosphate-buffered saline and ground with a mortar and pestle pre-cooled at −70°C. Messenger RNA was isolated from 6 g of culture material using a FastTrack kit (Invitrogen, Carlsbad, CA). A T. rubrum cDNA library was prepared from 10 µg of mRNA in the UniZAP-XR phagemid expression vector (Stratagene, La Jolla, CA). cDNA clones were identified by screening the library with either a 1:5000 dilution of serum obtained from a mouse immunized with natural Tri t 4 (n-Tri t 4) or a 1:2 dilution of an IgE serum pool from four individuals with high titer IgE antibodies and IH skin test reactions to a Trichophyton extract (29). Selected cDNA clones were screened against individual sera from 10 subjects with IH skin test reactions to Trichophyton and five individuals with DTH or negative skin test reactions. DNA sequencing was carried out by automated sequencing (ABI Prism 377, Applied Biosystems, Inc., Foster City, CA). The sequences obtained were compared with the National Biomedical Research Foundation, Swiss-Prot, and GenBank databases using FASTA. Sequence alignments were performed using the GCG program. The presence of signal peptides was confirmed using the prediction algorithm developed by Nielsen et al. (30). (Fig. 1 legend: Residues 1-19 contain the conserved features of a signal peptide with a predicted cleavage site between Ala19 and Phe20. Underlined regions represent amino acid sequences previously obtained for the NH2 terminus and for six enzymatically generated peptides of natural Tri t 4. Catalytic triad residues and four potential sites of N-linked glycosylation (X) are indicated. The stop codon TAG is shown (*).)
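The FASTA/GCG comparisons above report percent identity over an alignment overlap. As a modern, purely illustrative equivalent (this sketch assumes Biopython; its pairwise2 module, now superseded by Bio.Align.PairwiseAligner, is used here, and the sequence fragments are placeholders rather than the actual Tri r 4 sequence), percent identity can be computed as follows.

from Bio import pairwise2

def percent_identity(seq_a, seq_b):
    # Percent identity over the gap-free columns of a global alignment.
    aln = pairwise2.align.globalxx(seq_a, seq_b, one_alignment_only=True)[0]
    pairs = [(a, b) for a, b in zip(aln.seqA, aln.seqB) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Placeholder fragments only.
print(f"{percent_identity('MKLSTAVLLA', 'MKLSSAVQLA'):.1f}% identity")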
Expression of Recombinant Tri r 4 in Pichia pastoris-Recombinant Tri r 4 (r-Tri r 4) was expressed in P. pastoris as a hexahistidine-tagged protein using the pPICZαA expression vector (Invitrogen). Plasmid DNA (50 ng) encoding Tri r 4 was used as a template to generate a 2178-bp DNA fragment by polymerase chain reaction. The following primers for polymerase chain reaction were synthesized: 5′-CCGGAATTCTTTACCCCAGAGGACTTC-3′ (sense), containing an EcoRI restriction site; and 5′-GCTCTAGAGCGTCGAAGTAAGAGTGAGC-3′ (antisense), containing an XbaI restriction site. The 2178-bp polymerase chain reaction-amplified DNA fragment was ligated into EcoRI-XbaI-digested pPICZαA. Escherichia coli strain TOP10F′ was transformed, and plasmid DNA was purified from Zeocin-resistant transformants selected on low salt LB medium containing 25 µg/ml Zeocin. Yeast strain KM71 was transformed by electroporation (Bio-Rad GenePulser; 1500 V, 25 microfarads, 200 Ω) with 5 µg of DNA linearized by digestion with PmeI. Transformants were selected on yeast extract peptone dextrose agar containing 100 µg/ml Zeocin after incubation at 30°C for 3 days. A single colony was used to inoculate 10 ml of buffered glycerol complex medium, and cultures were grown at 30°C in a shaking incubator (300 rpm) until the culture reached A600 nm = 2.5. The process was repeated using 10 ml of culture to inoculate 1 liter of medium. After reaching A600 nm = 2.5, cells were harvested by centrifugation (3000 × g for 5 min) and resuspended in 100 ml of buffered methanol complex medium containing 0.5% methanol. Expression of r-Tri r 4 was induced at 30°C in the presence of methanol for 4 days. The recombinant protein was purified from culture supernatants using immobilized nickel chelate (Probond resin, Invitrogen). Purity was assessed by SDS-polyacrylamide gel electrophoresis (PAGE) with silver staining, and protein yields were measured by the Bradford assay. Proteins expressed in pPICZαA contain an NH2-terminal α-factor signal sequence that targets expressed proteins to the secretory pathway and into the culture medium. The proteins also contain carboxyl-terminal hexahistidine and Myc epitope tags. NH2-terminal amino acid sequence analysis of r-Tri r 4 by Edman degradation confirmed cleavage of the signal sequence and the presence of the first 22 NH2-terminal residues corresponding to those of n-Tri t 4. Expression of Tri r 2 in E. coli-Plasmid DNA containing T. rubrum clone 9A (~1500 bp) was used as a template to generate an 867-bp DNA fragment encoding the carboxyl-terminal 289 amino acids corresponding to the putative mature form of Tri r 2. Primers for polymerase chain reaction incorporated EcoRI and XhoI restriction sites to allow subcloning into the pGEX-4T-3 expression vector and were as follows: 5′-CCGGAATTCGGGCACTAACCTCACC-3′ (sense), containing an EcoRI restriction site; and 5′-CCGCTCGAGTTTGCCGCTGCCG-3′ (antisense), containing an XhoI restriction site. The 867-bp polymerase chain reaction-amplified DNA fragment was ligated into EcoRI-XhoI-digested pGEX-4T-3.
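As a quick sanity check on the subcloning primers listed above, one can confirm computationally that the sense and antisense primers carry the intended EcoRI and XhoI sites. This sketch assumes Biopython's Bio.Restriction module; the primer sequences are taken verbatim from the text.

from Bio.Seq import Seq
from Bio.Restriction import EcoRI, XhoI

sense = Seq("CCGGAATTCGGGCACTAACCTCACC")    # Tri r 2 sense primer (EcoRI site GAATTC)
antisense = Seq("CCGCTCGAGTTTGCCGCTGCCG")   # Tri r 2 antisense primer (XhoI site CTCGAG)

print("EcoRI cut positions in sense primer:", EcoRI.search(sense))
print("XhoI cut positions in antisense primer:", XhoI.search(antisense))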
Expression of the 29-kDa putative mature form of Tri r 2 as a fusion protein with glutathione S-transferase (GST) was induced in E. coli strain BL21 with 0.2 mM isopropyl-1-thio-β-D-galactopyranoside at 37°C. The recombinant protein (GST-Tri r 2) was purified from cell lysates using glutathione-Sepharose (yield of ~2-5 mg/liter of culture). Amino acid sequencing of the fusion protein by Edman degradation confirmed the presence of the first 5 NH2-terminal residues of the GST moiety. Purified GST-Tri r 2 was dialyzed against phosphate-buffered saline, and its purity was analyzed by silver-stained SDS-PAGE for the purpose of skin testing. The recombinant protein was also purified by electroelution from a 12% acrylamide gel for the purpose of radiolabeling (31). Assays of Enzymatic Activity-The recombinant GST-Tri r 2 and Tri r 4 proteins were tested for proteolytic activity using a variety of substrates. Proteinase K (Sigma) was used as a positive control for all assays. Negative controls included reactions containing no putative enzyme. Glutathione S-transferase served as an additional negative control for assays of GST-Tri r 2 enzymatic activity. All assays were performed in duplicate. Substrates tested included azoalbumin, azocasein, azocollagen, and keratin according to methods previously described (32,33). Briefly, 10 µg of the sample to be tested was incubated with 0.1 mg of azoalbumin, 2% (w/v) azocasein, 5 mg of azocollagen, or 5 mg of keratin azure in reaction volumes of 1, 0.2, 0.5, and 0.3 ml, respectively. Reactions were terminated after incubation for 24 h at 37°C (azocasein, azocollagen, and keratin) or after 30 min at 20°C (azoalbumin). Reaction products were measured spectrophotometrically. One unit of activity was defined as an increase in absorbance of 0.01. Recombinant Tri r 2 was also tested using the anilide substrate succinyl-(Ala)3-p-nitroanilide (32). Briefly, reactions contained a final concentration of 10 µg/ml Tri r 2 and substrate concentrations of 0.1-2.0 mmol/liter; assays of 1-ml volume were incubated at 20°C for 1 h, and the absorbance of liberated nitroaniline was measured spectrophotometrically. Immunoassays for IgG and IgE Antibodies to Tri r 2 Fusion Protein-IgE and IgG Ab to GST-Tri r 2 were measured using an antigen binding radioimmunoassay according to methods previously described (27). Serum samples diluted 1:2 and 1:10 (IgE Ab assay) or 1:12.5 and 1:50 (IgG Ab assay) were incubated with 125I-labeled GST-Tri r 2 (~120,000 cpm added) for 4 h at room temperature. IgE myeloma serum (patient P. S.) diluted 1:300 was used as carrier in the IgE binding assay. Immune complexes were precipitated overnight at 4°C with 50 µl of sheep anti-human IgE or 50 µl of sheep anti-human IgG (Binding Site, Inc., San Diego, CA), and the precipitates were counted in a γ-counter. Quantitation of IgG Ab was carried out using a control curve constructed with pooled sera from patients K. M., J. C., and H. W., assigned to contain 2000 units/ml IgG antibodies. (Fig. 4 legend: Five clones encoding a single protein were obtained after screening a T. rubrum cDNA library with pooled IgE antibodies. Individual sera from 10 subjects with IH skin test reactions and from five subjects with DTH or negative (Neg.) skin test reactions were used to screen each clone by plaque immunoassay. Results are shown for clone tr6B. Similar binding patterns were observed for all five clones. Arrows denote subjects with IgE Ab binding, and individuals with bronchial reactivity to Trichophyton are indicated (*).)
IgE Ab measurements are expressed as counts bound per min. The specificity of antibody binding to Tri r 2 was assessed by comparing values obtained for sera pre-absorbed with GST (5 mg of GST/ml of cyanogen bromide-activated Sepharose) and non-absorbed sera. Human Subjects and Skin Testing-Sera were obtained from 73 subjects previously skin-tested with 0.03 ml of a Hollister-Stier Trichophyton mixture containing T. tonsurans, T. rubrum, and T. mentagrophytes species (1:200, w/v). Intradermal skin testing was done with 0.03 ml of purified GST-Tri r 2 at 1 and 10 µg/ml diluted in 0.05% human serum albumin in phenol/saline solution. Prick testing was carried out prior to intradermal testing using a 10-fold higher concentration of protein. Test sites were examined 20 min after injection and at 24 and 48 h. Positive delayed reactions were defined as erythema of ≥5-mm diameter at 24 h. Subjects were skin-tested with purified GST as a negative control. Skin testing of human subjects using GST-Tri r 2 was approved by the University of Virginia Human Investigation Committee. RESULTS Molecular Cloning of Tri r 4-Screening a T. rubrum cDNA library with a human serum pool obtained from five individuals with high titer IgE Ab to n-Tri t 4 failed to identify positive plaques. However, screening the library with polyclonal serum obtained from a mouse immunized with n-Tri t 4 resulted in the identification of three clones containing insert sizes of ~1100 bp (clone tr2), 1600 bp (clone tr1), and 2300 bp (clone tr3). Nucleotide sequence analysis confirmed that all clones encoded the same protein (Fig. 1). An estimated molecular mass of 78,193 Da (pK = 2.2) without the signal peptide sequence was consistent with a non-glycosylated form of n-Tri t 4. Four potential sites of N-linked glycosylation were identified, and the presence of a signal sequence with a predicted site of cleavage between Ala19 and Phe20 was determined. Amino acid sequences of the amino terminus and of six enzymatically generated internal peptides of n-Tri t 4 (comprising 108 residues) aligned with the deduced amino acid sequence of clone tr3. This confirmed that T. rubrum clone tr3 encoded a protein with high amino acid sequence homology to n-Tri t 4, and we have designated this recombinant protein Tri r 4. Amino acid sequence similarity searches identified homology between r-Tri r 4 and the prolyl oligopeptidase (S9) family of serine proteinases. These enzymes contain the distinctive Ser-Asp-His arrangement of catalytic triad residues in the carboxyl-terminal portion of the molecule. A short region spanning ~250 residues was identified within r-Tri r 4 that showed sequence similarity to prolyl oligopeptidases derived from other eukaryotic sources (20-25% identity, 231-282-amino acid overlap). A Gly-X-Ser-X-Gly motif comprising the nucleophile serine at position 539 was present within this region. The highest sequence similarity was between r-Tri r 4 and human acylaminoacyl peptidase (25.6% identity and 57% similarity in a 242-amino acid overlap) (Fig. 2). Additional homologues included Saccharomyces dipeptidyl aminopeptidases B and C and dipeptidyl peptidase IV (DPP4) and dipeptidyl peptidase IV-like proteins (DPP6) derived from several mammalian species. Recombinant Tri r 4 was expressed in P. pastoris using the pPICZαA vector system.
SDS-PAGE analysis at 24 h after induction of protein expression revealed the presence of an 85-kDa band in all recombinants selected, consistent with a glycosylated form of r-Tri r 4 containing the carboxyl-terminal Myc epitope and hexahistidine tags (Fig. 3A). The recombinant protein was purified by affinity purification from culture supernatants harvested on day 4, resulting in a yield of ~200 mg/liter of culture. The pure protein migrated as a single 85-kDa band on SDS-PAGE (Fig. 3B). Identification of a Novel T. rubrum Allergen as a Member of the Subtilase Family of Serine Proteinases-Since screening the T. rubrum cDNA library with the initial human serum pool (Pool 1) failed to identify positive plaques, a second pool (Pool 2) was established in order to screen for additional putative T. rubrum allergens. Sera were obtained from four individuals with high IgE antibody titers and IH skin test reactions to a Trichophyton extract. All subjects had chronic dermatophytosis, and three had asthma and positive bronchial provocation to Trichophyton. Five positive clones were identified with insert sizes of ~900 bp (clone tr6D), 1000 bp (clone tr6C), 1200 bp (clone tr6B), 1300 bp (clone tr6A), and 1500 bp (clone tr9A). Nucleotide sequence analysis confirmed that all five clones encoded the same protein and that this protein was unrelated to Tri r 4. A representative clone (tr6B) was screened with sera obtained from individuals with different skin test reactivity to Trichophyton. Eight of 10 subjects with IH skin test reactions displayed IgE antibody binding to this clone, whereas sera from DTH and negative skin test subjects yielded no positive responders (Fig. 4). Clone 9A contained an open reading frame encoding 412 amino acids with an estimated molecular mass of 42,632 Da and a pK value of 1.9 (Fig. 5). (FIG. 6 legend: Sequence alignment of the Tri r 2 putative mature form with catalytic domains of other fungal subtilases. Tri r 2 showed high sequence identity to proteinase T produced by T. album (GenBank P20015), an alkaline proteinase (ALP) derived from A. fumigatus (GenBank P28296), proteinase Pr1 from M. anisopliae (GenBank P29138), and proteinase ISP6 from S. pombe (GenBank P40903). Alignments obtained using the GCG program are shown. Conserved residues of the class D subfamily of subtilase enzymes (boldface) and catalytic residues are indicated. Asterisks denote identical residues.) The first 20 NH2-terminal residues contained the conserved features of a signal peptide, and four potential sites of N-linked glycosylation were identified. Sequence similarity searches showed a significant homology between the deduced amino acid sequence of clone 9A and serine proteinases of the subtilase family (S8) derived from other fungal species (Fig. 6). Conserved amino acid motifs were identified flanking the aspartic acid, histidine, and serine residues that form the catalytic triad characteristic of this enzyme family. Over 70 subtilases are currently known, belonging to four subfamilies. The deduced amino acid sequence of clone 9A contained all of the conserved residues characteristic of the class D subfamily, which consists of enzymes found only in yeast, fungi, and Gram-negative bacteria (Fig. 6). The highest degree of amino acid sequence identity was between Tri r 2 and proteinase T produced by the thermophilic fungus Tritirachium album (58.2% identity in a 304-amino acid overlap).
Other enzymes with striking homologies included proteinases derived from the pathogenic fungus Aspergillus fumigatus (41.3% identity), the insect-colonizing fungus Metarhizium anisopliae (42.3%), and the yeast Schizosaccharomyces pombe (41%) (Fig. 6). Sequence alignments identified the presence of a putative pro-region in Tri r 2 (residues 21-123) with a predicted cleavage site between asparagine and glycine residues (positions 123 and 124, respectively), generating a putative mature product (positions 124-412) with an estimated molecular mass of 29,171 Da (pK = 1.9) (Fig. 5). Demonstration of the Immune Response to Tri r 2-The putative mature form of Tri r 2 was produced in E. coli using the pGEX-4T-3 vector. The resulting GST fusion protein purified from bacterial lysates by glutathione affinity chromatography migrated as a single 57-kDa band on SDS-PAGE, consistent with the presence of a GST moiety fused to the 29-kDa putative mature form of Tri r 2. Specific IgE Ab were measured in 73 sera: the prevalence of IgE Ab was significantly higher among subjects with IH skin test reactions (43%) compared with those with DTH or negative skin test reactions (12%) (p < 0.01) (Fig. 7A). It has been established that GST exhibits IgE Ab binding properties (34); however, absorption of sera with GST did not reduce the prevalence of IgE Ab binding to Tri r 2. The prevalence of IgG Ab was relatively high in all skin test groups. However, mean levels of IgG Ab were significantly higher in subjects with immediate reactions compared with those with delayed or negative skin test reactions (p < 0.01) (Fig. 7B). Recombinant 125I-GST-Tri r 2 showed strong reactivity with IgG Ab (up to 46,000 cpm bound) and IgE Ab (up to 10,500 cpm bound), demonstrating that the putative mature form of Tri r 2 retained B cell epitopes. Intradermal skin testing was used to evaluate the reactivity of recombinant Tri r 2 in vivo. Five of nine individuals with delayed reactions to the Trichophyton mixture showed a positive delayed-type hypersensitivity reaction maximal at 24 h; four of these subjects are shown in Table I. Activity against Protein Substrates-Given the amino acid sequence homology to known proteinases, the enzymatic activity of recombinant Tri r 2 and Tri r 4 was tested using a variety of general proteolytic substrates including albumin, casein, collagen, and keratin. No proteolytic activity was observed for GST-Tri r 2 for any of the substrates tested when compared with a GST control. Furthermore, this protein exhibited no activity against the anilide substrate succinyl-(Ala)3-p-nitroanilide. However, r-Tri r 4 exhibited weak activity against keratin (447 units/mg) compared with proteinase K (7490 units/mg) and showed no activity against the other substrates tested. DISCUSSION We have reported the amino acid sequences and expression of two distinct proteins derived from the dermatophyte fungus Trichophyton. The 83-kDa mannose-rich natural glycoprotein (n-Tri t 4) was previously shown to elicit IH and DTH skin test reactions in different individuals (27). To our knowledge, this is the first reported sequence of a fungal antigen associated with distinct skin test reactions. The homologous recombinant protein produced by T. rubrum (r-Tri r 4) is a 726-amino acid protein with limited amino acid sequence homology to the prolyl oligopeptidase (S9) family of serine proteinases. Despite the relatively low amino acid sequence similarity (~20%), several characteristics provide convincing evidence that r-Tri r 4
belongs to this family of proteins: the distinctive arrangement of catalytic triad residues and their localization in the carboxyl-terminal region of the molecule, the presence of conserved amino acids flanking putative catalytic residues, its high molecular mass consistent with other members of this family, and a large variable NH2-terminal portion. FIG. 7. IgE and IgG antibody binding to recombinant Tri r 2. Sera from 73 individuals with different skin test reactivities to the Trichophyton mixture were analyzed for IgE and IgG Ab binding to 125I-labeled Tri r 2 fusion protein. A, the prevalence of IgE Ab binding in IH subjects and in those subjects with DTH or negative (NEG) skin test reactions was 43 and 12%, respectively. Arrows denote individuals with IgE Ab binding as determined by plaque immunoassay. B, mean IgG Ab levels were significantly higher in subjects with immediate reactions (p < 0.01). Bars represent geometric means. Individuals with bronchial reactivity to Trichophyton are indicated. In contrast to subtilases, S9 peptidases do not exist as proenzymes and are synthesized in an active form (35,36). These enzymes, which may be either cytosolic or membrane-bound, exhibit restricted specificities that may limit degradation of other cell proteins (37). Some family members have been reported to be involved in a variety of nonenzymatic physiologic processes; for example, the membrane glycoprotein dipeptidyl peptidase IV (CD26) plays a role in cell-matrix adhesion and transmembrane signaling (38-40). Recombinant Tri r 4 exhibited a low level of proteolytic activity against keratin. Dermatophyte fungi are adapted to infect keratinized tissues by virtue of their ability to utilize keratin as a nutrient source. Whether the natural Tri r 4 and Tri t 4 proteins are functionally keratinolytic in vivo remains to be established. If this proves to be the case, the enzymatic activity of these proteins could facilitate colonization and may contribute to pathogenicity. P. pastoris was selected for expression of r-Tri r 4 since a eukaryotic system is more appropriate for expression of fungal antigens, and high yields of foreign proteins, including some allergens, were previously reported (41,42). Yields of recombinant protein were very high (~200 mg/liter of culture). SDS-PAGE analysis suggested that r-Tri r 4 was glycosylated to a degree comparable to the natural antigen (~5% carbohydrate by weight). Despite this, preliminary studies suggest that r-Tri r 4 exhibits partial loss of B cell epitopes as determined by decreased binding to IgG antibodies compared with natural antigen (data not shown). We hypothesize that partial loss of conformational epitopes on r-Tri r 4 may result from incorrect folding owing to its large size, a factor that may also contribute to its low enzymatic activity. Preliminary results have also shown that recombinant Tri r 4 failed to elicit DTH skin test responses in three individuals with DTH responses to natural Tri t 4. These findings are surprising since only linear antigenic determinants are required for initiation of T cell responses in vitro. Since recombinant Tri r 4 is derived from a T. rubrum cDNA library, and natural Tri t 4 was purified from a T. tonsurans extract, this raises the possibility that antigenic properties differ between homologous proteins derived from the two fungal species.
Alternatively, it could be hypothesized that conformational epitopes or post-translational modifications of linear antigenic determinants required for DTH responses fail to occur in the recombinant protein. Similar findings have been demonstrated for a ribosomal protein derived from Brucella melitensis (43). This antigen typically induces DTH responses in Brucella-sensitized guinea pigs. However, recombinant antigen expressed in E. coli produced no skin response. It was concluded that post-translational acylation of the protein is required for DTH activity. Recombinant Tri r 4 will serve as a valuable tool for distinguishing the relevance of conformational epitopes or post-translational modifications in the induction of DTH responses in humans. The second antigen defined is an allergen with high amino acid sequence similarity to serine proteinases of the class D subtilase subfamily. Eight of 10 subjects with IH skin test reactions to Trichophyton displayed IgE antibody binding to this allergen, five of whom had bronchial reactivity to Trichophyton. True subtilisins derived from bacteria are among the best characterized of the subtilase enzyme family. Subtilisin Carlsberg (Alcalase), a class A subtilase produced by Bacillus licheniformis, is one of several subtilases used in detergent formulations. Soon after the initiation of large-scale production of enzyme-containing detergents, allergic respiratory reactions to the enzyme components were noted among factory workers (44,45). Thus, Tri r 2 is a member of the same enzyme family as an antigen previously related to asthma. Bacterial expression of the putative mature form of Tri r 2 in the absence of a fusion partner resulted in rapid degradation during purification. One possible explanation is that the predicted site of cleavage of the pro-region is incorrect and that the presence of additional NH2-terminal flanking residues is required for stabilization of the carboxyl-terminal domain containing the active-site residues characteristic of the subtilase family. Alternatively, the presence of the entire pro-region may be required to serve as a template for correct folding of this domain, as has been demonstrated for other subtilase enzymes (46); however, attempts to express Tri r 2 with the putative pro-region were unsuccessful. Production of the putative mature form of Tri r 2 as a GST fusion protein facilitated stabilization of this domain. Members of the class D subtilase subfamily have been shown to exhibit cuticle-degrading and elastase activities. The class D subtilase ALP (alkaline proteinase), produced by the pathogenic fungus A. fumigatus, exhibits elastase activity and has been proposed to contribute to fungal persistence in allergic individuals (47). However, no enzymatic activity of recombinant Tri r 2 was demonstrated. It is possible that Tri r 2 is not an enzyme; however, given the high degree of homology to subtilase enzymes, especially in the putative active site, it appears more likely that the lack of activity reflects features intrinsic to the recombinant protein. These may include suboptimal processing of the recombinant protein owing to the absence of the putative pro-region, lack of post-translational modification, or the presence of the NH2-terminal GST moiety. Alternatively, inappropriate substrates may have been selected for study. Tri r 2 expressed as a GST fusion protein was shown to exhibit IgE and IgG Ab binding characteristics in addition to mediating DTH skin test reactions.
These findings suggest that expression of the carboxyl-terminal 289 amino acids containing the putative mature form of the protein was sufficient for immunologic function. This is important since the absence of the pro-region or amino-terminal flanking residues could possibly influence immunologic properties. Tri r 2 is a novel antigen in that it is the first recombinant protein demonstrated to induce both IgE Ab- and cell-mediated responses in humans. Furthermore, the high prevalence of IgE antibodies suggests that this protein is an important allergen among patients with chronic dermatophyte infection. Dermatophytosis is an important clinical problem both because of its chronicity and because current antifungal therapy is curative in only a small proportion of cases. Identification of the antigenic determinants associated with protective cell-mediated immune responses would make it possible to design peptide or recombinant protein vaccines to modify the natural course of the disease. The 29-kDa antigen Tri r 2 is a good candidate for the application of overlapping peptide methodology to define immunodominant epitopes in individuals with either IgE antibody or DTH reactions. Thus, definition of these proteins will make it possible to investigate T cell recognition associated with different responses. Characterization of Trichophyton antigens provides unique molecular tools not only for the development of immunotherapeutic strategies related to management of chronic dermatophytosis and the associated allergic disease, but also for the analysis of immunologic mechanisms governing diverse immune responses in humans.
6,977.6
1998-11-06T00:00:00.000
[ "Biology" ]
Stress Waves and Characteristics of Zigzag and Armchair Silicene Nanoribbons The mechanical properties of silicene nanostructures subject to tensile loading were studied via a molecular dynamics (MD) simulation. The effects of temperature on Young's modulus and the fracture strain of silicene with armchair and zigzag types were examined. The maximum in-plane stress and the corresponding critical strain of the armchair and the zigzag silicene sheets at 300 K were 8.85 and 10.62 N/m, and 0.187 and 0.244, respectively. The in-plane stresses of the silicene sheet in the armchair direction at the temperatures of 300, 400, 500, and 600 K were 8.85, 8.50, 8.26, and 7.79 N/m, respectively. The in-plane stresses of the silicene sheet in the zigzag direction at the temperatures of 300, 400, 500, and 600 K were 10.62, 9.92, 9.64, and 9.27 N/m, respectively. Improved mechanical properties were obtained for a silicene sheet loaded in the zigzag direction compared with tensile loading in the armchair direction. Wrinklons and waves were observed at the shear band across the center zone of the silicene sheet. These results provide useful information about the mechanical and fracture behaviors of silicene for engineering applications. Introduction Since single atomic layer graphene sheets were fabricated in 2004, two-dimensional (2D) materials have received increasing attention [1]. Recently, single-layer silicon, or silicene, has been successfully prepared and has become a new focus of engineering and scientific research [2,3]. Silicene has attracted great attention because of its excellent physical and electrical properties [4]. However, compared with graphene, boron nitride, monolayer MoS2, and other 2D materials, silicene has a low elastic modulus and mechanical strength, which may affect its applications to sensors and devices [5]. Yang et al. [6] found the ideal strengths of silicene for zigzag and armchair uniaxial tension based on ab initio calculations. Zhang et al. [7] studied novel finite metal endohedral silicene-like silicon nanotubes with density functional theory and found that their structural stability increases with increasing tube length. Li et al. [8] used a first-principles method to study the geometrical structures and electronic properties of armchair- and zigzag-edge silicene nanoribbons terminated with oxygen and hydroxyl groups. In order to understand the mechanics and the interactions of nanomaterials, a simulation method capable of describing processes within the nanomaterials is required. It is worth noting that a molecular dynamics (MD) simulation is capable of accurately describing the mechanical processes of materials at the nanoscale [9][10][11][12]. For instance, Ansari et al. [13] observed via molecular dynamics that the bulk modulus was strongly size-dependent and decreased with increasing length of silicene nanosheets under uniaxial and biaxial tension. Ince and Erkoc [14] used molecular dynamics simulations to examine the effect of increasing the width of a silicene nanoribbon, which also depends on the temperature and the presence or absence of boundaries. Roman and Cranford [15] found that silicene was relatively weaker than graphene in terms of stiffness, but more rigid when subject to bending due to its slightly buckled molecular geometry.
In this study, an MD simulation was carried out to study the mechanical properties of the silicene nanostructure at different temperatures subject to tensile loading. Young's modulus, the fracture strain, and the strain energy of the nanosheets in the armchair and the zigzag directions were explored. We further focused on the mechanical characterization of silicene sheets with crack defects. Simulation Method The Tersoff potential [16] is used for modeling the interaction between silicon atoms. The constant-temperature molecular dynamics simulations are performed using a velocity-scaling thermostat for temperature control. The silicene model is equilibrated for 20 ps in the canonical ensemble, and a simulation time step of 1 fs is employed. Table 1 shows the lattice parameters and the geometric structure of the silicene at the equilibrium state; for comparison, Table 1 also includes previous MD and density functional theory (DFT) calculations [17][18][19][20]. In this study, the angle between neighboring bonds was taken as θ0 = 117.98° and the bond length as r0 = 2.32 Å. Hence, the lattice constant, d0, and the buckling height, D0, could be calculated as 4.00 Å and 0.38 Å, respectively. The pristine armchair and zigzag silicene sheets have dimensions, in (width) × (length), of 49.9 nm × 56.0 nm and 49.9 nm × 56.1 nm, i.e., an approximately square shape, and contain 39,852 and 39,744 atoms, respectively. Two layers of silicon atoms on the left and the right, about 3 nm wide, are fixed in the length direction. The fixed layers are set to move during tensile loading along the y-direction. To study the deformation behavior of the silicene under tensile loading, a positive displacement with a stretch rate of 10 m/s in the lateral directions is applied to the atoms on both the left and the right edges of the silicene sheet. The effective modulus E is defined as the linear slope of the in-plane stress σ_in-plane versus the strain ε, i.e., E = σ_in-plane/ε. The in-plane stiffness is calculated using a linear fit of the stress-strain slope over the tensile strain range 0.04-0.08. Silicene can be used for gas absorption or separation; therefore, the temperature effects on the tensile properties of the silicene were examined at 300, 400, 500, and 600 K. The two chiral types, armchair and zigzag, were also compared with respect to the fracture characteristics of the silicene.
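As a rough illustration of the fitting step only, the sketch below extracts the effective in-plane stiffness from a stress-strain record over the stated 0.04-0.08 strain window; the array names and the toy stress curve are illustrative, not simulation output:

```python
import numpy as np

def in_plane_stiffness(strain, stress, lo=0.04, hi=0.08):
    """Slope of a linear fit to the in-plane stress (N/m) vs. strain
    curve over the window [lo, hi]."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    window = (strain >= lo) & (strain <= hi)
    slope, _ = np.polyfit(strain[window], stress[window], 1)
    return slope  # N/m, same units as the in-plane stress

eps = np.linspace(0.0, 0.12, 61)
sigma = 60.0 * eps - 90.0 * eps**2        # toy softening curve, N/m
print(in_plane_stiffness(eps, sigma))     # ~49.2 N/m over 0.04-0.08
```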
Figure 1a shows the armchair silicene sheet exhibiting stable elastic behavior, with the color representing the atomic potential energy variation. Atomistic waves occur, but no warp or wrinkle appears on the tensile silicene surface. With increasing tensile strain, the armchair silicene develops an initial shear deformation around the edge of the sheet at a strain of about 0.156, as depicted in Figure 1b. As the strain increases to 0.226, a high-energy band, i.e., a shear band, forms in the sheet along the diagonal direction of the nanosheet, as illustrated in Figure 1c. The shear band is about 2-3 nm in width. Local wrinkles are induced by highly strained covalent bonds in the interfacial plane, and edge-shrinking deformation occurs around the slip band-fixed layer junctions. Stretching the silicene sheet yields wrinklons, which assemble to mimic the complete wrinkling hierarchy; a wrinklon corresponds to the local transition region needed to merge two wrinkles of a given wavelength into a larger one. Similar behavior has been discussed with regard to the wrinklons of graphene sheets [18,21]. The fracture gradually grows as the strain increases continuously up to a strain of 0.3, as shown in Figure 1d. The failure mechanism of silicene sheets under uniaxial tension is attributed to elastic instability, unlike graphene [22]. A crack defect occurs at the high-strain edge of the shear band. The sp3 bonds of silicon atoms exert a strengthening influence on the wrinkles and stress waves. The wrinklons and waves are observed at the shear band across the center zone of the silicene sheet. Figure 2a shows the zigzag silicene sheet exhibiting stable, symmetric elastic behavior and the ripple of atomic internal energy variation. Figure 2b,c shows the ripples and stress waves being enhanced by the increasing tensile strain; the silicene has a high strength against bond elongation in the zigzag direction.
When the strain reaches 0.3, large stretching deformation occurs, as shown in Figure 2d: waviness and shear band distortions appear around the center of the sheet. The complicated crack defects grow in a chain-like manner, similar to the fracture of silicon-bonded nanostructures. The bond breaking and fragmentation of the silicene sheet yield the tensile failure. This tensile failure process is in agreement with a previous study using density functional theory (DFT) calculations [23]. Wu et al. [23] studied the dissociative adsorption of an H2 molecule on silicene under different tensile strains via DFT calculations. They found that the biaxial strain reached a critical value of about 12%, above which the structure of silicene after hydrogenation would be destroyed [23]. By comparing the different critical strains between DFT and the present result, similar conclusions are reached: the silicene structure is destroyed above a critical strain. This relative discrepancy is a result of the scale and deformation differences between DFT and MD. Figure 4a,b depicts the in-plane stress-strain curves of the silicene in the armchair and the zigzag directions with different circular hole sizes subject to tension at 300 K, respectively. When the hole size of the armchair and the zigzag silicene increases, the peak stress and the corresponding critical strain decrease. Additionally, the strength of the silicene sheet decreases as the crack hole grows. Figure 5 shows snapshots of (a-d) the armchair and (e-h) the zigzag silicene sheets with circular holes of 2, 4, 8 and 10 nm in diameter (D) at a temperature of 300 K at the peak stress, respectively. A hole in the silicene in the armchair direction results in lower waviness and distortion than in the zigzag direction. The dynamic stress waves interact with the hole and the corners of the sheet, and the strain-induced stress-wave propagation plays an important role on the silicene surface. A higher potential energy zone occurs around the lateral sides of the hole under the tensile strain, due to the lateral shrinkage of the sheet. Zhao et al. [26] found that the silicene structure failed due to the instability of its low-buckled structure, based on density functional theory calculations.
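The peak stress and corresponding critical strain values quoted above are read directly off such stress-strain curves; a minimal numpy sketch of that extraction (illustrative, not the authors' analysis code):

```python
import numpy as np

def peak_point(strain, stress):
    """Return (maximum in-plane stress, critical strain at which it occurs)."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    i = int(np.argmax(stress))
    return float(stress[i]), float(strain[i])
```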
Figure 6a-d shows snapshots of the silicene nanoribbons in the armchair direction with a circular hole of 2 nm in diameter at a temperature of 300 K at strains of 0.172, 0.184, 0.208 and 0.216, respectively. The corresponding strains at the maximum stress of pure and defective silicene nanoribbons in the armchair direction at a temperature of 300 K are 0.187 and 0.173, respectively. In Figure 6a, it can be seen that the hole crack does not propagate appreciably before the critical strain of 0.172. The crack moves quickly after the critical strain, accompanied by a large amount of local plastic deformation along the diagonal shear band. The dissipated potential energy decreases with increasing strain because the warping and ripples induce bond elongation at the shear band, releasing the concentrated stress around the hole crack. The stress concentration takes place at the reentrant corner of the silicene sheet. Local stress and potential energy around the hole cavity decrease at larger strains, and the crack propagates and expands along the shear zone, as shown in Figure 6b. Figure 6c,d shows that the failure and the wave shock of the interactions among the nanosheet atoms increase in the later stages of the fracture process. Figure 5. Snapshots of (a-d) the armchair and (e-h) the zigzag silicene sheets at uniaxial peak tensile loading at a temperature of 300 K.
The results show that the fracture behavior with different circular holes varies between the armchair and the zigzag directions. The crack becomes longer but thinner along the tensile direction. The crack propagation originates from the edges and leads to the rapid failure of the silicene sheet. The phenomenon is similar to that in a previous study on the failure stress and strain of graphene sheets [27]. Figure 8 shows the in-plane stiffness of silicene in the armchair and the zigzag directions. When the hole size increases, the stiffness of the sheets decreases. The stiffness of the silicene sheet with a hole in the armchair direction is higher than that in the zigzag direction. This behavior can be explained by the bond length and the bond angle under the applied tensile strain. When tension is applied to the silicene in the armchair direction, the bonds in the armchair direction stretch parallel to the tension direction; the bond length monotonically increases with increasing strain, whereas under a zigzag-direction stretch the bond length increases non-monotonically. This result is in agreement with density functional theory calculations by Yang et al. [6]. Figure 8. In-plane stiffness using a linear fit of the stress-strain data from the tensile strains of 0.04-0.08. Conclusions In summary, MD simulations were performed to examine the mechanical properties of the silicene sheet in the armchair and the zigzag directions at different temperatures subject to tensile loading. The results show that the bond breaking and fragmentation of the silicene sheet yield tensile failure. Improved mechanical properties were obtained for a silicene sheet loaded in the zigzag direction compared with tensile loading in the armchair direction. The maximum in-plane stress and the corresponding strain of the silicene sheet decrease as the system temperature increases.
4,449.2
2016-06-24T00:00:00.000
[ "Physics" ]
Goosegrass Detection in Strawberry and Tomato Using a Convolutional Neural Network Goosegrass is a problematic weed species in Florida vegetable plasticulture production. To reduce costs associated with goosegrass control, a post-emergence precision applicator is under development for use atop the planting beds. To facilitate in situ goosegrass detection and spraying, tiny You Only Look Once version 3 (YOLOv3-tiny) was evaluated as a potential detector. Two annotation techniques were evaluated: (1) annotation of the entire plant (EP) and (2) annotation of partial sections of the leaf blade (LB). For goosegrass detection in strawberry, the F-score was 0.75 and 0.85 for the EP and LB derived networks, respectively. For goosegrass detection in tomato, the F-score was 0.56 and 0.65 for the EP and LB derived networks, respectively. The LB derived networks increased recall at the cost of precision, compared to the EP derived networks. The LB annotation method demonstrated superior results within the context of production and precision spraying, ensuring more targets were sprayed with some over-spraying on false targets. The developed network provides online, real-time, and in situ detection capability for weed management field applications such as precision spraying and autonomous scouts. In Florida, many broadleaf horticultural crops are produced using a plasticulture system. This system includes raised beds covered in plastic mulch, with drip irrigation installed to provide nutrients and moisture. Weeds within this system primarily occur within the planting holes or between the rows, except for purple nutsedge (Cyperus rotundus L.) and yellow nutsedge (Cyperus esculentus L.), which penetrate and emerge through the plastic mulch. Within vegetable horticulture, the prevalent post-emergence weed management options for goosegrass control include hand weeding and herbicides. For pre-plant burn down and within row middles, broad-spectrum herbicides such as paraquat and glyphosate are widely employed. Consequently, both goosegrass and American black nightshade (Solanum americanum Mill.) have developed paraquat resistance 8,9, and ragweed parthenium (Parthenium hysterophorus L.) has developed glyphosate resistance 10. For weed control atop the bed during the cropping cycle, WSSA Group 1 herbicides are the most common post-emergence chemical control option. Group 1 herbicides are becoming increasingly utilized within herbicide mixtures for grass control in row middles, depending on the weed pressures and resistance issues faced. Implementing precision technology into spraying equipment is a viable option to reduce production costs associated with weed management. Goosegrass and other grass species are excellent targets for precision technology applying Group 1 herbicides in a variety of broadleaf crops. A prototype precision sprayer was developed to simultaneously detect and spray weeds in plasticulture production within Florida. Briefly, the system was a modified plot sprayer with a digital camera sensor, a controller linked with artificial intelligence for detection, and nozzles controlled by solenoids. The desirable detector for this system is a convolutional neural network. Machine vision-based weed detection is typically conducted using either multispectral/hyperspectral or RGB imagery, the latter being more desirable in terms of cost and practical adoption for producers 11.
Recent technological advances in graphical processing units permit training and employing deep learning convolutional neural networks as detectors 12. Deep convolutional neural network frameworks have been reviewed elsewhere 12,13. Briefly, neural networks take inspiration from the visual cortex, containing layers for feature extraction, convolution, pooling, activation functions, and class labeling 14. The system relies on pattern recognition via filters within the convolutional layers for detection and classification 15. Convolutional neural networks for weed detection have been employed in several crops including turfgrass 16,17, wheat 18, and strawberry 19. For horticultural plasticulture row middles, a convolutional neural network has been developed to detect grasses among broadleaves and sedges 20. Within broader agriculture, deep neural network applications include strawberry yield prediction 21, sweet pepper (Capsicum annuum L.) and cantaloupe (Cucumis melo var. cantalupo Ser.) fruit detection 22, and detection of tomato pests and diseases 23. With the widespread registration of Group 1 herbicides in broadleaf crops and the widespread distribution of goosegrass, the successful development of a detection network would have far-reaching implications for conventional horticulture. Development of a multi-crop, within-crop grass detection network has challenges including training image availability, ease of image collection due to the patchy nature of weeds, and the diverse backgrounds of several crops as the negative space. Additionally, the within-crop growth habit of goosegrass, as well as the general habit of grassy weeds, causes issues for bounding box-based network training. Goosegrass has a tufted plant habit with stems that are erect to spreading and up to 8.5 dm tall, and leaves that are 5 cm to 35 cm long and 3 mm to 5 mm wide 24. For strawberry production, goosegrass leaves have been observed to either penetrate through the crop canopy, grow prostrate along the plastic, or grow in planting holes where strawberry plants have died. For tomato production, goosegrass plants typically grow at the base of the tomato plants, which are vertically staked for fresh-market production. The study objectives were to (1) develop a network with utility in multiple broadleaf crops, starting with strawberry and tomato plasticulture, (2) evaluate the use of small label annotation boxes along the leaf-blade length for goosegrass detection compared to boxes encompassing the entire plant habit, and (3) evaluate a piecemeal oversampling technique. Results For strawberry production, the entire plant annotation method (EP) (precision = 0.93; recall = 0.88; F-score = 0.90; accuracy = 0.82) resulted in an overall better YOLOv3-tiny training fit than the leaf-blade annotation method (LB) (precision = 0.39; recall = 0.55; F-score = 0.46; accuracy = 0.30) (Table 1). Convergence time, in iterations, declined rapidly for EP compared to LB (data not shown). This was expected, since EP resulted in fewer bounding boxes and provided larger bounding boxes with a static location. Labeling of goosegrass leaf blades with narrow squares resulted in "ground truth fluidity", with a resultant increase in training time and reduced fit. While the EP network appeared more successful in training, the network provided inadequate testing results.
For goosegrass detection within strawberries, the LB (precision = 0.87; recall = 0.84; F-score = 0.85; accuracy = 0.74) outperformed the EP (precision = 0.93; recall = 0.62; F-score = 0.75; accuracy = 0.60) in terms of overall F-score and accuracy (Table 2). The EP method demonstrated high precision but tended to miss targets (Fig. 1). There was no impact of the annotation method on iteration time (Table 3). Compared to the EP, the LB network increased recall substantially at the expense of precision but resulted in the highest F-score. For goosegrass detection in tomato, the EP (precision = 0.77; recall = 0.43; F-score = 0.56; accuracy = 0.38) had higher precision but struggled at detecting plants (Table 2, Fig. 2). Comparatively, the LB (precision = 0.59; recall = 0.74; F-score = 0.65; accuracy = 0.49) had reduced precision but increased recall. The LB derived network resulted in the highest overall F-score and accuracy for goosegrass detection in tomato. Discussion Detection in strawberry production demonstrated suitable identification of goosegrass. For images taken within tomato production, success was limited (Table 2). This was most likely a consequence of goosegrass training images being available within strawberry production but not within tomatoes. While attempts were made to match image acquisition angles and growth stages for both goosegrass and tomato growing in isolation, not having additional training images of the desired target and background together was likely detrimental. This could be due to the degree of actual overlap between the crop and weed in competition, altered growth habit by the weed in competition, or natural variability in the tomato growth habit inducing a stochastic effect that requires additional training images to overcome. For detection in both tomato and strawberry, the LB outperformed the EP in terms of recall, F-score, and accuracy. The EP networks had consistently higher precision but lower recall. This was likely a consequence of selecting the entire plant habit, increasing the variability between targets, and reducing the number of potential targets for training. Such precision and recall neural network trade-offs have been noted elsewhere, including in polyp detection 25. For precision spraying, the EP network would miss many plants but would typically spray goosegrass only. Comparatively, the LB network would spray goosegrass more regularly with some degree of over-spraying upon undesirable targets. A convolutional neural network based on DetectNet for weed detection in occluded winter wheat achieved 87% precision and 46% recall 18. Comparatively, using an object detection convolutional neural network based on You Only Look Once to detect weeds in winter wheat images resulted in 76% precision and 60% recall 26. Detection of Carolina geranium in strawberry using DetectNet and leaf-level annotation resulted in 99% precision and 78% recall 19. Table 2. Pooled relevant binary classification categories and neural network accuracy measures for goosegrass (Eleusine indica) detection in tomato (Solanum lycopersicum) and strawberry (Fragaria × ananassa) using two annotation methods on digital photography acquired in Central Florida, USA, in 2018 and 2019 a. a The neural network was the tiny version of the state-of-the-art object detection convolutional neural network You Only Look Once Version 3 (Redmon and Farhadi 2018). b EP = Entire plant annotation method. This refers to using a single, large square box to identify goosegrass within digital images. c LB = Leaf-blade annotation method. This refers to using multiple, small square boxes placed along leaf blades and inflorescence to identify goosegrass within digital images.
Current results for goosegrass detection in strawberry obtained a relatively similar overall accuracy compared to similar studies using convolutional neural networks alone, but detection in tomatoes may require further sampling. Results indicate the potential for a unified network for use across multiple crops. Additional options for precision spraying multi-crop networks include Group 2 herbicides in vegetable plasticulture, Group 4 within cereals, and Groups 9 and 10 within associated genetically modified crops. While the piecewise image methodology results for tomatoes were limited, network desensitization for additional crops does provide some benefit. Existing networks for goosegrass can be expanded to additional crops, and the number of necessary training images should be reduced. Several kinds of grass infest vegetable fields. Since the network did not classify tropical signalgrass [Urochloa distachya (L.) T.Q. Nguyen] as goosegrass (data not shown), additional classes are likely necessary, or multiple grass species could be grouped into a single category 20. Table 3. Impact of annotation style on testing iteration time for goosegrass (Eleusine indica) detection in strawberry (Fragaria × ananassa) and tomato (Solanum lycopersicum) production using a convolutional neural network developed at Balm, FL, USA in 2018 a. a The neural network was the tiny version of the state-of-the-art object detection convolutional neural network You Only Look Once Version 3 (Redmon and Farhadi 2018). b LB = Leaf-blade annotation method. This refers to using multiple, small square boxes placed along leaf blades and inflorescence to identify goosegrass within digital images. c Entire plant refers to the annotation method where a single, large square box was used to identify goosegrass within digital images. A similar network (YOLOv3) was trained to detect broadleaf species that were not previously part of the training dataset 20, so this option may be feasible but requires further study. If such is desirable, care should be taken to avoid class imbalances, which negatively impact network performance 27,28. Network performance enhancement within limited datasets may be improved using convolutional neural networks with traditional machine learning systems (support vector machines), as demonstrated with black nightshade (Solanum nigrum L.) and velvetleaf (Abutilon theophrasti Medik) in tomato and cotton (Gossypium hirsutum L.) 29. The integration of segmentation techniques with neural networks has previously been successful and may help improve precision and recall 30,31. For example, a weed detection system using blob segmentation and a convolutional neural network achieved 89% weed detection accuracy 32. For some weed management scenarios, resources such as CropDeep and DeepWeeds could be used for pre-training or supplementing datasets 33,34. Using k-means pre-training may improve detection; it has been reported to improve the accuracy of an image classification convolutional neural network by 2%, up to 93% accuracy 35.
The developed networks demonstrated detection across two broadleaf vegetable crops within vegetable plasticulture production. The LB annotation technique provided superior results for goosegrass detection in strawberry production (F-score = 0.85) compared to the EP annotation technique (F-score = 0.75). Supplementing the model with a majority of isolated tomato and goosegrass images produced moderate results. The LB annotation technique provided better detection (F-score = 0.65) compared to the EP technique (F-score = 0.56). Results demonstrate that the use of the piecemeal technique alone does not provide adequate detection for field-level evaluation but may represent a suitable oversampling strategy to supplement datasets. The developed network provides an online, real-time, and in situ detection capability for weed management field applications such as precision spraying and autonomous scouts. Methods Images were acquired with either a Sony (DSC-HX1, Sony Cyber-shot Digital Still Camera, Sony, Minato, Tokyo, Japan) or Nikon digital camera (D3400 with AF-P DX NIKKOR 18-55 mm f3.5-5.6 G VR lenses, Nikon Inc., Melville, NY). Training images were taken at the Gulf Coast Research and Education Center (GCREC) in Balm, FL (27.76°N, 82.22°W) and the Strawberry Growers Association (SGA) field site in Dover, FL (28.02°N, 82.23°W). Images were acquired from the perspective of the camera on the modified plot sprayer (T-30G-6, Bellspray, Inc., Opelousas, LA). Training data (Training 1, Table 4) were acquired during the strawberry growing season at GCREC and SGA. Images were taken in tandem with a previous study 19. Strawberry plants were transplanted on October 10, 2017, and October 16, 2017, at the GCREC and SGA, respectively. Several datasets were acquired due to limited goosegrass emergence at GCREC, so a piecemeal solution was undertaken. Training images of tomatoes and goosegrass were acquired separately within a plasticulture setting. A training dataset was developed for goosegrass competing with tomatoes (Training 2, Table 4). Goosegrass was grown in isolation (Training 3, Table 4), with seedlings transplanted on March 12, 2018, and May 15, 2018. Images of only tomato plants were collected for network desensitization (Training 4, Table 4). After preliminary testing, additional images were collected for network desensitization for purple nutsedge (Desensitization, Table 4), which was at the 3-leaf stage, before blooming. Five datasets were acquired for network testing to meet each crop objective and provide sufficient samples. Images were collected at two commercial strawberry farms (27.93°N, 82.10°W, and 27.98°N, 82.10°W) (Testing 1, Table 4) and supplemented with images from GCREC (Testing 2, Table 4). Images were collected approximately 134 and 136 days after strawberry transplanting from commercial farms and 60 days after transplanting at GCREC. For testing images in tomato production (Testing 3, Table 4), goosegrass plants (5-leaf stage) were transplanted into planting holes containing tomato plants transplanted on March 4, 2019. The tomato data was supplemented with additional tomato images (Testing 4, Table 4) to evaluate the network's ability to discriminate goosegrass from another grass species. A fifth dataset included goosegrass growing in isolation (Testing 5, Table 4). The image resolution of the Nikon digital camera was 4000 × 3000 pixels. Nikon images were resized to 1280 × 853 pixels and cropped to 1280 × 720 pixels (720p) using IrfanView (Version 4.50, Irfan Skiljan, Jajce, Bosnia). The Sony digital camera image resolution was 1920 × 1080 pixels, and images were resized to 720p.
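A minimal Pillow sketch of equivalent preprocessing for the Nikon frames; the centered crop is an assumption, since the text does not say where the 853-to-720-pixel crop was taken:

```python
from PIL import Image

def to_720p(src_path, dst_path):
    """Resize a 4000x3000 frame to 1280x853, then crop to 1280x720."""
    img = Image.open(src_path).resize((1280, 853), Image.LANCZOS)
    top = (853 - 720) // 2                    # assumed centered crop
    img.crop((0, top, 1280, top + 720)).save(dst_path)
```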
Training images were annotated using custom software compiled with Lazarus (https://www.lazarus-ide.org/) in two ways. The EP annotation method used a single bounding box to encompass the entire plant habit. The LB annotation method used smaller bounding boxes along the leaf blade to reduce the overall variability of the target. This approach had been utilized previously to improve detection by focusing annotation on individual Carolina geranium (Geranium carolinianum L.) leaves 19. Due to the leaf shape and the range of possible goosegrass leaf angles, square bounding boxes enclosing entire leaves were not an ideal way to minimize background noise. Instead, multiple small square bounding boxes, approximately the width of the leaves, were used to label goosegrass along the length of the leaves. Examples of each method, matched with the corresponding bounding box output, are found in Figs. 1 and 2. Bounding box annotation was preferable to pixel-wise annotation due to increased accuracy and reduced time requirements 22. The convolutional neural network utilized was tiny You Only Look Once Version 3 (YOLOv3-tiny) 36. YOLOv3-tiny was selected for implementation in a prototype precision sprayer under development for in situ spraying of grasses in horticultural crops, including strawberry and tomato plasticulture. The sprayer has a 50 cm distance between the camera and the solenoid-controlled nozzles; as such, image processing speed was considered a priority, and a state-of-the-art object detection network was selected for its iteration speed and capacity for implementation into the controller. YOLOv3-tiny feature extraction is achieved with the convolution-based Darknet-19 36,37. Darknet-19 was derived for YOLOv2, using 3 × 3 filters within its 19 convolutional layers, with interspersed 1 × 1 filters and 5 max-pooling layers 38. Localization is achieved by dividing the image into a grid, predicting multiple bounding boxes within each cell, and using regression to resolve spatially separated predictions 39. Bounding box classification permits multiple classification categories and multi-labeling of predictions 37,39, which is particularly useful for mixed weed communities. YOLOv3-tiny was trained and tested using the Darknet infrastructure 40 and pre-trained with the COCO dataset 41. YOLOv3-tiny contained augmentation parameters to reduce the opportunity for overtraining on irrelevant features by altering input images. These parameters included color alteration (exposure, hue, and saturation), flipping, cropping, and resizing. Network training continued until either the average loss error stopped decreasing or the validation accuracy (recall or precision) stopped increasing. For training, 10% of the available images were randomly selected as the validation dataset used during training.
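For reference, Darknet pairs each training image with a plain-text label file, one line per box, in the form "class x_center y_center width height" with coordinates normalized by the image size. A minimal sketch of serializing the small LB squares described above into that format; the box coordinates and class index are illustrative:

```python
def to_darknet_line(cls, box, img_w=1280, img_h=720):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a
    normalized Darknet label line."""
    x0, y0, x1, y1 = box
    xc, yc = (x0 + x1) / 2 / img_w, (y0 + y1) / 2 / img_h
    w, h = (x1 - x0) / img_w, (y1 - y0) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Three small squares along one leaf blade (coordinates illustrative):
leaf_boxes = [(400, 300, 430, 330), (435, 330, 465, 360), (470, 360, 500, 390)]
print("\n".join(to_darknet_line(0, b) for b in leaf_boxes))  # class 0 = goosegrass
```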
To assess network effectiveness, classification output was pooled and categorized by binary classification for networks derived from both annotation methods. These categories included true positives (tp), false positives (fp), and false negatives (fn). A tp was when the network correctly identified the target. An fp was when the network falsely predicted the target. An fn was when the network failed to predict the true target. Precision, recall, F-score, and accuracy were used to evaluate the network effectiveness to predict targets 12. Precision measures the effectiveness of the network in properly identifying its target and was calculated as 39,40:

Precision = tp/(tp + fp)    (1)

Recall evaluates the effectiveness of the network in target detection and was calculated as 42,43:

Recall = tp/(tp + fn)    (2)

The F-score is the harmonic mean of precision and recall, giving an overall performance measure that accounts for both fp and fn, and is calculated as 42:

F-score = 2 × (Precision × Recall)/(Precision + Recall)    (3)

For comparison purposes, the testing network accuracy was calculated as:

Accuracy = tp/(tp + fp + fn)    (4)

To validate the network training fit, the "map" command was specified. This method used an intersection over union (IoU) threshold of 0.25 to evaluate predicted estimates compared to ground-truth annotation. This measure was included to evaluate the effectiveness of the annotation method on overall training. For network detection accuracy assessment of testing datasets, a separate approach was taken for precision sprayer considerations. For both annotation methods, if any part of the plant fell within the predicted bounding box, it was considered a hit (IoU > 0). Additional predicted bounding boxes on the same plant were ignored. This method prioritized the detection of some part of the goosegrass plant and relies on the ability of the controller software to compensate and increase the area sprayed if necessary.
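Equations 1-4 are easy to check numerically. The sketch below reproduces the LB-in-strawberry figures reported above from illustrative counts; the actual pooled tp/fp/fn counts are those in Table 2, not the ones chosen here:

```python
def precision(tp, fp):
    return tp / (tp + fp)                    # Eq. 1

def recall(tp, fn):
    return tp / (tp + fn)                    # Eq. 2

def f_score(p, r):
    return 2 * p * r / (p + r)               # Eq. 3, harmonic mean

def accuracy(tp, fp, fn):
    return tp / (tp + fp + fn)               # Eq. 4, no tn term

tp, fp, fn = 87, 13, 17                      # illustrative counts only
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 2), round(r, 2), round(f_score(p, r), 2),
      round(accuracy(tp, fp, fn), 2))        # 0.87 0.84 0.85 0.74
```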
4,587.8
2020-06-12T00:00:00.000
[ "Computer Science", "Agricultural and Food Sciences" ]